I sent the following to Mark Slight as a personal Substack message, though I think he’s right that it would be a shame to keep discussions between us private. So hopefully we’ll have more to say here below:
Hey Mark, I wanted to reply to you personally because I know that Pete has had enough of me for a while. Furthermore, you simply can’t know how strong my conviction happens to be that Turing, and science fiction in general, has gotten people who would otherwise be my allies to instead believe something magical. I also consider this bit of magic to be just the tip of the iceberg regarding how flawed academia happens to be. So understand that you will not change my mind in this respect. I in turn understand that our conversations will not change your mind. The only thing that would (hopefully) alter our perspectives is advancement in science itself. Fortunately, discussions might still sharpen our games. And they can be great fun too! So I’ll quickly go through that last comment you left for me at Pete’s:
Mark Slight
All Substance No Essence
Eric. I think you are fundamentally misunderstanding what functionalists are saying. First, I'll try to explain the Cartesian Materialism fallacy you have fallen into (and many intelligent people with you):
Let's say you're right. It's somehow confirmed that the EMF is where consciousness is located. It's what is "informed". Then my question is -- would you consider the problem solved? [Me: No, but this should still be one of the most important scientific discoveries ever.] HOW is this a theory of consciousness? [Me: This would be its substrate, or effectively what it’s made of.] WHY is a physical electromagnetic field an experiencer? [Me: This would be because of causality.] WHAT is an experiencer? [Me: You’re an experiencer.] You have only postponed the problem! [Me: Actually I think such a discovery would permit science to discard a vast hoard of bullshit.] Figuring out that the brain is the organ of cognition was crucial, but it did NOT explain the existence of consciousness. [Me: I know.]
This mistake is what Dennett pointed to time and again. This is Cartesian Materialism. You CANNOT explain consciousness by pointing to a particular physical LOCATION or PROPERTY or SUBSTRATE and exclaim that this explains the existence of an experiencer. That is NOT a theory of consciousness! [Me: The empirical validation of EMF consciousness would be a huge start, and very much like Newton’s discoveries regarding gravity. He left the substrate question of gravity for Einstein to discover however, so in this sense I guess such a discovery would go beyond.]
As Dennett said: whatever the correct theory of consciousness is, it must be a story in which you are not a character. If you are, it is not a theory of consciousness. This is hard to accept. But it's an inescapable conclusion. [Me: If I am conscious and science determines that consciousness resides as a neurally produced field of radiation, then I should exist in that form too.]
To your misunderstanding of functionalism. Functionalists don't believe that marks on the second piece of paper are the pain and that these marks are felt without an experiencer. That's not the theory at all. [Me: Of course functionalists don’t like my thought experiment given that it makes their position look silly. But other functionalists (like Mike) have instead bitten the bullet here. You and Pete are the only two so far who seem to still be trying to find a way out.]
Let's try another approach regarding functionalism. Thumb pain. Can you tell me something about thumb pain over and above what thumb pain DOES? [Me: No, “does” seems to cover it.] When computational functionalists make this claim they are not merely saying that the exterior behaviour is all there is to it. We are talking about the influence that the pain stimuli has on everything that is being modeled. It is the influence in the "virtual world", so to speak.
Sorry for all the capital letters. Can't do italics or bold!!
The following would be Mark’s reply, as well as my response. So he can respond to my response below if he likes:
Hey Eric! Thanks for writing. I'm sure Pete wouldn't mind if you and I were replying to each other. Increases the comment count! But we can do it here if you prefer.
Yes I agree it's very unlikely either of us will change the other's mind. I can tell this is very deeply ingrained in you by now!
I'm very curious how Mike bit the bullet. I'm not so sure he really did? I suspect one of you misunderstood the other!
There's so much to say but I would largely be repeating myself, as we both are. I'll try to focus on a few things where perhaps one can understand the other better.
1. How is it you consider functionalism correct (by definition, or otherwise), yet computational functionalists believe in magic? How do these two statements fit together?
2. How is your theory anything like Newton's or Einstein's gravity? Their theories, like every other physics theory, are theories by virtue of presenting new mathematical relationships. They describe how matter and gravity relate. GR says that gravity is not a traditional force, but that matter curves spacetime (as you already know). However, they are completely silent on fundamental entities. They only describe mathematical relationships.
Your theory, as far as I can tell, posits a new fundamental entity. It posits an experiencer. It doesn't say anything about what an experiencer is (at least not any more than pointing at a DNA molecule explains what a human is). It just says that experiencers exist, and it posits a physical phenomenon as a substrate.
The computer analogy would be that you posit an app as fundamental. You posit that the instructions reside in the long-term memory substrate, and that the running app evolves dynamically in the RAM and CPU substrate. There the app is causally informed by inputs and so forth.
Is this a theory of how computers work and what apps are?
3. As I've pointed out twice before, you cannot have magnetic field fluctuations without corresponding ion movement and thereby changes to neuronal activity. This seems like an obvious problem with your view. Don't you agree?
Cheers!
- - - - - - - -
Hi Mark,
Mike can of course respond if he likes. I’ll notify him that his name has come up. But my interpretation is essentially that even early on he could tell that my thought experiment was effectively sound. Therefore in a hypothetical sense I think he decided not to challenge the premise itself. He might see this somewhat like Dennett’s “systems response” to Searle’s Chinese room thought experiment.
On 1, or how I could consider functionalism to be true by definition as well as magic, that’s a great question! Here I just mean that in a literal sense one could always resort to an associated circularity. For example, if there were someone standing next to me who seemed identical to me, a functionalist could say there are now functionally two of me. If I complain they could say that they only mean this in a functional sense. So are all the organs the same? Are we thinking the same thoughts? There is no end to how far “functionality” can literally be taken, thus making such functionalist statements true by definition. In practice however it’s just a title that people call themselves, and so it has no binding power. People who call themselves this can certainly still be tricked about how causality works. That’s exactly what I’m proposing.
On 2, or how gravity might be similar to EMF consciousness, also a great question! Gravity was of course extremely important to reduce in a historical sense. It used to be thought that the heavens and the Earth functioned by means of different stuff. In a massive reduction however, Newton demonstrated that the stuff out there works the same as the stuff here. Einstein got more technical still. Similarly, the empirical validation of EMF consciousness should effectively destroy the common conception of “spirit”, reducing us instead to worldly “EMF stuff” rather than standard “magic stuff”. This theory would not invoke a fundamental entity any more than gravitational theory does. But I guess I do consider my observations regarding information to be essentially fundamental. This is to say that information should only exist as such to the extent that it goes on to inform something causally appropriate.
On 3 regarding how EM fields inherently alter neural function, here you seem to be getting into the discussions I’ve been having with Suzi about my potential testing proposal for McFadden’s theory. But no, the causal relationship between fields and neurons doesn’t present a problem for the theory. Actually the theory mandates such a relationship so that the field would not be inherently epiphenomenal. I mean to get into that in my next post.
Hey guys,

I'm not clear exactly what I'm biting the bullet on in this discussion. But if we're talking about Eric's assertion about the card processing thing, then I've always accepted it with a crucial caveat: that it's the overall processing that matters, not the static marks on the cards (or paper, or whatever it is). The static marks are just a snapshot of the process at a certain stage. I would also note that we're likely talking about billions or trillions of cards, at least.
Hopefully I'm speaking to the right bone of contention?

Yes you’ve got it Mike. I think maybe years ago I mentioned cards, but today it’s just marks on paper. Essentially this is in reference to a conversation I was having with Pete Mandik at his site. At a later stage he said that he didn’t think any functionalist would accept that their position mandates that thumb pain must result if the correct marks on paper were algorithmically processed to create the correct other marks on paper. So he’s not “biting the bullet”. https://open.substack.com/pub/petemandik/p/freaky-functionalist-fights-paper?r=5674xw&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=120028722
Ugh, I hate the way Substack breaks up commenting threads. It's very similar to Twitter/Bluesky threading.
But from what I can glean, Pete is considering your scenario without my caveat. He's basically saying the same thing in this comment that I am with the caveat.
"There’s no way of deriving from the definition of functionalism that some inert stack of printouts would allegedly suffice for an experience of pain. The closest you could get is that the process of printing them out or the process of scanning them in would allegedly suffice for an experience of pain."
Your caveat, Mike, is essentially an extra thing that you like to say. There’s nothing in my standard description which doesn’t already presuppose it. Would the correct marks on paper that are algorithmically processed to create the correct other marks on paper, create something that experiences the same thing that you do when your thumb gets whacked? You say “yes”. I say this would violate causality because the output paper would only potentially be informational. Specifically, it would need to inform something causally appropriate that itself would exist as such an experiencer. You deny this, thus deadlocking the matter. Will Mark or Pete ever agree with you that the proper marks on paper that are algorithmically processed to create the proper other marks on paper will thus create an experiencer of thumb pain? I don’t know.
Eric,

As usual you don't seem to understand how crucial that caveat is. The truth is, without it I don't accept your scenario. I'm on the same page as Mark and Pete.
You make some very good points here; and I completely agree with you. It has always somewhat irked me whenever someone says "the brain is like a computer." Of course, if you compare their tendency to receive inputs, process, then output different information, they would have some commonality on the surface level; that's about all they have in common though. Such comparisons quickly break down once you probe any deeper than that. I once said in another comment that the brain is closer to an FPGA than to an actual computer. The arrangement of logic gates in an FPGA does work in a similar(ish) way to the way neurons work. However, this is still a weak comparison; an FPGA is no closer to becoming sentient, or "human."
The brain has no equivalent of software the way a computer does. In a computer, a keystroke will trigger an interrupt, which will run a piece of code within a kernel driver. The kernel may then execute another subroutine to send that keystroke to the appropriate user-mode process. This is nothing like how sensations work in the brain. No software, no interrupts, no drivers, no kernel, no processes; just a domino effect of neurons firing.
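To make that chain concrete, here is a deliberately simplified C sketch of the path a keystroke takes (all names here are invented for illustration; no real kernel is anywhere near this simple):

```c
/* Hypothetical sketch of the keystroke path described above: hardware
 * interrupt -> kernel driver buffer -> user-mode process. Invented names. */
#include <stdio.h>

#define QUEUE_SIZE 64

static char key_queue[QUEUE_SIZE];   /* kernel-side event buffer */
static int head = 0, tail = 0;

/* 1. The keyboard raises an interrupt; the kernel driver buffers the key. */
void keyboard_interrupt_handler(char scancode)
{
    key_queue[tail++ % QUEUE_SIZE] = scancode;
}

/* 2. The kernel later hands the buffered key to the waiting user process. */
char deliver_to_user_process(void)
{
    return key_queue[head++ % QUEUE_SIZE];
}

int main(void)
{
    keyboard_interrupt_handler('A');                         /* keystroke */
    printf("app received: %c\n", deliver_to_user_process()); /* app reacts */
    return 0;
}
```

Nothing in a brain corresponds to any of these layers, which is exactly the point.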
The brain also has no concept of AND, NOT, OR, XOR logic in the same way as a CPU or FPGA does. They aren't the same thing, and they can't be the same thing. That makes such concepts as mind-uploading impossible in my view. I doubt I'll be preserving my consciousness on a hard drive... ever.
I also like the question you brought up of giving human-rights to a computer. It's not one that I thought of before, but an interesting one to ask. The idea does seem far-fetched to me though.
I also appreciate the narration that you did for this article. Overall very nice work articulating some very good points. Thanks for sharing!
Thanks for taking a look Michael! Yeah I don’t mean to get too technical on what should and shouldn’t qualify as “a computer”. To me it’s just a word that seems relatively appropriate given the logic-based processing of neural input that leads to output function. Fortunately FPGA came up in a search as “field-programmable gate array”. Otherwise I’d have been clueless. But I’ll stick with the computer analogy since it’s a standard term. At some point I’ll even stretch this analogy further by referring to consciousness as a kind of computer in itself, or a value-driven loop within brain function. While the computers that we build are driven by means of electricity, and brains are driven by means of complex electrochemical processes, I’ll present consciousness as something that’s driven by means of the desire to feel good rather than bad. I’m in no hurry though. It’s easy to set this stuff aside when I’m also having fun at the blogs of others.
I'm quite interested in the nature of sentience and consciousness; that's something that's boggled my mind for my entire life. I'm looking forward to your next articles on the subject.
Suzi responded to my comments at her blog site. You might be interested in our discourse, but then again, maybe not? Feel free to comment because I know you are not shy Eric.

Enjoy your weekend my internet friend......
Eric,

This comment is in reference to our previous conversation via that chat function. Since you appear to agree that information has to be constructed and built before it can be processed by the brain, I would like you to consider the following questions.
Of the four forces of nature, would you agree that gravity is the lynch-pin of structure? If so, would you be willing to entertain that there has to be a system in the brain that constructs and builds information from the raw sensory stimuli before that information can be processed? And if you agree with this assessment, would you also agree that this system would be the lynch-pin of information processing itself?
You appear not to have a problem with the self-caused cause of gravity in physics. Fair enough, we can still do physics right? But here’s the problem as I see it. Unlike physics, which recognizes gravity as a lynch-pin of structure even though the actual cause is unknown, the so-called pundits of consciousness research do not even recognize, let alone acknowledge, that there is a system in the brain that functions as a lynch-pin as well. Furthermore, this system is not only essential, it is unequivocally required for converting the raw sensory stimuli into semantically laden information. This conversion from raw sensory stimuli to semantically laden information has to occur first in a hierarchy before information can be processed. No information to process = no information processing.
Well, I did forget that one can always default to magic, right?
Fortunately Lee also asked me this question over at Suzi’s discussion of Searle’s Chinese room essay, so I responded over there. Here’s the link to my response (which will hopefully go through to the end of her comment section where we got into that):

https://suzitravis.substack.com/p/searles-chinese-room/comment/115979241
Eric! What is the "you" who is the experiencer of thumb pain in your view? I think I've said it before, but I'll say it again: I think you're a Cartesian Materialist! While rejecting dualism you still model the mind as a mental subject "you" experiencing mental objects "thumb pain".
Computational functionalism cannot point to the experiencer of thumb pain, nor the object of thumb pain. Nor should it, because they exist merely as constructs. These constructs, it explains beautifully. No magic required.
Hey Mark, thanks for getting in on this! I realize that you consider yourself part of the group that I argue has accidentally accepted a magical belief. This is certainly contentious stuff given how embarrassing it would be for your side if my argument were generally validated. My naturalism however mandates that I can’t just let this go, even if the people who would otherwise be my strongest allies thus consider me an enemy.
One thing that I can certainly say about myself is that I’m a devout nominalist. This is to say that I consider all words to be constructs. But it’s with these constructs that we create the science that’s sometimes used to develop effective models of the reality that exists beyond any constructs. While all models are inherently word based, or constructs built from more constructs, some of them are consistent with causality while others violate it. My argument is that computational functionalism violates it.
There are two essential ways to take this argument. One would be to decide that information processing in itself must create what you feel when your thumb gets whacked. Of course you know who you are, but this perspective also mandates that an unidentified entity would feel the same pain if the correct marks on paper were algorithmically processed to create the correct other marks on paper. Who would this experiencer be that feels the same thing you do when your thumb gets whacked? We may as well call it a spirit. Furthermore causality mandates that this isn’t actually how computers work. Instead we find that processed information will only exist as such to the extent that it goes on to inform something causally appropriate. So this particular construct does seem magical, and of course that’s already suggested by the unidentified spirit experiencer in the marked paper scenario.
The other option would be to agree that in some sense processed brain information must go on to inform something causally appropriate to exist as an experiencer of thumb pain. That’s essentially where I was in December 2019 when I came across the possibility that consciousness might exist as a neurally produced electromagnetic field. I don’t know of any other consciousness proposals on the market that would be possible to empirically test, specifically because I’m not aware of any others that identify a measurable element of our world that also exists as consciousness. Let me know if you know of any such proposals. They tend to strike me as unfalsifiable crap. Regardless, next time I mean to get into some specifics of this proposal, as well as how I think it could be effectively tested to determine an answer one way or the other.
Hey Eric!

I have a lot to say in response, but I would mainly be repeating myself, seeing as I don't think you're fully understanding my position!

I'll instead begin a bit of an inquiry, and hope that you're up for it.

First, do you believe that our reality is compatible with being in a simulation? Not likely, not plausibly, but compatible? If no, why not?

Okay Mark, sounds fun! Technically I wouldn’t call it “our” reality but rather “mine”. You’re an element that I perceive in mine (and I suspect that I’m an element that you perceive in yours). Is my reality compatible with being in a simulation? No, as I define the terms I don’t consider the two compatible. A simulation would be a model or construct of something else, and so not the thing in itself. To me however, I’m the me in itself that exists rather than a model of something else. And of course you could say that maybe a god produced me as a model of something else. Well sure, in that sense I could be “in a simulation”. Here the god would be creating something that values, as a simulation or model of something else. But if we’re talking about the me by which I know of my existence, rather than something that a god creates as a model of something else, then no, I can’t be in a simulation. And how do I know that I exist at all? I covered that in my last post, or “I value, therefore I am”.
Hmmm. Maybe we don't have the same idea of what a simulation is, or I have the wrong idea. By simulation I mean, for example, the Game of Life. It's not simulating something else.
What I'm asking is essentially if you think the standard model of particle physics applies to your brain. If you think it does (as I believe you have said before), then it makes no difference to your behaviour if the physics in which you are instantiated is the base layer of reality, or if the physics in which you are instantiated is itself instantiated in some more fundamental layer. Right?
Okay Mark, and I do appreciate your dedication and clear sincerity. Unfortunately I haven’t done much with the “Game of Life” stuff. The standard model of physics would be a model from which to potentially grasp how causality works, and I presume scientists haven’t determined brain function well enough to grasp whether it conflicts with their understandings. Hopefully that helps! Do I consider the physics by which I’m instantiated to be the base layer of reality, or rather would it be possible for me to exist by means of a more basic layer still? And because you mentioned “physics” I presume you mean naturally rather than supernaturally, or as I like to say, “systemic causality”.
It’s difficult to know where I lie with respect to more basic causal layers of existence, since all I really have in the end is my consciousness itself. I merely presume that my existence resides by means of the causal dynamics that create such existence. If I’m correct then this perception ought to be considered pretty basic, to me at least. But maybe there are more layers? I guess I’d know if I was creating something. But I might not know if I was being created by something by means of a realm (edit: no, not “a realm” but rather “causality”) beyond my perceptions. Does that suffice for you to make your point? (And I’m off tomorrow except for an early dentist appointment, so if it’s getting too late over there then get some sleep!)
I agree it's difficult to know if you're in the base layer or not, I think it's impossible to know. How could you ever?
I have taken you earlier to say that there is nothing in the electromagnetic field theory of consciousness that requires modifications of our models of physics. Is that correct? In fact, electromagnetic fields are part of the Standard Model. Or is it a strong emergence theory?
The most important point is that the theory you subscribe to is a physicalist theory (right?).
Eric, this is our old debate. I'll try to tackle this in a way maybe we haven't exhausted already.
"For example when you press a key on your keyboard, this input information should not only be algorithmically processed, but it will need to inform causally appropriate instruments such as computer screens, speakers, memory, or whatever, in order for your key press to actually do anything."
This seems like the core of your argument. My question is, if the processor outputting to screens, speakers, or memory is sufficient to "inform causally appropriate instruments", why isn't the brain outputting to the muscles that enact movement, talking, eye movement, or any other form of behavior? If outputting to memory meets the standard, why not neural circuits outputting to each other through synapses, or even recurrently to themselves? If you see differences here, then maybe expand on exactly what makes something "appropriate" vs not appropriate.
For me, the important thing is to be causal throughout. I don't think trying to prejudge the right kind of causality is a good strategy. The right kind is whatever provides the capabilities that lead us to take the intentional stance toward a system, to intuitively see a fellow being there.
Along those lines, I think Turing was on the right track. But the five minute test that people pushed for so many years is based on a few throwaway remarks he makes in his 1950 paper. The real test is over an extended period of time. If something can pass the test for the majority of reasonably sophisticated subjects for weeks and months, then any fussiness we might have about whether it's "real" intelligence seems increasingly irrelevant.
Consider anyone you know who is conscious. How do you know that they're conscious? What about them makes you think they're conscious? And what would have to be different to make you doubt it? It seems like we all conduct our own Turing tests every day.
I'm surprised there isn't an automatic text reading service for posts yet. This technology has been around for a while. (I think you once shared a service with me that you use to automatically read posts to you.) It seems easier than the AI art generation now available. I know no one wants to hear my often raspy voice reading my stuff aloud.
Yes Mike, we’ve been discussing this for many years. It was of course conversations with you that incited me to develop my thumb pain thought experiment itself. Once informed about the EMF consciousness proposal, would I have otherwise grasped its plausibility to indeed take that dive? Or rather would I have quickly dismissed it as more of the same nonsense that I saw in general? Hard to say. I doubt that I’d have been able to make several of the realizations that I’ve made without your ideas challenging mine though. In any case I am grateful for all the time you’ve spent with me.
Also yes, my principal argument is that in a causal world information should only be said to exist as such to the extent that it goes on to inform something causally appropriate. Therefore marked paper algorithmically processed to create more marked paper should not, by itself, create an experiencer of thumb pain. Instead it should need to inform something causally appropriate that itself would exist as such an experiencer. Of course in the past you’ve bitten the bullet here to say that you believe an experiencer of thumb pain would result given such processing alone. Your response now however is interesting for suggesting a different tack, where you agree that consciousness should require processed information to inform something causally appropriate. If that’s true then great!
From there you ask what’s wrong with the brain informing standard instruments like muscles, or even recurrent neural circuits, to exist as something conscious? As long as a specific proposal is made about something that could tangibly exist in a phenomenal capacity, then nothing’s wrong with that. The thing is, however, I don’t know of any proposals on the market today which meet this “falsifiable” condition, that is, except for McFadden’s. Do you know of any? Notice that one can’t say “If humans perceive something to function consciously for several months (or whatever), then it will be conscious”. That would be no more possible to test empirically than “the proper information processing in itself”. Next time I plan to get into a potential way of testing McFadden’s proposal that I personally developed, so that might also help illustrate what I mean by “empirically test”.
On blogging in general, the Substack app itself (rather than Substack online) does have a feature that generates a pretty good audio read out for their newsletters. Unfortunately it doesn’t seem available for less popular or simply new writers. The app that I sometimes use to read things in general is called Speechify. It’ll read anything back if you either paste or upload it, and the free version does seem adequate. It doesn’t sound nearly as good as the generators they use on Substack though. Given how many free services Google gives people simply to help their search engine business (like NotebookLM and Maps), you’d think they’d try to corner the market on quality text reading generation too. Seems like an unmet need.
"Your response now however is interesting for suggesting a different tack where you agree that consciousness should require processed information to inform something causally appropriate. "
My problem is that this statement is too ambiguous for me to either agree or disagree with it. Until I know what you mean by "causally appropriate", I really can't have a stance toward it. What my response was actually about was pointing out the inconsistency of listing screens, speakers, memory, etc, as appropriate without considering the biological functional equivalents.
"Instead it should need to inform something causally appropriate that itself would exist as such an experiencer. "
This might give an idea of what you mean by "causally appropriate", but it doesn't fit with the screen, speakers, etc, examples, which, except for a panpsychist, aren't experiencers. I'd ask what your theory is for what makes an experiencer, but I know you'd just talk about electromagnetic fields. What I don't see in your usual pitch is why the electromagnetic field in particular should be the experiencer, as opposed to neural processing.
As I've said many times, my own theory of an experiencer is functionalist, functionality which in animals is implemented by a physical neural network. I'm open to the idea of functionality happening in another substrate, but I need data to motivate that postulate.
I wonder why Substack limits the availability of their narration. Maybe it's a resource thing. Although I see Speechify now has a browser plugin, so it can be used anywhere. That's not as convenient as actually having it in a podcast, but then I found subscribing to Tina's Substack podcasts in my podcast player a lot of work, so this seems like an area that still needs a lot of improvement.
Let’s backtrack a second Mike. I agreed with you that my principal argument against computational functionalism was that thumb pain can’t exist by means of processed information alone; rather, that information must inform something causally appropriate. Without this step, an experiencer of thumb pain should exist by means of the right marks on paper that are algorithmically processed to generate the right other marks on paper, which seems funky. Furthermore I observed that instead of saying I was wrong about the need for something causally appropriate to be informed, you asked why the brain outputting to muscles and such wouldn’t be causally appropriate. I supported this question as a sign of progress — perhaps you were leaving the “information processing alone” explanation behind? Of course in the marked paper scenario the resulting paper doesn’t inform muscles or anything else. So presumably no thumb pain would occur there.
In order to indeed go this way, however, you seem to want me to tell you what would be causally appropriate to be informed to exist as thumb pain. I’ll let science determine this. I just want to get a process going here where something could potentially be determined scientifically. And in my next post I plan to discuss a possibility that I consider reasonable, and how I think scientists could potentially assess it.
So does that seem like a reasonable plan? And are you on board in general with processed information needing to inform appropriate things like muscles and computer screens to exist as such?
Scientific evidence is always the final arbiter. But part of letting "science determine this" means heeding what it's determined so far. Your strategy seems to be deciding that the current science is wrong (or misleading) and hoping that it will come along with something new that is more in line with your preferences. In my mind, that isn't really following the science, because someone who doesn't like the current answers can always follow that strategy to avoid accepting the science.
"And are you on board in general with processed information needing to inform appropriate things like muscles and computer screens to exist as such?"
Again, I don't know what "appropriate" means, so I can't agree or disagree with your statement here. To me information processing that outputs to further information processing, including in recurrent loops, is "appropriate."
Of course for any of that to have been naturally selected, it had to make a difference in behavior, so output to muscles is part of the normal process. But we wouldn't say that someone in a locked in state loses conscious status just because they've lost all movement. In their case, their internal information processes informing other internal information processes is enough for them to still have experiences.
My strategy is to expose an unacknowledged magical element of computational functionalism. This belief posits information processing in itself to be what creates a given example of consciousness. Conversely I’m saying that in order for causality to be maintained, one step more is required: the resulting processed information must go on to inform something causally appropriate that would itself exist as a given example of consciousness. And no, I can’t say what specifically would be “causally appropriate”. Instead I leave this up for empirical determination. But it’s the magic of trying to subvert the full “input > processing > output” chain that I mean to expose here. Stopping with processing alone leads to all sorts of funky implications.
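To put that chain in toy code, here is a deliberately trivial sketch (all names are invented; this is just the shape of the causal point, not a model of consciousness):

```c
/* Toy sketch of the "input > processing > output" chain above. The point:
 * processed information sits causally inert until something downstream
 * consumes it. All names here are invented for illustration. */
#include <stdio.h>

typedef struct { int level; } Speaker;        /* a downstream consumer   */

int process(int input) { return input * 2; }  /* stand-in "algorithm"    */

void inform(Speaker *s, int signal) { s->level = signal; }

int main(void)
{
    int result = process(7);  /* processing alone: result informs nothing */

    Speaker s = { 0 };
    inform(&s, result);       /* only now does the output do anything     */
    printf("speaker level: %d\n", s.level);
    return 0;
}
```

On my view the open empirical question is what plays the speaker’s role in the brain.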
By initially asking me why I wasn’t accounting for brain-processed information that incites the function of muscles and such, I figured that you were trying to conform with the presented argument. Here you’d even be able to eject funky scenarios like pain by means of marked paper processing, as well as all sorts of mind-uploading funkiness. But by now acknowledging that someone “locked in” would still be conscious without certain traditional brain output features, like being able to operate muscles, you seem to have decided against taking this path. Now you seem to have reverted to the explanation that consciousness resides by means of processing alone, or something that includes the recursive processing of processing. You even went where I dared not, calling this “appropriate”.
That’s all fine, but it does still leave my argument itself intact. How might the processing of information “inform something causally appropriate” if by definition it’s processing rather than something that’s informed by what’s processed? It’s this stopping one step short of an output step that I consider to violate causality here. Notice that any proposal that takes this form will inherently be impossible to disprove. Conversely, any proposal which posits something tangible to exist as what brain information informs to exist as consciousness would have such potential for empirical assessment. Currently I only know of one proposal on the market today which takes this final step and so could be empirically validated or refuted.
It seems clear we have different expectations on what will be necessary to explain an "experiencer," to use your term. I'm a functionalist. I'm expecting an experiencer to be a causal structure, and information processing is just distilled causation. I'm open to it needing to be a different substrate than the neural one, but I need data establishing that. Right now, I don't see any limitation in the neural one preventing it from playing that role.
To me, a theory of consciousness that needs to reference an experiencer, essentially that needs to reference consciousness itself, is missing the main explanandum. At best, it's a proposition about where the experiencer may be, but not what it is.
I sent the following to Mark Slight as a personal Substack message, though I think he’s right that it would a shame to keep discussions between us private. So hopefully we’ll have more to say here below:
Hey Mark, I wanted to reply to you personally because I know that Pete’s has had enough of me for a while. Furthermore you simply can’t know how strong my convictions happen to be that Turing and science fiction in general has gotten people who would otherwise be my allies, to instead believe something magical. Furthermore I consider this bit of magic to be just the tip of the iceberg regarding how flawed academia happens to be. So understand that you will not change my mind in this respect. I in turn also understand that our conversations will not change your mind. The only thing that would (hopefully) alter our perspectives is advancement in science itself. Fortunately however discussions might still sharpen our games. And they can be great fun too! So I’ll quickly go through that last comment you left for me at Pete’s:
Mark Slight
All Substance No Essence
3hEdited
Eric. I think you are fundamentally misunderstanding what functionalists are saying. First, I'll try to explain the Cartesian Materialism fallacy you have fallen into (and many intelligent people with you):
Let's say you're right. It's somehow confirmed that the EMF is where consciousness is located. It's what is "informed". Then my question is -- would you consider the problem solved? [Me: No, but this should still be one of the most important scientific discoveries ever.] HOW is this a theory of consciousness? [Me: This would be its substrate, or effectively what it’s made of.] WHY is a physical electromagnetic field an experiencer? [Me: This would be because of causality.] WHAT is an experiencer? [Me: You’re an experiencer.] You have only postponed the problem! [Me: Actually I think such a discovery would permit science to discard a vast hoard of bullshit from science.] Figuring out that the brain is the organ of cognition was crucial, but it did NOT explain the existence of consciousness. [Me: I know.]
This mistake is what Dennett pointed to time and again. This is Cartesian Materialism. You CANNOT explain consciousness by pointing to a particular physical LOCATION or PROPERTY or SUBSTRATE and exclaim that this explains the existence of an experiencer. That is NOT a theory of consciousness! [Me: The empirical validation of EMF consciousness would be a huge start, and very much like Newton’s discoveries regarding gravity. He left the substrate question of gravity for Einstein to discover however, so in this sense I guess such a discovery would go beyond.]
As Dennett said: whatever the correct theory of consciousness is, it must be a story in which you are not a character. If you are, it is not a theory of consciousness. This is hard to accept. But it's an inescapable conclusion. [Me: If I am conscious and science determines that consciousness resides as a neurally produced field of radiation, then I should exist in that form too.]
To your misunderstanding of functionalism. Functionalists don't believe that marks on the second piece of paper are the pain and that these marks are felt without an experiencer. That's not the theory at all. [Me: Of course functionalists don’t like my thought experiment given that it makes their position look silly. But other functionalists (like Mike) have instead bitten the bullet here. You and Pete are the only two so far who seem to still be trying to find a way out.]
Let's try another approach regarding functionalism. Thumb pain. Can you tell me something about thumb pain over and above what thumb pain DOES? [Me: No “does” seems to cover it.] When computational functionalist makes this claim they are not merely saying that the exterior behaviour is all there is to it. We are talking about the influence that the pain stimuli has on everything that is being modeled. It is the influence in the "virtual world", so to speak.
Sorry for all the capital letters. Can't do italics or bold!!
The following would be Mark’s reply, as well as my response. So he can respond to my response below if he likes:
Hey Eric! Thanks for writing. I'm sure Pete wouldn't mind if you and I were replying to each other. Increases the comment count! But we can do it here if you prefer.
Yes I agree it's very unlikely either of us will change the other's mind. I can tell this is very deeply ingrained in you by now!
I'm very curious how Mike bit the bullet. I'm not so sure he really did? I suspect one of you misunderstood the other!
There's so much to say but I would largely be repeating myself, as we both are. I'll try to focus on a few things where perhaps one can understand the other better.
1. How is it you consider functionalism correct (by definition, or otherwise), yet computational functionalists believe in magic? How do these two statements fit together?
2. How is your theory anything like Newton's or Einsteins gravity? Their theories, as every other physics theory, are theories by virtue of presenting new mathematical relationships. They describe how matter and gravity relate. GR says that gravity is not a traditional force, but that matter curves spacetime (as you already know). However, they are completely silent on fundamental entities. They only describe mathematical relationships.
Your theory, as far as I can tell, posits a new fundamental entity. It posits an experiencer. It doesn't say anything about what an experiencer is (at least not any more than pointing at a DNA molecule explains what a human explains what a DNA molecule is). It just says that experiencers exist, and it posits a physical phenomenon as a substrate.
The computer analogy would be that you posit that an app as fundamental. You posit the instructions reside in the long term memory substrate, and that the running app evolves dynamically in the RAM and CPU substrate. There the app is causally informed by inputs and so forth.
Is this a theory of how computers work and what apps are?
3. As I've pointed out twice before, you cannot have magnetic field fluctuations without corresponding ion movement and thereby changes to neuronal activity. This seems like an obvious problem with your view. Don't you agree?
Cheers!
- - - - - - - -
Hi Mark,
Mike can of course respond if he likes. I’ll notify him that his name has come up. But my interpretation is essentially that even early on he could tell that my thought experiment was effectively sound. Therefore in a hypothetical sense I think he decided not to challenge the premise itself. He might see this somewhat like Dennett’s “systems response” to Searle’s Chinese room thought experiment.
On 1, or how I could consider functionalism to be true by definition as well as magic, that’s a great question! Here I just mean in a literal sense one could always resort to an associated circularity. For example if there were someone standing next to me who seemed identical to me, a functionalist could say there are functionally now two of me. If I complain they could say that they only mean this in a functional sense. So are all the organs the same? Are we thinking the same thoughts? There is no end to how far “functionality” can literally be taken thus making such functionalist statements true by definition. In practice however it’s just a title that people call themselves and so has no binding power. People who generally call themselves this can certainly still be tricked about how causality works. That’s exactly what I’m proposing.
On 2, or how gravity might be similar to EMF consciousness, also a great question! Gravity was of course extremely important to reduce in a historical sense. It used to be thought that the heavens and the Earth functioned by means of different stuff. In a massive reduction however, Newton demonstrated that the stuff out there works the same as the stuff here. Einstein got more technical still. Similarly the empirical validation of EMF consciousness should effectively destroy the common conception of “spirit” to instead reduce us into worldly “EMF stuff” rather than standard “magic stuff”. This theory would not invoke a fundamental entity any more than gravitational theory. But I guess I do consider my observations regarding information to be essentially fundamental. This is to say that information should only exist as such to the extent that it goes on to inform something causally appropriate.
On 3 regarding how EM fields inherently alter neural function, here you seem to be getting into the discussions I’ve been having with Suzi about my potential testing proposal for McFadden’s theory. But no, the causal relationship between fields and neurons doesn’t present a problem for the theory. Actually the theory mandates such a relationship so that the field would not be inherently epiphenomenal. I mean to get into that in my next post.
Hey guys,
I'm not clear exactly what I'm biting the bullet on in this discussion. But if we're talking about Eric's assertion about the card processing thing, then I've always accepted it with a crucial caveat: that it's the overall processing that matters, not the static marks on the cards (or paper, or whatever it is). The static marks are just a snapshot of the process at a certain stage. I would also note that we're likely talking about billions or trillions of cards, at least.
Hopefully I'm speaking to the right bone of contention?
Yes you’ve got it Mike. I think maybe years ago I mentioned cards, but today it’s just marks on paper. Essentially this is in reference to a conversation I was having with Pete Mandik at his site. At a later stage he said that he didn’t think any functionalist would accept that their position mandates that thumb pain must result if the correct marks on paper were algorithmically processed to create the correct other marks on paper. So he’s not “biting the bullet”. https://open.substack.com/pub/petemandik/p/freaky-functionalist-fights-paper?r=5674xw&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=120028722
Ugh, I hate the way Substack breaks up commenting threads. It's very similar to Twitter/Bluesky threading.
But from what I can glean, Pete is considering your scenario without my caveat. He's basically saying that same thing in this comment that I am with the caveat.
"There’s no way of deriving from the definition of functionalism that some inert stack of printouts would allegedly suffice for an experience of pain. The closest you could get is that the process of printing them out or the process of scanning them in would allegedly suffice for an experience of pain."
https://petemandik.substack.com/p/freaky-functionalist-fights-paper/comment/119763153
And Mark's final comment in the thread is making the same points I've made over the years.
Sorry Eric. Don't mean to pile on.
Your caveat Mike, is essentially an extra thing that you like to say. There’s nothing in my standard description which doesn’t already presuppose it. Would the correct marks on paper that are algorithmically processed to create the correct other marks on paper, create something that experiences the same thing that you do when your thumb gets whacked? You say “yes”. I say this would violate causality because the output paper would only potentially be informational. Specifically it would need to inform something causally appropriate that itself would exist as such an experiencer. You deny this to thus deadlock the matter. Will Mark or Pete ever agree with you that the proper marks on paper that are algorithmically processed to create the proper other marks on paper, will thus create an experiencer of thumb pain? I don’t know.
Eric,
As usual you don't seem to understand how crucial that caveat is. The truth is, without it I don't accept your scenario. I'm on the same page as Mark and Pete.
You make some very good points here; and I completely agree with you. It has always somewhat irked me whenever someone says "the brain is like a computer." Of course, if you compare their tendency to receive inputs, process, then output different information, they would have some commonality on the surface level; that's about all they have in common though. Such comparisons quickly break down once you probe any deeper than that. I once said in another comment that the brain is closer to an FPGA, rather than an actual computer. The arrangement of logic gates in an FPGA do work in a similar(ish) way to the way neurons work. However, this is still a weak comparison; an FPGA is no closer to becoming sentient, or "human."
The brain does not have any equivalent concept to software like a computer. In a computer, a keystroke will trigger an interrupt, which will run a piece of code within a kernel driver. The kernel may then execute another subroutine to send that keystroke to the appropriate user-mode process. This is nothing like how sensations work in the brain. No software, no interrupts, no drivers, no kernel, no processes; just a domino-effect of neurons firing.
The brain also has no concept of AND, NOT, OR, XOR logic in the same way as a CPU or FPGA does. They aren't the same thing, and they can't be the same thing. That makes such concepts as mind-uploading impossible in my view. I doubt I'll be preserving my consciousness on a hard drive... ever.
I also like the question you brought up of giving human-rights to a computer. It's not one that I thought of before, but an interesting one to ask. The idea does seem far-fetched to me though.
I also appreciate the narration that you did for this article. Overall very nice work articulating some very good points. Thanks for sharing!
Thanks for taking a look Michael! Yeah I don’t mean to get too technical on what should and shouldn’t qualify as “a computer”. To me it’s just a word that seems relatively appropriate given the logic based processing of neural input that leads to output function. Fortunately FPGA came up in a search as “field programable gate array”. Otherwise I’d have been clueless. But I’ll stick with the computer analogy since it’s a standard term. At some point I’ll even stretch this analogy further by referring to consciousness as a kind of computer in itself, or a value driven loop within brain function. While the computers that we build are driven by means of electricity, and brains are driven by means of complex electrochemical processes, I’ll present consciousness as something that’s driven by means of the desire to feel good rather than bad. I’m in no hurry though. It’s easy to set this stuff aside when I’m also having fun at the blogs of others.
I'm quite interested in the nature of sentience and consciousness; that's something that's boggled my mind for my entire life. I'm looking forward to your next articles on the subject.
Suzi responded to my comments at her bog site. You might be interested in our discourse but then again; maybe not? Feel free to comment because I know you are not shy Eric.
Enjoy your weekend my internet friend......
Eric,
This comment is in reference to our previous conversation via that chat function. Since you appear to agree that information has to be constructed and built before it can be processed by the brain, I would like you to consider the following questions.
Of the four forces of nature, would you agree that gravity is the lynch-pin of structure? If so, would you be willing to entertain that there has to be a system in the brain that constructs and builds information from the raw sensor stimuli before that information can be processed? And if you agree with this assessment would you also agree that this system would be the lynch-pin of information processing itself?
You appear not to have a problem with the self-caused cause of gravity in physics. Fair enough, we can still do physics right? But here’s the problem as I see it. Unlike physics that recognizes gravity as a lynch-pin to structure even though the actual cause is unknown, the so-called pundits of consciousness research do not even recognize let alone acknowledge that there is a system in the brain that functions as a lynch-pin as well. Furthermore, this system is not only essential, it is unequivocally required for converting the raw sensory stimuli into semantics ladened information. This conversion from raw sensory stimuli to semantics ladened information has to occur first in a hierarchy before information can processed. No information to process = no information processing.
Well, I did forgot that one can always default to magic right?
Fortunately Lee also asked me this question over at Suzi’s discussion of Searle’s Chinese room essay, so I responded over there. Here’s the link to my response (which will hopefully go through to the end of her comment section where we got into that):
https://suzitravis.substack.com/p/searles-chinese-room/comment/115979241
Eric! What is the "you" who is the experiencer of thumb pain in your view? I think I've said it before, but I'll say it again: I think you're a Cartesian Matetialist! While rejecting dualism you still model the mind as a mental subject "you" experiencing mental objects "thumb pain".
Computational functionalism cannot point to the experiencer of thumb pain, nor the object of thumb pain. Nor should it, because they exist merely as constructs. These constructs, it explains beautifully. No magic required.
Hey Mark, thanks for getting in on this! I realize that you consider yourself part of the group that I argue has accidentally accepted a magical belief. This is certainly contentious stuff given how embarrassing it would be for your side if generally validated. My naturalism however mandates that I can’t just let this go, and even if the people who would otherwise be my strongest allies, thus consider me an enemy.
One thing that I can certainly say about myself is that I’m a devout nominalist. This is to say that I consider all words to be constructs. But it’s with these constructs that we create the science that’s sometimes used to develop effective models of the reality that exists beyond any constructs. While all models are inherently word based, or constructs built from more constructs, some of them are consistent with causality while others violate it. My argument is that computational functionalism violates it.
There are two essential ways to take this argument. One would be to decide that information processing in itself must create what you feel when your thumb gets whacked. Of course you know who you are, but this perspective also mandates that an unidentified entity would feel the same pain if the correct marks on paper were algorithmically processed to create the correct other marks on paper. Who would this experiencer be that feels the same thing you do when your thumb gets whacked? We may as well call it a spirit. Furthermore causality mandates that this isn’t actually how computers work. Instead we find that processed information will only exist as such to the extent that it goes on to inform something causally appropriate. So this particular construct does seem magical, and of course that’s already suggested by the unidentified spirit experiencer in the marked paper scenario.
The other option would be to agree that in some sense processed brain information must go on to inform something causally appropriate to exist as an experiencer of thumb pain. That’s essentially where I was in December 2019 when I came across the possibility that consciousness might exist as a neurally produced electromagnetic field. I don’t know of any other consciousness proposals on the market that would be possible to empirically test, and specifically because I’m not aware of any others that identify a measurable element of our world that also exists as consciousness. Let me know if you know of any such proposals. They tend to strike me as unfalsifiable crap. Regardless next time I mean to get into some specifics of this proposal, as well as how I think it could be effectively tested to determine an answer one way or the other.
Hey Eric!
I have a lot to say in response, but I would mainly be repeating myself, seeing as I don't think you're fully understanding my position!
I'll instead begin a bit of an inquiry, and hope that you're up for it.
First, do you believe that our reality is compatible with being in a simulation? Not likely, not plausibly, but compatible? If no, why not?
Okay Mark, sounds fun! Technically I wouldn’t call it “our” reality but rather “mine”. You’re an element that I perceive in mine (and suspect that I’m an element that you perceive in yours). Is my reality compatible with being in a simulation? No, as I define the terms I don’t consider the two compatible. A simulation would be a model or construct of something else and so not the thing in itself. To me however, I’m the me in itself that exists rather than a model of something else. And of course you could say that maybe a god produced me as a model of something else. Well sure, in that sense I could be “in a simulation”. Here the god would be creating something that values as a simulation or model of something else. But if we go in talking about the me by which I know of my existence rather than something that a god creates as a model of something else, then no I can’t be in a simulation. And how do I know that I exist at all? I covered that in my last post, or “I value, therefore I am”.
Hmmm. Maybe we we don't have the same idea of what a simulation is, or I have the wrong idea. By simulation, I mean like for example game of life. It's not simulating something else.
What I’m asking is essentially whether you think the Standard Model of particle physics applies to your brain. If you think it does (as I believe you have said before), then it makes no difference to your behaviour whether the physics in which you are instantiated is the base layer of reality, or is itself instantiated in some more fundamental layer. Right?
Okay Mark, and I do appreciate your dedication and clear sincerity. Unfortunately I haven’t done much with the Game of Life stuff. The Standard Model of physics would be a model from which to potentially grasp how causality works, and I presume scientists haven’t determined brain function well enough to say that it conflicts with their understandings. Hopefully that helps! Do I consider the physics by which I’m instantiated to be the base layer of reality, or might it be possible for me to exist by means of a more basic layer still? And because you mentioned “physics” I presume you mean naturally rather than supernaturally, or as I like to say, by means of “systemic causality”.
It’s difficult to know where I lie with respect to more basic causal layers of existence, since all I really have in the end is my consciousness itself. I merely presume that my existence resides by means of the causal dynamics that create such existence. If I’m correct, then this perception ought to be considered pretty basic, to me at least. But maybe there are more layers? I guess I’d know if I was creating something. But I might not know if I was being created by something by means of causality beyond my perceptions. Does that suffice for you to make your point? (And I’m off tomorrow except for an early dentist appointment, so if it’s getting too late over there then get some sleep!)
I agree it’s difficult to know if you’re in the base layer or not; in fact, I think it’s impossible to know. How could you ever?
I’ve taken you to have said earlier that there is nothing in the electromagnetic field theory of consciousness that requires modifications of our models of physics. Is that correct? Electromagnetic fields are, after all, part of the Standard Model. Or is it a strong emergence theory?
The most important point is that the theory you subscribe to is a physicalist theory (right?).
Eric, this is our old debate. I’ll try to tackle it in a way we maybe haven’t exhausted already.
"For example when you press a key on your keyboard, this input information should not only be algorithmically processed, but it will need to inform causally appropriate instruments such as computer screens, speakers, memory, or whatever, in order for your key press to actually do anything."
This seems like the core of your argument. My question is: if the processor outputting to screens, speakers, or memory is sufficient to “inform causally appropriate instruments”, why isn’t the brain outputting to the muscles that enact movement, talking, eye movement, or any other form of behavior? If outputting to memory meets the standard, why don’t neural circuits outputting to each other through synapses, or even recurrently to themselves? If you see differences here, then maybe expand on exactly what makes something “appropriate” versus not appropriate.
For me, the important thing is to be causal throughout. I don't think trying to prejudge the right kind of causality is a good strategy. The right kind is whatever provides the capabilities that lead us to take the intentional stance toward a system, to intuitively see a fellow being there.
Along those lines, I think Turing was on the right track. But the five-minute test that people pushed for so many years is based on a few throwaway remarks he makes in his 1950 paper. The real test is over an extended period of time. If something can pass the test for the majority of reasonably sophisticated subjects for weeks and months, then any fussiness we might have about whether it’s “real” intelligence seems increasingly irrelevant.
Consider anyone you know who is conscious. How do you know that they’re conscious? What about them makes you think they’re conscious? And what would have to be different to make you doubt it? It seems like we all conduct our own Turing tests every day.
I’m surprised there isn’t an automatic text-reading service for posts yet. This technology has been around for a while. (I think you once shared a service with me that you use to have posts read to you automatically.) It seems easier than the AI art generation now available. I know no one wants to hear my often raspy voice reading my stuff aloud.
Yes Mike, we’ve been discussing this for many years. It was of course conversations with you that incited me to develop my thumb pain thought experiment in the first place. Once informed about the EMF consciousness proposal, would I have grasped its plausibility well enough to take that dive otherwise? Or would I have quickly dismissed it as more of the same nonsense that I saw in general? Hard to say. I doubt I’d have been able to make several of the realizations that I’ve made without your ideas challenging mine though. In any case I am grateful for all the time you’ve spent with me.
Also yes, my principal argument is that in a causal world information should only be said to exist as such to the extent that it goes on to inform something causally appropriate. Therefore marked paper algorithmically processed to create more marked paper should not, on its own, create an experiencer of thumb pain. Instead it should need to inform something causally appropriate that would itself exist as such an experiencer. Of course in the past you’ve bitten the bullet here to say that you believe an experiencer of thumb pain would result from such processing alone. Your response now, however, is interesting for suggesting a different tack, where you agree that consciousness should require processed information to inform something causally appropriate. If that’s true then great!
From there you ask what’s wrong with the brain informing standard instruments like muscles, or even recurrent neural circuits, to exist as something conscious. As long as a specific proposal is made about something that could tangibly exist in a phenomenal capacity, nothing’s wrong with that. The thing is, however, I don’t know of any proposals on the market today that meet this “falsifiable” condition, except for McFadden’s. Do you know of any? Notice that one can’t say “If humans perceive something to function consciously for several months (or whatever), then it will be conscious”. That would be no more possible to test empirically than “the proper information processing in itself”. Next time I plan to get into a potential way of testing McFadden’s proposal that I personally developed, so that might also help illustrate what I mean by “empirically test”.
On blogging in general, the Substack app itself (rather than Substack online) does have a feature that generates a pretty good audio readout for newsletters. Unfortunately it doesn’t seem available for less popular or simply new writers. The app that I sometimes use to read things in general is called Speechify. It’ll read anything back if you either paste or upload it, and the free version does seem adequate. It doesn’t sound nearly as good as the generators Substack uses though. Given how many free services Google gives people simply to help their search engine business (like NotebookLM and Maps), you’d think they’d try to corner the market on quality text-to-speech generation too. Seems like an unmet need.
"Your response now however is interesting for suggesting a different tack where you agree that consciousness should require processed information to inform something causally appropriate. "
My problem is that this statement is too ambiguous for me to either agree or disagree with. Until I know what you mean by "causally appropriate", I really can't have a stance toward it. What my response was actually about was pointing out the inconsistency of listing screens, speakers, memory, etc., as appropriate without considering the biological functional equivalents.
"Instead it should need to inform something causally appropriate that itself would exist as such an experiencer. "
This might give an idea of what you mean by "causally appropriate", but it doesn't fit with the screen and speaker examples, which, except for a panpsychist, aren't experiencers. I'd ask what your theory is for what makes an experiencer, but I know you'd just talk about electromagnetic fields. What I don't see in your usual pitch is why the electromagnetic field in particular should be the experiencer, as opposed to neural processing.
As I've said many times, my own theory of an experiencer is functionalist, functionality which in animals is implemented by a physical neural network. I'm open to the idea of functionality happening in another substrate, but I need data to motivate that postulate.
I wonder why Substack limits the availability of their narration. Maybe it's a resource thing. Although I see Speechify now has a browser plugin, so it can be used anywhere. That's not as convenient as actually having it in a podcast, but then I found subscribing to Tina's Substack podcasts in my podcast player a lot of work, so this seems like an area that still needs a lot of improvement.
Let’s backtrack a second, Mike. I agreed with you that my principal argument against computational functionalism is that thumb pain can’t exist by means of processed information alone, but rather that the processed information must inform something causally appropriate. Otherwise an experiencer of thumb pain should exist by means of the right marks on paper algorithmically processed into the right other marks on paper, which seems funky. Furthermore I observed that instead of saying I was wrong about the need for something causally appropriate to be informed, you asked why the brain outputting to muscles and such wouldn’t be causally appropriate. I supported this question as a sign of progress; perhaps you were leaving the “information processing alone” explanation behind? Of course in the marked paper scenario the resulting paper doesn’t inform muscles or anything else. So presumably no thumb pain would occur there.
In order to go this way, however, you seem to want me to tell you what would be causally appropriate to be informed in order to exist as thumb pain. I’ll let science determine that. I just want to get a process going where something could potentially be determined scientifically. And in my next post I plan to discuss a possibility that I consider reasonable, along with how I think scientists could potentially assess it.
So does that seem like a reasonable plan? And are you on board in general with processed information needing to inform appropriate things like muscles and computer screens to exist as such?
Scientific evidence is always the final arbiter. But part of letting "science determine this" means heeding what it's determined so far. Your strategy seems to be deciding that the current science is wrong (or misleading) and hoping that it will come along with something new that is more in line with your preferences. In my mind, that isn't really following the science, because someone who doesn't like the current answers can always follow that strategy to avoid accepting the science.
"And are you on board in general with processed information needing to inform appropriate things like muscles and computer screens to exist as such?"
Again, I don't know what "appropriate" means, so I can't agree or disagree with your statement here. To me information processing that outputs to further information processing, including in recurrent loops, is "appropriate."
Of course for any of that to have been naturally selected, it had to make a difference in behavior, so output to muscles is part of the normal process. But we wouldn't say that someone in a locked-in state loses conscious status just because they've lost all movement. In their case, their internal information processes informing other internal information processes is enough for them to still have experiences.
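To illustrate what I mean by processing informing further processing, here's a toy recurrent loop in Python (the names and numbers are purely illustrative, not a model of any real circuit):

```python
# A toy recurrent loop: each step's output informs further processing
# rather than a screen or a muscle. Purely illustrative.

def stage(signal: float, state: float) -> float:
    """One processing step whose output feeds back as the next input."""
    return 0.5 * state + signal  # an arbitrary recurrent update

state = 0.0
trace = []
for signal in [1.0, 0.0, 0.0, 1.0]:
    state = stage(signal, state)  # the output informs the next iteration
    trace.append(state)

# Printing is only for the reader's inspection; within the loop itself,
# the "informing" was processing informing more processing.
print(trace)
```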
My strategy is to expose an unacknowledged magical element of computational functionalism. This belief posits information processing in itself to be what creates a given example of consciousness. Conversely I’m saying that in order for causality to be maintained, one step more is required: the resulting processed information must go on to inform something causally appropriate that would itself exist as a given example of consciousness. And no, I can’t say what specifically would be “causally appropriate”. Instead I leave this up for empirical determination. But it’s the magic of trying to subvert the full “input > processing > output” chain that I mean to expose here. Stopping with processing alone leads to all sorts of funky implications.
When you initially asked why I wasn’t accounting for brain-processed information that incites the function of muscles and such, I figured that you were trying to conform with the presented argument. Here you’d even be able to eject funky scenarios like pain by means of marked paper processing, as well as all sorts of mind-uploading funkiness. But by now acknowledging that someone “locked in” would still be conscious without certain traditional brain output features, like being able to operate muscles, you seem to have decided against taking this path. Now you seem to have reverted to the explanation that consciousness resides by means of processing alone, or something that includes the recursive processing of processing. You even went where I dared not, calling this “appropriate”.
That’s all fine, but it does still leave my argument intact. How might the processing of information “inform something causally appropriate” if by definition it’s processing rather than something that’s informed by what’s processed? It’s this stopping one step short of an output that I consider to violate causality here. Notice that any proposal that takes this form will inherently be impossible to disprove. Conversely, any proposal that posits something tangible as what brain information informs to exist as consciousness would have the potential for empirical assessment. Currently I only know of one proposal on the market today that takes this final step and so could be empirically validated or refuted.
It seems clear we have different expectations on what will be necessary to explain an "experiencer," to use your term. I'm a functionalist. I'm expecting an experiencer to be a causal structure, and information processing is just distilled causation. I'm open to it needing to be a different substrate than the neural one, but I need data establishing that. Right now, I don't see any limitation in the neural one preventing it from playing that role.
To me, a theory of consciousness that needs to reference an experiencer, essentially that needs to reference consciousness itself, is missing the main explanandum. At best, it's a proposition about where the experiencer may be, but not what it is.