
An excellent NotebookLM podcast version of the text below:
My own lame reading of the text below:
By Eric Borg
April 30, 2025
Though I planned to get further into my psychology-based models for this post, and specifically the evolution of value, I’ve decided that there’s something else that should be addressed first. Apparently the models that I’ve developed compete with a popular consciousness perspective that few consider magical, though I most certainly do. Mind you, competing against magical ideas doesn’t bother me when they’re formally acknowledged as such. It’s when they’re presumed not to be magical that I must object, specifically because they not only get spooky benefits but also avoid the associated stigmas. So for the models that I’ve developed to be given a fair shot, I feel I’ll need to set the record straight here.
In these efforts it’s important to identify exactly what I do and don’t consider magical about computational functionalism. I certainly don’t object to the brain being analogized with the function of a computer. Does the brain accept input information from around the body and algorithmically process it for output function? Of course it does. If I don’t call the brain “a computer” in that specific regard, then what should I call it? Nor do I consider functional definitions for terms to always be misleading. Whatever may be said to trap a mouse, for example, might usefully be referred to in that sense as “a mouse trap”, even when that’s not how such things are generally thought of. When “computer” and “functional” were combined to create a theory of how the brain creates mind, however, there’s one specific violation of causality that I see. Furthermore I suspect that one of the most distinguished founders of computing machines, Alan Turing, is the person who effectively incited this perspective. So I’ll briefly get into this history to better display what I consider magical about computational functionalism. Then I’ll discuss how to potentially avoid the associated spookiness by means of an answer that I consider quite plausible.
Back in 1950 Alan Turing was entirely correct to believe that programmable computers would incite a massive paradigm shift for humanity. He also suspected that our brains create minds by functioning as computers do, and he knew that “mind” was an extremely messy affair in academia. So to potentially get around this mess he developed a simple heuristic that he called “an imitation game”, though people soon came to refer to it as “the Turing test”. Essentially, if a person speaking with a computer can’t tell whether they’re instead speaking with a human, then this must be because that computer creates a humanlike consciousness which has effectively been educated to the degree that the person perceives. Why? Because of functionalism — if it functions like an educated human in a given respect, then at least in that respect it should also be such an educated human. And no, the position here does not hold that such a computer would be human in a biological sense, or anything else that isn’t specifically being displayed as a human functional equivalent. But in the sense that functional equivalence is being displayed, as in the case of using the words that an educated human would use, the computer would thus be an educated human in that specific sense.
From the premise that standard computers will someday talk like humans do, the reverse logic was implicitly adopted as well. It became presumed that for human cognition to occur, our brains must essentially be doing what hypothetical standard computers will do someday, when we’re unable to tell the difference between the words they generate and the words we generate. Given this circular logic, our brains must be algorithmically manipulating input information to provide appropriate output words. Furthermore, beyond just human words, how might our brains cause us to feel pain, itchiness, envy, joy and so on? Again from this circular logic it implicitly became presumed that our brains must be converting certain input information into processed information that tells our bodies what to do under those circumstances. Therefore if a computer were to also take the correct input information and use it to create the correct processed information, then theoretically it too would create something that feels pain, itchiness, envy, joy and so on.
Before I get into exactly how this position remains one step short of full causality, consider various resulting implications. If our computers are effectively becoming functionally human by means of properly processed information, then at what point should we be granting them human rights? Legal human beings would then exist in the form of certain computers, and they’d need to be treated as such given the legal implications of violating their rights. Furthermore, great fear has emerged that these fabricated minds will figure out how to build themselves better than we build them, and thus become far more intelligent than biological humans happen to be (a scenario known as “the singularity”). Here many presume that these fabricated but superintelligent minds will take general control of our world, making our interests secondary considerations that might even mandate the end of biological humanity. But then many also find great hope in this scenario, because if human consciousness simply exists as the right incoming information converted to the right processed information, then it ought to be possible to upload the consciousness of a human to a technological computer so that human minds needn’t ever end (or more accurately in that case, be permanently erased).
The reason I consider the above premise and its various implications to violate causality is that it ignores a crucial step associated with the function of computers. Notice that to do what they do, our computers not only process incoming information, but for that processed information to have any actual effects it must go on to inform causally appropriate instruments. For example, when you press a key on your keyboard, this input information must not only be algorithmically processed, it must also inform causally appropriate instruments such as computer screens, speakers, or memory in order for your key press to actually do anything. The implication is that when computational functionalism posits phenomenal existence by means of processed information that doesn’t go on to inform anything causally appropriate to exist as a given phenomenal experiencer, it remains one step short of full causality.
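As a toy illustration of that point (nothing more than a sketch in Python, with made-up details), consider how a processed keypress amounts to nothing until it goes on to inform an output device:

```python
# Sketch: processed information only has effects once it informs
# a causally appropriate instrument.

def process_keypress(key: str) -> str:
    """Algorithmically process the input (here, trivially uppercase it)."""
    return key.upper()

buffer = process_keypress("a")  # processed information now exists in memory...
# ...but nothing further happens unless it informs an output device:
print(buffer)                   # e.g. the screen
```

Let’s analyze this more explicitly by means of my thumb pain thought experiment.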
When your thumb gets whacked, it’s presumed in science that information about this event gets neurally sent to your brain. Furthermore, science tells us that such information becomes algorithmically processed by means of the logic-based ways that neuron firing relates with other neuron firing — operations such as AND, OR, and NOT. But does such processing in itself constitute your thumb pain, or does the pain instead reside in something unknown that the resulting processed information informs? If the correct processing of information alone can create an experiencer of thumb pain, then notice what this implies with respect to the processing of marked paper.
Let’s say that we have some paper with marks on it that highly correlate with the information that your whacked thumb sends your brain. Then let’s say that this marked paper gets scanned into a computer that algorithmically processes those marks and prints out more paper, with marks on it that highly correlate with your brain’s neurally processed information given a strong thumb whack. Computational functionalism mandates that something associated with such marked paper that’s processed into the correct other marked paper must then experience what you do when your thumb gets whacked. It doesn’t state exactly what would feel this pain, though the theory does mandate that an experiencer of thumb pain must functionally result here in some capacity. My own position however is that this scenario stops one step short of the full causality displayed by actual computers — in order for an experiencer of thumb pain to result here, the second marked paper would need to inform something causally appropriate that would itself exist as an experiencer of thumb pain.
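To make the scenario concrete, here’s a minimal sketch of the sort of pipeline being imagined. It’s purely illustrative: the marks, the functions, and the logic stage (standing in for the AND, OR, and NOT operations mentioned above) are all made up, and nothing here is meant to model actual neural processing.

```python
# A minimal sketch of the marked-paper thought experiment: input marks are
# scanned into bits, processed with OR/AND/NOT style logic, and printed back
# out as other marks. All names and inputs are hypothetical.

def scan(marks: str) -> list[bool]:
    """Convert marks on the first paper into bits (X = True, . = False)."""
    return [c == "X" for c in marks]

def process(bits: list[bool]) -> list[bool]:
    """Arbitrary logic-based processing built from OR, AND, and NOT."""
    ors = [a or b for a, b in zip(bits, reversed(bits))]  # OR each bit with its mirror
    return [not all(ors)] + ors                           # prepend NOT(AND of all ORs)

def print_marks(bits: list[bool]) -> str:
    """Convert the processed bits back into marks on the second paper."""
    return "".join("X" if b else "." for b in bits)

first_paper = "X.XX.X"  # marks correlating with the thumb-whack signal
second_paper = print_marks(process(scan(first_paper)))
print(second_paper)
# The pipeline runs start to finish, yet at no point is anything informed
# that could plausibly exist as an experiencer of thumb pain; the output
# only informs a screen here because of the final print() call.
```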
I’m sure that some computational functionalists will object to this assessment. No one told them that their position mandates consciousness by means of information processing alone, and certainly not that an experiencer of thumb pain would result when the right marks on paper are algorithmically converted into the right other marks on paper. Daniel Dennett never said that! Therefore some must presume that I’m misrepresenting what they believe, or that I’ve built a straw man to take down rather than engaging their actual position. I look forward to arguments that computational functionalists actually hold a position that’s different from the one I’ve portrayed here!
Observe however that it’s widely known that computational functionalists believe there to be nothing theoretically otherworldly about uploading a human mind to a conventional computer. If that’s the case, then how might one’s consciousness be uploaded to reside within a computer if consciousness doesn’t ultimately just exist as bits converted to more bits? If one responds that consciousness should instead reside as something more, namely what the resulting bits ultimately inform, then that’s actually the position that I hold as well: the final causal step needed for computational functionalism to indeed be natural. In that case we could discuss what a brain’s processed information might inform in order for consciousness to exist. But consciousness by means of the right bits used to create the right other bits, or the right marks on paper used to create the right other marks on paper, or any other such notion, should violate causality itself, because in order for anything to exist as “information” in a given sense, it will need to go on to inform something causally appropriate. There are no elements of reality that diverge from this model as far as I can tell. If a processed keystroke never informs your computer screen, then such information is only potentially informative to your screen, which is to say not at all in that sense. The media content associated with a DVD won’t be informative to a VHS machine, given that the two aren’t compatible. But might a DVD inform a table leg as a shim if it were slipped under it? Sure, and specifically because such information would be causally appropriate to inform that leg. Here I believe that I’m displaying an inherent element of causality in terms of how “information” works.
Back in 2019 I wondered what my own brain’s processed information might be informing to exist as all that I see, hear, feel, think, and so on. I had no clue. But then I came across an idea from a UK biologist by the name of Johnjoe McFadden, and the more I explored his answer, the more sense it made to me. He proposes that processed brain information effectively causes neurons to fire with the right sort of synchrony, thus creating an electromagnetic field which itself exists as the causal element that sees, hears, feels, thinks, and so on. It may be that this is what processed brain information informs to create consciousness in general — that each of us ultimately exists as a neurally produced electromagnetic field. So next time I plan to get into some of the details associated with his theory. Furthermore, this should help demonstrate the difference between ideas like his, which are possible to assess by means of empirical measurement, and unfalsifiable consciousness proposals that are no more possible to empirically refute than “the right information processed into the right other information”, or even “God did it”. Indeed, I’ll also get into a proposal that I personally developed to potentially test the validity of McFadden’s theory, and I’d love feedback on that! I don’t currently know of any other consciousness proposals on the market which posit something tangible to exist as consciousness, and thus would be possible to assess empirically. I’d love to know if any others exist so that I might assess them against his.
For some who hold naturalism no less dear than I do, I realize that the suggestion that computational functionalism stops one step short of full causality also disparages a belief that has effectively become sacred. My argument won’t even potentially be considered valid by strong naturalists who have become highly invested in this belief. Similarly, it’s hard for me to imagine how one might effectively dispute the causality void associated with “consciousness by means of the processing of information in itself”. The only thing that should actually influence partisans like us is scientific discovery — either McFadden’s theory will be empirically validated, to thus generally illustrate the magic that Turing’s heuristic unleashed, or it will be invalidated to make way for other proposals. Furthermore, even failure ought to mark progress by illustrating how science is supposed to work, as opposed to how things have actually gone so far.
I consider this all to be tremendous fun! Who shall become immortalized to stand next to legends of science like Newton and Einstein? If I’m extremely wrong, then such a distinction might go to Alan Turing for developing the consciousness test that incited computational functionalism. I expect, however, that in the coming years (rather than decades), causality will prevail by means of an empirically validated solution, and specifically the McFadden theory that I plan to discuss next time.
((A final note not otherwise included above: I’m pretty bad with technology and blog posts in general. Just as I couldn’t portray the difference between feeling good versus bad by means of online facial photos last time (and so had to half-ass it with photos of myself), this time I had to pull a hammer out of my work truck and rest it on my thumb for a photo. Also, I’m painfully aware of how pathetic my readings happen to be. I’m obviously no Rose Tyler or Suzi Travis! Apparently Substack will generate audio for my new posts if I ever become popular enough, but we’ll see about that. So it was in this frustration that I recalled how Zinbiel recently mentioned trying out the NotebookLM podcast, and that Suzi had done so a while back. I don’t fool myself into thinking that normal people are able to grasp my ideas well enough to be engaged by them, and an audio recitation from me probably doesn’t help much either. But as for the AI podcast that NotebookLM quickly generated… I think normal people could be engaged by that! So I guess I’m ironically humbled by the progress displayed by these modern LLMs. My point does remain, however, that Alan Turing seems to have led much of academia down a spooky path.))
Update May 25, 2025:
I’m quite gratified that philosophy professor Pete Mandik has challenged my premise, first in “Notes” and then with a full Substack post addressing my response. I don’t just say this because he happens to be a distinguished person, but because he’s the sort of distinguished person whom I’m able to have great respect for. The bite of his animation satire, as well as his strong moral antirealism, mandate my admiration. But yes, on the issue that I’ve opened up here, we’re on opposing sides. Check out his wonderful post and our conversation!