Computer Processing and Human Thought

November 06, 2018

Robert Epstein, the distinguished American psychologist, published in May 2016 an essay arguing, in the words of its subtitle, that “Your brain does not process information, retrieve knowledge, or store memories. In short: your brain is not a computer.” He argues against what he calls the “IP” metaphor for the brain, the brain as information processor, claiming that this metaphor has set back neuroscience research and inhibited our ability to understand the real means by which we perceive, remember, and re-experience past realities in order to interpret the present. Yet I would argue that a flippant rejection of the metaphor can lead us far astray, if not in the field of neuroscience, then at least in our philosophy of man’s relationship to the computer. I will not summarize the entire article here (it is well worth reading), but I will quote a section.

“Computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?”

The idea of the computer is largely rooted in a similarity to the human brain: if not in its internal workings, then at the very least in an external similarity of effect. Both the brain and the computer yield the external appearance of memory, logical thought, and action. As a result, we see in the computer an image of ourselves. Alan Turing’s famous paper, “Computing Machinery and Intelligence,” which proposed a standard for “intelligence” in a machine (that the machine should be able, at an external level, to replicate the answers of a human witness on a questionnaire), is rooted to some degree in this IP metaphor for the brain. Even if we deny the similarity of internal workings on a physiological level, to ignore the broader, seemingly more important similarity of experience is to make computers very dangerous indeed. Put more simply, interacting with a computer is far too much like interacting with a fellow human to let this fact go unnoticed in our analysis.

In the passage above, Epstein, despite his learning about the physiological workings of the brain, makes a quite naive statement about the effects of human thought. Contrary to Epstein, we, just like computers, operate on symbolic representations of the world. These symbolic representations may not, as he argues, be stored in the physical neurons of the brain, but they exist nevertheless, at the very least to my own consciousness. I do not directly intuit the reality around me, but rather form abstractions from past experience, and reason from those abstractions to interpret the present and make predictions about the future. Except on the level of the most basic reflexes, I operate on symbolic representations of the world. Language itself is symbolic. Abstract human thought, the reason that differentiates man from the animals, is inseparable from words. Even in more intuitive thoughts, not yet formed into words, I still base my sensation on symbolic representation. Just as a computer may equate two symbols in a programming language by identifying that they share the same address in memory (the kind of identity check sketched in the short example at the end of this post), I can identify two animals as being of the same type by identifying that they share the same conceptual elements with an abstraction of “kitten-ness” in my head (or elsewhere, depending on which side of the 14th-century metaphysical debates you find most compelling). It is a fallacy to assert a distinction between my own process and that of a computer simply because the computer requires a few more layers of abstraction in order to perform essentially the same tasks as I do.

Epstein’s most persuasive moment comes when he asserts that we are not controlled by algorithms, unlike our digital compatriots. His argument appeals to our most fundamental desire for freedom: freedom from a fatalistic design which we must follow as a result of our brain chemistry. If Epstein delivers us from a reductionistic fatalism based on the IP metaphor for our neurons, then very well. But I am nevertheless guided, if not by algorithms, then at the very least by a plan. I act towards an end, and that end, moreover, is outside myself, just as the end of the computer is outside itself, acting at the behest of its programmer. I openly recognize my debt to Aquinas in this objection to Epstein, but it is impossible to develop a coherent theory of man’s relationship to the computer without making some reference to a broader philosophy or theology of human action. I agree with Epstein: we are not controlled by algorithms.
We enjoy free will: we may act in accordance with our end, or against it. But our actions are nevertheless ordered to an end, the end of our creator, just like those of a computer. Put simply, both humans and computers act on data, whether the symbolic representations of computer memory or more intuitive mental abstractions. To assert the existence of free will rather than algorithms as the motivational force behind that action is a valid distinction between humans and computers, and an important one, but it does nothing to mitigate the fundamental similarity of effect between the free thoughts of a man and the carefully crafted logical conclusions of a computer that Turing so famously argued for in his paper.

Inferring, from the physiological dissimilarity between the human brain and the computer processor, a dissimilarity in their actual external effects can lead us to dangerous conclusions. The computer is a tool created after the image of the human mind, and naively accepting the computer’s claim to human-like action, or perhaps worse, ignoring it, as Epstein seems to do, can lead us to a host of false assumptions about what the computer is capable of. One may perhaps legitimately criticize Turing’s thesis by pointing out its reductionistic theory of human thought; Epstein deserves praise for objecting to that. But to separate human thought from the computer completely requires us to vastly underestimate the effect that the computer has on us through its claim to pseudo-rational action.
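A brief aside for the programmers: the “same address in memory” comparison mentioned above is the ordinary identity check that most languages expose directly. Here is a minimal sketch in Python; the `Kitten` class and variable names are my own, purely illustrative, and stand in for whatever symbols a program happens to compare.

```python
# Purely illustrative: the contrast between identity ("the same object in
# memory") and conceptual comparison ("the same abstracted features").
# The names below (Kitten, kitten_a, ...) are hypothetical, not from the essay.

class Kitten:
    def __init__(self, name: str):
        self.name = name

kitten_a = Kitten("Agnes")
kitten_b = kitten_a           # a second symbol bound to the very same object
kitten_c = Kitten("Agnes")    # a distinct object that shares the same content

# Identity: are the two symbols bound to the very same object in memory?
print(kitten_a is kitten_b)   # True  -- same object
print(kitten_a is kitten_c)   # False -- different objects

# Conceptual comparison: do they share the same abstracted elements?
print(kitten_a.name == kitten_c.name)   # True -- the same "kitten-ness", so to speak
```

The analogy is only that loose: the computer’s check is exact and mechanical, while ours is an abstraction, but the external effect, recognizing two things as “the same,” is strikingly similar.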

Michael Helvey
