The Symbolic Mind

13 May 2015 | Dr Julian Kiverstein (University of Amsterdam)

In 1976 the computer scientists and founders of cognitive science Allen Newell and Herbert Simon proposed a hypothesis they called “the physical symbol systems hypothesis”. They suggested that a physical symbol system (a digital computer, for example) has the necessary and sufficient means for intelligent action.

A physical symbol system is a machine that carries out operations like writing, copying, combining and deleting on strings of digital symbolic representations. By intelligent action they had in mind the high-level cognitive accomplishments of humans, such as language understanding, or the ability of a computer to make inferences and decisions on its own, without supervision from its programmers.

Newell and Simon hypothesised that these high-level cognitive processes were the products of computations of the type a digital computer could be programmed to perform.
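
To make the idea concrete, here is a minimal sketch (in Python, and purely illustrative, not anything Newell and Simon themselves wrote) of a physical symbol system at work: a single rule that combines existing symbol structures and writes new ones in order to draw simple inferences. The facts and rules are invented for the example.

```python
# A toy physical symbol system: expressions are tuples of symbols, and
# "thinking" is the rule-governed writing, copying, combining and deleting
# of those expressions. (Illustrative only.)

def modus_ponens(knowledge):
    """Combine 'if A then B' with 'A' to write the new symbol structure 'B'."""
    derived = set(knowledge)
    changed = True
    while changed:
        changed = False
        for fact in list(derived):
            if isinstance(fact, tuple) and fact[0] == "if" and fact[1] in derived:
                if fact[2] not in derived:
                    derived.add(fact[2])   # write a new symbol structure
                    changed = True
    return derived

knowledge = {
    ("if", "it_rains", "street_is_wet"),
    ("if", "street_is_wet", "shoes_get_wet"),
    "it_rains",
}

print(modus_ponens(knowledge))
# The system derives 'street_is_wet' and then 'shoes_get_wet'
# purely by operating on the shapes of the symbols.
```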

Newell and Simon’s hypothesis combines two controversial propositions that are worth evaluating separately. The first proposition they assert is a necessity claim along the following lines:

“Any system capable of intelligent action must of necessity be a physical symbol system.”

This is to claim that there is no non-magical way of bringing about intelligent action other than by digital computation. Assuming that humans don’t live in a world in which intelligent action is caused by magic, it follows that the human mind must work in fundamentally the same way as a digital computer.

The second is a sufficiency claim:

“A physical symbol system (equipped with the right software) has all that is required for intelligent action. No additional ingredients are necessary.”

If this proposition is correct, it is just a matter of time before computer scientists succeed in building machines capable of intelligent action. Artificial intelligence is pretty much inevitable. All that stands in the way is the limited programming ingenuity of software designers.

In the age of so-called “neuromorphic” computer chips and “deep learning” algorithms (more on which later), this particular obstacle looks increasingly negotiable.

But is the human mind really a digital computer? Many philosophers of mind influenced by cognitive science have thought so. They have taken the mind to have an abstract pattern of causal organisation that can be mapped one-to-one onto the states a computer goes through in performing a computation. 

Since Frege, we have known how to represent the formal structure of logical thinking. Computation is a causal process that helps us to understand how mental or psychological processes could be causally sensitive to the logical form of human thinking. It gives us for the first time a concrete theory of how a physical, mechanical system could engage in logical thinking and reasoning.

The thesis that the human mind is a digital computer has, however, run into a triviality objection. Every physical system has states that can be mapped one-to-one onto the formally specified states of a digital computer. We can use cellular automata, for instance, to model the behaviour of galaxies. It certainly doesn’t follow that galaxies are performing the computations we use to model them.
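
To see the kind of discrete, rule-governed model at issue, here is a minimal sketch of a one-dimensional cellular automaton; the rule number and grid size are arbitrary illustrative choices, not taken from any actual model of galaxies.

```python
# A minimal one-dimensional cellular automaton (Wolfram-style "Rule 110").
# Each cell updates according to a fixed rule applied to its neighbourhood.

def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule to one row of cells."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right
        new.append((rule >> neighbourhood) & 1)
    return new

row = [0] * 31
row[15] = 1                      # a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```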

Moreover, to describe the mind as a computer seems vacuous or trivial once we notice that every physical system can be described as a computer. The thesis that the mind is a computer doesn’t seem to tell us anything distinctive about the nature of the human mind.

This triviality objection (first formulated by John Searle in the 1980s) hasn’t gone away, but it is seen by many today as a merely technical problem, in principle solvable once we have the right theory of computation. To put it bluntly: galaxies don’t compute because they are not computers. Minds do compute because they are nothing but computational machines.

There are a number of ways to push back and resist the bold claim that the human mind is (in a metaphysical sense) a digital computer. One could hold, as Jerry Fodor has done since the 1980s, that the human mind is a computer only around its edges. Some aspects of the mind, for example low-level vision or fine-grained motor control, are computational processes through and through. Other aspects of the mind, for example belief update, are most certainly not.

Other philosophers have argued that the human mind is not a digital computer, and have sought a more generic concept of computation. To think of the mind as a digital computer is to abstract away from the details of the biological organisation of the brain that might just prove crucial when it comes to understanding how minds work. 

Digital computation only gives us a very coarse-grained pattern of causal organisation in which to root the mind. Perhaps, however, the mind has a more fine-grained pattern of causal organisation. This response amounts to tinkering with the concept of computation a little, whilst nevertheless retaining the basic metaphysical picture.

Should we agree that any system that can behave intelligently must have a causal organisation (at some level of abstraction) that can be mapped onto the physical state transitions of a computing machine?

Hubert Dreyfus, a longstanding critic of artificial intelligence, thought not. Dreyfus takes the philosophical ideas behind artificial intelligence to be deeply rooted in the history of philosophy, in a lineage running from Hobbes’s claim that reasoning is reckoning, through Descartes’s mental representations and Leibniz’s dream of a universal symbolic language, to Kant, Frege and Russell (see Hubert Dreyfus, “Why Heideggerian AI Failed”).

For Dreyfus the computer theory of the mind inherits a number of intractable problems that are the legacy of its philosophical precursors. Artificial intelligence is, and always has been, a degenerating research programme. The problems to which it will never find an adequate solution lie in the significance and relevance humans find in the world.

Dreyfus, following in the footsteps of the early twentieth-century existential phenomenologists, takes human intelligence to reside in the skills that humans bring effortlessly and instinctively to bear in navigating everyday situations. For a computer to know its way about in the familiar everyday world humans inhabit, it would have to explicitly represent everything that humans take for granted in their dealings with this world.

Human common sense (which Dreyfus calls “background understanding”) doesn’t take the form of a body of facts a computer can be programmed with. It consists of skills and expertise for anticipating and responding correctly to very particular situations. For Dreyfus, what humans know through their acculturation, and through the normative disciplining of their bodily skills, can never be represented.

Even if we were to somehow find a way around this problem by availing ourselves of the impressive logical systems that linguists and formal semanticists now have at their disposal, still a substantial problem would remain. The would-be AI programme would have to determine which of the representations of facts it has in its extraordinarily large database of knowledge are relevant to the situation in which it is acting.

How does a computer determine which facts are relevant? Everything the computer knows might be relevant to its current situation. How does the computer identify which of the possibly relevant facts are actually relevant? 

This problem, known as the “frame problem”, continues to haunt researchers in AI. At least it ought to, since, as Mike Wheeler recently noted, “it is not as if anybody ever actually solved the problem.”

Still, the tools and techniques of AI have advanced tremendously since Dreyfus first launched his critique. Today’s computer scientists and engineers are busy building machines that mimic the learning strategies and techniques of information storage found in the human brain.

In 2011 IBM unveiled its “neuromorphic” computer chip, which processes instructions and performs operations in parallel in a similar way to the mammalian brain. It is made up of components that emulate the dynamic spiking behaviour of neurons.

The chip is made up of hundreds of such components, wired up so as to form hundreds of thousands of connections. Programming these connections creates networks that process and react to information in similar ways to neurons.
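
IBM’s own designs are not reproduced here, but the spiking behaviour such components emulate can be sketched with a standard leaky integrate-and-fire neuron model; the parameters below are arbitrary illustrative values.

```python
import random

# A minimal leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, accumulates input current, and emits a spike (then resets)
# when it crosses a threshold.

def simulate_lif(currents, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for current in currents:
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(1)      # the neuron fires
            potential = 0.0       # and resets
        else:
            spikes.append(0)
    return spikes

inputs = [random.uniform(0.0, 0.4) for _ in range(50)]
print(simulate_lif(inputs))
```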

The chip has been used by IBM to control an unmanned aerial vehicle, to recognise and predict handwritten digits, and to play a video game. These are by no means new achievements for the field of AI, but what is significant is the efficiency with which the IBM chip achieves these tasks.

Neuromorphic chips have also been built that can learn through experience. These chips adjust their own connections based on the firing patterns of their components. Recent successes have included a programme that can teach itself to play a video game. It starts off performing terribly, but after a few rounds it begins to get better. It can learn a skill, albeit in this well-circumscribed domain of the video game.
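
The particular learning rules these chips use are not given here, but the general idea, that a connection is strengthened when the components it links fire together, can be sketched with a simple Hebbian update; the rates and firing pattern are invented for the example.

```python
# A minimal Hebbian update: a connection between two units is strengthened
# whenever both fire in the same time step, and decays slightly otherwise.

def hebbian_update(weight, pre_fired, post_fired, rate=0.1, decay=0.01):
    if pre_fired and post_fired:
        return weight + rate * (1.0 - weight)   # strengthen, capped near 1
    return weight * (1.0 - decay)               # slow decay otherwise

weight = 0.2
firing = [(1, 1), (1, 0), (1, 1), (0, 0), (1, 1)]   # (pre, post) activity
for pre, post in firing:
    weight = hebbian_update(weight, pre, post)
    print(round(weight, 3))
```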

Elsewhere in the field of AI, “deep learning” algorithms are all the rage. These algorithms employ the same statistical learning techniques as have been used in neural network research for decades. One important difference is that the networks include many more layers of processing than previous neural networks did (hence the “depth” descriptor), and they rely on vast clusters of networked computers to process the data they are fed.

The result is software that can learn, from exposure to literally millions of images, to recognise high-level features such as cats despite never having been taught about cats. Deep learning algorithms have achieved notable successes in finding the high-level, abstract features that are important, and the patterns that matter, in the low-level data to which they are exposed. This would seem to be an aspect of the skill acquisition that Dreyfus rightly emphasises as being so important for human intelligence.
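
What the “depth” amounts to can be sketched in a few lines: the same kind of layer, stacked many times, each re-describing the output of the layer below. The toy network here uses made-up sizes and untrained random weights, and stands in for, rather than reproduces, any actual deep learning system.

```python
import numpy as np

# A toy "deep" network: an input vector is passed through a stack of layers,
# each applying a linear map followed by a non-linearity. Real systems learn
# the weights from exposure to millions of examples.

rng = np.random.default_rng(0)
layer_sizes = [784, 256, 128, 64, 10]      # e.g. image pixels in, 10 categories out
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate one input through every layer of the stack."""
    activation = x
    for w in weights[:-1]:
        activation = np.maximum(0.0, activation @ w)   # ReLU hidden layers
    logits = activation @ weights[-1]
    logits -= logits.max()                             # for numerical stability
    return np.exp(logits) / np.exp(logits).sum()       # softmax over categories

fake_image = rng.random(784)               # a stand-in for a real input image
print(forward(fake_image).round(3))
```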

These developments in AI are based on the premise that the brain is a super-efficient computer. AI research can therefore make progress and get closer to building machines that work more like the human mind by discovering more about how the brain computes. 

These advances in AI would seem at first glance to provide little support for the Newell and Simon physical symbol systems hypothesis. The fact that AI researchers have needed to build digital computing machines that work more like brains shows that the human mind doesn’t work much like a digital computer after all.

These developments do however raise the ethically and politically troubling possibility that humans might after all be on the brink of engineering artificial intelligence. Wouldn’t such a result indirectly vindicate some version of the physical symbol systems hypothesis? 

Could we not argue as follows: if machines engineered to work more like brains do turn out to be capable of intelligent action, then intelligent action is after all the product of computation, only computation of a non-classical, brain-like kind?

This conclusion would imply an important tweak and refinement to the original Newell and Simon hypothesis. It would require us to think very differently about the cognitive architecture of the mind. This matters a great deal for cognitive science. Mental processes should no longer be thought of as sequential and linear rule-like operations carried out on structured symbolic representations. 

However the basic metaphysical idea behind the computer theory of mind would still seem to survive unscathed. We can continue to think of the human mind as having an abstract causal organisation that can be mapped onto the state transitions a computer goes through in doing formal symbol manipulation.

So is the human mind essentially a computational machine? In reflecting on this question we should keep in mind the triviality objection. 

Every physical system has an abstract causal organisation which can be mapped one-to-one onto the states of a computational system. Nothing metaphysically interesting follows about what minds essentially are from this observation. 

If Dreyfus is right, serious philosophical mistakes are what have led us to the point where today we think of the human mind as being in essence a computing machine. In particular, we ought to be suspicious of the Cartesian concept of representation on which the computer theory of mind is predicated. It only makes sense to think of the brain as performing computations because it is possible to give a semantic or representational interpretation of brain processes.

Notice however that such an interpretation of brain processes in representational terms doesn’t imply that brains really do traffic in mental representations. That we tend to think of the brain in these terms may be due to our not having entirely shaken off the shackles of a highly questionable Cartesian philosophy of mind.