John Searle is a great analytic philosopher in the tradition of Bertrand Russell, and his work on AI and consciousness is particularly interesting. This talk is really insightful.
Searle is summarising arguments from his paper “Minds, Brains, and Programs” (1980) and his book Consciousness and Language (2002).
Searle makes a very interesting distinction between observer-relative and observer-independent objects. This is actually the same distinction made by Karl Popper between World 3 objects and World 1 objects.
Searle also points to two important principles:
(1) syntax is not semantics, and
(2) simulation is not duplication.
Turing machines are devices designed to process symbols automatically by means of set syntactic rules or algorithms, but with no understanding of those symbols. Computers, then, are automated syntactical systems that manipulate symbols but are devoid of semantics.
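The point that syntax is not semantics can be made concrete with a toy rule-following machine. The sketch below is my own illustration, not anything from Searle's talk: a minimal Turing-style machine whose rule table (names and rules are mine) computes the successor of a unary numeral purely by syntactic symbol shuffling, with nothing in the program that "understands" what the symbols denote.

```python
# A minimal Turing-style machine (illustrative only): it follows a
# purely syntactic rule table mapping (state, symbol) to
# (new_state, symbol_to_write, head_move). It manipulates marks on a
# tape with no grasp of what those marks mean to us.

def run_machine(tape, rules, state="start", blank="_"):
    """Apply (state, symbol) -> (new_state, write, move) rules until halt."""
    tape = list(tape)
    pos = 0
    while state != "halt":
        if pos >= len(tape):
            tape.append(blank)          # extend the tape with blanks as needed
        symbol = tape[pos]
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Rule table for "successor" on unary numerals: scan right over the 1s,
# then write one more 1 on the first blank cell and halt.
SUCCESSOR = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_machine("111", SUCCESSOR))  # prints 1111
```

The machine produces the right answer for, in Searle's sense, the wrong kind of reason: the transitions are fixed by syntax alone, while the interpretation of "1111" as the number four exists only for us, the observers.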
Searle also rightly notes two senses of the concept of “intelligence”:
(1) an observer-relative sense of intelligence that Turing machines can have when they automatically process symbols by means of set syntactic rules or algorithms to create output from input, and
(2) the kind of intelligence with consciousness, with perception, sensation, and conscious experience, of the higher animal minds.
Sense (2) is observer-independent, intrinsic and internal.
What of computation? Searle argues that computation is not intrinsic to machines. The same distinction between observer-relative and observer-independent phenomena can be applied to computation. People can engage in observer-independent and intrinsic computation, just as Turing’s human computers did. But machine computation is observer-relative. As Popper would say, what the machine does, considered as a World 1 process, is not computation but a set of mere physical World 1 processes. The machine’s functioning becomes computation in World 3 only because human beings recognise and interpret its physical operation as such. It also lacks observer-independent, intrinsic conscious intelligence.
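The observer-relativity of machine states can be illustrated with a small example of my own (not Searle's, and the byte values are arbitrary): a single physical bit pattern has no intrinsic meaning, and "what it computes or represents" depends entirely on the interpretation we bring to it.

```python
import struct

# The same four bytes, considered as bare physical (World 1) state:
raw = b"\x42\x48\x41\x4d"

# What those bytes "are" is fixed only by an observer's interpretive scheme:
as_int = struct.unpack(">i", raw)[0]   # read as a big-endian 32-bit integer
as_float = struct.unpack(">f", raw)[0] # read as a big-endian 32-bit float
as_text = raw.decode("ascii")          # read as ASCII characters

print(as_int, as_float, as_text)
```

One and the same physical configuration is "an integer", "a float", or "a word" only relative to a reader; nothing in the hardware settles which, which is Searle's point about computation being observer-relative.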
This is perhaps the weakest point of the argument, for what about natural types of information processing as in DNA? Clearly, there must be types of natural information in World 1 that have emerged by Darwinian evolution.
But that natural information by itself or computation in Turing machines is not a sufficient condition for consciousness.
Searle also reviews the biological naturalist theory of the mind, and he notes that the creation of a truly artificial intelligence like ours would be analogous to the creation of an artificial heart. It does not matter how good your simulation of a human heart is on a computer: it does not pump blood, and it is not an actual heart. An artificial system that reproduces what a heart is and does in the human body needs to reproduce its causally necessary physical attributes, even if it is not organic but synthetic. It is the same with the mind. You need to duplicate the human mind, not simulate it. Whether you need exactly the same kind of physical and biological processes as in the brain, or whether it can be done with different biochemical processes or even synthetic materials, is currently unknown.
In essence, Turing machines and computers are as dead, as unconscious, as senseless as rocks.
By contrast, even the lower animals have some degree of observer-independent and internal conscious intelligence, and certainly the higher animals do, just like the dog at the end of the video below.
Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–424.
Searle, John R. 2002. Consciousness and Language. New York: Cambridge University Press.