The Church-Turing Thesis states that any mathematical function whose value can be obtained by an effective procedure or method (an algorithm) can also be computed, and its value obtained automatically, by a Turing machine (see Copeland 1997, “The Thesis and its History”).
To be “effective,” the algorithm or procedure must consist of a finite number of exact instructions, must terminate in a finite number of steps, and must in principle be executable by a human being doing manual calculations (see Copeland 1997, “The Thesis and its History”).
A function whose value can be obtained by such an effective method is Turing computable: there is a Turing machine that, given the function’s argument, executes an effective algorithm and halts with the function’s value.
This is brought out very well in explicit statements by Turing in “Computing Machinery and Intelligence” (1950):
“The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job. He has also an unlimited supply of paper on which he does his calculations.” (Turing 1950: 436).
“The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely.
The book of rules which we have described our human computer as using is of course a convenient fiction. Actual human computers really remember what they have got to do. If one wants to make a machine mimic the behaviour of the human computer in some complex operation one has to ask him how it is done, and then translate the answer into the form of an instruction table.” (Turing 1950: 438).
Yet Turing himself recognised that there are problems which no Turing machine can solve, e.g., Hilbert’s Entscheidungsproblem (the “Decision Problem”) for the predicate calculus.
Copeland (1997, “Misunderstandings of the Thesis”) points to a number of myths that exist about the Church-Turing Thesis:
(1) People have forgotten the sense in which Turing required the method for computing the function to be “effective.” A universal Turing machine therefore has clear limitations.
(2) Although Turing did prove that a Universal Turing Machine is capable of computing any function that any specific Turing machine can compute, he did not state that Universal Turing Machines can carry out procedures or explicitly stated rules of any kind whatsoever; nor did he say that a Universal Turing Machine could compute any function that any type of computer, with any kind of architecture, is capable of computing, since there are machines, either hypothetical (such as hypercomputers) or possibly even physically realisable using known or unknown physical processes, that could perform functions which are not Turing computable.
(3) The Church-Turing Thesis does not state that the human mind is equivalent to a Turing machine, since there may well be aspects of the mind that transcend Turing machines.
(4) The Church-Turing Thesis does not entail that the human brain, or any physical or biological phenomenon, can be simulated by a Turing machine. In reality, if the human mind or a natural or biological phenomenon or process involves functions which are not effectively calculable, then it cannot be simulated on a Turing machine. (Copeland 1997, “Misunderstandings of the Thesis”).
And digital computers, being in effect physical realisations of the universal Turing machine, are subject to the same limitations.
Let us now review an important point about functions and computation.
Now a function is an operation which takes inputs and yields an output: the input of a function is its argument and its output its value. An algorithm is a specific procedure, method or set of fixed rules used to find the value of the function, and any one function might have more than one algorithm for finding its value.
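The distinction between a function and its algorithms can be made concrete with a small sketch (the example and names are my own illustrative choices, not drawn from the sources cited). Both procedures below compute the same argument-to-value mapping, n to 1 + 2 + ... + n, by different methods:

```python
# One function, two algorithms (illustrative sketch).
# The *function* is the mapping n -> 1 + 2 + ... + n;
# each procedure below is a distinct *algorithm* for finding its value.

def triangular_by_loop(n):
    """Add the numbers 1..n one at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def triangular_by_formula(n):
    """Use the closed-form shortcut n(n + 1)/2 instead."""
    return n * (n + 1) // 2

# Different procedures, identical argument-to-value mapping:
assert all(triangular_by_loop(n) == triangular_by_formula(n)
           for n in range(100))
```

Which algorithm one chooses is a practical matter; the function computed is the same either way.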
As long as there is an effective algorithm (in Turing’s sense of “effective”) that can be designed to give the value of a function, then the function is computable (Crane 2003: 88).
Computing a function in a Turing machine, then, involves the use of an automatic algorithm, which represents the arguments and values of the function and the intermediate states, to find the value of the function. The fundamental concept here is representation.
Computing a function involves representation of information – that is, computers use symbols to represent something and then manipulate those symbols by means of algorithms. However, as John Searle has argued (Searle 1980, 1990), Turing machines engage in computation that is purely syntactic—there is no semantics. A computer, then, is an automated rule-governed symbol manipulation machine devoid of semantic understanding.
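This rule-governed, purely syntactic character can be made vivid with a toy Turing-machine simulator (a minimal sketch of my own, not drawn from the sources cited). The machine blindly follows a transition table over symbols it does not understand; the table below makes it append a “1” to a unary numeral, computing the successor function:

```python
# A minimal Turing-machine simulator (illustrative sketch).
# The machine manipulates symbols by fixed rules; it attaches no
# meaning to them -- the syntax/semantics point made above.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Run `rules`, a dict mapping (state, symbol) to
    (new_state, symbol_to_write, head_move), until the 'halt' state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Successor in unary: scan right over the 1s, write one more 1
# at the first blank, then halt.
successor_rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_turing_machine(successor_rules, "111"))  # prints "1111"
```

Nothing in the table “knows” that the strings of 1s represent numbers; the representation is assigned by us, from outside the machine.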
By the Church-Turing Thesis, every effectively computable function is computable by a Universal Turing Machine.
But there is a crucial distinction between (1) instantiating a function and (2) computing a function. Many physical processes instantiate what can be represented as a function, but they do so in a manner that is physically real and involves no symbolic representation of the input or output.
Many such physical phenomena in nature are therefore not computational processes (Crane 1995: 104). For example, a person’s height can be represented with a number, but of course a person’s height is not actually a number.
Much activity in nature determined by physical and chemical laws is not representational and, therefore, not computational. A good example involves Einstein’s famous equation E = mc², which says that energy is equivalent to mass multiplied by the speed of light squared.
For example, when matter is converted into energy in a nuclear explosion, the amount of energy released can be represented as the value of a function whose arguments are the mass of the matter converted and the speed of light.
But the explosion has performed no computation, since nothing has been explicitly represented. In a computer, however, the nuclear explosion can be simulated by computing a function. The computer represents the arguments as numerals standing for quantities (the mass and the speed of light), and through a system of syntax (in this case a mathematical operation) effected through an algorithm, it finds the value as another symbol representing a quantity of the natural phenomenon, the energy released. Computation has occurred because explicit symbolic representation is present.
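A toy version of that simulation might look as follows (an illustrative sketch; the constant and function names are mine). The computer manipulates numerals that merely represent the mass and the speed of light; the explosion itself represents nothing:

```python
# Simulating mass-energy conversion by computing a function
# (illustrative sketch). Only symbols are manipulated here.

SPEED_OF_LIGHT = 299_792_458  # metres per second, a represented quantity

def energy_released(mass_kg):
    """Compute the value of E = m * c**2 for the argument mass_kg."""
    return mass_kg * SPEED_OF_LIGHT ** 2

# One gram of matter converted entirely to energy:
print(f"{energy_released(0.001):.3e} joules")  # about 9e13 J
```

The numeral the program prints represents an energy; the program no more releases energy than a simulated storm makes anyone wet.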
Some natural systems do, however, seem to perform computation or, at least, to involve information processing; the human mind is the obvious example.
But the human mind also has traits that go well beyond mere syntactic, semantics-free information processing, e.g., sensation, perception, consciousness, emotion and intuition, and the view that these are not Turing computable is to be taken very seriously indeed.
When we turn to the actual biology and functioning of the brain, it does not look like a digital computer.
The brain seems to be an integrated system resembling, if anything, an analogue computer (in which there is no separation between hardware and software), not a digital computer. Information in the brain is embedded in its matter and so cannot be cleanly extracted from its physical substrate. In other words, information in human brains is substrate-dependent.
If consciousness requires physically necessary processes or substrates in our brain biology, as the biological naturalist theory of the mind argues (a view with plenty of advocates among philosophers, cognitive scientists and neuroscientists), then Turing machines can never be conscious.
To see this, a simple analogy suffices: you can model human digestion in a digital computer simulation, but the simulation is clearly not the physical process of digestion. Turing machines cannot reproduce digestion, nor could they produce consciousness, if consciousness is causally dependent on physically necessary biology, neurophysiology and physics in the brain.
Copeland, B. Jack. 1997 (rev. 2002). “The Church-Turing Thesis,” in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy.
Crane, Tim. 2003. The Mechanical Mind: A Philosophical Introduction to Minds, Machines, and Mental Representation (2nd edn.). Routledge, London and New York.
Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–424.
Searle, John R. 1990. “Is the Brain’s Mind a Computer Program?,” Scientific American 262.1 (January): 20–25.
Searle, John R. 2002. Consciousness and Language. Cambridge University Press, New York.
Turing, Alan M. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society 42.2: 230–265, with a correction in 43 (1937): 544–546.
Turing, Alan M. 1950. “Computing Machinery and Intelligence,” Mind 59.236: 433–460.
Turing, Alan M. 1969. “Intelligent Machinery” [National Physical Laboratory Report, 1948], in B. Meltzer and D. Michie (eds.), Machine Intelligence 5. Edinburgh University Press, Edinburgh. 3–23.