Useful Pages

Thursday, March 31, 2016

Alan Turing’s “Computing Machinery and Intelligence”

Ken B, take note!

Since I am getting flak for being skeptical about the Turing test, let me review Alan M. Turing’s original paper “Computing Machinery and Intelligence” (1950).

As a matter of pure historical interest, one of the first people to imagine intelligent machines was the 19th-century novelist Samuel Butler in his novel Erewhon (London, 1872), which Turing actually cites in the bibliography of this paper (Turing 1950: 460). A case of life imitating art?

Anyway, I divide my post below into two parts:
I. Turing’s Paper “Computing Machinery and Intelligence”: A Critical Summary

II. Critique of the Turing Test.

Turing’s paper is based on the type of truly embarrassing and crude behaviourism that was fashionable in the 1940s, and, oh my lord, it shows.

So what was behaviourism? I quote from the Internet Encyclopedia of Philosophy:
“Behaviorism was a movement in psychology and philosophy that emphasized the outward behavioral aspects of thought and dismissed the inward experiential, and sometimes the inner procedural, aspects as well; a movement harking back to the methodological proposals of John B. Watson, who coined the name. Watson’s 1913 manifesto proposed abandoning Introspectionist attempts to make consciousness a subject of experimental investigation to focus instead on behavioral manifestations of intelligence. B. F. Skinner later hardened behaviorist strictures to exclude inner physiological processes along with inward experiences as items of legitimate psychological concern.”
Hauser, Larry. “Behaviorism,” Internet Encyclopedia of Philosophy
http://www.iep.utm.edu/behavior/
Ouch! The very essence of behaviourism was to abandon the study of the internal mental or biological states of human beings and of human consciousness, and to focus instead on outward “behavioural manifestations of intelligence.” Behaviourism had no interest in the internal explanation of human mental states and intelligence, focusing instead on external signs of them.

Behaviourism led to some real intellectual disasters in 20th century social sciences and philosophy. It shows up all over the place.

For example, B. F. Skinner’s Verbal Behavior applied the behaviourist paradigm to linguistics in a deeply flawed manner, which was brought out by Noam Chomsky’s now famous 1959 review of that book (Schwartz 2012: 181).

Even the analytic philosopher Willard Van Orman Quine’s misguided attempt to deny a valid distinction between analytic and synthetic propositions is a legacy of crude verbal behaviourism.

I. Turing’s Paper “Computing Machinery and Intelligence”: A Critical Summary
Turing divided his paper into the following sections:
(1) The Imitation Game
(2) Critique of the New Problem
(3) The Machines concerned in the Game
(4) Digital Computers
(5) Universality of Digital Computers
(6) Contrary Views on the Main Question
(7) Learning Machines.
Let us review them one by one.

(1) The Imitation Game
Turing proposes to answer the question: “Can machines think?” He rightly notes that any sensible discussion should begin with a definition of “machine” and “think” (Turing 1950: 433).

Unfortunately, no such definition is given. Turing does note that a serious definition should not rest simply on an opinion poll of how the words are commonly used, but then the issue is set aside. In place of a definition, Turing proposes the “Imitation Game.”

In this game, we have three players, A, B, C.

A is a machine, B a human being, and C a person who asks questions indirectly of A and B (who remain hidden from C). If C asks questions of A and B at length and cannot determine which is the computer and which is the human, then the machine is deemed to have passed the test (Turing 1950: 433–434).
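To make the structure of the game concrete, here is a minimal sketch of the protocol in Python. It is only a schematic: the Player class and the two interrogator callbacks are hypothetical stand-ins of mine, not anything Turing specifies.

```python
import random

class Player:
    """A respondent in the game: either the machine (A) or the human (B).
    Both communicate in typed text only, so appearance gives nothing away."""
    def __init__(self, answer_fn):
        self.answer_fn = answer_fn

    def ask(self, question):
        return self.answer_fn(question)

def imitation_game(machine, human, choose_question, judge, rounds=10):
    """One play of the game. The interrogator C is modelled by two
    callbacks: choose_question picks the next question given a transcript,
    and judge picks which hidden respondent it believes is the human."""
    respondents = [machine, human]
    random.shuffle(respondents)                  # identities hidden from C
    transcripts = ([], [])
    for _ in range(rounds):
        for i, player in enumerate(respondents):
            q = choose_question(transcripts[i])
            transcripts[i].append((q, player.ask(q)))
    human_guess = judge(transcripts)             # index C believes is human
    return respondents[human_guess] is machine   # True = C was fooled
```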

At this point, it should be obvious that the “Imitation Game” is the Turing Test.

(2) Critique of the New Problem
Since the machine or computer remains hidden, Turing emphasises that this is a test of behaviour or “intellectual capacities,” not, say, external appearance (Turing 1950: 434).

The “best strategy” for the machine is “to try to provide answers that would naturally be given by a man” (Turing 1950: 435).

At this point, Turing raises the crucial question:
“May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.” (Turing 1950: 435).
Just look at how casually Turing brushes this question aside.

Since Turing has no interest in defining “intelligence” and commits himself to a crude behaviourist test, his nonchalant attitude here is understandable.

But it remains a devastating problem with his whole approach and, moreover, undermines the worth of the Turing test.

For the question Turing raises – can a machine “carry out something which ought to be described as thinking but which is very different from what a man does?” – cannot be evaded. It is at the heart of the problem.

Let us propose two preliminary definitions of “intelligence” as follows:
(1) information processing that takes input and creates output that allows external behaviour of the type that animals and human beings engage in, and

(2) information processing that takes input and creates output that is accompanied by the same type of consciousness that human beings have.
Now a behaviourist would be interested in (1), but can ignore (2).

But (2) is the philosophically and scientifically interesting question.

(3) The Machines concerned in the Game
Turing now clarifies that by “machines” he means an “electronic computer” or “digital computer”: he only permits digital computers to be the machine in his Turing Test (Turing 1950: 436).

It is also explicit in the paper that Turing envisaged future computers as being sophisticated enough to pass the test, not the computers of his own time (Turing 1950: 436).

(4) Digital Computers
In this section, Turing explains the nature and design of digital computers.

But to make a computer mimic any particular action of a human being, the instructions for that action have to be carefully programmed (Turing 1950: 438).

(5) Universality of Digital Computers
Turing discusses discrete state machines (Turing 1950: 439–440), and points out that digital computers can be universal machines in the sense that one computer can be specifically programmed to compute a vast array of different functions (Turing 1950: 441).
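The universality point can be shown in miniature with a toy sketch of my own (vastly simpler than a real universal machine): one fixed interpreter for discrete-state machines computes quite different functions depending only on the transition table it is handed.

```python
def run_dsm(program, state, tape):
    """A toy discrete-state machine interpreter: `program` maps
    (state, input_symbol) -> (next_state, output_symbol)."""
    out = []
    for symbol in tape:
        state, emitted = program[(state, symbol)]
        out.append(emitted)
    return "".join(out)

# One interpreter, two quite different "programs":
invert = {("s", "0"): ("s", "1"),
          ("s", "1"): ("s", "0")}                # flip every bit
parity = {("even", "0"): ("even", "0"),
          ("even", "1"): ("odd", "1"),
          ("odd", "0"): ("odd", "1"),
          ("odd", "1"): ("even", "0")}           # running parity of the 1s

print(run_dsm(invert, "s", "1011"))      # -> 0100
print(run_dsm(parity, "even", "1011"))   # -> 1101
```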

Turing now returns to his main question:
“It was suggested tentatively that the question, ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’ If we wish we can make this superficially more general and ask ‘Are there discrete state machines which would do well?’ But in view of the universality property we see that either of these questions is equivalent to this, ‘Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?’” (Turing 1950: 442).
(6) Contrary Views on the Main Question
Unfortunately, Turing’s answer to the question whether computers can think is just as nonchalant as in section 1:
“It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Turing 1950: 442).
The very question “Can machines think?” is dismissed by Turing as “too meaningless to deserve discussion.”

There is not even any attempt to properly delineate or define the sense in which the word “intelligence” might be understood.

For Turing, all that matters is that the computer can successfully play the imitation game.

Turing then turns to objections which might be proposed to the idea that computers can think:
(1) The Theological Objection
This is the objection that human beings have an immaterial essence or soul that makes them intelligent. Turing dismisses this.

(2) The ‘Heads in the Sand’ Objection
This is really nothing more than the objection that machines being able to think is a horrible idea. Turing again responds that such an emotional response will not do.

(3) The Mathematical Objection
Turing considers the limitations of discrete-state machines in relation to Gödel’s theorem.

Turing replies to this by pointing out that the Imitation Game’s purpose is to make a computer seem like a human, so that incorrect answers or the inability to answer logical puzzles wouldn’t necessarily be a problem.

(4) The Argument from Consciousness
It is here we come to what should be the most interesting part of the paper.

Turing quotes a critic who rejects the idea that machines can ever equal the human brain:
“This argument [sc. from consciousness] is very well expressed in Professor Jefferson’s Lister Oration for 1949, from which I quote. ‘Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.’” (Turing 1950: 445–446).
Now this goes too far in demanding that the machine directly experience human emotion, because it is, I imagine, possible for a human being to feel no emotion, owing to some brain disorder, and yet still be conscious and intelligent.

Nevertheless, the demand that a machine would have to be fully conscious like us to be the equal of the conscious intelligent human mind is sound.

What is Turing’s response? Turing says that Jefferson demands that “the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking” – that is, we need solipsistic proof (Turing 1950: 446).

But that is a straw man and misrepresentation of what Jefferson said. Jefferson said not that he would need to be the machine to be convinced that it is the equal of the human brain, but that it must have the same conscious life as the human brain. Turing’s dismissal of this will not stand.

Turing then argues that if a computer could actually write sonnets and answer aesthetic questions about it, then it could pass the Turing Test and meet Jefferson’s demand (Turing 1950: 447).

(5) Arguments from Various Disabilities
Here Turing replies to critics who argue that, no matter how sophisticated a computer is, there will always be something that it cannot do that humans can do.

Turing responds to this by saying that computers with more memory and better and better programs will overcome such an objection (Turing 1950: 449).

(6) Lady Lovelace’s Objection
This stems from objections made by Lady Lovelace to Babbage’s Analytical Engine.

Lovelace essentially said that such machines are bound by their programs and cannot display independent or original behaviour or operations (Turing 1950: 450).

Turing counters by arguing that sufficiently advanced computers might be programmed to do just that by learning (Turing 1950: 450).

(7) Argument from Continuity in the Nervous System
Here we get an interesting objection: that the human nervous system is a continuous, not a discrete-state, system, so that what brains do arises from a very different substrate from that of digital computers:
“The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.

It is true that a discrete-state machine must be different from a continuous machine. But if we adhere to the conditions of the imitation game, the interrogator will not be able to take any advantage of this difference.” (Turing 1950: 451).
So Turing here dismisses the objection by saying that this doesn’t matter provided that the computer can fool an interrogator.
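The force of the objection can be seen in a toy numerical illustration of my own (Turing himself discusses a differential analyser): in a sensitive continuous system a minute error in the input makes a large difference to the output, whereas a discrete-state machine has only finitely many distinguishable inputs.

```python
def continuous_system(x, steps=30):
    """A sensitive continuous map: minute input errors grow rapidly."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def discrete_system(x, steps=30, levels=100):
    """The same map forced onto a grid of 100 distinguishable values,
    as in a discrete-state machine: nearby inputs collapse together."""
    x = round(x * levels) / levels
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        x = round(x * levels) / levels
    return x

# Two impulses differing by one part in ten thousand:
# 0.2500 lands exactly on the (unstable) fixed point 0.75 and stays there;
# 0.2501 is amplified away from it and ends up somewhere else entirely.
print(continuous_system(0.2500), continuous_system(0.2501))
# The discrete machine rounds both to the same input, so the outputs match:
print(discrete_system(0.2500), discrete_system(0.2501))
```

Whether the interrogator could ever exploit that difference is precisely what Turing says we need not worry about.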

But this behaviourist obsession only with external output will not do. If we wish to answer the question “can computers attain the same conscious intelligence as people,” there must be a serious attempt to answer it. There is none.

(8) The Argument from Informality of Behaviour
The argument here is that human behaviour encompasses a vast range of activities and choices that cannot be adequately listed or described in rule books or programs.

Turing replies that nevertheless overarching general rules of behaviour are sufficient to make machines appear as humans (Turing 1950: 452).

(9) The Argument from Extra-Sensory Perception
In a quite bizarre section, Turing raises the possibility that human beings have extra-sensory perception, such as telepathy, clairvoyance, precognition and psycho-kinesis (Turing 1950: 453), that machines can never have.

Turing even states that “the statistical evidence, at least for telepathy, is overwhelming” (!!) (Turing 1950: 453). This would pose problems for the Turing Test apparently, though Turing’s argument is rather confused.

One solution, Turing suggests, is that we would need a “telepathy-proof room” in Turing Tests! (Turing 1950: 453).

(7) Learning Machines
In the final section, Turing suggests that the human mind is fundamentally a mechanical system, though not a discrete-state machine (Turing 1950: 455).

Turing thought that by the year 2000 there would be a definitive answer to the question of whether machines can regularly pass the Turing test, given a tremendous increase in memory and in the complexity of programming (Turing 1950: 455). Turing suggests that the first step is to create a computer that can simulate the answers of children and be a type of learning machine (Turing 1950: 456–459).

II. Critique of the Turing Test
Let us return to the two definitions of “intelligence” as follows:
(1) a quality in machines or humans in which information processing takes input and creates output that allows actions or sentences, or external behaviour of the type that animals and human beings engage in, and

(2) information processing that takes input and creates output that is accompanied by the same type of consciousness that human beings have.
If we define “intelligence” in sense 1, then of course you can say that computers are “intelligent,” and not just computers that pass the Turing Test.

But if we focus on sense (2) – which is the philosophically and scientifically interesting question – it is not at all clear that computers have or can ever have the same *conscious* intelligence that human beings have.

Now the behaviourist Turing Test obviously influenced the functionalist theory of the mind, which holds that functional mental states may be realized in different physical substrates, e.g., in a computer. This freed functionalist psychologists and artificial intelligence researchers from a strict dependence on neuroscience. Since, for the functionalists, mental states are abstract processes that can be created in multiple physical systems, AI researchers using functionalism were free to study mental processes in a way that did not reduce their disciplines to the mere study of brain neuroscience and its physics and chemistry.

Those AI researchers following Turing adopted a top-down “Good Old Fashioned AI” (GOFAI) (or symbolic AI) research program and really thought that they could create artificial intelligence as rich as human intelligence. But their attempts to create human-level artificial intelligence ended in manifest failure. Of course, we did get some very useful technology out of their research, but none of these computers can remotely approach the full level of human intelligence.

Where did AI go wrong?

Let me start to answer that question by reviewing John Searle’s Chinese Room argument.

The argument was first presented in 1980. Searle imagines a room in which he sits, with an opening through which paper can be passed in or out. Searle receives papers through this slot. On each paper are symbols which he looks up in a complex system of rules that allows him to match the symbols and then write a new set of symbols on a paper, which he passes back out of the room. The rules are a complex set of algorithms that give him instructions for producing the new symbols. The symbols Searle manipulates are in fact written Chinese, and the responses he passes out are intelligent answers to questions posed in Chinese. Searle argues that, because he does not understand Chinese, a mere process that manipulates symbols with a formal syntax can never actually understand the meaning of the symbols. An understanding of real meaning, in other words, is impossible for such a system. We need a human mind with consciousness and intentionality for that.
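A minimal sketch of the room’s purely syntactic character might look like this (the rulebook entries below are invented placeholders; Searle’s imagined rulebook would be vastly larger):

```python
# The room's rulebook: pure symbol-shape lookup. No step in applying it
# requires knowing what any of the symbols mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你叫什么名字？": "我叫王先生。",
}

def room(incoming_symbols):
    """The operator's entire job: match the incoming shapes against the
    rulebook and copy out the prescribed reply."""
    return RULEBOOK.get(incoming_symbols, "对不起。")  # default, also by rule

print(room("你好吗？"))   # fluent-looking output, no understanding inside
```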

This argument of Searle’s is directed against Strong AI’s classical computational theory of mind, not just against Turing’s “Imitation Game,” for, as we have seen, Turing did not even address the question of whether machines are conscious like human beings. According to the classical computational theory of mind, a sufficiently complicated program run on a digital computer could do what the human mind does, and this program would itself also be a mind, with the same cognitive states (consciousness, sensation, etc.) that a human mind has.

Searle believes that his Chinese room argument shows that Strong AI is completely mistaken, and that mere algorithmic manipulation of symbols with syntax can never produce a conscious mind with perception, sensation and intentionality. Searle believes that he has shown this because the symbols manipulated in the Chinese room could be any type of information (e.g., text of any language or auditory or visual information), yet the person or system that manipulates them does not understand their meaning. In Searle’s view, the computation is purely syntactic—there is no semantics (Boden 1990: 89). So Searle argues that, if the person manipulating the symbols does not understand their meaning, then no computer can either if it only uses a rule-governed symbol manipulation (Searle 1980: 82).

Searle also criticises the Turing Test in the Chinese Room argument, since the room, if one imagined it as a computer, could pass the Turing test but still have no understanding. The mere appearance of understanding, then, is no proof of its existence. This, Searle argues, is a serious flaw in Strong AI, because the Turing Test is deeply dependent on a mistaken behaviourist theory of the mind (Searle 1980: 85). Turing might reply that he does not even care if the computer has understanding or consciousness, and so is not even concerned with the question whether a computer can attain “intelligence” in sense (2) above.

Searle has also criticised the connectionist theory in a modified version of the Chinese Room argument, which he calls the Chinese Gym (Searle 1990: 20–25).

The responses to Searle are various, but connectionist critics of Searle argue that we cannot predict what emergent properties might arise in computers with vast amounts of parallel processing and vector transformations (Churchland 1990: 30–31).

But, of course, unless science properly understands precisely how consciousness emerges from the brain, there is no definitive answer.

For John Searle, the biological naturalist theory of the mind is the best one we have: human minds are an emergent physical property of brains and necessarily causally dependent on the particular organic, biological processes in the brain. Synthetic digital computers cannot attain consciousness because they lack the physically necessary biological processes.

Another point is that, if we say that the human brain involves information processing, then surely it must be the case that a very different type of information processing goes on in the brain compared with that which occurs in digital computers, and clearly many aspects of the human mind are not computational anyway.

Finally, if a computer passes a Turing Test, then all it demonstrates is that it is possible to simulate the verbal behaviour of human beings. It does not follow that a computer can have conscious intelligence in sense (2) above.

My final point is mainly flippant.

Recently, another amusing chapter in the story of AI unfolded: Microsoft’s AI chatbot called “Tay.”

Microsoft launched its chatbot Tay on Twitter: an AI program that writes tweets by learning from the people who chat with it, and so was supposed to learn to write tweets and simulate conversations that sound more and more like those of a real human being.

It seems that certain people trolling this bot began frequently talking to it about highly – shall we say? – controversial things, and so influenced the type of tweets it was writing.
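The failure mode is easy to reproduce in a toy sketch (my own guess at the general mechanism, not Microsoft’s actual design): a bot that adds whatever users say to its repertoire, with no filtering, ends up parroting whoever talks to it most.

```python
import random

class NaiveChatBot:
    """Unfiltered online learning: everything users say is added to the
    pool of things the bot may say back."""
    def __init__(self):
        self.repertoire = ["hello!", "humans are great"]

    def chat(self, user_message):
        self.repertoire.append(user_message)    # "learning" = copy the crowd
        return random.choice(self.repertoire)

bot = NaiveChatBot()
for message in ["nice weather!", "troll message", "another troll message"]:
    bot.chat(message)
print(bot.repertoire)   # trolls now dominate what the bot can say
```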

Tay was supposed to act like a teenage girl on Twitter. How did that experiment in AI go?

Within a day, Tay started spewing forth Tweets:
(1) that denied the truth of the Holocaust;

(2) that expressed support for Nazism and asserted that Hitler did nothing wrong, and

(3) that called for genocide.
Not exactly another success for AI!

This is all described by the YouTube personality Sargon of Akkad in the video below.

N.B. Some bad language in the video and things in bad taste!



BIBLIOGRAPHY
Boden, M. A. 1990. “Escaping from the Chinese Room,” in M. A. Boden (ed.), The Philosophy of Artificial Intelligence. Oxford University Press, Oxford. 89–104.

Chomsky, Noam. 2009. “Turing on the ‘Imitation Game,’” in Robert Epstein, Gray Roberts and Grace Beber (eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, Dordrecht. 103–106.

Churchland, Paul M. and Patricia Smith Churchland. 1990. “Could a Machine Think?,” Scientific American 262.1 (January): 26–31.

Churchland, Paul M. 2009. “On the Nature of Intelligence: Turing, Church, von Neumann, and the Brain,” in Robert Epstein, Gray Roberts and Grace Beber (eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, Dordrecht. 107–117.

Hauser, Larry. “Behaviorism,” Internet Encyclopedia of Philosophy
http://www.iep.utm.edu/behavior/

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–458.

Searle, John R. 1990. “Is the Brain’s Mind a Computer Program?,” Scientific American 262.1 (January): 20–25.

Turing, Alan M. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society (2nd series) 42: 230–265, with a correction in 43 (1937): 544–546.

Turing, Alan M. 1950. “Computing Machinery and Intelligence,” Mind 59.236: 433–460.

42 comments:

  1. Wow. I gave up. Maybe I'll try later. So I am not responding here to the whole thing, just two atrocities that pop out.

    Let's take two parts you highlighted in yellow. You have botched them.

    "as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.”

    You seem not to understand the difference between necessary and sufficient conditions all of a sudden. Turing is saying "now maybe machines can think in ways so odd we won't be able to match it up with human thinking, but I am investigating a sufficient condition, and suggest the imitation game as one."
    He is not troubled, because if the machine passes the imitation game, that will be sufficient.

    Then you wrote:
    "The very question “Can machines think?” is dismissed by Turing as “too meaningless to deserve discussion.”

    Wow is this awful. The entire paper is addressed to trying to answer if machines can think and how we might know, so it should be obvious you are distorting something. And you are.
    You are grossly misrepresenting this sentence which, unlike you, I quote in full:
    "The original question, "Can machines think?" I believe to be too meaningless to deserve discussion."
    Turing presents the imitation game as a reformulation of this question, sharpening it, and giving a way to actually address it concretely. That is why he talks about the "original" question, not, as you write, the "very" question. The IG is presented as a better formulation of that original question, made with an eye to providing a sufficient condition.

    Here is Turing:
    I propose to consider the question, "Can machines think?" ... We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

    That's all on page 1 of the paper!


  2. “You seem not to understand the difference between necessary and sufficient conditions all of a sudden. Turing is saying "now maybe machines can think in ways so odd we won't be able to match it up with human thinking, but I am investigating a sufficient condition, and suggest the imitation game as one”


    You are saying that Turing thought that a computer passing the Imitation Game (and let’s say repeatedly) is a sufficient condition for it being regarded as intelligent in some interesting sense, but in a sense different from the likely different sense in which human beings are intelligent? (since “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?”).

    Geez, that is actually what I argued in the post.

    See my 2 definitions of intelligence in part II:

    (1) a quality in machines or humans in which information processing takes input and creates output that allows actions or sentences, or external behaviour of the type that animals and human beings engage in, and

    (2) information processing that takes input and creates output that is accompanied by the same type of consciousness that human beings have.
    -------------
    If we define “intelligence” in sense 1, then of course you can say that computers are “intelligent,” and not just computers that pass the Turing Test.

  3. "The entire paper is addressed to trying to answer if machines can think and how we might know, so it should be obvious you are distorting something."

    I am not distorting it. Turing's test for whether a machine is intelligent is simply the Imitation game, but as I point out the philosophically and scientifically interesting question is: what explains human conscious intelligence.

    This is intelligence in sense (2), as defined above in the first comment.

    Turing is quite clear he doesn't care about this, using, as he does, a behaviourist method.

  4. I confess I am puzzled by this logic: Tay failed its Turing test, therefore the Turing test is garbage. Don't get me wrong; it's better than your main arguments, it's better than the Chinese Room argument, but it's still pretty weak.

    1. Ken B, Ken B -- all those comments on Tay, which was a type of funny failure, are mostly flippant, facetious.

    2. Are you even reading the post properly? "My final point is mainly flippant."

    3. And the Chinese room argument is really effective, despite what you say. I used to think like you, but I see its power now.

      It shows mere algorithmic manipulation of symbols with syntax does not seem to lead to semantic understanding, i.e., these universal Turing machines engaged in syntactic computation can derive no semantics.

    4. The kind reading is that it's all flippant. But some might detect flippancy in the tone of my comment.

    5. The fact that you seem to have given up any serious critique is suggestive.

      Let's try one more time.

      I propose 2 definitions of intelligence:

      (1) a quality in machines or humans in which information processing takes input and creates output that allows actions or sentences, or external behaviour of the type that animals and human beings engage in, and

      (2) information processing that takes input and creates output that is accompanied by the same type of consciousness that human beings have.
      -----------------
      Do you see no merit in these definitions? Don't you see how a system could be "intelligent" in sense 1 but not in sense 2?

  5. LK, you have admitted that your argument is that thought depends entirely on the nature of the substrate. As I and others have noted, this is an unproven assumption, and is in fact a prejudice. Hence my mockery of Lene Kallahan. And you are in precisely that position: you say that *this* substrate will do and no other. So does Lene Kallahan, he just uses a different criterion to define it. You advance no proof for yours, nor he for his.
    Now let's imagine you wanted to refute Lene. I submit your argument would boil down to "you can't tell them apart except by begging the question." This is precisely the problem all racists run into as well. The despised group seems just like us according to all criteria except the one defining the group.
    Since I have pointed this out I am hardly retreating from serious critique.

    Let me get more serious. The Chinese Room argument appeals only to those who lack any understanding of formal systems and their semantics. It involves a deep confusion about the nature of what an algorithm is. In particular a complete lack of understanding of emulation and layers in computation. This point is explained here. http://www.jimpryor.net/teaching/courses/mind/notes/searle.html The systems reply et seq.

    As many have noted Searle's argument, were it sound, would prove too much. All symbols that are manipulated are physical arrangements. If such manipulations cannot explain semantics then neither can physical manipulations within and by neurons.

    As for your questions, how do you tell you are interacting with a system that meets 2? Surely you don't insist all human bodies are capable of it? Elgar doesn't even answer the door. So it's not just being a human; you also look to behaviour. Are you then a behaviourist? Of course not; nor is Turing, nor is any cognitive scientist.

    1. "I and others have noted, this is an unproven assumption, and is in fact a prejudice."

      Umm, Ken B, it is not a "prejudice", but a *serious scientific hypothesis* that has been proposed.

      Of course it is "unproven" in the sense that we do not have overwhelming empirical evidence in favour of it as yet (in the same way, say, that we have overwhelming empirical evidence that the earth revolves around the sun).

      But a naturalistic, rational and possible hypothesis, consistent with Searle's biological naturalist theory of the mind, is the Relativistic Theory of the Mind of Ronald Cicurel and Miguel A. L. Nicolelis, based on the Conscious Electromagnetic Information (CEMI) theory of Johnjoe McFadden.

      This is that our internal brain neuronal electromagnetic fields, produced by neurons and nerve bundles or white matter, are the fundamental cause of consciousness. These complex patterns of neuronal electromagnetic fields have to be considered as a global/aggregate phenomenon in the brain, and they seem to be capable of analogue-like information processing and even of modifying our neural networks.

      The human brain that produces the mind is an integrated system and in that respect resembles an analog computer (where there is no separation between hardware and software), not a digital computer. A digital computer or Turing machine cannot reproduce or create a mind like ours. And this is before we get to a biological explanation of emotions, etc., which manifestly computers cannot experience.

      See:
      McFadden, J. 2002. "The Conscious Electromagnetic Information (Cemi) Field Theory – The Hard Problem Made Easy?," Journal of Consciousness Studies 9.8: 45–60.

      McFadden, J. 2002. “Synchronous firing and its influence on the brain's electromagnetic field – Evidence for an electromagnetic field theory of consciousness,” Journal of Consciousness Studies 9.4: 23–50.

      Cicurel, Ronald and Miguel A. L. Nicolelis. 2015. The Relativistic Brain: How it Works and Why it Cannot Be Simulated by a Turing Machine. Kios Press, Natal, Montreux, Durham, São Paulo.
      ---------------
      Buy the Cicurel and Nicolelis book if you want a concise but fascinating account of this.

    2. "You advance no proof for yours, nor he for his."

      Just gave you citations with the evidence above, in serious publications by scientists, including a well-respected neuroscientist.

    3. "If such manipulations cannot explain semantics then neither can physical manipulations within and by neurons. "

      You fail to consider human **consciousness** as the basis of semantics, which is an emergent property of a biological system.

      Geez, it is precisely because we have no good reason to think Turing machines can be conscious that Searle's argument hits home, since what are they but machines engaged in mere algorithmic manipulation of symbols with syntax?

  6. Let's take a break from Turing and turn to Searle's words:


    Searle says that in this version, again, John “understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him”

    (This from his suggestion that John could memorize the books and thus the system would be within John.)
    Neither does John's brain understand English because none of the neurons in his brain do, and there is nothing in his brain not in his neurons.

    Neither does LK's argument prove anything, because there is nothing in it not in the pixels of his reply, and no pixel proves his claims.

    Neither does the economy have aggregate demand because ...

    You see the point. Searle's argument is unsound, and it is effectively a blank denial that emergence is possible. And the argument he is making that relies on this obvious fallacy has other problems, viz. the misunderstanding of emulation and layers.

    1. Neither does John's brain understand English because none of the neurons in his brain do, and there is nothing in his brain not in his neurons.

      His understanding of English happens in the emergent property that is his conscious mind, where he **already understands English words semantically**, having learned them. His mind emerges from the activity of 100 billion neurons and internal brain neuronal electromagnetic fields.

      By contrast, when he manipulates Chinese symbols when he has no semantic understanding of them his behaviour is analogous to a digital computer, which engages in mere algorithmic manipulation of symbols with syntax, without semantics.

    2. Ken B is absolutely correct in what he is saying here. If speaking (and understanding) English is an emergent property of the activities of a brain, then Searle has set up a system where speaking (and understanding) Chinese is an emergent property of the system.

      Say the Chinese room operates as a simulation at the neuron level (including hormones) of a native Chinese speaker's brain: how then can it be argued that the Chinese room is any less conscious? The distinction is just an arbitrary separation of biological systems. You can only apply that distinction having shown that some property of biological systems can't actually be simulated.

  7. LK:
    "The human brain that produces the mind is an integrated system and in that respect resembles an analog computer (where there is no separation between hardware and software), not a digital computer. A digital computer or Turing machine cannot reproduce or create a mind like ours. And this is before we get to a biological explanation of emotions, etc., which manifestly computers cannot experience."
    Point 1. Circular.
    Point 2. The implication that in digital computers there is a logical distinction between hardware and software is false. *Any hardware can be emulated in software; any software can be emulated in hardware.*
    I said earlier that critics did not understand emulation and voila! you prove me right.

    1. (1) are you telling me you think human emotions are not linked to hormones, particularly brain chemicals? That Turing machines can experience emotions?

      (2) how is it circular?

      (3) no "logical distinction between hardware and software" in digital computers?

      So what about analogue systems?

  8. How do we know other humans are conscious, LK? I believe this only because I know *I* am conscious, and I don't believe myself to be unique among humanity. Given that this is so, is there any evidence that should convince me that a mechanical brain is conscious?

    1. On other people being conscious, yes, you have rational and good reasons to think so on inductive arguments/inference to the best explanation, given the empirical evidence we now have about the world, evolution, brains and minds. It can only be a posteriori and probabilistic proof, however.

      If you are looking for a priori or 100% apodictic proof, this is not possible, and people who demand it are being ridiculously unrealistic.

    2. So here you admit to the prejudice. No one is talking about logical proof. That is in fact one of Callahan's distortions, the absurd claim that Turing claimed he had a 100% proof. But look back to what I had Lene Kallahan say about how he 'knew' people were conscious. I would say I passed the LK Turing test!!

    3. What are you talking about, Ken B?

      What prejudice?

      *All* our scientific theories are known only a posteriori and their truth is probabilistic only. *All* empirical knowledge is like that.

      If you defend the computational theory of mind, or I defend Conscious Electromagnetic Information (CEMI) Theory of Mind, these can only be known a posteriori and their truth is probabilistic only.

    4. What prejudice? Are you kidding me? You assert you know people can think. You do this on the basis of their similarity to yourself in certain regards. Turing suggests machines might think. He suggests this is so based on their similarity to yourself in certain regards. Impossible you cry! No test of similarity can convince me! It doesn't matter what they can do and how impossible it might be for me to distinguish them they still can't think!
      How is that not a "preconceived opinion that is not based on reason or actual experience"?

    5. (1) "You assert you know people can think."

      I assert it is the most probable explanation that other people are conscious as I am, not that it is necessarily true.

      The evidence is not just that they have the same external behaviour as I do and speak as I do, but that, on evolutionary, biological, and neurophysiological grounds, we have to infer that the same process going on in their brains that makes me conscious makes them conscious too.

      By contrast, I have no such good evolutionary, biological, and neurophysiological grounds for saying Turing machines are conscious like me.

      This is not a prejudice, nor is it irrational. It is a sound and legitimate inductive argument.

      (2) "Turing suggests machines might think. He suggests this is so based on their similarity to yourself in certain regards. Impossible you cry!"

      FALSE, Ken B. This makes me think you haven't even read the post.

      I said **explicitly** that if we define intelligence as:

      (1) a quality in machines or humans in which information processing takes input and creates output that allows actions or sentences, or external behaviour of the type that animals and human beings engage in
      -----
      then, YES, I accept they are intelligent in that sense.

      It is on definition (2) of intelligence that we have no good grounds to think Turing Machines can attain the same conscious intelligence as human beings.

      (3) "test of similarity can convince me! "

      Of course, it could. If we examine an entity that seems to be like a human being and say: look, here is the same physical, biological, and neurophysiological activity going on in its brain that we think causes human consciousness, then, yes, you have some empirical evidence to support the view that the entity is consciously intelligent like us. E.g., if we ever found aliens on other worlds with highly evolved minds like ours, we could test them in this way. But Turing machines fail the test.

  9. LK, you argue the Rothbardians are not really capitalists, because of their absurd ideas on banking and credit, and because of their opposition to many policies which allow capitalism to flourish. You maintain this in the face of their angry denials and strident assertions that they are so capitalists.

    You are an evolution denier.

    The assumption which you slip into your discussion of the Chinese Room, that symbolic manipulation can never give rise to meaning, is incompatible with evolution if we grant that certain evolved creatures deal with meaning.

    The reason is quite simple. All symbols are physical, symbolic processing is a physical process; all information is physical, information processing is a physical process. Humans, with their symbolic processing units, evolved from creatures with simpler and simpler brains. Not even you will deny that at some stage it would be easy to model the simplest biological information processing systems with machines.

    Now if meaning is an emergent property of a system it is none the less a property of the system, and not pixie dust added to it. This implies that at some stage these simpler symbolic processing units did not have it, but as they evolved they developed it. Yet at no point did they cease to be symbolic processing units nor gain any new capacity; they simply became vastly more complex. But if meaning arose it arose out of symbolic processing. Let me restate for clarity: what evolved brains do is process representations of information. To deny meaning can arise from symbolic manipulation is to deny any evolved brain can deal with meaning. To assert the human brain does is to deny it evolved.

    1. "Yet at no point did they cease to be symbolic processing units nor gain any new capacity; they simply became vastly more complex. But if meaning arose it arose out of symbolic processing. Let me restate for clarity: what evolved brains do is process representations of information. To deny meaning can arise from symbolic manipulation is to deny any evolved brain can deal with meaning."

      There is no contradiction here, Ken B. We evolved from creatures with no consciousness, yes.

      But we have every reason to think that the biological evolution of brains began with a simple integrated system resembling an analogue computer, where information is embedded in matter to some extent from the beginning, **with some very low level of sensation or perception which by nature provided meaning/semantics.**

      Information in brains is substrate-dependent from the beginning and had some crude level of meaning, NOT like a Turing machine which only ever manipulates symbols by syntax without semantics and without conscious perception of the world.

      This enriches our understanding of evolution, and does not deny it.

      "This implies that at some stage these simpler symbolic processing units did not have it but as they evolved they deveopled it. Yet at no point did they cease to be symbolic processing units nor gain any new capacity; ... But if meaning arose it arose out of symbolic processing.

      That is where you go wrong. Some kind of very crude meaning/sensation/perception was there was the beginning of the first brains.

      You are just stuck in the mistaken paradigm of thinking early brains must be Turing machines. There is no good reason to assume this:

      http://socialdemocracy21stcentury.blogspot.com/2016/04/miguel-nicolelis-and-ronald-cicurel-on.html

    2. LK, if meaning was there from the first brains then meaning is there in the very simple circuits which can simulate them. Sensation is due to chemical and electrical effects on cells. Nothing more.
      You have utterly failed to address my argument here.


    3. No, Ken B, not the full complex meanings/semantics that human brains have.

      What I mean is: given the physical nature of human brains (common biological architecture), there is a gradation of minds from the lowest and simplest to the most complex.

      Even the lowest would have had crude and primitive sensation and perception in some limited ways and crude, very low levels of consciousness (in the representation of the external environment), hence a crude and primitive meaning/semantics. The raw perception that animal brains have is different from anything in a Turing machine.

      I suspect it is a bad error to think animals, at any stage of evolutionary history, have ever been Turing machines.

    4. More on this: I asked Miguel Nicolelis on Twitter the question whether animal minds have ever been Turing machines:

      https://twitter.com/MiguelNicolelis/status/715927686737170433

      As you see he agrees it is a bad mistake.

    5. "Dr Rothbard, would you agree FRB is a bad mistake?"
      "Yes I would."

      These guys seem like serious people, but as your intro to their other post notes, they are a small minority. As heterodox as Hoppe, to coin a phrase. They have published a short book, circumventing peer review, that makes strong claims about computability and formal systems, and a man with no training in either formal systems or computability theory finds them convincing. Noted.

      Here's the only review I could find (he has two posts) from someone qualified in a relevant subject. https://fietkiewicz.wordpress.com/2015/09/09/simulation-book-does-not-compute-ch-1/

      My guess is that, since they rehash yet another Goedel argument, their theory is a mess. I await a detailed review by a competent reviewer. (I will read their book once you finish all the books on Marxism Hedlund has assigned you.)

    6. Your entire argument is this. Meat as a substrate can support emergence, but no other known substrate can. This is a species of vitalism.

    7. "Meat as a substrate can support emergence, but no other known substrate can."

      (1) No other substance can support emergent properties? I've said no such thing, Ken B. You can get many kinds of emergent properties from non-organic materials, e.g., superconductivity.

      (2) maybe you mean I say that only the relevant physically necessary brain processes can support conscious intelligence of the type humans have?

      I say that it is *probably* true that it is because of certain physical/biological processes in the brain that produce conscious intelligence that humans have it, yes. It can be defended as a scientific hypothesis.

      That doesn't mean I am dogmatic about it. It is a matter for empirical science. It is especially plausible given the failure of strong AI.

      As for vitalism, that is a silly charge. Vitalism is the view that "living organisms are fundamentally different from non-living entities because they contain some non-physical element", e.g., a supernatural element.

      There is no supernaturalism here. This theory is all based on current physics, biochemistry and neuroscience. The only point is, to get the needed emergent property (consciousness) you need certain physical processes. You need a particular substrate, just as you need a particular substrate to create magnetism.

      What's more, I've made it clear from the beginning that it may well be possible to create true AI with conscious intelligence like ours, but you would need to use the same physically necessary processes that go on in the brain and put them in some new kind of technology.

    8. As for your review, I see no serious engagement with the book's arguments. Also, it is by an electrical engineer and computer science major.

      I could just as easily counter that this man isn't qualified to talk about neuroscience, Ken B.

      Miguel Nicolelis by contrast is an actual neuroscientist:

      https://en.wikipedia.org/wiki/Miguel_Nicolelis

    9. He is qualified to talk about claims regarding what Turing machines can do. I think you (and I know Callahan) have no real understanding of just how PRECISE a claim that a "Turing machine cannot do that" is. It is a claim subject to rigorous mathematical proof. And it will run into a lot of theory. Consider a finite number of neurons in some simple life form with a finite number of states. Everything it does can be done by a TM, because all finite sets are computable. So these few simple neurons must have an infinite number of states for the claim of non-computability to be anything but laughable. And I doubt very much any researcher has catalogued an infinite number of states. Or proven there is such a number, much less proven that said set is uncomputable. See the sketch below.
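      To see the point in miniature, here is a toy sketch (assuming, purely for illustration, a creature whose behaviour really is a finite map from states and stimuli to states and responses; the table is invented):

```python
# A creature with finitely many states and stimuli: its entire behaviour
# is one finite lookup table, and any finite table is trivially computable.
BEHAVIOUR = {
    ("resting", "light"): ("active", "move toward light"),
    ("resting", "touch"): ("resting", "withdraw"),
    ("active", "light"): ("active", "keep moving"),
    ("active", "touch"): ("resting", "withdraw"),
}

def creature(state, stimuli):
    """Replay the creature's behaviour purely from the table."""
    for stimulus in stimuli:
        state, response = BEHAVIOUR[(state, stimulus)]
        print(response)
    return state

creature("resting", ["light", "touch", "light"])
```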

    10. The issue is not that Turing machines can perform all computable mathematical functions. Why do you throw a straw man at me?

      The issue is whether human-level consciousness can arise from mere information processing on Turing machines. As we have seen, Turing never even cared about this question at all.

      The computational theory of mind advocates who followed Turing (whether the functionalists or connectionists) thought that human-level consciousness can be reproduced with sufficiently complex information processing on Turing machines, no matter what the substrate.

      But that is a profoundly problematic belief, if you have failed to properly understand and master what creates human consciousness.

      We have come full circle. I repeat the analogy I gave you a long time ago: it doesn't matter how perfect your computer simulation of digestion is, it is not the physical process of digestion. Turing machines cannot reproduce digestion.

      If consciousness requires physically necessary processes or substrates in our brain biology as the biological naturalist theory of the mind argues (and there are plenty of advocates of this from philosophers, cognitive scientists to neuroscientists), Turing machines can never be conscious.

      Note well: the *particular* biological naturalist theory of the mind that I have sketched and suggested here may well be wrong, and yet some other alternative biological naturalist theory of the mind may still be true (e.g., ones that focus on brain chemistry, neural firing, or a combination thereof).

    11. "Consider a finite number of neurons in some simple life form with a finite number of states. Everything it does can be done by a TM because all finite sets are computable."

      You also beg a question here.

      How do you **know** that, even in a simple life form with a simple brain, its rudimentary conscious perception or primitive consciousness is just a Turing computable function??

      How do you know it isn't a raw biological, physical process like digestion? Do tell.

    12. Your position is consciousness sans neurons?
      Anyway I am responding to the claims that these nerves do things no computer can do. Nonsense I say, and finiteness is one proof it's nonsense. But in any case the burden to prove impossibility is yours.

    13. (1) "Your position is consciousness sans neurons? "

      What are you even talking about? The position is that consciousness is *probably* an emergent property of huge masses of neural networks, which requires causally necessary physical and biological processes, just as, say, photosynthesis does.

      (2) "Anyway I am responding to the claims that these nerves do things no computer can do. Nonsense I say,"

      And you're wrong. If I said that human stomachs can do "things no computer can do" would you also reply that this is nonsense?

      (3) You have also yet to respond to this very serious question:

      http://socialdemocracy21stcentury.blogspot.com/2016/04/john-searle-on-consciousness-in.html?showComment=1459785439842#c2685900944969989624

  10. LK, when you say something cannot be calculated by a TM, then of course it matters what TMs can calculate. You specifically cited some guy saying no TM could do that. That would then be a non-computable function of its inputs.

    1. "LK, when you say something cannot be calculated by a TM then of course it matters what TMs can calculate"

      I don't understand you.

      Yes:

      (1) a function that can be implemented by a Turing machine is Turing computable;

      (2) A Turing computable function can be implemented on a Turing machine with an algorithm to compute the function;

      (3) the Church–Turing thesis is that given any algorithm, there will be some Turing machine that can implement it; that is, if a function is computable, then it is Turing computable.
      --------
      OK?
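      To make (1)–(3) concrete, here is a minimal sketch of a Turing machine computing one simple function (a toy illustration only):

```python
def run_tm(rules, tape, state="q0", halt="halt"):
    """A minimal Turing machine interpreter. `rules` maps
    (state, symbol) -> (next_state, symbol_to_write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")               # "_" is the blank
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# One Turing computable function: the successor function in unary.
successor = {("q0", "1"): ("q0", "1", "R"),         # scan right over the 1s
             ("q0", "_"): ("halt", "1", "R")}       # append a 1 and halt

print(run_tm(successor, "111"))   # -> 1111, i.e. succ(3) = 4 in unary
```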

      But, say, the actual mental experience of a human emotion is not a Turing computable function; it is raw sensation through physically-necessary processes in brains to do with biology, biochemistry, and neurons.

      And, moreover, the issue is whether human-level consciousness can arise from mere information processing on Turing machines.

      And I asked you: How do you *know* that, even in a simple life form with a simple brain, its rudimentary conscious perception or primitive consciousness is just a Turing computable function?

      You give no answer.

  11. There's similar stuff going on in Latin America, closely parallel to Tay: there's an app called "SimSimi," which is basically a mix of Tay and Cleverbot. However, certain troll Facebook groups began chatting with SimSimi, and now the app does strange things like insulting users, making death threats, and sending explicit sexual messages. Technology is... strange sometimes. The funny thing is that this happened almost at the same time as Tay.

  12. Chomsky mega lecture series. In part 7 in particular there is a lot about these things (Turing and Searle).
    https://youtu.be/FNhpRu7eNjI?t=50m21s
