Carole Cadwalladr, “Are the Robots about to rise? Google’s New Director of Engineering thinks so…,” The Observer, 23 February 2014.

Something that looks uncomfortably like a cult has arisen around Kurzweil’s idea of “the Singularity,” which is “a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.”
Of course, there seems little doubt that the rise of increasingly sophisticated machines such as computers and robots will revolutionise economic and social life in the decades ahead.
Indeed, a substantial part of the industrial revolution consisted precisely in the profound changes wrought in production by labour-saving machines.
But the economic problems that are, and will be, created by increasing automation, structural unemployment and loss of aggregate real income should be of profound concern to economists. Indeed, an interesting warning of the possible economic problems that could be caused by technology is given, not by an economist, but by the computer engineer Martin Ford in his The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009) (though I think some of his predictions are too dire).
But, to return to my main point, this interview with Kurzweil contains so many questionable ideas that it is difficult to know where to start.
Take the central idea:
“Ray Kurzweil … believes that … computers will gain what looks like a lot like consciousness in a little over a decade is now Google’s director of engineering.”

But if we read the whole interview and Kurzweil’s statements elsewhere, it appears that Kurzweil thinks that computers will actually attain consciousness, not just simulate it.
There is a fundamental difference between (1) merely simulating something and (2) actually having that property.
The crude behaviourist Turing test, for example, does not test for whether an entity has conscious life, but merely whether it simulates human intelligence. Thus even if future computers all start passing Turing tests, it is not going to be some shocking milestone in human history: all it will show is that software programs have become sophisticated enough to fool us into thinking machines have conscious minds as we do, even though they do not.
This point has been brilliantly made by the analytic philosopher John Searle, who, in my view, has written outstanding work on philosophy of mind (e.g., Searle 1990; Searle 1992; Searle 2002).
When I was an undergraduate, I did a course on cognitive science, and Searle’s work, such as the Chinese room argument, was rightly required reading.
John Searle convincingly argues that external behaviour of computers – no matter how sophisticated – does not provide any good evidence that computers really have conscious thought as humans do.
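A toy illustration may help here. The little script below (purely illustrative, written in the spirit of Weizenbaum’s ELIZA program; the patterns and replies are my own invention) produces superficially conversational answers by manipulating symbols according to rules, with no grasp of meaning whatsoever. Far more sophisticated versions of the same trick could, in principle, fool a human interlocutor without any conscious understanding being present:

```python
import re

# ELIZA-style responder: purely syntactic pattern matching.
# It shuffles symbols by rule, with no understanding of their meaning --
# a minimal illustration of the point that rule-following external
# behaviour is not evidence of conscious thought.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

def respond(utterance: str) -> str:
    """Return a canned reply by matching surface patterns only."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."

print(respond("I am worried about machines."))  # Why do you say you are worried about machines?
print(respond("Hello."))                        # Tell me more.
```

The program “converses” only in the sense that a lookup table “answers questions”: everything it does is blind symbol manipulation, which is precisely what Searle’s Chinese room argument says a digital computer is confined to.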
This is why the “computer” metaphor for the brain is potentially misleading. There is no doubt that computers are based on information processing, and in some sense this is also what the brain does.
But it does not follow that the consciousness of the human mind is just information processing that can be reproduced in other synthetic materials, such as in silicon chips in digital computers. After all, DNA and its behaviour exhibit a type of information processing and storage as well, but DNA is not conscious.
As Searle points out, all the empirical evidence suggests that consciousness is a biological phenomenon in the brain, causally dependent on neuronal processes and biochemical activity, but one that can be explained by physicalist science, not by discredited supernatural ideas about souls or Cartesian dualism.
The crucial term here is biological: consciousness is a biological property of complex living systems like humans and higher animals.
Exactly what causes it and how it emerges is a profoundly difficult scientific question, but it is fairly clear that consciousness is a complex emergent property of the brain, its neurons and biochemical processes.
If that is so, then no matter how sophisticated any digital computer is, it remains as unconscious and unfeeling as your washing machine or pet rock. Once this is accepted, it follows that many of Ray Kurzweil’s more outlandish claims are ridiculous, such as the claim that a human being could transfer his or her conscious mind into a digital computer. Perhaps one could create a believable simulation of a human mind with a digital computer, but, again, it is extremely unlikely that such a thing would be conscious.
I have to stress that this is not some obscurantist, religious objection to artificial intelligence (I am completely non-religious): it is grounded in good science.
As John Searle has argued, science may well be capable of creating truly conscious and self-aware artificial intelligences in the future, but it is unlikely that they will be digital computers.
An artificial intelligence will have to directly reproduce or replicate the biological processes in the brain that cause consciousness. Perhaps an “artificial” intelligence – in the sense of not being a normal human being – will need to have organic or biochemical structures in its “brain” in order for it to be fully and truly conscious.
Such entities, if they were fully conscious, would create all sorts of ethical issues. They would probably have to be imbued with moral/ethical principles as humans are, for example. Probably they would have to be granted some kind of human rights at some point, so that we could not treat them as slaves. And what actually would they do? What work would they perform?
I suspect virtually all the work of production – especially the difficult, unpleasant, and backbreaking work that humans hate – can one day be done by machines. But what we want for this task is unthinking, unfeeling, and unconscious machines: machines that can be treated like slaves with no ethical problems arising. For example, nobody needs to worry that the household washing machine is being mistreated or exploited, because such concepts do not, and cannot, apply to unthinking and unfeeling machines. But with a truly conscious artificial intelligence, such ethical questions would suddenly arise.
But such musings can only remain speculations, and the whole issue of whether truly conscious artificial intelligence can be created is a matter for a future science that has first completely mastered what human (and higher animal) consciousness actually is.
Gary Marcus, “Ray Kurzweil’s Dubious New Theory of Mind,” The New Yorker, 15 November 2012.
Ford, Martin. 2009. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. Acculant Publishing.

Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books, New York.

Kurzweil, Ray. 2005. The Singularity is Near: When Humans Transcend Biology. Viking, New York.

Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–424.

Searle, John R. 1980. “Intrinsic Intentionality,” Behavioral and Brain Sciences 3: 450–456.

Searle, John R. 1982. “The Chinese Room Revisited,” Behavioral and Brain Sciences 5: 345–348.

Searle, John R. 1990. “Is the Brain a Digital Computer?,” Proceedings and Addresses of the American Philosophical Association 64: 21–37.

Searle, John R. 1992. The Rediscovery of the Mind. MIT Press, Cambridge, Mass. and London.

Searle, John R. 2002. “Why I Am Not a Property Dualist,” Journal of Consciousness Studies 9.12: 57–64.