Showing posts with label robots.

Monday, February 23, 2015

Automation and Robots in the News

Interesting news on the rise of automation and robots both in the West and in China:
Paul Davidson, “More robots coming to U.S. factories,” USA Today, February 10, 2015.

Georgina Prodhan, “China to have most robots in world by 2017,” Reuters, February 6, 2015.

Paul Wiseman, “Robots Replacing Human Factory Workers at Fast Pace,” CIO Today, February 22, 2015.
While only about 10% of manufacturing tasks in American factories are automated today, some analysts predict that this will rise to about 25% by 2025, a decade from now. But, more importantly, what will the percentage be in 2035 and 2050? Presumably much higher.

The upside of this will be increased productivity and what has been dubbed “reshoring,” or the return of manufacturing to the Western world. If this happens on a large enough scale in the long run, that in turn means lower trade deficits and a larger manufacturing sector and output for nations with mass consumer markets in North America and Europe.

But we can’t see the full effects of this today and the downsides, partly because robotics has been – for a long time – an overrated field with many robots being just expensive frivolous toys. All that is changing, as anyone reading the news can see.

A lot of neoclassical economists and those under their influence see no problem with mass automation, but that is only because neoclassical economics is deeply flawed and mistaken in its core principles.

Economies run on orthodox neoclassical theory are likely to have chronic problems of insufficient aggregate demand and mass structural unemployment as automation in production soars and even service and white collar work can be done by artificial intelligence (AI).

Market economies have no effective and reliable tendency to full employment equilibrium, and there is no necessary reason to think that the issue of structural unemployment will be solved by markets.

Worse still, falling prices and factor input costs could produce general price deflation that puts downward pressure on wages in the future, which means debt-deflationary problems as goods prices, wages, nominal debt and asset prices become grossly distorted in relation to one another.
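The arithmetic behind the debt-deflation worry is simple: debt contracts fix payments in nominal terms, so if wages fall with the price level, the share of income eaten by debt service rises. A minimal sketch, using purely hypothetical numbers for illustration:

```python
# Illustrative arithmetic (hypothetical numbers, not a forecast):
# nominal debt payments are fixed by contract, so when wages fall
# with the price level, the real burden of servicing debt rises.

def debt_service_ratio(annual_payment, annual_wage):
    """Fraction of income consumed by fixed nominal debt payments."""
    return annual_payment / annual_wage

payment = 10_000                  # fixed nominal annual debt payment
wage_before = 50_000              # annual wage before deflation
wage_after = wage_before * 0.90   # wages fall 10% with the price level

print(debt_service_ratio(payment, wage_before))  # 0.2 -> 20% of income
print(debt_service_ratio(payment, wage_after))   # ~0.222 -> burden rises
```

The point is that nothing in the debtor's contract adjusts downward with wages, which is exactly the Fisher-style distortion between nominal debt and incomes described above.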

In the late 19th century from 1873 to 1896, for example, there was a period of chronic deflation, probably caused to a great extent by positive supply shocks and technological advancement, but the effects on investment, confidence and the economy at large were deleterious (the evidence can be seen here, here, and here).

Economic policies will need to change in the future to reflect the realities of production by mass automation.

Government welfare will have to be reconsidered, not as some safety net, but as a basic human right in an age of automation: in essence, everyone will need a basic guaranteed income from the state.

Many – and probably most – people will wish to work in addition to this, but for those who cannot find work in the private sector, the traditional policy of Keynesian fiscal stimulus will become weaker and weaker in its effects as more and more work is done by machines and artificial intelligence.

Eventually, governments will need direct mass employment programs to create economically and socially useful work, e.g., in social services where real people can never really be displaced, and in education, sciences, research and development, or other services.

Perhaps I just haven’t read the literature well enough, but I am surprised that Post Keynesian economists haven’t taken these issues more seriously in their academic writing.

Monday, February 24, 2014

Limits of Artificial Intelligence

Futurist and inventor Ray Kurzweil – author of such books as The Age of Spiritual Machines (1999) and The Singularity is Near: When Humans Transcend Biology (2005) – is the subject of a somewhat breathless interview in the Observer:
Carole Cadwalladr, “Are the Robots about to rise? Google’s New Director of Engineering thinks so…,” The Observer, 23 February 2014.
Something that looks uncomfortably like a cult has arisen around Kurzweil’s idea of “the Singularity,” which is “a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.”

Of course, there seems little doubt that the rise of increasingly sophisticated machines such as computers and robots will revolutionise economic and social life in the decades to come.

Indeed, a substantial part of the industrial revolution consisted simply of the profound changes in production brought about by labour-saving machines.

But the economic problems that are, and will be, created by increasing automation, structural unemployment and loss of aggregate real income should be of profound concern to economists. Indeed, an interesting warning of the possible economic problems that could be caused by technology is given, not by an economist, but by the computer engineer Martin Ford in his The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009) (though I think some of his predictions are too dire).

But, to return to my main point, this interview with Kurzweil contains so many questionable ideas, it is difficult to know where to start.

Take the central idea:
“Ray Kurzweil … [who] believes that … computers will gain what looks a lot like consciousness in a little over a decade … is now Google’s director of engineering.”
But if we read the whole interview and Kurzweil’s statements elsewhere, it appears that Kurzweil thinks that computers will actually attain consciousness, not just simulate it.

There is a fundamental difference between (1) merely simulating something and (2) actually having that property.

The crude behaviourist Turing test, for example, does not test for whether an entity has conscious life, but merely whether it simulates human intelligence. Thus even if future computers all start passing Turing tests, it is not going to be some shocking milestone in human history: all it will show is that software programs have become sophisticated enough to fool us into thinking machines have conscious minds as we do, even though they do not.
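This purely behavioural character of the test can be made vivid with a toy sketch (hypothetical and drastically simplified, purely for illustration): the judge's only evidence is a transcript of questions and answers, so even a canned lookup table can pass, while the question of whether anything is actually experienced never enters the procedure at all.

```python
# A deliberately trivial "respondent": a lookup table with a fallback.
# Assumed names (CANNED_REPLIES, machine_respondent, judge) are
# invented here purely for illustration.
CANNED_REPLIES = {
    "How are you?": "Fine, thanks. And you?",
    "Are you conscious?": "Of course I am!",
}

def machine_respondent(question):
    return CANNED_REPLIES.get(question, "That's an interesting question.")

def judge(transcript):
    # The judge sees only (question, answer) pairs -- pure external
    # behaviour. Nothing here can test whether the respondent actually
    # experiences anything; it can only score the replies.
    if all(answer.strip() for _, answer in transcript):
        return "human-like"
    return "machine-like"

transcript = [(q, machine_respondent(q)) for q in CANNED_REPLIES]
print(judge(transcript))  # "human-like" -- behaviour alone decides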

This point has been brilliantly made by the analytic philosopher John Searle, who, in my view, has written outstanding work on philosophy of mind (e.g., Searle 1990; Searle 2002; Searle 1992).

When I was an undergraduate, I did a course on cognitive science, and Searle’s work, such as the Chinese room argument, was rightly required reading.

John Searle convincingly argues that external behaviour of computers – no matter how sophisticated – does not provide any good evidence that computers really have conscious thought as humans do.

This is why the “computer” metaphor for the brain is potentially misleading. Brains are often compared to computers, and there is no doubt that computers are based on information processing, which in some sense is also what the brain does.

But it does not follow that the consciousness of the human mind is just information processing that can be reproduced in other synthetic materials, such as in silicon chips in digital computers. After all, DNA and its behaviour exhibit a type of information processing and storage as well, but DNA is not conscious.

As Searle points out in the video below, all the empirical evidence suggests that consciousness is a biological phenomenon in the brain causally dependent on neuronal processes and biochemical activity, but one that can be explained by physicalist science, not by discredited supernatural ideas about souls or Cartesian dualism.



The crucial term here is biological: consciousness is a biological property of complex living systems like humans and higher animals.

Exactly what causes it and how it emerges is a profoundly difficult scientific question, but it is fairly clear that consciousness is a complex emergent property of the brain, its neurons and biochemical processes.

If that is so, no matter how sophisticated any digital computer is, it remains as unconscious and unfeeling as your washing machine or pet rock. Once this is accepted, it follows that a lot of Ray Kurzweil’s more outlandish claims are ridiculous, such as, for example, that a human being could transfer his or her conscious mind into a digital computer. Perhaps you could create a believable simulation of a human mind with a digital computer, but again it is extremely unlikely that such a thing would be conscious.

I have to stress that this is not some obscurantist, religious objection to artificial intelligence (I am completely non-religious): it is grounded in good science.

As John Searle has argued, science may well be capable of creating truly conscious and self-aware artificial intelligences in the future, but it is unlikely that they will be digital computers.

An artificial intelligence will have to directly reproduce or replicate the biological processes in the brain that cause consciousness. Perhaps an “artificial” intelligence – in the sense of not being a normal human being – will need to have organic or biochemical structures in its “brain” in order for it to be fully and truly conscious.

Such entities, if they were fully conscious, would create all sorts of ethical issues. They would probably have to be imbued with moral/ethical principles as humans are, for example. Probably they would have to be granted some kind of human rights at some point, so that we could not treat them as slaves. And what actually would they do? What work would they perform?

I suspect virtually all the work of production – especially the difficult, unpleasant, and backbreaking work that humans hate – can one day be done by machines. But what we want for this task is unthinking, unfeeling, and unconscious machines: machines that can be treated like slaves with no ethical problems arising. For example, nobody needs to worry that the household washing machine is being mistreated or exploited, because such concepts do not, and cannot, apply to unthinking and unfeeling machines. But with a truly conscious artificial intelligence, suddenly such ethical questions would arise.

But such musings can only remain speculations, and the whole issue of whether truly conscious artificial intelligence can be created is a matter for a future science that has first completely mastered what human (and higher animal) consciousness actually is.

Further Reading
Gary Marcus, “Ray Kurzweil’s Dubious New Theory of Mind,” The New Yorker, November 15, 2012
http://www.newyorker.com/online/blogs/books/2012/11/ray-kurzweils-dubious-new-theory-of-mind.html


BIBLIOGRAPHY
Ford, Martin. 2009. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. Acculant Publishing.

Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books, New York.

Kurzweil, Ray. 2005. The Singularity is Near: When Humans Transcend Biology. Viking, New York.

Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–424.

Searle, John R. 1980. “Intrinsic Intentionality,” Behavioral and Brain Sciences 3: 450–456.

Searle, John R. 1982. “The Chinese Room Revisited,” Behavioral and Brain Sciences 5: 345–348.

Searle, John R. 1990. “Is the Brain a Digital Computer?,” Proceedings and Addresses of the American Philosophical Association 64: 21–37.

Searle, John R. 1992. The Rediscovery of the Mind. MIT Press, Cambridge, Mass. and London.

Searle, John R. 2002. “Why I Am Not a Property Dualist,” Journal of Consciousness Studies 9.12: 57–64.

Thursday, February 21, 2013

Skidelsky on Robots and Unemployment

An interesting article here:
Robert Skidelsky, “The Rise of the Robots,” Project-syndicate.org, February 19, 2013.
Some people complain that Skidelsky is being a Luddite, but that is utterly unfair, as he does not oppose automation per se.

If you accept that market economies have no tendency to full employment equilibrium, then it follows logically that large-scale automation is most likely to cause serious structural unemployment and a chronic aggregate demand shortfall.

And it will not do to say, “oh, well, enough new jobs will be created designing and maintaining the machines.” For a while they might. But the inexorable march of artificial intelligence means that eventually there is no reason why machines will not design and maintain other machines.

Machines will eventually design, manufacture, test and provide maintenance for new generations of machines, with minimal human supervision.