“There are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”
This astonishing statement was even more astonishing at the time it was made by Herbert Simon in 1957. Astonishing, and as we now know, wrong, or at least premature. The “visible future” alluded to is no more visible now than it was then. Simon (who was later awarded the Nobel Prize in Economics for his work in operations research) clearly let his optimism get the better of him. This optimism was based on promising early results in artificial intelligence (AI), in which many seemingly difficult problems were rather quickly solved, and the way seemed clear to the grand goal of full human-level machine intelligence. It soon became apparent, however, that the problems remaining to be dealt with were extremely difficult. Simon's statement has become something of an embarrassment to most researchers in AI, who have since generally tried to distance themselves from it and to speak only in vague terms of when this holy grail will be attained.
At least one researcher, however, has no such qualms. In his recently published book, The Singularity is Near (Viking, 2005), Ray Kurzweil predicts that a “singularity” will be reached in the next couple of decades, when computers will first equal and then soon far surpass human abilities in all fields of thought. Humans will not be left behind, however. We will augment our physical, mental and sensory abilities with increasingly sophisticated implants and plugins, so that the distinction between human and machine will fade away and become irrelevant. In this vision, called “transhumanism,” we will become computers and they will become us.
Who is Ray Kurzweil, and why do some very smart people take him seriously? He is no armchair futurologist. He has worked for many years in the field of artificial intelligence and has numerous inventions to his credit, including a print-to-speech reader for the blind, a music synthesizer, and a speech recognition system. So he is well acquainted with what computers can and can't do at present. Another quality that sets his predictions apart from most others is his use of quantitative estimates. His predicted dates for events such as a $1000 laptop equalling the computing power of a human brain are found by extrapolating charts of existing trends, and are not mere wishful speculation.
Kurzweil is also an unsinkable optimist. He is confident that the rapid pace of improvement that we have seen in both computer hardware and software in recent decades will continue indefinitely, and even increase. At the heart of this argument is what he calls the “Law of Accelerating Returns.” In simplest terms, it posits that technology feeds back into its own development, so that the rate of progress, the rate at which new inventions and new capabilities are introduced, continually increases. One can find this law operating even in primitive technologies. For instance, the control of fire made possible the metallurgy of bronze and iron, which in turn enabled the invention of new and more sophisticated machines. But Kurzweil argues that this feedback phenomenon is especially marked with information technology. For instance, chip design software running on powerful computers enables the design of even more powerful chips. Thus each generation of computing technology is crucial to the design of the next.
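The mathematical core of this argument can be made concrete with a small illustration (mine, not Kurzweil's): if each generation of technology improves capability by a fixed increment, growth is linear; if each generation's improvement is proportional to the capability already attained, growth is exponential, and the gap between the two becomes enormous after only a few dozen generations. The starting values and rates below are arbitrary, chosen only to show the shape of the two curves.

```python
def linear_growth(start, step, generations):
    """No feedback: each generation adds a fixed increment."""
    c = start
    for _ in range(generations):
        c += step
    return c

def accelerating_growth(start, rate, generations):
    """Feedback: each generation improves in proportion to itself,
    as when today's chips are used to design tomorrow's."""
    c = start
    for _ in range(generations):
        c += rate * c
    return c

# After 50 "generations", the feedback model dwarfs the fixed-step one.
print(linear_growth(1.0, 1.0, 50))        # 51.0
print(accelerating_growth(1.0, 0.5, 50))  # roughly 6.4e8
```

The point of the comparison is not the particular numbers but the qualitative difference: any positive feedback rate eventually overwhelms any fixed rate of progress, which is why Kurzweil's extrapolations bend upward rather than running straight.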
Now, this law is not especially novel. What makes Kurzweil's claim special is his prediction that in the near future this feedback process will cross a threshold where the technology becomes, in a sense, independent of its creators. Human designers will no longer need to play a role in the process, and computers will be able to continue their evolution on their own. This will free them from the limits set by the fixed capacity of human nature.
What is wrong with this picture? Technology does indeed have tremendous potential for further growth before it reaches fundamental limits set by the laws of physics. In a famous 1959 talk at Caltech titled “There's Plenty of Room at the Bottom,” Richard Feynman argued that the amount of information and computational capacity that can be packed into a small space is vastly larger than the technology of his day allowed. In the nearly half-century since that talk, great progress has been made in miniaturization, but there is still much room for further improvements. Kurzweil may be too sanguine when he posits that subatomic particles will provide a basis for computation using features far smaller than atoms (most physicists do not foresee this happening anywhere outside of a neutron star), but he is correct that clever new ideas and steady engineering progress may be able some day to produce a Pentium-class computer no larger than a grain of dust or a tiny robot the size of a human cell.
No, the problem with his prediction that human-level machine intelligence is right around the corner is not with the hardware. Kurzweil has fallen prey to the same mistake that Simon made 50 years ago, of assuming that all that is needed to attain true intelligence in machines is to scale up existing systems a bit. In fact what is needed is a qualitative, not a quantitative leap. The technological feedback loop Kurzweil is relying on will not help us to get past this obstacle. It is a step that will require profound new insights into the nature of intelligence. Such insights cannot come from machines that have not already reached this point: they will have to come from us plain old-fashioned humans.
Research in AI has indeed yielded many remarkable results, but always in circumscribed realms and with limited aims. Getting a computer to play chess at the grandmaster level was once regarded as a benchmark that would prove computers could think. Now that computers do play chess at that level, no one considers this a benchmark for intelligence any more. Computers do not play chess by understanding the game but primarily by searching massively among the possible moves and counter-moves to find the most promising one. Even for a seemingly simple task like reading text out of a page image (optical character recognition or OCR), computers have far higher error rates than grade-school children.
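The search strategy behind chess programs can be sketched in a few lines. The heart of it is the minimax rule: assume both sides play optimally, and evaluate a position by looking ahead at every reply and counter-reply. The toy "game tree" below is hand-built, with leaf scores from the first player's point of view; a real chess engine generates the tree from the rules of the game and prunes it aggressively (alpha-beta pruning), but the principle is the same: no understanding, just exhaustive look-ahead.

```python
def minimax(node, maximizing=True):
    """Best achievable score from `node`, assuming optimal play.
    A node is either a leaf score or a list of child nodes."""
    if isinstance(node, (int, float)):   # leaf: a terminal evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies deep: we choose a move (max), the opponent replies (min).
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # 3: the opponent always steers toward our worst reply
```

Note that the tempting first branch (it contains the 12, and the third contains a 14) is correctly rejected: the opponent will never allow those outcomes. This mechanical pessimism, scaled up to millions of positions per second, is what "plays chess" in a computer.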
Philosophers such as Hubert Dreyfus, and even many AI researchers, notably Rodney Brooks, have argued that when humans tackle many, perhaps most, real-world cognitive tasks, they do not construct any formal theories or representations in their minds, but simply cope using low-level reactions to stimuli together with implicit knowledge. If this is the case, then efforts to replicate human thinking by constructing computer models of representations of reality are doomed to failure. These efforts constitute the physical symbol-system approach to AI begun in the 1950s by researchers such as Simon, Allen Newell, John McCarthy and others.
Kurzweil recognizes that the symbol-system approach has its limitations and that other strategies may be needed to achieve his goal. An alternative approach is the connectionist style of problem solving. A prime example of this style is the artificial neural network, a system of simulated, simplified neurons connected together in a fashion that captures some of the features of biological nervous systems. Neural networks have proven remarkably successful in tackling pattern-matching problems like OCR that have resisted the symbol-system approach. However, they are unsatisfying to advocates of human-level AI for two reasons: first, although they are effective, they become so by a learning process that ends up obscuring how they actually achieve their results. Second, it is not clear how these sorts of systems can be adapted to more sophisticated tasks such as understanding language or planning strategies.
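A minimal sketch of the connectionist idea (my illustration, not an example from the book): a single artificial neuron that learns the logical AND pattern from examples, using Rosenblatt's classic perceptron training rule, rather than being programmed with an explicit rule. After training, its "knowledge" is encoded entirely in a few numeric weights, which is effective but opaque in exactly the way described above.

```python
def step(x):
    """Threshold activation: the neuron fires (1) or it doesn't (0)."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule: nudge each weight in the direction of the error."""
    w = [0.0, 0.0]   # one weight per input
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + bias)
            err = target - out
            w = [w[0] + lr * err * x1, w[1] + lr * err * x2]
            bias += lr * err
    return w, bias

# The four AND examples: output is 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train_perceptron(and_samples)
print([step(w[0] * x1 + w[1] * x2 + bias) for (x1, x2), _ in and_samples])
# prints [0, 0, 0, 1]
```

Scaled up to thousands of neurons and real data such as scanned letterforms, the same learn-from-examples procedure is what makes neural networks good at OCR; but the trained weights no more "explain" the solution here than they do there.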
One of Kurzweil's favorite ideas for making an end-run around the software problem is the brain simulator. After all, he argues, we have one working example of a design for an intelligent machine right between our ears. If we can reverse-engineer this design the way computer engineers figure out how their competitors' products work, we can build a working model, and then improve on it. Some progress on this project has already been made. Neurophysiologists have analyzed the neural circuitry responsible for some low-level aspects of aural and visual perception, and produced computer models that reproduce features of these processes, and are even subject to the same illusions. However, this task will become progressively more complex as it works towards higher-level cognitive processes involving ever larger numbers of neurons cooperating in ever subtler interactions. A complete understanding of how the brain works is clearly not a near-term possibility, if it is possible at all.
In the end, one comes away from reading Kurzweil with the suspicion that he hopes mind will somehow simply emerge from the mechanism once the mechanism is sufficiently speedy and complex. Coming up on 50 years after Simon's astonishing and embarrassing predictions, the secret of human intelligence and creativity remains as mysterious as ever, and as elusive to capture in a machine.