In an article in the New York Times on July 26 titled "Scientists Worry Machines May Outsmart Man," reporter John Markoff notes that a "group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems..." He goes on to note that "Their concern is that further advances could create profound social disruptions and even have dangerous consequences." While I acknowledge that advanced technology creates many new opportunities for its malevolent use, I believe that the concern about machines taking over is overstated.
Ever since the dawn of the field of artificial intelligence, there has been speculation about machine intelligence surpassing human intelligence and somehow reversing the master-servant relationship. It is not that I have some overwhelming faith in humanity's ability to prevail (though for the most part I do). I believe that the problem is not one of technological capability but rather one of the inappropriate human exploitation of that capability. This makes it not a technological problem but a societal one.
The computer scientists mentioned in the article expressed the concern that "technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors." While I'm not certain about the copying of human behaviors, which if true could turn out to be AI's Achilles' heel, I fail to see how such human adaptation is any different from that which has occurred for hundreds if not thousands of years. Moreover, this concern sounds very similar to that once expressed about another emerging technology:
"_________________, if they succeed, will give an unnatural impetus to society, destroy all the relations that exist between man and man, overthrow all mercantile regulations, and create, at the peril of life, all sorts of confusion and distress."
What was this writer speaking about? Television? Radio? Internet? No. This quote was from an English magazine editor in 1835 discussing the coming of the railroad. Perhaps we haven't learned as much as we think we have. And, as great as our advances in computational ability have become, one should not confuse this with intelligence.
A simple definition of intelligence is the ability to acquire and utilize knowledge, especially toward some useful goal. In his Multiple Intelligences Theory, psychologist Howard Gardner identified seven different types of intelligence: Linguistic, Logical-Mathematical, Bodily-Kinesthetic, Spatial, Musical, Interpersonal, and Intrapersonal. AI researchers, on the other hand, have developed a completely different classification of intelligence types -- one based more on machine abilities. And while science has made great advances in these areas of machine-hosted AI, the ability to mimic human behavior and intelligence is still limited.
Another recent New York Times article, "In Battle, Hunches Prove to Be Valuable," describes how the most high-tech gear "remains a mere supplement to the most sensitive detection system of all -- the human brain." It describes how a soldier's experiential knowledge, depth perception, and focus create an almost uncanny ability to perceive and respond to dangerous situations. In spite of the many advances in machine knowledge, AI still does not have the ability to mimic this sensory perception and decision-making capability, partly because we cannot fully explain it ourselves.
With that said, at the end of the day, the danger seems not so much that machines may outsmart man. Rather, the danger is that man may deploy technology inappropriately, thereby outsmarting himself.