The scientific community is making great strides in the area of artificial intelligence research. IBM has Watson. Apple has Siri. Microsoft has Cortana. Google is making one of the biggest pushes for AI research we have seen in a while. However, not everyone believes that science should be so quick to seek out AI technology.
Elon Musk, a technology guru in his own right, has stated that artificial intelligence could be more dangerous than nuclear weapons.
In a Twitter post in August of this year (2014), Musk tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes.” (1)
Musk cited Nick Bostrom as a source on why we should worry about what would happen to the human race should AI technology be achieved. In Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, humans in the presence of an AI are compared to gorillas in the presence of humans.
“As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.” (2)
Such a comparison was not lost on Musk. However, this was neither the first nor last time that Elon Musk would voice his concerns about the overall risks that an artificial intelligence would pose.
Risks and Dangers of AI
In July of this year, Musk appeared on CNBC’s Closing Bell. During the program, Musk voiced his concerns about the potential risks and dangers that could arise as we near the possibility of an artificial intelligence. Interestingly enough, he also cites these concerns as a motive for investing in the company Vicarious, an artificial intelligence company.
Musk contends that he invested in the company only as a way to “keep an eye on what’s going on” (3) in the industry. Yet, more recently (October 2014), Musk gave a speech at MIT whose main theme appeared to be the dangers of artificial intelligence.
During the speech, Musk stated, “With artificial intelligence we are summoning the demon.” (4) Should we really heed Elon Musk’s warning? Could we be ushering in an Armageddon with the advent of artificial intelligence?
Some scientists believe that Musk’s warnings are a little premature. One of the problems with Musk’s concerns is that we are barely any closer to a true, autonomous, human-like artificial intelligence than we were in the 1960s, when research began.
Beau Cronin, a Salesforce.com product manager working on AI-influenced technologies, told ComputerWorld, “But in many ways, we are no closer to achieving an overall general artificial intelligence, in the sense that a computer can behave like a human.” (5) Yet, what happens when we do get there?
Is AI More of a Threat?
The main problem with assessing the potential dangers of the technology is that the research is still in its relative infancy and we, as a species, do not fully understand it. As with any new science we do not fully understand, the jury is still out.
For every argument opposing artificial intelligence and its research, there is a proponent arguing in its favor. The likes of Stephen Hawking and James Barrat say it could be the beginning of the end, while the likes of Mark Humphreys and David Brin feel that the concern over AI technology is unwarranted.
Humphreys contends that we do not need to worry, because artificial intelligence robots will never happen. (6) Futurist David Brin, meanwhile, contends that the idea of human extinction via AI technology is so ingrained in the human psyche that, if a true AI is ever achieved, it will be watched and regulated to the point of uselessness.
Is artificial intelligence more of a threat than nuclear weapons? Currently, no, it is not. Today, we should be more concerned with nuclear weapons, especially since nukes are not only very real but also at the command of nations’ leaders.
However, if a true super-intelligent AI is ever achieved, it may behoove us to watch and regulate it very closely.