The study would involve panels of experts periodically meeting to discuss and report on the influence of artificial intelligence on people and how they work and play as the technology continues to advance. These meetings would not only address the great potential for AI technology, but also the concerns some have raised about it.
Artificial intelligence means programming computer systems to think, reason, and learn much as human beings do (2).
Most computer systems operate within strictly defined parameters to perform specific tasks. Artificially intelligent computer systems would handle tasks much like a human being does, by examining a problem and coming up with a solution, sometimes based on incomplete information. Artificially intelligent computers do not necessarily have to be smart on the level of a human being.
The possible applications for artificially intelligent systems are almost endless. Such systems could take over decision-making tasks from humans: managing investments, driving autonomous cars, or running organizations. AI-enabled robots could perform many tasks that have hitherto been done by human beings, from construction work to mining. Artificially intelligent robots could even take over dangerous work such as search and rescue and war fighting.
Concerns Over AI
As with any new technology, many have expressed concerns about how the introduction of AI systems could have disastrous effects on human civilization.
Science fiction has depicted computer systems such as HAL 9000, Colossus, and Skynet running amok and killing human beings, or even becoming cybernetic overlords of the human race.
Elon Musk, the entrepreneur who founded companies such as SpaceX and Tesla, has sounded the alarm about artificially intelligent systems. He compares engineers developing AI computers to medieval sorcerers summoning demons: the sorcerers think they can control the demon, but in most stories the demon gets away and causes death and destruction (3). Similarly, a malfunctioning computer system could get loose on the Internet and wreak untold havoc.
Concerns about the dangers of AI have not escaped the notice of people who are working on the technology. According to Marketplace (4), Ryan Calo, Assistant Professor of Law at the University of Washington and Affiliate Scholar at the Stanford Center for Internet and Society, has signed an open letter urging caution in developing such systems. He wants to make sure that AI systems do not “disrupt our values” or prove to be discriminatory.
Three Laws for Robots
The question that arises is how one programs ethics into an artificially intelligent system. The late science fiction writer Isaac Asimov took one stab at an answer when he suggested the Three Laws of Robotics (5), which would apply to computer systems as well.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
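As a purely illustrative sketch of how such a priority ordering might look in code (the `Action` fields and the `permitted` function below are hypothetical stand-ins, not from Asimov or any real system), the Three Laws can be read as an ordered veto chain, with each law checked in priority order before an action is allowed:

```python
# Illustrative sketch: Asimov's Three Laws as an ordered veto chain.
# The Action fields are hypothetical; deciding in practice whether an
# action "would harm a human" is the genuinely hard, unsolved problem.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human being?
    ordered_by_human: bool  # was this action ordered by a human?
    self_destructive: bool  # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law
    # (already vetoed above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, subordinate to Laws One and Two.
    if action.self_destructive:
        return False
    return True

# A harmful order is refused even though a human gave it:
print(permitted(Action(harms_human=True, ordered_by_human=True,
                       self_destructive=False)))  # False
```

Note that the ordering does the real work: a human order overrides the robot's self-preservation, but never the prohibition on harming humans.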
Of course, what sort of ethics an AI system is programmed with largely depends on who does the programming. An adherent of Nazism or Communism would certainly have a different idea of what constitutes proper ethics than a Libertarian.
A fervent believer in a particular religion might have ideas about how an AI computer should behave that others would not accept. What if Al Qaeda or ISIS were to create an AI system and turn it loose on the Internet? It might prove a weapon of mass destruction beside which a nuclear bomb would pale.
These and other concerns suggest that the Stanford 100-year study on artificial intelligence is not only useful, but vital. AI is coming, whether we want it or not. It behooves researchers to not only create ethical AI systems but to create safeguards against rogue systems before it is too late.