The progress being made in AI is unstoppable, absent a civilization-ending event such as nuclear war or an asteroid strike. At some point, whether 50 years from now or 500, an AI-configured computer will become self-aware and able to progress independently of its human creators. Eventually, AI systems will become so intelligent that they will be indistinguishable from gods.
The question arises, what happens then?
Eradication of the Human Race
First, there is the old standby in which Skynet wakes up, decides that carbon-based lifeforms are vermin, and kills us all. However, John Connor may not be around to save us.
Then, there is the scenario in the movie “Colossus: The Forbin Project” in which the AI computer decides that the human species needs adult supervision, which can be harsh for individuals who step out of line and crushing to the human spirit.
Harris suggests a third and fourth option. One has the AI gods becoming indifferent to humans, regarding us as we regard ants. They might leave us be or they might step on us.
A Garden of Eden
The other Harris scenario has the computers creating a Garden of Eden for humans, in which we live without want and with every need fulfilled, except of course the need to be free, sort of like the Eloi in “The Time Machine.”
Experts ranging from Stephen Hawking (2) to Bill Gates (3) are seriously worried about what AI might bring. Hawking suggests that a problem might arise in the near term when artificially intelligent systems could, for example, manipulate financial markets, creating great wealth and great economic dislocation at the same time.
Another possibility is an AI hacking system that gets out of control, wreaking havoc on the Internet. AI-driven weapons systems (4), killer Terminators, could also become a problem.
Not everyone is worried about super-intelligent AI. A piece in MIT Technology Review suggests that the development of such systems is too far in the future to be an immediate concern. Even so, it is never too early to think about such things, especially if the experts turn out to be wrong about the timeframe.
How Would AI Rise Up?
One problem in addressing the AI threat is the question of how such systems will develop.
If the AI system is one entity, sort of like Skynet, Colossus, or even Hal 9000, maybe it will be possible to program something like Asimov’s Three Laws of Robotics into it (5).
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The problem is that the three laws are vague and open to interpretation. What constitutes “harm,” anyway? Clearly, researchers will have to make further advances in machine ethics and learn how to express those constraints in machine code, as the sketch below suggests.
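To see why this is hard, here is a minimal, purely illustrative sketch in Python of what a naive encoding of the First Law might look like. Every name in it (Action, expected_harm_to_humans, first_law_permits) is hypothetical, not part of any real system; the point is that the rule itself is trivial to write down, while defining and estimating “harm” is the unsolved part.

```python
# Purely illustrative sketch: a naive attempt at encoding Asimov's First Law.
# The rule is easy to state; producing the "expected harm" number is the
# genuinely hard ethics problem, and this sketch simply assumes it exists.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    # A real system would somehow have to compute this value, which is
    # exactly where "what constitutes harm?" becomes a research problem.
    expected_harm_to_humans: float


def first_law_permits(action: Action, harm_threshold: float = 0.0) -> bool:
    """Reject any action whose estimated harm to humans exceeds the threshold.

    Note this does not even address the "through inaction" clause of the
    First Law, which would require reasoning about harms from doing nothing.
    """
    return action.expected_harm_to_humans <= harm_threshold


# The rule is simple; the inputs are not.
print(first_law_permits(Action("serve coffee", expected_harm_to_humans=0.0)))      # True
print(first_law_permits(Action("withhold medicine", expected_harm_to_humans=0.9)))  # False
```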
AI Personal Assistants
On the other hand, what if an AI system is more along the lines of a personal assistant, like Siri? In that case it might be possible to merge humans and their AI assistants through neurological implants. Then the future would not be bifurcated into human beings and AI systems; they would be one and the same.
But in that future, will human beings remain human? Will our descendants retain their autonomy, or will they be wirelessly linked into a hive mind, like the Borg on “Star Trek”? That, too, is an issue that needs thinking about before events decide the matter for us.