

Is AI Really Too Dangerous or Can We Control It?

Recently, neuroscientist and philosopher Sam Harris gave a TED talk (1) in which he suggested that the development of artificial intelligence (AI) will, for all practical purposes, doom the human species, at least as we know it.

The progress being made in AI is unstoppable, absent a civilization-ending event such as nuclear war or an asteroid strike. At some point, whether 50 years from now or 500, an AI-driven computer will become self-aware and able to develop independently of its human creators. Eventually, AI systems will become so intelligent as to be indistinguishable from gods.

The question arises, what happens then?




Eradication of the Human Race

First, there is the old standby in which Skynet wakes up, decides that carbon-based lifeforms are vermin, and sets out to kill us all. However, John Connor may not be around to save us.

Then, there is the scenario in the movie “Colossus: The Forbin Project” in which the AI computer decides that the human species needs adult supervision, which can be harsh for individuals who step out of line and crushing to the human spirit.

Harris suggests two more options. In one, the AI gods become indifferent to humans, regarding us as we regard ants. They might leave us be, or they might step on us.

A Garden of Eden

The other Harris scenario has the computers creating a Garden of Eden for humans, in which we live without want and with every need fulfilled, except, of course, the need to be free, rather like the Eloi in “The Time Machine.”

Experts ranging from Stephen Hawking (2) to Bill Gates (3) are seriously worried about what AI might bring. Hawking suggests that a problem might arise in the near term, when artificially intelligent systems could, for example, be able to manipulate financial markets, creating great wealth and great economic dislocation at the same time.

Another possibility is an AI hacking system that gets out of control, wreaking havoc on the Internet. AI-driven weapons systems (4), essentially killer Terminators, could also become a problem.

Not everyone is worried about super-intelligent AI. A piece in MIT Technology Review suggests that the development of such systems is too far in the future to be an immediate concern. Even so, it is never too early to think about such things, especially if the experts turn out to be wrong about the timeframe.

How Would AI Rise Up?

One problem in addressing the AI threat is the question of how such systems will develop.

If the AI system is a single entity, along the lines of Skynet, Colossus, or even HAL 9000, maybe it will be possible to program something like Asimov’s Three Laws of Robotics into it (5).

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is that the three laws are vague and open to interpretation. What constitutes “harm,” anyway? Researchers will clearly have to make further advances in machine ethics and learn how to express ethical principles in machine code.
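To see why this is hard, here is a minimal sketch (all names hypothetical, not from any real robotics system) of the Three Laws as an ordered rule check. Notice that the genuinely difficult part, deciding whether an action actually harms a human, is reduced to boolean flags the caller must supply; that judgment is exactly the ethical computation no one yet knows how to program.

```python
# Hypothetical sketch: Asimov's Three Laws as an ordered rule check.
# The ethical judgments themselves are assumed away as boolean inputs.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    inaction_harms: bool = False    # would NOT acting let a human come to harm?
    ordered_by_human: bool = False  # was this action ordered by a human?
    endangers_self: bool = False    # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm.
    if action.inaction_harms:
        return True  # must act, overriding the subordinate laws below
    # Second Law: obey human orders (harmful orders already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self
```

The sketch "works" only because every hard question ("what counts as harm?") has been pushed onto whoever sets the flags, which is the article's point about the laws being open to interpretation.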

AI Personal Assistants

On the other hand, what if an AI system is more along the lines of a personal assistant, like Siri? In that case, it might be possible to merge humans and their AI assistants through neurological implants. The future would then not be bifurcated into human beings and AI systems; they would be one and the same.

But in that future, will human beings remain human? Will our descendants retain their autonomy, or will they be wirelessly linked into a hive mind, like the Borg on “Star Trek”? That is an issue that needs thinking through as well, before events decide the matter for us.

References:
1. YouTube
2. Genius.com
3. BBC.com
4. TopSecretWriters
5. Gizmodo

Originally published on TopSecretWriters.com
