ASU Origins Project Hosted AI Doomsday Workshop
To explore these doomsday scenarios, the ASU (Arizona State University) Origins Project hosted a weekend workshop in February 2017. The ASU Origins Project's research focuses on “origins-related issues” and strives to increase the public's “scientific literacy” by offering various “curricular initiatives and public discussions.” (1)
The weekend workshop, “Envisioning and Addressing Adverse AI Outcomes,” was funded by two technology luminaries: Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn.
Distinguished Workshop Leaders and Participants
Bloomberg Technology reported that the possible AI futures under consideration ranged up to scenarios in which “humans are enslaved to an evil race of robot overlords.” The workshop of more than 40 scientists, cyber-security specialists, and policy experts, led by “Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss,” also sought ways to prevent these and other doomsday AI scenarios from happening (2).
Horvitz works for Microsoft Research as a Technical Fellow and Managing Director. His primary research is on “principles of machine intelligence and on leveraging the complementarities of human and machine reasoning.” (3)
No self-respecting technology doomsday scenario workshop would be without a representative from the Bulletin of the Atomic Scientists’ Doomsday Clock. In addition to serving as chair of the Bulletin’s Board of Sponsors, Krauss, a theoretical physicist, is also the director of the ASU Origins Project (4).
The world-renowned Doomsday Clock is designed to demonstrate how close the world is to “destroying our civilization with dangerous technologies of our own making.” (5)
AI Gone Wild Scenarios
The attendees were split into two teams: a red team that played the out-of-control AI attackers, and a blue team that played the humans resisting and defending against the assault.
Attendees submitted worst-case but realistic scenarios, grounded either in current technologies or in ones that seem likely to be developed within the next 5 to 25 years.
Each winning scenario was taken up by panels of roughly four experts per team, who discussed how the AI would conduct its attack and how humans could prevent the assault.
Various Scenarios Argued
Some of the scenarios were less elaborate, such as cyberattacks or the use of technology to influence elections in various governments.
Another very near-future scenario centered on self-driving vehicles: Horvitz described one in which vehicle technology might be tampered with so that a car's AI misread a stop sign as a yield sign.
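The stop-sign attack Horvitz describes is essentially what machine-learning researchers call an adversarial example: a tiny, targeted nudge to the input that flips a model's classification. A minimal sketch of the idea using a toy linear classifier (the weights, feature size, and class labels here are illustrative assumptions, not a real vision system):

```python
import numpy as np

# Toy "sign classifier": a linear model scoring two classes,
# 0 = STOP and 1 = YIELD. The weights and input are random
# stand-ins for illustration, not a real perception model.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))        # 2 classes x 16 input features
x = rng.normal(size=16)             # feature vector for a "stop sign"

def predict(features):
    return int(np.argmax(W @ features))

original = predict(x)               # the model's original reading

# FGSM-style attack: push the input along the direction that raises
# the wrong class's score relative to the predicted one, using just
# enough perturbation (epsilon) to cross the decision boundary.
grad = W[1 - original] - W[original]
margin = (W[original] - W[1 - original]) @ x
epsilon = 1.01 * margin / np.abs(grad).sum()
x_adv = x + epsilon * np.sign(grad)

print(original, predict(x_adv))     # the small nudge flips the label
```

The unsettling part, and the reason the workshop scenario is plausible, is that the perturbation can be far smaller than anything a human driver would notice.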
Perhaps the most elusive form of cyberattack was a proposed scenario in which an AI could hide itself and elude “all attempts to dismantle it.” This scenario also involved an imaginary virus, designed for a specific cyber target such as the Iranian nuclear program, that grew out of control and infected the Internet. Other scenarios included stopping technological manipulation of the stock market or a possible nuclear war.
Dystopian Future and Solutions
These and other doomsday scenarios engaged the two teams, but according to the Bloomberg Technology report, the blue team (the defenders) did not fare as well as the red team. The participants held sharply opposing views about possible AI futures and humankind: some expressed dystopian outlooks, while others, like Horvitz, believed public confidence could be earned by addressing concerns about AI technology before problems arise.
In its report about the workshop, Bloomberg Technology quoted Horvitz saying, “There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology.”
Can humans outsmart human-created technology or is each new technological advancement simply a step closer to one or more of these proposed doomsday scenarios?
References & Image Credits: