Among the technologists publishing warnings about AI are Elon Musk (“If I were to guess like what our biggest existential threat is, it’s probably that [AI].”), Bill Gates (“I don’t understand why some people are not concerned.”), and Stephen Hawking (“AI could spell the end of the human race.”). Quite alarming, especially when you consider the sources.


Nick Bostrom, a Swedish philosopher, summarizes the existential risks of AI in a clever fable. He explains the possible dual-use consequences of AI in simple, folkloric terms:

A community of sparrows gathers one evening during nest-building season. Among complaints about the difficulty of the work and their own vulnerability, one of them suggests that they go out and find an abandoned owlet or steal an egg from an owl’s nest. The owl could be brought back and reared in their community, and this larger, wiser bird could provide protection and advice for all of them.

Everyone begins chirping about what a great idea this is when a one-eyed sparrow named Scronfinkle says, “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Another called Pastus answers, “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let’s start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

Scronfinkle objects, “There is a flaw in that plan,” but all except two or three sparrows take off to begin the search. It becomes increasingly clear to those remaining how difficult it will be to plan the taming and domestication of an owl, “especially in the absence of an actual owl to practice on.” In Bostrom’s words, “Nevertheless, they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.”

At this point, Bostrom says, it isn’t known how the story ends. He relates the fable in the pages preceding the start of his book and adds, “the author dedicates this book to Scronfinkle and his followers.” The book is Bostrom’s extensive study titled Superintelligence: Paths, Dangers, Strategies.


PLANNING IS UNDERWAY

A two-day workshop was held in February 2017, in Oxford, U.K., to discuss the problem of the returning owlet. The 26 attending experts produced a report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” (Download a PDF copy at https://maliciousaireport.com.) The report addresses a wide range of potential security threats across three domains: digital, physical, and political security. A central theme repeated throughout the report is the immediate, critical need to develop a culture of responsibility in AI research and development.

The study proposes that increasing the use of AI systems will lead to three changes in the landscape of threats:

  1. Expansion of existing threats due to AI reducing the cost of attacks;
  2. Introduction of new threats because AI could complete tasks that would otherwise be impractical for humans; and
  3. Change to the typical character of threats. By this they mean, “We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems.”

Autonomous drones used as weapons, intrusions into manufacturing and service-sector systems such as the energy grid, disinformation campaigns, and conventional distributed denial-of-service (DDoS) attacks could all be amplified by advancements in AI. To mitigate these threats, the report makes four high-level recommendations:

  1. “Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in AI should take the dual-use nature of their work seriously.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns.
  4. [We should] actively seek to expand the range of stakeholders and domain experts involved in the discussion of these challenges.”

Next month, in Part Two, we’ll look at the three fundamental threat-assessment questions: Will machine intelligence ever surpass human intelligence? When might this happen? And what can be done to avoid losing ultimate control?