Hawking expresses concerns on AI, saying “such powerful systems would threaten humanity”

Stephen Hawking, the world-renowned physicist at the University of Cambridge, made a virtual appearance on Thursday at GMIC Beijing 2017, one of the world’s major tech conferences. In a prerecorded video interview played to thousands of attendees, Hawking discussed both the promise AI holds for humanity and the environment and the risks it poses.

“I believe that the rise of powerful AI will be either the best thing or the worst ever to happen to humanity,” said Hawking in the video. “But we should do all we can to ensure that its future development benefits us and our environment,” he added.

Hawking expressed his concerns about the societal and environmental impacts that AI may have on humans. “The progress in AI research and development is swift. And perhaps we should all stop for a moment, and focus on our research. Not only on making AI more capable, but on maximizing its societal benefit,” he said.

He highlighted, in particular, the consequences of creating something that could match or surpass humans. “AI would take off on its own and redesign itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Hawking’s prerecorded interview shown during GMIC Beijing 2017. Photo from GMIC Beijing 2017.

Hawking said that one of his short-term concerns is autonomous vehicles. “A self-driving car may, in an emergency, have to decide between a small risk of a major accident, and a large probability of a small accident,” he said.

“Other issues include privacy concerns, as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI,” said Hawking.

According to Hawking, it would be devastating if AI products failed to operate according to human will and slipped out of control. “Long term concerns comprise primarily the potential loss of control of AI systems via the rise of super-intelligence that does not act in accordance with human wishes, and that such powerful systems would threaten humanity,” said Hawking.

He added that existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this problem.

Despite these concerns, Hawking encouraged scientists and innovators of the next generation to continue to explore. “We stand on the threshold of a brave new world,” he said. “It is an exciting, if precarious, place to be, and you are the pioneers.”

(Top photo from GMIC Beijing 2017)

Timmy contributes at AllChinaTech. He's passionate about photography, education, food and all things tech. He holds a master's degree from Columbia University Graduate School of Journalism.