Stephen Hawking’s Warning Just Before He Died: “AI Could Spell the End of the Human Race.”

The late Stephen Hawking was a major voice in the debate over how humanity can benefit from artificial intelligence. Hawking made no secret of his fears that thinking machines could one day take charge. He went as far as predicting in 2018 that future developments in AI “could spell the end of the human race.”

But Hawking’s relationship with AI was far more complex than this often-cited soundbite suggests. The deep concerns he expressed were about superhuman AI, the point at which AI systems not only replicate human intelligence processes but also keep expanding them without human help – a stage that is at best decades away, if it ever arrives at all. And yet Hawking’s very ability to communicate those fears, and all his other ideas, came to depend on basic AI technology.

At the intellectual property and health law centers at DePaul University, my colleagues and I study the effects of emerging technologies like the ones Stephen Hawking worried about. At its core, the concept of AI involves computational technology designed to make machines function with foresight that mimics, and ultimately surpasses, human thinking processes.

Hawking cautioned against an extreme form of AI, in which thinking machines would “take off” on their own, modifying themselves and independently designing and building ever more capable systems. Humans, bound by the slow pace of biological evolution, would be tragically outpaced.