In a startling revelation, Ilya Sutskever, co-founder of OpenAI and a key figure behind the development of ChatGPT, proposed building a doomsday bunker to protect the company’s top researchers from a potential “rapture” triggered by the advent of Artificial General Intelligence (AGI).
According to a new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao, Sutskever’s concerns about the catastrophic implications of AGI led him to suggest this extreme measure during a 2023 meeting with OpenAI scientists.
This proposal, detailed in an article by the New York Post, underscores the profound anxieties surrounding the development of AI technologies capable of surpassing human cognitive abilities.
At that meeting, held in the summer of 2023, Sutskever reportedly startled colleagues by raising the bunker as a contingency plan.
“Once we all get into the bunker…” he began, only to be interrupted by a confused researcher who asked, “I’m sorry, the bunker?” Sutskever clarified, “We’re definitely going to build a bunker before we release AGI,” emphasizing that it would be optional for researchers to seek refuge in it.
The New York Post reports that this was not a one-off comment; multiple sources confirmed to Hao that Sutskever frequently discussed the bunker in internal conversations, reflecting his deep-seated fears about AGI’s potential to disrupt society on a cataclysmic scale.
Sutskever’s use of the term “rapture” evokes a quasi-spiritual scenario in which AGI’s emergence leads to chaos, potentially involving geopolitical conflict or societal upheaval.
One OpenAI researcher quoted in the book stated, “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture.
Literally, a rapture.” This belief highlights the existential concerns held by some AI pioneers, who fear that a superintelligent AI could outpace human control, leading to unpredictable and potentially catastrophic consequences.
Sutskever, often regarded as the intellectual force behind ChatGPT, has long been known for his philosophical and almost mystical approach to AI.
According to the New York Post, he frequently discussed AI in moral and metaphysical terms, a perspective that shaped his vision of its risks and opportunities.
His concerns about AGI—a theoretical form of AI capable of performing any intellectual task a human can—were compounded by internal tensions at OpenAI.
In November 2023, Sutskever and then-Chief Technology Officer Mira Murati orchestrated a brief boardroom coup that ousted CEO Sam Altman, driven by fears that he was prioritizing commercial growth over safety.
Although Altman was swiftly reinstated, the episode underscored the rift between OpenAI’s original mission to develop AGI for humanity’s benefit and its evolving commercial ambitions.
The New York Post notes that Sutskever’s bunker proposal came at a time when OpenAI was grappling with its rapid rise to prominence.
The success of ChatGPT had propelled the company to a multi-billion-dollar valuation, but it also intensified debates about the ethical and safety implications of AI development.
Sutskever’s suggestion of a bunker reflects a broader anxiety within the AI community about the uncontrollable consequences of AGI. Some experts, including Google DeepMind CEO Demis Hassabis, have warned that AGI could arrive within the next decade and that society may not be prepared for it.