The AI Doomers Are Getting Doomier
- sciart0
- Aug 22
Excerpt: The past few years have been terrifying for Nate Soares and Dan Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue, with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. “We’ve run out of time” to implement sufficient technological safeguards, Soares said; the industry is simply moving too fast. All that’s left to do is raise the alarm.
In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for its preparations against the most severe existential threats posed by AI.