Checking the enthusiasm on superintelligent A.I.
- sciart0
- Jun 14
- 2 min read
Excerpt: "A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI.
For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn’t enough to claim that their AI is the best. All three have recently insisted that it’s going to be so good, it will change the very fabric of society.
Even Meta—whose chief AI scientist has been famously dismissive of this talk—wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg’s dream of AI superintelligence—that is, an AI smarter than we are.
“Humanity is close to building digital superintelligence,” Altman declared in an essay this week, and this will lead to “whole classes of jobs going away” as well as “a new social contract.” Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.
Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren’t buying all that talk.
The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these models are capable of reasoning anywhere close to the level their makers claim."