OpenAI CEO Sam Altman faces backlash for downplaying AGI hype after fueling it for months.
What a way to start my morning: some hot news from the X world!
Sam Altman, the CEO of OpenAI, is currently under fire for posting a tweet that dismisses the hype surrounding Artificial General Intelligence (AGI).
“Cut your expectations by 100x,” says the same guy who not long ago claimed superintelligence was only “a few thousand days” away.
Here’s his actual post on X:
twitter hype is out of control again.
we are not gonna deploy AGI next month, nor have we built it.
we have some very cool stuff for you but pls chill and cut your expectations 100x!
This tweet might seem harmless at first, but it’s drawn a lot of criticism on social media.
Why?
Because a lot of people believe Altman is the one who cranked up the AGI hype to begin with. He basically lit the fire and is now complaining about the smoke.
Let’s rewind a bit.
Over the past several months, Sam Altman has been at the center of the AGI conversation. From cryptic strawberry photos to haikus about the singularity, Altman’s enigmatic posts have fueled speculation and excitement about OpenAI’s progress toward AGI.
Remember the rumors about Q* or the Strawberry model?
And then there’s the blog post. In his Reflections piece, Altman confidently stated that OpenAI is on the path to AGI.
As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.
So, when Altman suddenly tells everyone to “chill” and dial back our expectations, it feels like a slap in the face. After all, it’s not the community that creates hype. It’s the CEO of the most famous AI company in the world.
A quick refresher: AGI stands for Artificial General Intelligence.
Think of it this way: the AI apps we have today, like ChatGPT, Siri, or Google Assistant, are really good at specific tasks. ChatGPT can write essays, Google Assistant can set reminders, and Apple’s Siri can play your favorite playlist. But they’re only good at the things they were built and trained to do. Ask ChatGPT to drive a car, or ask Siri to prove a math theorem, and you’ll hit a wall.
AGI, on the other hand, is the next level.
It’s not just about being good at one thing. It’s about being good at everything. It can learn, reason, and adapt just like a human. It could write a novel, design a rocket, diagnose a disease, or even have a philosophical debate—all without needing to be specifically programmed for each task.
For example, the same AGI system could learn a new language overnight, use it to negotiate a contract the next morning, and then switch to debugging code in the afternoon, all without being retrained for any of those tasks.
Some experts think that achieving AGI is decades away; others believe it’s just around the corner. We still don’t know when it’s coming.
What we do know is that building a system that’s completely autonomous and capable of thinking, learning, and reasoning like a human is an entirely different ballgame.
The backlash isn’t just about hurt feelings. It’s about trust.
When you’re the face of a company like OpenAI, your words carry weight. People look to you for guidance, for hope, for a glimpse into the future. And when you spend months hyping up AGI, only to turn around and say, “Hey, calm down; it’s not happening anytime soon,” it leaves people feeling misled.
Even within OpenAI, there’s tension. Researchers like Noam Brown have tried to temper expectations, saying, “There are good reasons to be optimistic, but plenty of unsolved research problems remain.”
It’s clear that not everyone at OpenAI is on the same page, which only adds to the confusion.
Amidst the controversy, OpenAI has been quietly working on its next big project: the o3 model.
Altman and other OpenAI staff have been actively promoting a series of new AI reasoning models. Last month, the company shared several updates about these models, including news that it’s testing its next-generation model.
The o3 model is designed to tackle complex tasks that require step-by-step logical reasoning. Like its predecessor o1, it relies on a technique OpenAI calls “private chain of thought”: the model works through intermediate reasoning steps internally before producing an answer. o3 pushes this further, spending more compute on that hidden reasoning to plan ahead and solve harder problems.
The early results are impressive. On ARC-AGI (the Abstraction and Reasoning Corpus for Artificial General Intelligence), a benchmark designed to test abstract reasoning on novel problems, o3 scored 87.5% in its high-compute configuration, smashing o1’s score of roughly 32%. It also excelled at competitive programming, reaching an Elo rating of 2727 on Codeforces compared to o1’s 1891.
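If you’re wondering what “chain of thought” even looks like in practice, here’s a rough sketch using the OpenAI Python SDK. To be clear: this is not o3’s private chain of thought (OpenAI hasn’t published how that works, and the reasoning stays hidden inside the model). It’s just the general prompting pattern of asking a model to reason step by step and then showing the user only the final answer. The model name and the ANSWER: format below are my own placeholders.

```python
# A rough illustration of chain-of-thought prompting, NOT OpenAI's internal
# "private chain of thought" (those traces stay inside o1/o3 and the exact
# mechanism isn't public). Pattern: ask for step-by-step reasoning, then
# surface only the final answer to the user.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is a stand-in, not a claim about which model to use.
from openai import OpenAI

client = OpenAI()

def answer_with_hidden_reasoning(question: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to reason step by step, but return only its final answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "Think through the problem step by step. "
                    "Write your reasoning first, then a final line starting with 'ANSWER:'."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    text = response.choices[0].message.content
    # Keep the reasoning "private": show only the final answer line.
    for line in reversed(text.splitlines()):
        if line.startswith("ANSWER:"):
            return line.removeprefix("ANSWER:").strip()
    return text  # Fallback if the model didn't follow the format.

if __name__ == "__main__":
    print(answer_with_hidden_reasoning(
        "A train leaves at 3:40pm and the trip takes 95 minutes. When does it arrive?"
    ))
```

Conceptually, the whole trick is spending extra tokens on intermediate reasoning while keeping that reasoning out of the user-facing reply; o1 and o3 bake a far more sophisticated version of this into the model itself.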
However, despite these advancements, OpenAI has yet to publish detailed information about the specific capabilities of the o3 model or provide a timeline for its release.
Seriously, given OpenAI’s track record, I’m skeptical that o3 will be released anytime soon.
This is a critical moment for OpenAI.
The company has positioned itself as a leader in the AI space, and its actions—or lack thereof—have far-reaching consequences. If Altman wants to regain the trust of the public and the AI community, he needs to adopt a more transparent and consistent approach to communication.
No more cryptic tweets, no more vague promises. Just honest, straightforward updates about OpenAI’s progress and intentions.
People aren’t just mad because Altman told them to “chill.” They’re mad because he spent months hyping up AGI, only to pull the rug out from under them. It’s a classic case of overpromising and underdelivering, and it’s left a lot of people feeling disillusioned.
At the end of the day, we aren’t just investing our time and effort in AI; we’re investing in the future. And we all deserve to know what we’re getting into.