What's with Project Strawberry and how does it relate to OpenAI's upcoming GPT-5 AI model?
A few days ago, my X feed was flooded with images of strawberries, and it wasn’t because people suddenly developed a fascination with summer fruits. It all started when Sam Altman, the CEO of OpenAI, posted a seemingly innocuous tweet featuring a picture of strawberries in his garden with the caption, “I love summer in the garden.”
At first glance, this tweet might seem like a simple celebration of the season, but given Altman’s position and the timing, the AI community quickly began reading between the lines. It’s almost like a ritual now—whenever Altman tweets something that seems trivial, it often ends up being a subtle hint about something much bigger brewing at OpenAI.
People in the AI community were quick to assume that this tweet was a reference to a recently announced internal project at OpenAI known as “Project Strawberry.” According to various reports, this project involves developing AI models with the capability to plan, navigate the internet autonomously, and carry out what OpenAI refers to as “deep research.”
The moment I heard about this, I couldn’t help but wonder: Could this be the model that finally blurs the line between human-like reasoning and machine efficiency? Is AGI here? Is it GPT-5? I have so many thoughts and questions.
So, what exactly is Project Strawberry?
Also known as Q* (pronounced Q-star), this project gained significant attention in late 2023 following the boardroom drama involving Sam Altman and Ilya Sutskever, one of the co-founders of OpenAI, which briefly saw Altman ousted as CEO. However, the groundwork for the project had already begun in early 2022.
Project Q*’s architecture blends large language models, reinforcement learning, and search algorithms. It integrates deep learning methods, similar to those in ChatGPT, with human-crafted rules. The model may combine elements of Q-learning and A* search.
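Nothing technical has been published about Q*'s internals, but the two classical ingredients the name evokes are well documented. As a refresher, here is a minimal, self-contained A* search over a toy grid; the grid, the costs, and the heuristic are illustrative stand-ins, not anything from OpenAI:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    # Expand nodes in order of f(n) = g(n) + h(n): the cost so far plus an
    # admissible estimate of the remaining cost to the goal.
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step_cost in neighbors(node):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(nxt), new_g, nxt, path + [nxt]))
    return None

# Toy 5x5 grid: four-directional moves, Manhattan distance as the heuristic.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

print(a_star((0, 0), (4, 4), grid_neighbors,
             lambda p: abs(4 - p[0]) + abs(4 - p[1])))
```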
I find it fascinating how OpenAI is blending these methods. It’s like they’re trying to create a hybrid between human-like problem-solving and machine precision. Imagine an AI that can not only generate text but also plan complex strategies and execute them autonomously.
Why is it called Q*?
Although OpenAI hasn’t provided an official explanation, the name “Q*” could be inspired by DeepMind’s work in reinforcement learning. Initially, DeepMind used Q-learning to train a neural network to play Atari video games by optimizing a Q function that estimated rewards from different actions.
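For context, the core of that approach is a simple update rule: nudge the estimated value Q(s, a) toward the observed reward plus the discounted value of the best next action. Here's a tabular sketch; the two-state toy environment is a made-up example, far simpler than DeepMind's Atari setup:

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration
Q = defaultdict(float)                  # Q[(state, action)] -> estimated return
ACTIONS = [0, 1]

def update(state, action, reward, next_state):
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose(state):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

# Toy one-step episodes: from state 0, both actions end the episode,
# but only action 1 pays a reward.
for _ in range(500):
    a = choose(0)
    update(0, a, reward=1.0 if a == 1 else 0.0, next_state="done")

print(Q[(0, 1)], Q[(0, 0)])  # Q(0, 1) converges toward 1.0; Q(0, 0) stays near 0
```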
Building on these concepts, Q* might aim to fuse large language models with AlphaGo-like search strategies, possibly using reinforcement learning to enhance the model. The objective could be to create a system where language models improve their reasoning skills by “competing against themselves,” advancing the capabilities of these models.
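No one outside OpenAI knows what that training loop actually looks like, but a pattern from published research gives a flavor of the idea: sample many candidate solutions and let a learned verifier pick the best, a "best-of-N" setup OpenAI itself used in its 2021 math word-problem work. Everything below, including the `generate` and `score` stubs, is a hypothetical sketch:

```python
import random
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n candidate solutions and keep the one the verifier rates
    highest. Retraining the generator on its own highest-scoring outputs
    is one way a model can bootstrap, or "compete against itself."
    """
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

# Stub components so the sketch runs end to end; a real system would use an
# LLM for generate() and a learned verifier or reward model for score().
generate = lambda prompt: f"candidate answer {random.randint(0, 9)}"
score = lambda prompt, cand: float(cand.endswith("7"))  # pretend '7' is correct

print(best_of_n("What is 12 + 25?", generate, score))
```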
This idea of AI competing against itself to get better is both intriguing and a bit unnerving. On one hand, it’s a brilliant way to accelerate learning; on the other, it raises questions about the ethical implications of creating systems that are constantly pushing their own limits.
What happens when those limits are no longer aligned with human values?
That is quite… scary.
As of now, there is no official model for Project Strawberry available to the public. However, a new model, “anonymous-chatbot,” mysteriously appeared on the Large Model Systems Organization (LMSYS) Chatbot Arena for a brief moment.
The model has since been removed from the platform, but users who had the opportunity to interact with it reported that it excelled in advanced reasoning, even outperforming OpenAI’s latest and greatest, GPT-4o.
The sudden appearance and disappearance of this model in the LMSYS Arena is significant. OpenAI previously tested GPT-4o on the same platform just two weeks before its public release, also under a nondescript name.
At that time, Sam Altman dropped a cryptic hint about the model in an X post, much like he did with the strawberry tweet. If Project Strawberry follows a similar trajectory, we could be looking at a public release in the very near future.
While it’s exciting, it’s also a bit frustrating. As someone who follows these developments closely, I can’t help but feel that we’re being strung along, fed just enough information to keep us hooked but not enough to fully understand what’s coming.
A few days ago, the departures of several key OpenAI employees, including co-founder John Schulman's resignation and co-founder Greg Brockman's extended sabbatical, added to the growing uncertainty surrounding the company's future.
Additionally, Vice President of Consumer Products Peter Deng has departed, further fueling concerns.
These departures are part of a broader pattern of top-level exits at OpenAI this year. Of the 11 original founders, only three remain with the company.
Earlier this year, Jan Leike, who co-led OpenAI’s superalignment group—a team focused on ensuring AI systems align with human interests—also left for Anthropic.
His departure came just hours after Ilya Sutskever, an OpenAI co-founder, chief scientist, and the other leader of the superalignment group, announced he was leaving.
These departures suggest that there may be more going on behind the scenes than we realize. Could it be that these key figures are leaving because they disagree with the direction OpenAI is heading? Or perhaps they see something coming that the rest of us aren’t aware of yet?
Either way, it doesn’t paint a particularly reassuring picture of the company’s internal stability.
Given the delays and uncertainties surrounding OpenAI's recent projects, it's hard to take Sam Altman's optimism at face value. The much-anticipated Advanced Voice Mode has been delayed by over a month, and access remains limited to a small group of users.
The departure of key figures like Ilya Sutskever and Greg Brockman's sabbatical only add to the uncertainty. These changes signal potential internal conflicts and raise concerns about where the company, and the broader field of AI, is headed.
So, while Altman’s strawberry tweet might hint at something imminent, it could just as easily be another case of overhyped promises. With Altman’s track record of making bold claims, it might be wise to take these strawberries with a grain of salt.
For now, trusting that GPT-5 or AGI is just around the corner seems premature at best. My advice? Keep an eye on the developments, but don’t hold your breath waiting for the next AI revolution—at least not just yet.
Software engineer, writer, solopreneur