Here are six reasons why you might want to reconsider upgrading.
Google recently rebranded its AI chatbot Bard to Gemini, and with it came a new paid tier called Gemini Advanced, priced at $20 per month. This upgrade promises access to the latest and most advanced Gemini Ultra 1.0 model.
The tech giant didn’t hold back in promoting Gemini Ultra, and its bold claims about the model’s abilities had me convinced it would be a big step forward in AI.
However, after testing it out for several days, I’ve run into issues that I think you should know about before paying for the upgrade:

1. Slow response times
2. Weak logical reasoning
3. Inconsistent output quality
4. Biased and underwhelming image generation
5. Poor meme and image understanding
6. No downloadable files

Let me explain each of these issues in more detail.
Gemini Advanced disappointingly lags behind ChatGPT in response time, taking roughly 5 to 7 seconds per response compared to ChatGPT’s 2 to 3 seconds.
Check out the side-by-side speed comparison of the two:
The delay is consistent: even pulling up additional drafts takes another 5 to 7 seconds to load.
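If you’d rather measure the gap than eyeball it, a rough timing harness like the sketch below works. To be clear about the assumptions: it requires both API keys, and the public Gemini API only serves the Pro model (Ultra isn’t exposed there), so it approximates the web app rather than reproducing Gemini Advanced exactly.

```python
# Rough latency comparison (sketch). Assumes GOOGLE_API_KEY and
# OPENAI_API_KEY are set. The API serves Gemini Pro, not the Ultra
# model behind Gemini Advanced, so treat the numbers as approximate.
import os
import time

import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Explain the difference between TCP and UDP in one paragraph."

def time_gemini(prompt: str) -> float:
    model = genai.GenerativeModel("gemini-pro")
    start = time.perf_counter()
    model.generate_content(prompt)
    return time.perf_counter() - start

def time_gpt4(prompt: str) -> float:
    start = time.perf_counter()
    openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return time.perf_counter() - start

print(f"Gemini: {time_gemini(PROMPT):.1f}s | GPT-4: {time_gpt4(PROMPT):.1f}s")
```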
Gemini Ultra struggles with basic logical reasoning. For instance, when prompted with a simple mathematical question about car ownership, the model fumbles badly.
Prompt: Today I own 3 cars and sold 2 cars last year. How many cars do I own?
The correct answer is three: the two cars were sold last year, so they have no bearing on how many I own today. Ask ChatGPT the same question using the GPT-4 model and it answers with ease.
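To rule out a one-off fluke, you can script the same question against both APIs. Same caveat as the sketch above: the API exposes Gemini Pro, not Ultra, so this only approximates Gemini Advanced.

```python
# Send the same trick question to both models (sketch; assumes both
# API keys are set and uses Gemini Pro, since Ultra has no public API).
import os

import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
client = OpenAI()

QUESTION = "Today I own 3 cars and sold 2 cars last year. How many cars do I own?"

gemini = genai.GenerativeModel("gemini-pro").generate_content(QUESTION)
gpt4 = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": QUESTION}],
)

# Correct answer: 3. The cars sold last year are already excluded
# from today's count, so the "sold 2" is a red herring.
print("Gemini:", gemini.text)
print("GPT-4:", gpt4.choices[0].message.content)
```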
The inconsistency in Gemini’s output quality is another concern.
Here’s an example where I asked Gemini to revise an article. The first and third drafts seemed fine, but the second draft consisted of nothing except the words “3D Model Generator.”
That is not a revision of the article, and it’s clearly not an acceptable response from a model marketed as outperforming human experts on benchmarks.
You wouldn’t want to be paying for this quality of service, would you?
Gemini Ultra’s image generation capabilities are marred by inexplicable biases and restrictions, especially in its handling of requests involving racial specifics.
Here’s an example:
Prompt: generate an image of two black couple riding a bike
I don’t understand why it refused to create the image. When I tweaked the prompt to ask for a white couple instead of a black one, it generated an image without hesitation.
Such arbitrary limitations are not only frustrating but also contrast sharply with the more inclusive and versatile capabilities of competitors like ChatGPT and Midjourney.
Take a look at how ChatGPT handles the same prompt:
The arbitrary restrictions they put on Gemini are ridiculous. They prevent it from showing things that Google Search has no problem with!
Another issue is the quality of the images themselves, which trails competitors like ChatGPT (DALL-E 3) and Midjourney. Here’s an example prompt:
Prompt: generate a photorealistic image of a 32-year-old female, up and coming conservationist in a jungle; athletic with short, curly hair and a warm smile
The images below were generated with OpenAI’s DALL-E 3 (left) and Midjourney V6 (right).
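Midjourney has no public API, but the DALL-E 3 half of the comparison is easy to reproduce. Here’s a minimal sketch, assuming an OPENAI_API_KEY is set:

```python
# Reproduce the DALL-E 3 side of the comparison (sketch).
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "a photorealistic image of a 32-year-old female, up and coming "
    "conservationist in a jungle; athletic with short, curly hair "
    "and a warm smile"
)

result = client.images.generate(
    model="dall-e-3",
    prompt=PROMPT,
    size="1024x1024",
    quality="standard",
    n=1,  # DALL-E 3 accepts only one image per request
)
print(result.data[0].url)  # hosted URL; valid for a limited time
```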
Gemini Ultra’s performance on meme interpretation was also underwhelming. In the example below, I asked it to decipher a humorous meme posted by the X user AshutoshShrivastava.
Even though the meme can be explained without saying anything offensive or inappropriate, Gemini still refused to respond.
ChatGPT, on the other hand, gave me the correct interpretation.
This highlights a significant gap in contextual understanding and adaptability.
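For anyone who wants to rerun this test, the ChatGPT side can also be scripted against the vision-enabled GPT-4 API. The image URL below is a hypothetical placeholder; substitute the actual meme screenshot.

```python
# Meme-interpretation test against GPT-4 with vision (sketch; assumes
# OPENAI_API_KEY is set). MEME_URL is a placeholder, not the real meme.
from openai import OpenAI

client = OpenAI()
MEME_URL = "https://example.com/meme.png"  # placeholder URL

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain why this meme is funny."},
                {"type": "image_url", "image_url": {"url": MEME_URL}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```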
Gemini’s inability to compile and provide download links for generated content further limits its utility.
Following up on the image requests from issue #4, I asked Gemini to zip all the generated images and give me a download link.
Prompt: can you compile these images into a zip and give me the download link?
Gemini was not able to fulfill the request.
ChatGPT, on the other hand, was able to compile the images into a zip file and give me a working download link.
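There’s nothing magical about what ChatGPT does here: its code-execution sandbox runs ordinary Python, roughly like the sketch below, and the chat UI surfaces the resulting file as a download link. Gemini has no comparable sandbox, which is presumably why it can’t deliver a file. (The ./images directory is an assumption for illustration.)

```python
# Roughly the kind of thing ChatGPT's sandbox runs to build the archive.
import zipfile
from pathlib import Path

IMAGE_DIR = Path("images")              # assumed location of the images
ARCHIVE = Path("generated_images.zip")

with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as zf:
    for image in sorted(IMAGE_DIR.glob("*.png")):
        zf.write(image, arcname=image.name)  # drop the parent directory

print(f"Wrote {ARCHIVE} ({ARCHIVE.stat().st_size} bytes)")
```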
Given the current limitations and performance issues, I advise against upgrading to Gemini Advanced at this stage. Use the free Gemini Pro version for now.
While the bundled 2 TB of storage with a Google One subscription may appeal to heavy users of Google’s ecosystem, it’s advisable to wait for subsequent updates and improvements.
Aside from the list above, I’ve also seen several users complain about Gemini refusing to write code, generating poor graphs, giving lazy responses, and more.
This review isn’t meant to tarnish Google Gemini’s reputation but to provide an honest assessment of its current offerings.
The $20 monthly subscription fee is a little hard to justify. The slow response times, logical errors, incomplete drafts, biased image generation, inadequate image understanding, and inability to provide downloadable content significantly undermine its value.
I sincerely hope Google will quickly resolve these problems, aiming to match or surpass the capabilities of GPT-4.