Runway is back with a mind-blowing AI model that can generate hyper-realistic videos.
Surprise, surprise. New York City-based Runway is making a strong comeback!
Less than a week after its close competitor Luma Labs released Dream Machine, Runway is catching up with its latest and most powerful AI video generator, Gen-3 Alpha.
It’s been more than a year since Runway released Gen-2, and since then, newer and far more capable AI video models like Sora, Kling, and Dream Machine have been announced.
Many, including myself, have wondered — what’s going on with Runway? When are they going to release a successor to Gen-2?
Well, today, we got our answers. The first time I saw the sample videos generated with Gen-3 Alpha, I was impressed by the level of photorealism and temporal coherence in the results.
Gen-3 Alpha allows users to generate detailed, highly realistic video clips of 10 seconds in length, with precise control over a range of emotional expressions and camera movements.
Gen-3 Alpha is the first in a new series of models trained by Runway on a state-of-the-art infrastructure designed for large-scale multimodal training. It offers significant improvements in fidelity, consistency, and motion compared to Gen-2.
Take a look at these examples:
Prompt: Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.
This example showcases Gen-3 Alpha’s ability to handle complex reflections and fast-moving objects with remarkable realism.
Prompt: An astronaut running through an alley in Rio de Janeiro.
The model’s ability to generate detailed environments and believable human movement is evident here. Look at those hands and feet!
Prompt: Dragon-toucan walking through the Serengeti.
Prompt: A Japanese animated film of a young woman standing on a ship and looking back at camera.
This example demonstrates Gen-3 Alpha’s versatility in capturing different styles. Much like Midjourney’s Niji model, Gen-3 can also copy the anime aesthetic really well.
You can explore more examples on Runway’s official X account.
Gen-3 Alpha is packed with some huge upgrades from its predecessors. Here are some of my top picks:
There’s no precise release date yet; so far, Runway has only shown demo videos on its website and its X account. It is also unclear whether Gen-3 Alpha will be available through Runway’s free tier or require a paid subscription.
According to Runway’s CTO, Gen-3 will power all of Runway’s existing AI tools.
Runway Gen-3 Alpha will soon be available in the Runway product, and will power all the existing modes that you’re used to (text-to-video, image-to-video, video-to-video), and some new ones that are only now possible with a more capable base model.
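Runway hasn’t published any API or integration details for Gen-3 Alpha yet, but to make the text-to-video mode concrete, here is a minimal, purely hypothetical sketch of what a request could look like. The endpoint URL, model id, parameter names, and the RUNWAY_API_KEY variable are placeholders I’m assuming for illustration, not Runway’s actual API.

```python
import os
import requests

# Hypothetical sketch only: Runway has not published a public Gen-3 Alpha API
# at the time of writing. The endpoint, model id, and parameters below are
# made-up placeholders for what a text-to-video request might look like.
API_URL = "https://api.example.com/v1/text_to_video"   # hypothetical endpoint
API_KEY = os.environ.get("RUNWAY_API_KEY", "")          # hypothetical credential

payload = {
    "model": "gen-3-alpha",                              # hypothetical model id
    "prompt": ("Subtle reflections of a woman on the window of a train "
               "moving at hyper-speed in a Japanese city."),
    "duration_seconds": 10,                              # Gen-3 Alpha clips are 10 seconds long
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())  # e.g. a job id or a URL to the rendered clip
```

In practice, access will presumably come through Runway’s web editor first rather than code, so treat this only as an illustration of the text-to-video flow.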
If you are a company interested in fine-tuning and custom models, reach out to Runway using this form.
It’s great to see one of the first companies to release AI video models make a comeback. It’s clear that Runway is not giving up the fight to remain a dominant player in the fast-moving generative AI video creation space.
However, it’s important to remember that these examples are hand-picked. Who knows how many attempts it took to get results of that quality? The true test of Gen-3 Alpha’s capabilities will come when it’s released to the public. So, don’t get too impressed with the examples just yet.
So far, here is my personal top 5 list of AI video generators, based on quality and coherence:
Do you agree with my list?
The hype around OpenAI’s Sora is wearing off as competitors keep releasing their own models. Perhaps it’s time for OpenAI to release public access to its AI video model.
Software engineer, writer, solopreneur