AI news
June 18, 2024

Runway’s Gen-3 Alpha AI Video Generator Is Insane

Runway is back with a mind-blowing AI model that can generate hyper-realistic AI videos.

by Jim Clyde Monge

Surprise, surprise. New York City-based Runway is making a strong comeback!

Less than a week after Luma Labs released Dream Machine, its close competitor, Runway is catching up with its latest and most powerful AI video generator, Gen-3 Alpha.

It’s been more than a year since Runway released Gen-2, and since then, many more capable AI video models like Sora, Kling, and Dream Machine have been announced.

Many, including myself, have wondered — what’s going on with Runway? When are they going to release a successor to Gen-2?

Well, today, we got our answer. The first time I saw the sample videos generated with Gen-3 Alpha, I was impressed with the level of photorealism and temporal coherence of the results.

What is Gen-3 Alpha?

Gen-3 Alpha allows users to generate high-quality, detailed, and highly realistic video clips 10 seconds in length, with precise control over emotional expressions and camera movements.

Gen-3 Alpha is the first in a new series of models trained by Runway on a state-of-the-art infrastructure designed for large-scale multimodal training. It offers significant improvements in fidelity, consistency, and motion compared to Gen-2.

Take a look at these examples:

Prompt: Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.

This example showcases Gen-3 Alpha’s ability to handle complex reflections and fast-moving objects with remarkable realism.

Prompt: An astronaut running through an alley in Rio de Janeiro.

The model’s ability to generate detailed environments and believable human movement is evident here. Look at those hands and feet!

Prompt: Dragon-toucan walking through the Serengeti.
Prompt: A Japanese animated film of a young woman standing on a ship and looking back at camera.

This example demonstrates Gen-3 Alpha’s versatility in capturing different styles. Much like Midjourney’s Niji model, Gen-3 can also copy the anime aesthetic really well.

You can explore more examples on Runway’s official X account:

What’s new in Gen-3 Alpha?

Gen-3 Alpha is packed with some huge upgrades over its predecessor. Here are some of my top picks:

  1. Photorealistic Human Generation: This is perhaps the most obvious upgrade. Gen-3 Alpha is now able to create lifelike human characters with a wide range of actions, gestures, and emotions.
  2. Enhanced Fidelity and Consistency: Gen-3 Alpha has a substantial upgrade in the quality of generated videos. The new model achieves remarkable temporal coherence, making the transitions and overall flow of the videos appear more natural and seamless.
  3. Fine-Grained Temporal Control: One of the standout features of Gen-3 Alpha is its ability to handle highly descriptive, temporally dense captions. This capability allows for imaginative transitions and precise key-framing of elements within a scene.
  4. Multimodal Capabilities: The model doesn’t just generate videos from text! It supports multiple input modes, such as image-to-video and text-to-image.
  5. Customization: Soon, consumers will be able to create custom versions of Gen-3 models. Just as Stable Diffusion can be fine-tuned, video models could be fine-tuned to generate videos in custom styles.

How to access Gen-3 Alpha?

Right now, there’s no precise release date, with Runway only showing demo videos on its website and on its X account. It is also unclear whether Gen-3 Alpha will be available through Runway’s free tier or whether a paid subscription will be required.

According to Runway’s CTO, Gen-3 will power all of Runway’s existing AI tools.

Runway Gen-3 Alpha will soon be available in the Runway product, and will power all the existing modes that you’re used to (text-to-video, image-to-video, video-to-video), and some new ones that are only now possible with a more capable base model.

If you are a company interested in fine-tuning and custom models, reach out to Runway using this form.

Final Thoughts

It’s great to see one of the first companies to release AI video models make a comeback. It’s clear that Runway is not giving up the fight to be a dominant player or leader in the fast-moving generative AI video creation space.

However, it’s important to remember that these examples are hand-picked. Who knows how many attempts it took to get that kind of quality? The true test of Gen-3 Alpha’s capabilities will come when it’s released to the public. So, don’t get too impressed with the examples just yet.

So far, here is my personal top-5 list of AI video generators, ranked by quality and coherence:

  1. OpenAI Sora
  2. Runway Gen-3 Alpha
  3. Kuaishou Kling
  4. Luma Labs Dream Machine
  5. Google Veo

Do you agree with my list?

The hype around OpenAI’s Sora is wearing off with the subsequent release of competitors. Perhaps it’s time for OpenAI to release public access to its AI video model.
