RunwayML Gen-3: A Significant Upgrade Over Gen-2?


A couple of weeks ago, I compared RunwayML Gen-2 with Luma AI’s Dream Machine, concluding that Runway had the upper hand. Fast forward to now, and RunwayML has launched Gen-3, touted as a major upgrade. Naturally, I decided to rerun the same prompts to see how Gen-3 measures up. Here’s a breakdown of the key differences between the two versions:

Fidelity and Consistency:

Gen-3: Produces more photorealistic videos with improved temporal consistency. Objects and scenes maintain a more stable appearance throughout the video, with less morphing or jittering compared to Gen-2.

Motion Control:

Gen-3: Offers better understanding of real-world movement and physics, resulting in more natural and believable motion. Gen-2 sometimes struggled with this aspect.

Input Options:

Gen-3: Accepts text descriptions, images, or even existing video clips to create new videos, whereas Gen-2 was limited to text and images.

Duration:

Gen-3: Defaults to ten-second clips, which can quickly deplete your credits.


Prompt: A field of flowers swaying in the breeze under a vibrant sunset.

RunwayML Gen-2 (Above)

RunwayML Gen-3 (Above)

Analysis: Same prompt, but I think Gen-2 looks better. Gen-3’s finer-grained control seems to work against it when the description is this minimal.


Prompt: Create a video of a drone flying through an abandoned mall during mid-morning or late afternoon to capture optimal sunlight angles. The scene should be eerie and haunting, with decayed storefronts, broken glass, and scattered debris. The drone should smoothly glide down the main hallway, showcasing wide shots.

RunwayML Gen-2

RunwayML Gen-3

Analysis: This is a huge leap forward. Gen-3 excels with longer prompts, creating an eerie, ghostly mall bathed in bright sunlight. It makes me want to explore this further once I have more credits.


Prompt: Dragon-toucan walking through the Serengeti.

RunwayML Gen-2

RunwayML Gen-3

Analysis: It’s a toss-up. Gen-2 actually has more detail, but again, the short prompt didn’t serve Gen-3 well.


Prompt: 1970s Cinematic Feel. Grimy Street in London. A taxi cab driver stands in front of his car waiting for his passenger. He is an older man with a weather-beaten face.

RunwayML Gen-2

RunwayML Gen-3

Analysis: Gen-3 starts off well, but even with a detailed prompt, a stray hand appears from the left side of the frame, breaking the scene’s physics.


Independence Day Experiment

Given the proximity to Independence Day in the US, I tested whether Gen-3 could handle historical scenes of 1776 Philadelphia. Unfortunately, it could not: modern elements like cars crept in, and Independence Hall looked out of place.

Prompt: 1776 Philadelphia, designed as drone shots to capture the historical ambiance from above. Independence Hall Description: The drone starts high above Independence Hall, slowly descending to reveal the bustling activity below. The streets are filled with horses, carriages, and people dressed in 18th-century attire. Angle: Wide aerial shot of Independence Hall and its surroundings. Action: The drone slowly descends, capturing the historic architecture and the cobblestone streets

Prompt: 1776 The Philadelphia Riverfront Description: The drone captures the busy Delaware Riverfront, where tall ships are docked, and workers load and unload goods. No modernity  Wide aerial shot of the riverfront, showing ships and docks. Action: The drone descends, getting closer to the activity on the docks. Workers carry barrels and crates, sailors prepare ships for departure, and merchants inspect goods being unloaded.
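The longer prompts that worked best all follow the same loose pattern: a setting, a Description, an Angle, and an Action. If you want to keep that structure consistent across runs, a small script along these lines can assemble the text before you paste it into the Gen-3 interface. This is just an illustrative sketch of that prompt pattern, not anything RunwayML provides, and every name in it is made up:

from typing import Optional

def build_prompt(setting: str, description: str, angle: str, action: str,
                 style: Optional[str] = None) -> str:
    # Assemble the Description / Angle / Action prompt structure used above
    # into a single block of text for the Gen-3 prompt box.
    parts = []
    if style:
        parts.append(style)  # e.g. "1970s Cinematic Feel."
    parts.append(setting)
    parts.append(f"Description: {description}")
    parts.append(f"Angle: {angle}")
    parts.append(f"Action: {action}")
    return " ".join(parts)

print(build_prompt(
    setting="1776 Philadelphia, Independence Hall, drone shot from above.",
    description="The drone starts high above Independence Hall, slowly descending to reveal the bustling activity below.",
    angle="Wide aerial shot of Independence Hall and its surroundings.",
    action="The drone slowly descends, capturing the historic architecture and the cobblestone streets.",
))

However you generate it, the point is the same: Gen-3 rewards this kind of structured, detailed input far more than a one-line description.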

RunwayML Gen-3 shows promise, particularly with more complex and detailed prompts. While it stumbles with minimal descriptions and historical accuracy, its motion control and realism advancements are noteworthy. It’s a powerful tool that requires thoughtful input and a crapton of credits to get the refinement you’ll need.
