Reviewing Some of the Top Gen AI Video Tools [+ Thoughts On Where They're Headed]

Nathan Lands
Matt Wolfe

AI video generators have been around for a while, but they haven't been practical: they've all had that uncanny valley feel to them.

That changed when OpenAI unveiled Sora. The early demos were very promising, enough to prompt TV producer and filmmaker Tyler Perry to pause his $500 million set expansion.

It's months later, and we still don't have access to Sora. But more promising options have hit the market: Luma's Dream Machine and Runway's Gen 3 Alpha.

Let's dive into how they compare and how AI video is impacting video creation.

Click Here to Subscribe to HubSpot's AI Newsletter

Where AI Video Generators Stand Today

A few months ago, pre-Sora, we put out a ton of YouTube videos comparing AI video generators like Pika Labs, HuggingFace, and Runway’s Gen 1.

Here’s what they all had in common:

  • The videos often looked like they were in slow motion.
  • People, animals, and objects would morph and distort into odd shapes.
  • The models struggled with human features like teeth, hands, and feet.

No one had yet found a truly practical use for AI video. But when OpenAI previewed Sora, it looked like that would change.

Did OpenAI wait too long to release Sora?

The general consensus among AI enthusiasts is that OpenAI created all this buzz around Sora, yet failed to deliver on a ready-to-use product.

Here's another way to look at it: by the time Sora is ready for the public, it will likely be much higher quality than what's already out there.

Just based on demos, Sora has raised the bar. But we can confidently say Runway’s Gen 3 and Luma’s Dream Machine are not far behind.

Here’s how they measure up.

Our Thoughts on Runway's Gen 3

  • It's great at rendering text in video.
  • It can generate up to 10 seconds of footage.
  • It can generate a video in about 45 seconds.

The downside: its resolution needs improvement.

Our Thoughts on Luma’s Dream Machine

Where Luma's AI model shines is turning an image into a solid, quality video. Start with a text prompt, however, and the quality drops.

Additional insights:

  • It can only generate up to five seconds of video, unless you use the extended video feature. Caveat: That feature uses the final frame of the previous clip to generate the next segment, which can lead to lower-quality output.
  • Generation can take a while if there's a long queue of users.

AI Video Use Cases

It feels like we are in the midst of a creative explosion with all of these AI tools enabling imagination and artistry, and democratizing video creation.

Just a few weeks ago, Matt published a YouTube video showing how he created a music video using:

  • Suno for the song,
  • Midjourney for the starting images,
  • Luma's Dream Machine to animate those images, and
  • DaVinci Resolve to edit it all together.

While the process took 10 hours, mostly due to Luma’s prolonged processing time, it was an exciting project and some of the output was pretty impressive.

AI For Content Creation

As YouTubers, we often use stock B-roll footage to include in our video content as a way to maintain the viewer’s attention.

Sites like Storyblocks get the job done, but the clips still look like stock footage; there's no way around that.

AI video generators let us create custom B-roll that's unique to the content we make. Not only is that better for copyright reasons, but it can also help a channel stand out from competitors.

AI for Video Games

The video game industry has stagnated in recent years and we have an idea why.

Games are so expensive to create that new releases end up looking like duplicates of their predecessors. And gamers are tired of it.

There's a renaissance of indie game developers, and what they offer is fresh design and innovation.

Within the next two years, we see AI assisting in the development of:

  • Storylines
  • Characters
  • Art assets

Take Minecraft, Valheim, or Fortnite: many video games are already procedurally generated, so the story doesn't revolve around the world you're in, because the world is different every time.

With AI, the worlds in these games could extend even further at high fidelity and greatly increase the replay value of open world survival games.

AI for Filmmaking

We’ve already heard from filmmaker Tyler Perry on the endless possibilities with AI. Star Wars creator and filmmaker George Lucas also recently shared his thoughts on the technology.

"It's inevitable … it's like saying, 'I don't believe these cars are going to work. Let's just stick with the horses,'" he said to Brut FR. "You can say that, but that isn't the way the world works."

We anticipate AI will start playing a large role in filmmaking over the next few years, whether that's replacing background actors with AI, using AI to speed up visual effects work, or creating a whole new genre of AI movies.

We can also see AI used to create unique viewer experiences, similar to what Netflix's Black Mirror did with its episode "Bandersnatch." This immersive episode allowed viewers to pick their own adventure at various points in the episode, each choice leading to a different outcome.

It increases a film's replay value, which is a win for streamers and advertisers.

We’ll have to wait and see where things are headed with generative AI video. But one thing’s for sure: It’s bound to change the way we consume media.
