
Stable Diffusion co-creator Runway launches its first generative video app on iOS


WHY THIS MATTERS IN BRIEF

Getting an AI to generate great video is a lot harder than getting it to generate great images, but Runway’s founders are up to the challenge.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Artificial Intelligence (AI) startup Runway, which helped create the text-to-image generator Stable Diffusion, has launched its first mobile app on iOS, letting people use the company’s video-to-video generative AI tool – Gen-1 – directly from their phones.

 


Gen-1 allows you to transform an existing video based on a text, image, or video input. Functionally, it works a lot like a style transfer tool, although, unlike style transfer, it generates entirely new videos as its output rather than applying filters. You can upload a video of someone cycling in the park, for example, and apply an aesthetic or theme. You can give the video the look of a watercolor painting or charcoal sketch, and so on.
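To make that workflow concrete, here’s a minimal sketch of what driving a video-to-video model like Gen-1 might look like programmatically. Runway had no public Gen-1 API at the time, so the endpoint, parameter names, and helper function below are hypothetical; they only illustrate the shape of the inputs the tool accepts (a source video plus a text, image, or video prompt), not the company’s actual interface.

```python
import requests

# Hypothetical endpoint and key -- Runway published no public Gen-1 API,
# so these names only illustrate the shape of a video-to-video request.
API_URL = "https://api.example.com/v1/video-to-video"
API_KEY = "YOUR_API_KEY"

def stylize_video(video_path: str, text_prompt: str) -> bytes:
    """Upload a source clip plus a style prompt; return the generated video."""
    with open(video_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"video": f},            # the clip to transform
            data={"prompt": text_prompt},  # e.g. "a watercolor painting"
        )
    response.raise_for_status()
    return response.content  # an entirely new video, not a filtered copy

# Example: restyle a clip of someone cycling in the park as a charcoal sketch.
clip = stylize_video("cycling_in_park.mp4", "a charcoal sketch")
with open("cycling_charcoal.mp4", "wb") as out:
    out.write(clip)
```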

 

The Future of Synthetic Content, a keynote by Matthew Griffin

 

Of course, because this is a generative AI, the output is often… strange. If you add a claymation effect, for example, your resulting models won’t function like real claymation. The models will warp between each frame, limbs will grow and shrink, features will melt and smear. That’s all to be expected, though, and doesn’t take away from the fun.

Here, for example, are three different renderings of an iconic clip of Al Pacino in Heat (1995). Most notable is the clip in the bottom right, which uses a picture someone took of a cat as an intermediary.

 


Without being told to, the model applied the cat’s face to Pacino’s and even gave his hands a bit of fur while leaving his suit more or less intact. The other two clips on the top row use preset filters.

 

 

Here’s another example: a video of St. Paul’s Cathedral in London with the “paper and ink” filter applied. It’s not a mind-blowing effect, but it was incredibly easy to make. And in the hands of a more experienced and creative individual, I’m sure it could be spectacular.

 

 

People have been testing Runway’s app for a few days now, and it certainly makes the whole process of creating this sort of video much more fluid. Runway’s main software suite is only available on the web, which puts distance between capturing footage and transforming it. The app isn’t a seamless experience, of course: there are the inefficiencies and errors you’d expect from the first release of any app. But, as Runway CEO Cristóbal Valenzuela told The Verge, making these tools mobile is the important thing.

 


“That’s why the phone makes so much sense because you’re recording directly from your device, and then you tell Gen-1 how to transform that video,” said Valenzuela.

There are other limitations worth mentioning. You can’t work with footage longer than five seconds, and certain prompts are banned. You can’t generate nudity, for example, and copyright-protected work seems to be off-limits, too: prompts to create a video “in the style of a Studio Ghibli film” get rejected.

Each video also takes around two to three minutes to create, which doesn’t sound like a lot but feels like an age in the era of instant mobile editing. The processing is done in the cloud and will likely speed up over time. The app currently supports only Runway’s Gen-1 model, but Valenzuela says the purely generative Gen-2 will be added soon.
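That wait is a consequence of the architecture: the phone only records and uploads the clip, while the rendering happens on Runway’s servers. A rough sketch of the submit-and-poll pattern a cloud-rendered app like this typically uses (again with hypothetical endpoints, not Runway’s actual interface) might look like this:

```python
import time
import requests

# Hypothetical job API -- illustrating the generic submit-then-poll pattern,
# not Runway's real interface.
JOBS_URL = "https://api.example.com/v1/jobs"

def wait_for_render(job_id: str, poll_seconds: int = 10) -> str:
    """Poll until the cloud render finishes; return the output video URL."""
    while True:
        status = requests.get(f"{JOBS_URL}/{job_id}").json()
        if status["state"] == "succeeded":
            return status["output_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "render failed"))
        time.sleep(poll_seconds)  # a Gen-1 render took roughly 2-3 minutes
```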

 


What these notes don’t fully capture, though, is the huge sense of possibility around tools like this. The output of AI text-to-image models also started out smeared and unrealistic; now they’re being used to fool the public with swagged-out pictures of the pope.

Valenzuela has compared the current era of generative AI to the “optical toys” phase of the 19th century, when scientists and inventors were creating a whole range of devices that were trivial in their capabilities but also the ancestors of modern cameras.

Runway’s mobile app feels like one of these toys, but some of Hollywood’s most famous directors, like Jim Russo, believe that within two years apps like these will be good enough to make “movies that give you pause.”
