
WHY THIS MATTERS IN BRIEF

I’ve been talking about generative video for almost a decade, and it’s now getting better, albeit slowly.

 


Runway, the creative Artificial Intelligence (AI) startup that co-created last year’s breakout text-to-image model Stable Diffusion, has now released an AI model that can transform existing videos into new ones by applying any style specified by a text prompt or reference image.

 


 

In a demo reel posted on its website, Runway shows how its software, called Gen-1, can turn clips of people on a street into claymation puppets, or books stacked on a table into a cityscape at night. Runway hopes that Gen-1 will do for video what Stable Diffusion and Midjourney’s text-to-image models did for images.

 

Synthetic Content … it’s “here” but still has a long way to go

 

“We’ve seen a big explosion in image-generation models,” says Runway CEO and cofounder Cristóbal Valenzuela. “I truly believe that 2023 is going to be the year of video.”

Set up in 2018, Runway has been developing AI-powered video editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics, and the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.

 


 

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to cover the computing costs required to train the model on much more data. Getty is now taking legal action against Stability AI, claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission, and Runway has since ended the partnership to keep its distance.

Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-A-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. The company says it will post technical details about Gen-1 on its website in the next few days.

 


 

Unlike Meta and Google, Runway has built its model with customers in mind.

“This is one of the first models to be developed really closely with a community of video makers,” says Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”

Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.

Last year’s explosion in generative AI was fuelled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.

 


 

“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.” And he’s right about the latter, but as for the former, I reckon that’s closer to 2030.

