
Lights, camera, artificial action as AI takes on the special effects industry


WHY THIS MATTERS IN BRIEF

Decades after CGI first appeared, rotoscoping is still one of the most laborious special effects tasks, but AI is now tackling the problem.

 


Artificial Intelligence (AI) is being used to create more and more synthetic content in the form of audio, text, video, and now synthetic special effects. Back in 2018, inside an old auto body shop in Silicon Valley, a gentleman called Stefan Avalos pushed a movie camera down a dolly track. He and a small crew were making a short film about self-driving cars, shooting a rusted blue 1963 Austin Mini that, via special effects, would be transformed into an autonomous vehicle that looked more like the DeLorean from the movie “Back to the Future.”

 

RELATED
This creative AI paints like the old masters

 

Stepping back from the camera at the time, Avalos wryly referred to the movie he was filming as “Project Unemployment,” because the film he and his crew were making was a way of testing new technology from a start-up called Arraiy, a company that was using a new AI system to automate the creation of digital effects for movies, television and games – tasks that today are as tedious as they are laborious – and that later went on to be bought by Matterport.

 

Under the hood of “The Human Race” by The Mill
 

Similar systems were also being developed by software giants Adobe and Autodesk, as well as in other technology industry research labs, including Nvidia’s. The hope at the time, which has since been borne out, was that once the technology was good enough and embedded into studio workflows it would spell the end of all the drudgery associated with creating and refining special effects – as well as of many of the jobs of the people involved in the process, as is the way with automation.

Since the advent of Jurassic Park back in 1993, Computer Generated Imagery (CGI) has transformed how movies and TV series are made. But for all CGI’s promise, creating digital effects is still a painstaking and enormously tedious process. For every second of screen time, armies of designers can spend hours isolating people and objects in raw camera footage, digitally building new images from scratch, and combining the two as seamlessly as possible to create the final result that you see on the big screen.
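To make that last compositing step concrete, here is a minimal sketch of how an isolated subject and a new background are blended together once a rotoscoped matte exists. This is an illustration of the general “over” operation only, not any studio’s actual pipeline; the file names are hypothetical and the frames are assumed to share the same resolution.

```python
# Hypothetical sketch: blend a rotoscoped foreground over a new background
# using an alpha matte. File names are illustrative placeholders.
import numpy as np
import cv2

foreground = cv2.imread("frame_0001.png").astype(np.float32) / 255.0
background = cv2.imread("new_plate_0001.png").astype(np.float32) / 255.0
alpha = cv2.imread("matte_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
alpha = alpha[..., None]  # broadcast the single-channel matte across the colour channels

# Standard "over" operation: foreground where the matte is opaque,
# background where it is transparent, blended at the soft edges.
composite = alpha * foreground + (1.0 - alpha) * background
cv2.imwrite("composite_0001.png", (composite * 255).astype(np.uint8))
```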

 

RELATED
MIT's latest breakthrough? Getting AIs to explain their decisions

 

Arraiy, the company behind the system Avalos was testing, was one of the first to use AI to automate as many of these laborious tasks as possible, and thanks to improvements in deep learning we’re now at the point where these systems can edit noise and mistakes out of images as well as, as you’ve seen elsewhere in this codex, create a wide range of synthetic content.

Inside Arraiy’s offices, teams of data scientists were hard at work building computer algorithms that could learn design tasks by analysing years’ worth of work by movie effects houses. That included systems that learn to rotoscope raw camera footage – carefully separating people and objects from their backgrounds so that they can be dropped onto new ones.

In order to train their algorithms, the team collected a decade’s worth of rotoscoping and other visual effects work from various design houses, as well as adding their own work to the collection. After filming people, mannequins and other objects in front of a classic green screen, company engineers were able to rotoscope thousands of images relatively quickly and add them to the training corpus, and once the algorithm was trained the team unleashed it and let it rotoscope images without any help from a green screen.
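Arraiy never published its pipeline, but the general idea of bootstrapping training labels from green-screen footage can be sketched quite simply: key out the backing colour, and the remaining pixels become the ground-truth mask paired with the raw frame. The thresholds, file names and clean-up steps below are assumptions chosen for illustration.

```python
# Rough sketch of turning green-screen footage into (frame, mask) training
# pairs for a segmentation model. Not Arraiy's actual method; values are
# illustrative only.
import cv2
import numpy as np

frame = cv2.imread("greenscreen_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Anything close to the key colour is background; everything else is subject.
lower_green = np.array([40, 80, 80])
upper_green = np.array([85, 255, 255])
background_mask = cv2.inRange(hsv, lower_green, upper_green)
subject_mask = cv2.bitwise_not(background_mask)

# Clean up speckle so the mask is usable as a training label.
kernel = np.ones((5, 5), np.uint8)
subject_mask = cv2.morphologyEx(subject_mask, cv2.MORPH_OPEN, kernel)
subject_mask = cv2.morphologyEx(subject_mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("training_mask.png", subject_mask)
```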

 

RELATED
Nestle uses AI to set the creative rules for their 15,000 strong marketing team

 

This is the work that Avalos was testing, all the while thinking it could ultimately supplant the work done by his own effects house. But as he saw it, one of the big upsides in his case was that he would no longer have to outsource the tedious work the system was helping to automate to workers in other countries.

Meanwhile, elsewhere, Kognat debuted Rotobot, a deep learning rotoscoping and compositing tool made available for NUKE that, similarly, thanks to intensive training, is able to isolate all of the pixels belonging to a certain class into a single mask – a process called segmentation. The effect is the isolation of portions of the image, just like rotoscoping: all of the pixels of a single instance of a class – where a class could be a “person” or a “cat,” and so on – end up in their own layer.
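Rotobot’s internals aren’t public, but the per-instance class mask it describes is essentially what off-the-shelf instance segmentation models produce. As a hedged sketch of the idea, the snippet below uses torchvision’s pretrained Mask R-CNN – not Rotobot itself – to pull every “person” instance in a frame out into its own soft matte layer; the file name and score threshold are assumptions.

```python
# Hedged sketch of the class-mask idea using a pretrained Mask R-CNN from
# torchvision (an assumption, not Kognat's actual model): every pixel the
# model attributes to a "person" instance becomes its own matte layer.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("raw_frame.png").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

PERSON_CLASS_ID = 1  # COCO label index for "person"
mattes = []
for label, score, mask in zip(prediction["labels"],
                              prediction["scores"],
                              prediction["masks"]):
    if label.item() == PERSON_CLASS_ID and score.item() > 0.7:
        # mask is a 1xHxW soft matte in [0, 1] for this single instance.
        mattes.append(mask[0])

print(f"Extracted {len(mattes)} person mattes from the frame")
```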

 

RELATED
Featured Futurist: Stuff Magazine, The Virtual Sporting Life

 

As hardware improves, these techniques and tools will soon be able to help special effects artists generate preliminary effects right on the movie set in real time, giving filmmakers a better idea of what the finished product will look like – a challenge that was notably taken up by The Mill, a special effects house based in London, which a year earlier had used the same system to instantly apply effects that changed the look of cars in their critically acclaimed film “The Human Race,” which you can watch above.
