
Nvidia’s GauGAN AI can now turn your crappy art into decent synthetic videos


WHY THIS MATTERS IN BRIEF

AI is quickly getting better at generating increasingly high-resolution and sophisticated synthetic content, and that will turn the entire global creative industry on its head.

 


Synthetic content, content created by machines rather than humans, has come on a lot in just the past five years – whether it’s AIs that can make art or music, write books, or create games, images, or videos from scratch using nothing more than their well-trained AI minds.

 


Earlier this year, in March, at its GPU Technology Conference in California, Nvidia took the wraps off GauGAN, a generative adversarial AI system that I’ve written about before and talked about in keynotes, one that lets users create amazingly lifelike landscape images that never existed using nothing more than their crappy art abilities and a computer stylus. And now, thanks to some human ingenuity, one researcher has shown how the same technology can be used to help people create increasingly convincing synthetic videos and virtual worlds, as you can see in the video below.

 

See how the tech has evolved in a short space of time

 

In the first month following the beta version’s release on Playground, the web hub for Nvidia’s AI and deep learning research, the company says GauGAN was used to create 500,000 images, including concept art for films and video games. And now Nvidia has said it’s updating GauGAN with a new filter feature that layers lighting and styles from uploaded photos onto its output, which will make the images it produces even more lifelike.

 


“As researchers working on image synthesis, we’re always pursuing new techniques to create images with higher fidelity and higher resolution,” said Nvidia researcher Ming-Yu Liu. “That was our original goal for the project.”

GauGAN, whose namesake is the post-Impressionist painter Paul Gauguin, improves upon Nvidia’s Pix2PixHD system introduced last year, which was similarly capable of rendering synthetic worlds but left artifacts in its images. The machine learning model underpinning GauGAN was trained on more than one million images from Flickr, imbuing it with an understanding of the relationships among over 180 objects, including snow, trees, water, flowers, bushes, hills, and mountains. As a result, trees next to water have reflections, for instance, and the type of precipitation changes depending on the season depicted.
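To make that a little more concrete, here’s a minimal PyTorch sketch of the core idea behind GauGAN’s published technique, SPADE (spatially-adaptive normalization): the painted segmentation map predicts a per-pixel scale and shift that re-modulate the generator’s activations at every layer, so the label layout steers the whole image. This is an illustrative toy, not Nvidia’s actual code, and the channel sizes and label count below are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Toy spatially-adaptive normalization layer: the segmentation map
    predicts where to scale and shift each channel of the activations."""
    def __init__(self, channels, num_labels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)  # parameter-free norm
        self.shared = nn.Sequential(
            nn.Conv2d(num_labels, hidden, kernel_size=3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)  # per-pixel scale
        self.beta = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)   # per-pixel shift

    def forward(self, x, seg):
        # Resize the one-hot label map to the feature resolution, then use it
        # to modulate the normalized activations spatially.
        seg = F.interpolate(seg, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# Toy usage: 64-channel activations and a 4-label map painted entirely as label 0.
x = torch.randn(2, 64, 32, 32)
seg = torch.zeros(2, 4, 32, 32)
seg[:, 0] = 1.0
print(SPADE(64, 4)(x, seg).shape)  # torch.Size([2, 64, 32, 32])
```

The point of the per-pixel scale and shift is that plain normalization would wash out a flat label map; re-injecting the map at every layer is what keeps a user’s rough painting in control of the final image.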

 


Paintbrush and paint bucket tools allow users to design their own landscapes with labels including river, grass, rock, and cloud, and the aforementioned style transfer feature lets them modify the colors and aesthetic on the fly. For example, images can adopt a warm sunset glow, or display the cooler lights of a city skyline. Alternatively, they’re able to upload their own landscape images, which GauGAN converts to segmentation maps — maps describing the location of objects in rough detail — that serve as foundations for artwork.
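For a sense of what those segmentation maps look like under the hood, here’s a toy sketch: the user’s painting is just a grid of class IDs, which is one-hot encoded into per-label channels before a generator like the one sketched above consumes it. The four-label palette and canvas size here are invented for illustration; GauGAN’s real model knows roughly 180 labels.

```python
import numpy as np

LABELS = {0: "sky", 1: "water", 2: "tree", 3: "rock"}  # hypothetical palette

# The "painting": each pixel stores the class ID the user brushed on.
canvas = np.zeros((4, 8), dtype=np.int64)  # start with an all-sky canvas
canvas[2:, :] = 1                          # paint-bucket the bottom half as water
canvas[1, 2:5] = 2                         # brush a few tree pixels on the horizon

# One-hot encode into per-label channels: H x W -> N x C x H x W.
one_hot = np.eye(len(LABELS), dtype=np.float32)[canvas]  # shape (4, 8, 4)
one_hot = one_hot.transpose(2, 0, 1)[None]               # shape (1, 4, 4, 8)
print(one_hot.shape)  # each channel is a 0/1 mask for one label
```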

Nvidia says that GauGAN has been used by a health care organization exploring its use as a therapeutic tool for patients, and by a modeler — Colie Wertz — whose credits include Star Wars, Transformers, and Avengers movies.

 


“We want to make an impact with our research,” said Liu. “This work creates a channel for people to express their creativity and create works of art they wouldn’t be able to do without AI. It’s enabling them to make their imagination come true.”

The code for GauGAN’s AI model was open-sourced on GitHub earlier this year, and an interactive demo is available on Nvidia’s website.

GauGAN is one of the newest reality-bending AI tools from Nvidia, creator of deepfake tech like StyleGAN, which can generate lifelike images of people who never existed. Last September, researchers at the company described in an academic paper a system that can craft synthetic scans of brain cancer. And in December, Nvidia detailed a generative model that’s capable of creating virtual environments using real-world videos.

 


GauGAN’s initial debut preceded GAN Paint Studio, a publicly available AI tool that lets users upload any photograph and edit the appearance of depicted buildings, flora, and fixtures. Elsewhere, generative machine learning models have been used to produce realistic videos by watching YouTube clips, create images and storyboards from natural language captions, and animate and sync facial movement with audio clips containing human speech.
