Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation, and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments and organisations, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
AI is helping democratise skills – and in this case it’s helping crappy artists become masters.
Artificial Intelligence (AI) has been filling in the gaps for illustrators and photographers alike for years now, but it’s only recently that it’s become powerful enough to generate realistic-looking images from people’s truly awful sketches. Now Nvidia’s GauGAN AI, which was the first to do this, has just been improved, and as you can see in the video, the images it produces are significantly better than those generated by previous versions.
Renamed Canvas, Nvidia’s new tool lets the creator sketch a crappy paint-by-numbers lookalike landscape, then fills it in with photorealistic content.
As you can see, each distinct color in Canvas represents a different type of landscape feature: mountains, water, grass, ruins, and so on. When colors are blobbed onto the canvas, the crude sketch is passed to a Generative Adversarial Network (GAN). A GAN pits two networks against each other: a creator AI, the generator, that tries to produce a realistic image, and a detector AI, the discriminator, that evaluates how realistic that image is. Trained against one another, the two converge on what they judge to be a fairly realistic depiction of what’s been sketched.
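To make the generator-versus-discriminator idea concrete, here is a deliberately tiny sketch of that adversarial training loop, on one-dimensional numbers rather than images. All the names, hyperparameters, and the toy data are my own illustrative assumptions, not anything from Nvidia's Canvas: the "creator" is a one-line generator, the "detector" is a one-line logistic discriminator, and each is nudged to beat the other.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data the detector should accept: samples centred at 4.0.
def real_batch(n):
    return [random.gauss(4.0, 0.5) for _ in range(n)]

a, b = 1.0, 0.0   # creator/generator: g(z) = a*z + b, starts far from the data
w, c = 0.1, 0.0   # detector/discriminator: D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for _ in range(2000):
    # --- Train the detector: push D(real) towards 1 and D(fake) towards 0 ---
    x_real = real_batch(n)
    x_fake = [a * random.gauss(0, 1) + b for _ in range(n)]
    d_real = [sigmoid(w * x + c) for x in x_real]
    d_fake = [sigmoid(w * x + c) for x in x_fake]
    grad_w = mean([(d - 1) * x for d, x in zip(d_real, x_real)]) \
           + mean([d * x for d, x in zip(d_fake, x_fake)])
    grad_c = mean([d - 1 for d in d_real]) + mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Train the creator: push D(fake) towards 1, i.e. fool the detector ---
    zs = [random.gauss(0, 1) for _ in range(n)]
    x_fake = [a * z + b for z in zs]
    grad_x = [(sigmoid(w * x + c) - 1) * w for x in x_fake]
    a -= lr * mean([g * z for g, z in zip(grad_x, zs)])
    b -= lr * mean(grad_x)

# With neither side told what "real" looks like directly, the creator's output
# drifts towards the real data purely by trying to fool the detector.
print(f"generator offset b is now roughly {b:.2f} (real data centred at 4.0)")
```

Canvas does the same dance, except the generator paints pixels conditioned on your colored segmentation blobs, and both networks are deep convolutional models rather than single affine maps.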
Canvas is much more user-friendly than GauGAN, which was originally shown at CVPR in 2019; it has much smoother edges, produces better-quality imagery, and can run on any Windows computer with a decent Nvidia graphics card.
So far the technique has been used to create very realistic faces, animals, and landscapes, though there’s usually some kind of “tell” that a human can spot. But the Canvas app isn’t trying to make something indistinguishable from reality — as concept artist Jama Jurabaev explains in the video, it’s more about being able to experiment freely with imagery more detailed than a doodle.
It’s like looking at a photo!
For instance, if you want to have a smoldering ruin in a field with a river off to one side, a quick pencil sketch can only tell you so much about what the final piece might look like. What if you have it one way in your head, and then two hours of painting and coloring later you realize that because the sun is setting on the left side of the painting, it makes the shadows awkward in the foreground?
If instead you just scribbled these features into Canvas, you might see that this was the case right away, and move on to the next idea. There are even ways to quickly change the time of day, palette, and other high-level parameters so they can quickly be evaluated as options.
“I’m not afraid of blank canvases any more,” said Jurabaev. “I’m not afraid to make very big changes because I know there’s always AI helping me out with details… I can put all my effort into the creative side of things, and I’ll let Canvas handle the rest.”
It’s much like Google’s Chimera Painter, if you remember that particular nightmare fuel, in which an almost identical process was used to create fantastic animals. Instead of snow, rock, and bushes, its labels were hind leg, fur, teeth, and so on, which made it rather more complicated to use and easy to go wrong with.
Still, it may be better than the alternative; an amateur like me could certainly never draw even the weird tube-like animals that resulted from basic blob painting.
Unlike Chimera Painter, however, this app runs locally, and requires a beefy Nvidia video card to do so. GPUs have long been the hardware of choice for machine learning applications, and something like a real-time GAN definitely needs a chunky one. You can download the app for free here.