Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Increasingly, AI is being used as a tool to help democratise creativity and unlock people’s creative potential.
It’s long been known that I’m a crappy artist – despite my best efforts at school it was evident early on that I was just never cut out to be the next Picasso or Rembrandt. The ability to transfer onto paper what one sees, either in one’s mind or with one’s eyes, is, after all, a skill that many people would love to have but, for one reason or another, just don’t.
Anyway, now, thanks to a breakthrough from those great folks at Nvidia, I no longer need to just dream about being a great artist: I can give up my aspirations of being a real futurist and become the artist I know I was always destined to be, after the company announced they’ve developed GauGAN. And yes, that’s an intentional reference to the post-Impressionist painter Paul Gauguin. Obs.
So how does GauGAN take shapeless blobs of colour and turn them into mountains and shimmering Alpine landscapes, you might ask? By using a form of AI known as a Generative Adversarial Network (GAN) – the same type of AI that today is being used to create everything from films and fake celebrities to fake news, as well as the world’s first generations of “Creative Machines,” one of which just sold a painting for over $400,000, while others are helping design Amazon’s new clothing lines, invent products, and help people create their own videos just by writing what they’d like to see. And much more. In short, GANs are helping democratise creativity and innovation, and that makes them one of my top technologies to watch.
The best description of how a GAN works is this: “One neural network, called the generator, generates new data instances, while the other, the discriminator, evaluates them for authenticity; i.e. the discriminator decides whether each instance of data that it reviews belongs to the actual training dataset or not.”
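That generator-versus-discriminator loop can be sketched in a few lines of Python. Everything here is an illustrative assumption, not anything resembling GauGAN’s actual networks: a one-dimensional toy “dataset” (a Gaussian centred at 4), a two-parameter generator, and a logistic-regression discriminator, trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # "Real" data: samples from a Gaussian centred at 4 (arbitrary choice).
    return rng.normal(4.0, 0.5, size=n)

# Generator parameters: maps uniform noise z to g["scale"] * z + g["shift"].
g = {"scale": 1.0, "shift": 0.0}
# Discriminator parameters: logistic classifier sigmoid(w * x + b).
d = {"w": 0.1, "b": 0.0}

def generate(n):
    z = rng.uniform(-1, 1, size=n)
    return z, g["scale"] * z + g["shift"]

def discriminate(x):
    # Probability the discriminator assigns to "this sample is real".
    return sigmoid(d["w"] * x + d["b"])

def train(steps=2000, batch=64, lr=0.05):
    for _ in range(steps):
        # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
        xr = real_samples(batch)
        _, xf = generate(batch)
        pr, pf = discriminate(xr), discriminate(xf)
        d["w"] -= lr * (np.mean(-(1 - pr) * xr) + np.mean(pf * xf))
        d["b"] -= lr * (np.mean(-(1 - pr)) + np.mean(pf))
        # Generator step: push D(fake) towards 1, i.e. fool the discriminator.
        z, xf = generate(batch)
        pf = discriminate(xf)
        g["shift"] -= lr * np.mean(-(1 - pf) * d["w"])
        g["scale"] -= lr * np.mean(-(1 - pf) * d["w"] * z)

train()
```

Each step the discriminator nudges its weights to score real samples higher than fakes, and the generator nudges its own parameters towards whatever the discriminator currently scores as real; after training, the generator’s output has drifted from around 0 towards the real data’s neighbourhood, which is the adversarial dynamic in a nutshell.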
In the case of GauGAN the AI learned to create fantastic images by using a discriminator network to compare them to real images. As such, GauGAN “knows” what a field or forest would look like in whatever shape you provide. You make the sketch, tell GauGAN where everything should go, and then the program fills in all the details for you. Congratulations, you’re an artist – or should I say artiste!?
“It’s like a colouring book picture that describes where a tree is, where the sun is, where the sky is,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. “And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colors, based on what it has learned about real images.”
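The colouring-book analogy can be made concrete with a deliberately crude stand-in: where GauGAN’s trained generator paints learned texture into each labelled region, the sketch below just fills each region of a tiny segmentation map with a fixed colour plus noise. The palette, label codes, and example map are all invented for illustration.

```python
import numpy as np

# Hypothetical label-to-colour palette standing in for GauGAN's learned
# textures; the real system synthesises these regions with a neural network.
PALETTE = {0: (135, 206, 235),  # sky
           1: (34, 139, 34),    # forest
           2: (70, 130, 180)}   # water

def paint(label_map, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = label_map.shape
    image = np.zeros((h, w, 3), dtype=np.uint8)
    for label, colour in PALETTE.items():
        mask = label_map == label
        # Base colour plus per-pixel noise as a crude stand-in for texture.
        noise = rng.integers(-20, 21, size=(int(mask.sum()), 3))
        image[mask] = np.clip(np.array(colour) + noise, 0, 255)
    return image

# A 4x6 "colouring book" map: sky on top, forest bottom-left, water bottom-right.
segmentation = np.array([[0, 0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0, 0],
                         [1, 1, 1, 2, 2, 2],
                         [1, 1, 1, 2, 2, 2]])
canvas = paint(segmentation)
```

The user’s only job is drawing the label map; everything about how each region actually looks is the painter’s problem, which in GauGAN’s case is a generator trained on vast numbers of real landscape photos rather than a hard-coded palette.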
Take a look at the video to see more, and you can try the online demo for yourself here.