Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, an award-winning futurist, and the author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Pandora’s Box has been opened, and very soon you won’t be able to tell what content was created by a human, and what was created by a machine.
I’ve been talking about the rise of realistic, people-fooling synthetic content created by machines rather than people for years now, in my articles and keynotes, like this one in London that discusses the “Future of Trust.” So I’m going to put a stake in the ground here: in a couple of years’ time you won’t be able to tell real content from the latest state-of-the-art fake content. At all. Whether it’s synthetic photos and videos of landscapes, celebrities, and news, or the myriad other forms of content, including adverts, art, articles, computer games, music, podcasts, virtual bloggers, or even rudimentary short films. Unless, that is, you’re a machine or an Artificial Intelligence (AI) program, like these, that are discovering new ways to find the fakes. But make no mistake: by combining AI with increasingly powerful computing and rendering machines, and the human zest for creativity and mischief, we’ve opened Pandora’s box, and the future of content is only going to get trippier from here on in. For better and worse.
Now that my soapbox speech is over, this brings me on to the next piece of the “creating synthetic content” jigsaw puzzle, one that raises the table stakes in this game even more: the ability to use creative machines to generate not just photorealistic fake celebrity faces but fake human bodies too.
Researchers from Japan have just shown off photorealistic DeepFakes of people, complete with clothing, who’ve only ever existed in the fevered imagination of a neural network. The company responsible for this jaw-dropping tech demo is DataGrid, a startup based on the campus of Japan’s Kyoto University.
As the video shows, the AI algorithm can dream up an endless parade of realistic-looking humans who constantly shapeshift from one form to another, courtesy of some dazzling morphing effects.
Like many so-called generative artificial intelligence tools, including the ones used to create the artwork that recently sold at Christie’s for over $400,000, this latest demonstration was built using a Generative Adversarial Network (GAN). A GAN is an AI architecture that pits two artificial neural networks against one another, and in my opinion GANs are incredibly interesting because they’re currently being used to create everything from new products and inter-planetary landers to synthetic content, and even to hack the biometric security systems on smartphones.
In this latest case, one network generates new images, while the other attempts to work out which images are computer generated and which are not. Over time, this adversarial process lets the “generator” network become so good at creating images that it can reliably fool the “discriminator.”
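To make the generator-versus-discriminator idea concrete, here is a deliberately tiny toy sketch of that adversarial loop, nothing like DataGrid’s actual image system: a one-line “generator” and a logistic-regression “discriminator” play the GAN game over simple 1-D numbers instead of pictures. All the variable names, hyperparameters, and the choice of a Gaussian target are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centred on 4.0.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: an affine map g(z) = w*z + b applied to noise z ~ N(0, 1).
w, b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(a*x + c),
# outputting the probability that x is real.
a, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b
    real = sample_real(batch)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    grad_a = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(a * fake + c)
    gx = -(1.0 - d_fake) * a          # dL_G/dx for L_G = -log d(x)
    w -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

print(f"generator offset b = {b:.2f} (real data is centred on 4.0)")
```

Even in this stripped-down form, the dynamic the article describes is visible: the discriminator keeps finding a statistic that separates real from fake, and the generator keeps shifting its output (here, the offset `b` drifts toward the real data’s centre) until the discriminator can no longer tell the two apart. Swap the scalars for deep convolutional networks and the numbers for pixels, and you have the family of models behind these photorealistic people.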
As can be seen from the video, the results are impressive. They don’t show the image artifacts or strange glitches that have marked out many past attempts at generating images. That said, it’s likely no coincidence that the video shows humans posed against plain white backdrops, minimizing the risk of busy backgrounds confusing the generated images. In time, and a short time at that, that capability will come too.
Provided the research is as it seems, this is a fascinating advance in the world of photorealistic content creation. And, to be frank, if we were employed as movie extras or as catalogue models for clothing brands, the latter of which are already feeling the pinch from virtual bloggers and models like Lil Miquela, we’d probably be feeling a little nervous right now. And that’s before we start evaluating the technology’s ability to eventually help people create even better fake news content.
It’s time to shut the box people! Ah, we can’t… crap! Well, at least we can also use the same tech to create entertaining and exciting new content – and that’s awesome! Scary awesome. Has there ever been a more exciting yet dangerous piece of tech!? Hmmmm let’s save the answer to that one for another post…