Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
Being able to clone and mimic people’s voices opens up a range of well-meaning opportunities, but it also paves the way for fake news chaos and everything in between…
Fake news is already a problem today, and in the near future it will only become a bigger one, so it’s hard not to be concerned about the kind of mimicry that today’s Artificial Intelligence (AI) technology is making possible. First, researchers developed deep learning AI that can superimpose one person’s face onto another person’s body. Now, researchers at Chinese search giant Baidu have created an AI they claim can learn to accurately mimic your voice — based on less than a minute’s worth of listening to it.
“From a technical perspective, this is an important breakthrough showing that a complicated generative modelling problem, namely speech synthesis, can be adapted to new cases by efficiently learning only from a few examples,” said Leo Zou, a member of Baidu’s communications team. “Previously it would take numerous examples for a model to learn. Now, it takes a fraction of what it used to.”
And the company isn’t alone in its ability to mimic real voices from just a minute’s worth of audio. Last year I covered Adobe’s Voco product, essentially “Photoshop but for voice,” and a company called Lyrebird, which used neural networks to replicate voices, including those of President Donald Trump and former President Barack Obama, from a relatively small number of samples. Like Lyrebird’s work, Baidu’s speech synthesis technology doesn’t sound completely convincing, but it’s an impressive step forward, and way ahead of many of the robotic AI voice assistants that existed just a few years ago.
The work is based around Baidu’s text-to-speech synthesis system Deep Voice, which was trained on upwards of 800 hours of audio from a total of 2,400 speakers. It needs just one hundred 5-second samples of vocal training data to sound its best, but a version trained on only ten 5-second samples was still able to trick a voice recognition system more than 95 percent of the time.
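The core idea behind this kind of few-shot voice cloning is speaker adaptation: a large multi-speaker model is trained once, its weights are then frozen, and for a new voice only a small speaker embedding is fitted against the handful of available samples. The toy sketch below illustrates that principle only — it is not Baidu’s Deep Voice; the “synthesizer” is a stand-in linear map, and all names, dimensions, and data are hypothetical.

```python
# Illustrative sketch of few-shot speaker adaptation (NOT Baidu's Deep Voice):
# the shared synthesizer W stays frozen; only the new speaker's embedding is
# fitted by gradient descent against a few short "clips".
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, FEAT_DIM = 8, 16  # hypothetical sizes, far smaller than a real TTS model

# Frozen "multi-speaker synthesizer": maps a speaker embedding to acoustic features.
W = rng.normal(size=(FEAT_DIM, EMB_DIM))

def synthesize(embedding):
    """Stand-in for a full TTS decoder conditioned on a speaker embedding."""
    return W @ embedding

# Pretend we observed a few short, noisy clips of a brand-new speaker,
# e.g. the ten 5-second samples mentioned in the article.
true_embedding = rng.normal(size=EMB_DIM)
clips = [synthesize(true_embedding) + rng.normal(scale=0.05, size=FEAT_DIM)
         for _ in range(10)]

# Adapt ONLY the embedding (the synthesizer weights never change).
emb = np.zeros(EMB_DIM)
lr = 0.01
for _ in range(2000):
    grad = np.zeros(EMB_DIM)
    for clip in clips:
        grad += W.T @ (synthesize(emb) - clip) / len(clips)
    emb -= lr * grad

# How close is the cloned voice to the real one, in feature space?
error = np.linalg.norm(synthesize(emb) - synthesize(true_embedding))
print(f"residual error after adaptation: {error:.3f}")
```

Because only a low-dimensional embedding is learned rather than millions of model weights, a few seconds of audio can be enough to pin it down — which is, in miniature, why the sample requirements quoted above can be so small.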
“We see many great use cases or applications for this technology. For example, voice cloning could help patients who have lost their voices. This is also an important breakthrough in the direction of personalised human-machine interfaces. For example, a mom can easily configure an audiobook reader to read her kids stories using her own voice,” (errr, freaky…), “the method [additionally] allows creation of original digital content. Hundreds of characters in a video game would be able to have unique voices because of this technology. Another interesting application is speech-to-speech language translation, as the synthesizer can learn to mimic the speaker identity in another language,” added Zou.