Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers, several governments including the G7, and organisations such as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
AI is sounding more and more human-like, and that’s an issue if you make a career out of your voice …
Over the past couple of years synthetic voices, in other words human voices that are generated by Artificial Intelligence (AI) rather than by actual people, have come a long way, with companies like Facebook and LyreBird able to replicate anyone’s voice from just a minute’s worth of audio, and tech giants like Google promoting Duplex, which they showed off a while ago, as the voice of Google Assistant.
As amazing as all these advances are, though, and as much as they help push synthetic voices past the uncanny valley, the point at which people can no longer tell whether a voice is real or synthetic, and as AI learns to hold human-like conversations rather than limiting itself to a few words or short sentences, for the most part none of these systems has so far been able to generate human-like voices that convey emotion, let alone a voice that you’d be happy listening to for hours on end.
Enter DeepZen, a voice synthesiser project that uses AI algorithms from IBM’s PowerAI and Watson technologies. DeepZen has developed text-to-speech tools that not only sound human at first listen, but can also pick up on the emotional cues needed to read text in a compelling manner. In doing so, the company claims it could reduce the time and cost of producing audiobooks by up to 90 percent.
Taylan Kamis, CEO and Co-founder of DeepZen, explains: “Our aim isn’t to put voice actors out of jobs, but rather to solve the capacity issues in the current market. We identify emotion in text automatically and use voice samples – for which we pay royalties to voice actors – combined with speech synthesis technology to produce convincing voice audio.”
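The pipeline Kamis describes, detecting the emotion in a passage of text and then steering the synthesis accordingly, can be sketched in miniature. DeepZen hasn’t published its methods or API, so everything below is illustrative: a toy keyword-based emotion classifier standing in for a trained model, emitting SSML-style prosody markup of the kind many commercial text-to-speech engines accept.

```python
# Toy sketch of an emotion-aware narration pipeline, loosely modelled on the
# approach DeepZen describes. The emotion lexicon and prosody settings here
# are invented for illustration and are not DeepZen's.

EMOTION_KEYWORDS = {
    "joy": {"delighted", "laughed", "wonderful", "smiled"},
    "sadness": {"wept", "mourned", "alone", "grief"},
    "fear": {"trembled", "shadow", "scream", "dread"},
}

# Illustrative per-emotion settings for an SSML <prosody> tag.
PROSODY = {
    "joy": {"rate": "105%", "pitch": "+10%"},
    "sadness": {"rate": "90%", "pitch": "-10%"},
    "fear": {"rate": "110%", "pitch": "+5%"},
    "neutral": {"rate": "100%", "pitch": "+0%"},
}

def classify_emotion(sentence: str) -> str:
    """Crude keyword lookup standing in for a trained emotion model."""
    words = set(sentence.lower().replace(",", " ").replace(".", " ").split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def to_ssml(sentence: str) -> str:
    """Wrap a sentence in prosody markup matching its detected emotion,
    ready to hand to a speech synthesiser."""
    p = PROSODY[classify_emotion(sentence)]
    return f'<prosody rate="{p["rate"]}" pitch="{p["pitch"]}">{sentence}</prosody>'
```

In a production system the keyword table would be replaced by the kind of large neural network Kamis mentions, but the shape of the pipeline, classify then condition the voice, stays the same.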
“To [create these voices], we needed to create large and complex neural networks. These require extensive amounts of processing power to produce accurate results fast, so we needed the right technology platform to bring our vision to life,” he added.
While DeepZen promises its technology isn’t going to put narrators out of a job, it’s hard to see how that won’t happen over the longer term. That conversation aside, in the interim the technology will no doubt be an invaluable tool in helping smaller publishers and indie authors create audiobooks without the hassle of hiring professional narrators.
And as for next steps, DeepZen has announced that it will be working with Audiowhale to commercialise its technology and bring it to authors and publishers everywhere.