Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers, several governments, and the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Over time we will all be interacting more and more with AI, and as a result companies are trying to “humanise” it.
BabyX, the virtual, artificially intelligent creation of Mark Sagar and his company Soul Machines, who also brought us Ava, a life-like digital avatar that’s now being used by Daimler and the Royal Bank of Scotland to support customers and sell financial services products, looks, sounds, and acts so much like a real baby that many of the people who have interacted with her have felt themselves producing a genuine emotional response, just like the kind you get when a real baby coos and giggles at you. And that’s exactly the point. BabyX is an experiment that Sagar began in 2015 to try to “humanise” AI and make the technology more appealing for humans to interact with in new and interesting ways.
Sagar is a driving force behind a trend that’s rapidly gaining ground within the tech community, the “humanisation of AI,” which he believes will become increasingly important in the future as we all begin to form new, symbiotic relationships with the AIs that will increasingly be all around us, from Alexa and Siri to the AIs providing customer service support and workplace assistance, and running our connected homes, among many others.
The world’s first “autonomous virtual infant”
Many AI experts argue that robots and AI systems can only realise their full potential if they become more like humans, with emotions and memories that help inform their behaviour and decision-making capabilities, and Sagar’s approach is arguably one of the most innovative in that his detailed, artistically rendered faces mask digital biological models and simulations of unprecedented complexity.
For example, every time BabyX smiles it’s because she’s perceived something with her “senses” that has triggered her digital neural network “brain” to release virtual endorphins, dopamine, and serotonin into her AI system. One visualisation layer in her brain, for instance, is connected to language and glows when she sees words and receives praise.
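The loop described above, where a perceived stimulus triggers the release of virtual neurotransmitters that in turn shape behaviour, can be sketched as a toy model. Everything here, from the stimulus names to the numbers, is a hypothetical illustration for readers, not Soul Machines’ actual system, which models biology at vastly greater depth:

```python
# Toy sketch of a virtual neurochemical reward loop, loosely inspired by
# the BabyX mechanism described above. All names, stimuli, and values
# are hypothetical illustrations, not Soul Machines' implementation.

class VirtualInfant:
    """Maps perceived stimuli to virtual neurotransmitter levels,
    which in turn drive a coarse facial expression."""

    # Hypothetical mapping: stimulus -> change in each chemical's level
    STIMULUS_EFFECTS = {
        "praise":  {"dopamine": 0.3,  "serotonin": 0.2,  "endorphins": 0.1},
        "smile":   {"dopamine": 0.2,  "serotonin": 0.1,  "endorphins": 0.2},
        "neglect": {"dopamine": -0.2, "serotonin": -0.3, "endorphins": 0.0},
    }

    def __init__(self):
        # Start every virtual neurotransmitter at a neutral baseline
        self.levels = {"dopamine": 0.5, "serotonin": 0.5, "endorphins": 0.5}

    def perceive(self, stimulus):
        """Update neurotransmitter levels in response to a stimulus."""
        for chem, delta in self.STIMULUS_EFFECTS.get(stimulus, {}).items():
            # Keep each level clamped to the range [0, 1]
            self.levels[chem] = min(1.0, max(0.0, self.levels[chem] + delta))

    def expression(self):
        """Derive a facial expression from the average 'mood'."""
        mood = sum(self.levels.values()) / len(self.levels)
        if mood > 0.6:
            return "smile"
        if mood < 0.4:
            return "frown"
        return "neutral"


baby = VirtualInfant()
baby.perceive("praise")
print(baby.expression())  # raised dopamine/serotonin push the mood to "smile"
```

The point of the design, as the article describes it, is that the smile is not scripted: it emerges from internal state that the stimulus changed, which is what makes the response feel genuine to people interacting with her.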
“Researchers have built lots of computational models of cognition and pieces of this, but no one has stuck them together,” Sagar told Bloomberg last year. “This is what we’re trying to do, wire them together and put them in an animated body. We are trying to make a central nervous system for human computing.”
So far the result of all this is that the team has created the world’s most detailed digital map of the human brain, all part of their larger goal of reverse engineering the “inner life of the human,” as they refer to it.
Soul Machines debuted its first AI face, Nadia, a prelude to the Ava mentioned above, in February last year. Nadia, who speaks with Cate Blanchett’s voice, is now working for Australia’s National Disability Insurance Agency, interacting with customers full time on the agency’s website. The goal is for her to be more usable and personable than the typical text-based chatbots we encounter online. Soul Machines has another ten trials underway with airlines, healthcare providers, and financial services firms.
As the technology improves, the plan is for BabyX and her siblings to have broader applications that are less reliant on users’ proximity to a computer screen. Over time these kinds of personable AIs could become the de facto way we interact with tomorrow’s self-driving cars, as well as, perhaps, tomorrow’s doctors, sales people, and even teachers, something I explore in my forthcoming Future of Education report.
While the research could lead to far more likeable, believable virtual assistants and other wonderful breakthroughs, there’s a long road ahead, and a number of unanswered ethical and practical questions remain. Would an AI stuck with customer service duty grow weary of pushing that proverbial rock up the logistical hill? Could an AI toddler be traumatised by the collective human fear of the uncanny likeness of non-human reactions to human emotions? And, of course, do virtual babies dream of electric cradles?
As BabyX learns to play the piano, laugh at jokes, and interact with humans, it’s easy to anthropomorphise her, and in time, as the technology continues to improve, a whole host of new questions will inevitably arise about our relationship with AI, from questions about addiction and love to questions of moral servitude and beyond.