Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify and track hundreds of game-changing emerging technologies, and to explain their impact on global culture, industry and society, is unparalleled. Recognised for the past five years running as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, one of Europe's largest energy companies, and his recent work includes mentoring XPRIZE teams, building the first generation of biocomputers, helping the world's largest manufacturing companies envision the next five generations of smartphones and devices, and what comes next, and helping companies including Qualcomm envision the next twenty years of semiconductors. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BOA, Blackrock, Bloomberg, Booz Allen Hamilton, BCG, Bentley, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JPMorgan Chase, KPMG, Lloyds Banking Group, McKinsey, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroders, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
Over time we will all be interacting more and more with AI, and as a result companies are trying to "humanise" it.
BabyX is the virtual, artificially intelligent creation of Mark Sagar and his company Soul Machines, who also brought us Ava, a lifelike digital avatar now being used by Daimler and Royal Bank of Scotland to support customers and sell financial services products. BabyX looks, sounds, and acts so much like a real baby that many of the people who have interacted with her have felt themselves producing a genuine emotional response, just like the kind you get when a real baby coos and giggles at you. And that's exactly the point. BabyX is an experiment Sagar began in 2015 to try to "humanise" AI and make the technology more appealing for humans to interact with in new and interesting ways.
Sagar is a driving force behind a trend that's rapidly gaining ground within the tech community, the "humanisation of AI," which he believes will become increasingly important in the future as we all begin to form new, symbiotic relationships with the AIs that will increasingly be all around us, from Alexa and Siri to the AIs providing customer service support and workplace assistance, and running our connected homes, among many others.
The world’s first “autonomous virtual infant”
Many AI experts argue that robots and AI systems can only realise their full potential if they become more like humans, with emotions and memories that help inform their behaviour and decision-making capabilities. Sagar's approach is arguably one of the most innovative in that his detailed, artistically rendered faces mask digital biological models and simulations of unprecedented complexity.
For example, every time BabyX smiles it’s because she’s perceived something with her “senses” that has triggered her digital neural network “brain” to release virtual endorphins, dopamine, and serotonin into her AI system. One visualisation layer in her brain that’s connected to language and praise, for example, glows when she sees words and receives praise.
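Soul Machines hasn't published the internals of this system, but the basic loop the article describes, a stimulus triggering virtual neurotransmitter releases that in turn drive an expression, can be sketched as a toy model. Every class name, stimulus, chemical level and threshold below is an illustrative assumption, not Soul Machines' actual implementation:

```python
# Toy sketch of a stimulus -> virtual-neurotransmitter -> expression loop.
# All names, stimuli and thresholds are illustrative assumptions,
# not Soul Machines' real model.
from dataclasses import dataclass, field


@dataclass
class VirtualBrain:
    # Simulated neurotransmitter levels, each decaying toward zero over time.
    levels: dict = field(default_factory=lambda: {
        "dopamine": 0.2, "serotonin": 0.2, "endorphin": 0.2})
    DECAY = 0.7  # fraction of each level retained per time step (assumed)

    # Hypothetical mapping from perceived stimuli to chemical "releases".
    STIMULI = {
        "praise":   {"dopamine": 0.5, "serotonin": 0.3},
        "peekaboo": {"endorphin": 0.4, "dopamine": 0.2},
        "silence":  {},  # no reward signal
    }

    def perceive(self, stimulus: str) -> str:
        # A perceived stimulus releases its chemicals (capped at 1.0) ...
        for chem, boost in self.STIMULI.get(stimulus, {}).items():
            self.levels[chem] = min(1.0, self.levels[chem] + boost)
        # ... and the resulting chemistry drives the facial expression.
        return self.express()

    def express(self) -> str:
        # Expression is read off the overall "reward" chemistry,
        # then all levels decay toward baseline.
        reward = self.levels["dopamine"] + self.levels["endorphin"]
        self.levels = {k: v * self.DECAY for k, v in self.levels.items()}
        if reward > 0.8:
            return "smile"
        if reward > 0.4:
            return "attentive"
        return "neutral"


baby = VirtualBrain()
print(baby.perceive("praise"))   # reward chemistry spikes -> "smile"
print(baby.perceive("silence"))  # levels decay, expression fades
```

The point of the sketch is the architecture, not the numbers: behaviour emerges from internal state that stimuli modify and time erodes, rather than from a fixed stimulus-to-response lookup, which is what lets repeated praise produce a lingering, gradually fading smile.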
“Researchers have built lots of computational models of cognition and pieces of this, but no one has stuck them together,” Sagar told Bloomberg last year. “This is what we’re trying to do, wire them together and put them in an animated body. We are trying to make a central nervous system for human computing.”
The result so far is the world's most detailed digital map of the human brain, all part of the team's larger goal of reverse engineering the "inner life of the human," as they refer to it.
Soul Machines debuted its first AI face, Nadia, a prelude to the Ava mentioned above, in February last year. Nadia, who speaks with Cate Blanchett's voice, is now working for Australia's National Disability Insurance Agency, interacting with customers full-time on the agency's website. The goal is to be more usable and personable than the typical text-based chatbots we encounter online. Soul Machines has another ten trials underway with airlines, healthcare providers and financial services firms.
As the technology improves, the plan is that BabyX and her siblings will have broader applications that are less reliant upon users' proximity to a computer screen. Over time these kinds of personable AIs could become the de facto way we interact with tomorrow's self-driving cars, as well as, perhaps, tomorrow's doctors, sales people and even teachers, something I explore in my forthcoming Future of Education report.
While the research could lead to far more likeable, believable virtual assistants and other wonderful breakthroughs, there's a long road ahead, and a number of unanswered ethical and practical questions remain. Would an AI stuck with customer service duty grow weary of pushing that proverbial rock up the logistical hill? Could an AI toddler be traumatised by the collective human fear of the uncanny likeness of non-human reactions to human emotions? And, of course, do virtual babies dream of electric cradles?
As BabyX learns to play the piano, laugh at jokes, and interact with humans, it's easy to anthropomorphise her, and in time, as the technology continues to improve, there will inevitably be a whole host of new questions we need to ask about our relationship with AI, from questions about addiction and love to questions of moral servitude and beyond.