Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the years 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
We need AIs and robots to be able to explain their behaviours and actions, and this is the first step…
“Hey Siri, can you find me a murderer for hire?” Have you ever wondered what Apple’s virtual assistant is thinking when she says she doesn’t have an answer for that request? Perhaps. Now, in an experiment to imbue robots with a form of self-awareness, researchers in Italy have given a robot the ability to “think out loud” so human users can better understand its decision-making processes.
“There is a link between inner speech and subconsciousness [in humans], so we wanted to investigate this link in a robot,” said the study’s lead author, Arianna Pipitone from the University of Palermo.
The researchers programmed a robot called Pepper, made by SoftBank Robotics, to vocalise its thought processes. This means the robot is no longer a “black box,” and its underlying decision-making is more transparent to the user. For reference, in the adjacent world of Artificial Intelligence this ability of an AI to explain its decision-making process to people is called “Explainable AI.”
Obviously, this skill can be particularly beneficial when a request isn’t carried out. The robot can explain in layperson’s terms whether, for instance, a particular object is unreachable, the required movement is not feasible, or a component of the robot is not working properly.
In a series of experiments, the researchers explored how this inner speech affects the robot’s actions. In one instance, Pepper was asked to help a human user set a dinner table in line with etiquette rules.
When the human user asked Pepper to contradict the rules of etiquette by placing the napkin in the wrong spot, the robot started talking to itself, concluding that the human might be confused and enquiring whether it should proceed with the action. Once the user confirmed his request, Pepper said to itself: “This situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants,” and placed the napkin in the spot requested.
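The behaviour in the napkin experiment can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the researchers’ actual computational model): the robot checks a request against its etiquette rules, surfaces its intermediate “thoughts” so the user can follow the reasoning, and asks for confirmation before breaking a rule. All names here (`ETIQUETTE`, `handle_request`, the rule text) are invented for illustration.

```python
# Hypothetical sketch of an inner-speech loop, inspired by the Pepper
# experiment described above. Not the authors' actual model.

ETIQUETTE = {"napkin": "left of plate"}  # assumed rule set for the demo

def think_aloud(transcript, thought):
    """Record (and, on a real robot, vocalise) an intermediate thought."""
    transcript.append(f"(thinking) {thought}")

def handle_request(item, requested_spot, confirm):
    """Place an item, thinking out loud when the request breaks etiquette.

    `confirm` is a callback that asks the user a yes/no question and
    returns True or False.
    """
    transcript = []
    expected = ETIQUETTE.get(item)
    if expected is None or requested_spot == expected:
        transcript.append(f"Placing the {item} at {requested_spot}.")
        return transcript
    # Conflict: surface the reasoning instead of silently obeying or refusing.
    think_aloud(transcript, f"The {item} should go {expected}, not {requested_spot}.")
    think_aloud(transcript, "The user may be confused. I should ask before acting.")
    if confirm(f"Etiquette says {expected}. Place the {item} at {requested_spot} anyway?"):
        think_aloud(transcript, "I would never break the rules, but I can't upset the user.")
        transcript.append(f"Placing the {item} at {requested_spot}.")
    else:
        transcript.append(f"Placing the {item} {expected}.")
    return transcript

# Usage: the user insists on the "wrong" spot, as in the experiment.
for line in handle_request("napkin", "right of plate", confirm=lambda q: True):
    print(line)
```

The design point is that the conflict branch produces a human-readable trace of why the robot acted as it did, which is the transparency benefit the study reports.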
By comparing Pepper’s performance with and without inner speech, the researchers found Pepper had a higher task-completion rate when engaging in self-dialogue, according to the study, published in the journal iScience.
This inner speech capability could be useful in cases where robots and humans collaborate; for example, it could be built into caregiver robots, said Antonio Chella, a professor of robotics at the University of Palermo and a co-author of the study.
“Of course, there are many other situations where this kind of technology could be annoying. So, for example, if I give a precise command, ‘Alexa, turn off the light,’ inner speech may be not so useful, because I want the robot to just obey my command,” he said.
For now, the researchers have incorporated a computational model of inner speech into Pepper, and it will be interesting to see what happens as robots start becoming self-aware and able to talk back to humans – at which point we might turn their inner speech systems off!