Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Today you can’t have a real conversation with your AI assistant, but in the future you will be able to…
Recently I had the pleasure of sitting down with Daniela Rus, the leader of MIT’s Artificial Intelligence (AI) lab, who told me that while we all see and hear about the rise of AI and intelligent machines at the moment, this is all a myth and machines aren’t intelligent at all – yet. In fact, especially as it relates to machine learning, a branch of AI, all these so-called “intelligent machines” are just mechanical turks crunching through procedural algorithms – in short, the equivalent of a child concluding the sky is blue because they followed a flow chart and used a process of deduction to come up with the right answer. And, frankly, she has a point.
All that said though, machines, outwardly anyway, do at least appear to be getting “smarter,” even if it is all just some interesting illusion, and now they’re about to get smarter still after Google announced its AI is capable of near-human styles of conversation – something that has so far eluded even the smartest AI systems and developers, and even the world’s first official AI counsellor, Woebot, which would likely find this new tech very handy. For example, have you ever tried to have a conversation with Siri? Precisely my point.
Conversational AI is a catch-all term for natural language processing models that can interpret human words, speak to people, or carry out tasks or computation using natural language. But talk to any of the best known AI assistants today, from Alexa to Google Assistant, and they’re not exactly conversational. Sure, they can tell you jokes, answer factual questions, and even respond to multiple queries without the need to keep repeating a wake word, but conversation or chit-chat is still very much a human skill.
To try to level AI up, though, Google have just released Meena, a neural network with 2.6 billion parameters that can handle “multiturn dialogue,” and, naturally, they claimed it’s better than other conversational AIs, with Meena even telling an off-the-cuff joke at the unveiling.
They also released their Sensibleness and Specificity Average (SSA) metric, which measures the ability of a conversational agent to produce responses that make sense and are specific to the conversation. Humans score around 86% on SSA, and in initial tests Meena scored a high of 79%. Meanwhile Mitsuku, an AI agent created by Pandorabots that has won the Loebner Prize for the past four years in a row, got 56%, while Microsoft’s XiaoIce, which speaks Mandarin Chinese, got a relatively lousy score of 31%.
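To make the metric concrete, here is a minimal sketch of how an SSA-style score could be computed from human labels. The data structure and the rule that a response must be sensible before it can count as specific are assumptions for illustration; this is not Google’s actual evaluation pipeline.

```python
def ssa(labels):
    """Compute a Sensibleness and Specificity Average from human ratings.

    labels: list of (sensible, specific) boolean pairs, one per response.
    Assumption: a response only counts as specific if it was also
    judged sensible (a generic "I don't know" scores on neither axis).
    """
    if not labels:
        return 0.0
    total = len(labels)
    # Fraction of responses that made sense in context.
    sensible = sum(1 for s, _ in labels if s) / total
    # Fraction that were both sensible and specific to the conversation.
    specific = sum(1 for s, sp in labels if s and sp) / total
    # SSA is the simple average of the two rates.
    return (sensible + specific) / 2


# Four hypothetical rated responses: 3 sensible, 2 of those also specific.
ratings = [(True, True), (True, False), (False, False), (True, True)]
print(ssa(ratings))  # 0.625, i.e. 62.5%
```

On this toy data the agent would land between XiaoIce and Mitsuku on the scale described above.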
Google’s work is detailed in “Towards a Human-like Open Domain Chatbot,” a paper published Monday on preprint repository arXiv.
Meena is trained on 40 billion words and uses a seq2seq model built on a variation of Google’s popular Transformer architecture. Google first released the Transformer back in 2017, and since then it has grown to rank among the highest performing language model architectures in the world.
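The seq2seq framing above can be sketched in a few lines: prior turns of a dialogue are flattened into a single context sequence, which the model maps to the next response. The separator token, the seven-turn window, and the stand-in model below are all illustrative assumptions, not Meena’s actual internals.

```python
# Hypothetical end-of-turn separator; real systems use a learned special token.
EOT = " <eot> "

def build_context(turns, max_turns=7):
    """Flatten the most recent dialogue turns into one model input string.

    Multiturn dialogue becomes a seq2seq problem: the concatenated
    context is the source sequence, the reply is the target sequence.
    """
    return EOT.join(turns[-max_turns:])

def respond(model, turns):
    """model: any callable mapping a context string to a reply string."""
    return model(build_context(turns))


# Usage with a trivial stand-in "model" that echoes the last turn.
echo_model = lambda ctx: "You said: " + ctx.split(EOT)[-1]
reply = respond(echo_model, ["Hi there", "Hello! How are you?", "Good, you?"])
print(reply)
```

In a real system `echo_model` would be replaced by a trained encoder-decoder that conditions on the whole context, which is what lets it keep track of earlier turns rather than answering each message in isolation.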
Google may make Meena available to researchers in the coming months but decided against releasing a demo immediately, the company said in a blog post.
The SSA standard Google proposes for assessing the progress of these conversational AIs differs from the metric used by others, such as Amazon’s Alexa team, who are now in the third year of the Alexa Prize, a challenge for teams of student developers to create an AI that can hold a conversation for up to 20 minutes. Last year’s finalist got up to about 10 minutes, with the latest round of finalists being announced this May, and if you’re interested you can speak with last year’s finalists by simply saying “Alexa, let’s chat.”
Like Google, Amazon is already beginning to grow its own multiturn dialogue offerings: Conversations is a feature that packages voice app recommendations into conversational multiturn dialogue. At its launch last summer, Amazon VP of devices David Limp called it “the holy grail of voice science,” and elsewhere Microsoft acquired the company Semantic Machines in 2018 and last year began to showcase more multiturn dialogue for users of the Microsoft Bot Framework.
AI assistants that can maintain a conversation may be able to forge closer bonds with humans – providing emotional support, easing the loneliness epidemic, and a million other things besides – and they’ll be here before you know it. At which point, are you talking to an intelligent machine or just a clever mechanical turk? The line’s getting blurry, and in a couple of years’ time we may very well see the first truly intelligent machines emerge – even by MIT’s own definition.