As Artificial Intelligence (AI) firmly cements its role on the battlefield, one problem the US military faces is developing it to the point where AIs and soldiers can collaborate effectively with one another, and that means being able to communicate and question one another when the need arises.
Now US Army researchers have announced they’ve developed a “novel AI that allows robots to ask clarifying questions to soldiers, enabling them to be more effective teammates in tactical environments.”
In other words, if the AIs and robots aren’t sure about something, or about the context of something, now all they have to do is ask a question. And bearing in mind where AI is today in its overall evolution, that’s an incredibly interesting development, and one that could eventually have a positive impact on the development of both conversational and explainable AI systems.
There’s no doubting that future Army missions will have autonomous agents, such as robots, embedded in human teams making decisions in the physical world. And one major challenge toward this goal is maintaining performance when a robot encounters something it has not previously seen — for example, a new object or location.
Robots will need to be able to learn these novel concepts on the fly in order to support the team and the mission.
“Our research explores a novel method for this kind of robot learning through interactive dialogue with human teammates,” said Dr. Felix Gervits, researcher at the US Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “We created a computational model for automated question generation and learning. The model enables a robot to ask effective clarification questions based on its knowledge of the environment and to learn from the responses. This process of learning through dialogue works for learning new words, concepts and even actions.”
Researchers integrated this model into a cognitive robotic architecture and demonstrated that this approach to learning through dialogue is promising for Army applications.
In previous research, Gervits and team conducted an empirical study to explore and model how humans ask questions when controlling a robot. This led to the creation of the Human-Robot Dialogue Learning, or HuRDL, corpus, which contains labelled dialogue data that categorizes the form of questions that study participants asked.
The HuRDL corpus serves as the empirical basis for the computational model for automated question generation, Gervits said.
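To make the idea of labelled dialogue data concrete, here is a minimal sketch of what a HuRDL-style entry might look like, with each question a study participant asked tagged by its form. The field names and category labels are illustrative assumptions, not the actual corpus schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class LabelledQuestion:
    utterance: str        # what the participant actually asked
    category: str         # hypothetical label for the question's form
    target_concept: str   # the unknown concept the question is about

# Toy sample in the spirit of the corpus, not real HuRDL data.
corpus = [
    LabelledQuestion("What color is it?", "attribute-color", "object"),
    LabelledQuestion("Is it red or blue?", "attribute-color", "object"),
    LabelledQuestion("Is it on the table?", "location", "object"),
    LabelledQuestion("Do you mean the small one?", "disambiguation", "object"),
]

# Frequency statistics over question forms are the kind of empirical
# grounding a question-generation model could draw on.
counts = Counter(q.category for q in corpus)
print(counts.most_common(1))  # the most frequent question form in this sample
```

In this toy sample the attribute-color form dominates, which is the sort of distributional signal that could inform which question form a robot tries first.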
The model uses a decision network, which is a probabilistic graphical model that enables a robot to represent world knowledge from its various sensory modalities, including vision and speech. It reasons over these representations to ask the best questions to maximize its knowledge about unknown concepts.
For example, he said, if a robot is asked to pick up some object that it has never seen before, it might try to identify the object by asking a question such as “What color is it?” or another question from the HuRDL corpus.
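The core intuition of reasoning over uncertain world knowledge to pick the most useful question can be sketched with a simple information-gain calculation. This is an illustrative toy, not the DEVCOM decision-network model: the candidate objects, their attributes, and the uniform-probability assumption are all made up for the example.

```python
import math
from collections import Counter

# Hypothetical candidate objects the robot cannot yet tell apart.
candidates = [
    {"color": "red", "size": "small", "location": "table"},
    {"color": "red", "size": "large", "location": "shelf"},
    {"color": "blue", "size": "small", "location": "table"},
    {"color": "blue", "size": "large", "location": "table"},
]

def expected_entropy(attribute, objects):
    """Expected remaining entropy over candidates after asking about
    `attribute`, assuming each candidate is equally likely and the
    teammate answers truthfully."""
    n = len(objects)
    answer_counts = Counter(o[attribute] for o in objects)
    # Each answer narrows the pool to candidates sharing that value;
    # weight each outcome by the probability of receiving that answer.
    return sum((c / n) * math.log2(c) for c in answer_counts.values())

def best_question(objects):
    attributes = objects[0].keys()
    # The attribute with the lowest expected remaining entropy is the
    # most informative thing to ask about.
    return min(attributes, key=lambda a: expected_entropy(a, objects))

print(f"What {best_question(candidates)} is it?")
```

With these candidates, asking about color or size splits the pool evenly while asking about location barely helps, so the sketch picks a color-style question much like the “What color is it?” example from the corpus.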
The question generation model was integrated into the Distributed Integrated Affect Reflection Cognition, or DIARC, robot architecture originating from collaborators at Tufts University.
In a proof-of-concept demonstration in a virtual Unity 3D environment, the researchers showed a robot learning through dialogue to perform a collaborative tool organization task.
Gervits said while prior ARL research on Soldier-robot dialogue enabled robots to interpret Soldier intent and carry out commands, there are additional challenges when operating in tactical environments.
For example, a command may be misunderstood due to loud background noise, or a Soldier can refer to a concept with which a robot is unfamiliar. As a result, Gervits said, robots need to learn and adapt on the fly if they are to keep up with Soldiers in these environments.
“With this research, we hope to improve the ability of robots to serve as partners in tactical teams with Soldiers through real-time generation of questions for dialogue-based learning,” Gervits said. “The ability to learn through dialogue is beneficial to many types of language-enabled agents, such as robots, sensors, etc., which can use this technology to better adapt to novel environments.”
Such technology can be employed on robots in remote collaborative interaction tasks such as reconnaissance and search-and-rescue, or in co-located human-agent teams performing tasks such as transport and maintenance.
This research is different from existing approaches to robot learning in that the focus is on interactive human-like dialogue as a means to learn. This kind of interaction is intuitive for humans and prevents the need to develop complex interfaces to teach the robot, Gervits said.
Another innovation of the approach is that, unlike many deep learning approaches, it does not rely on extensive training data.
Deep learning requires significantly more data to train a system, and such data is often difficult and expensive to collect, especially in Army task domains, Gervits said. Moreover, there will always be edge cases that the system hasn’t seen, and so a more general approach to learning is needed.
Finally, this research addresses the issue of explainability.
“This is a challenge for many commercial AI systems in that they cannot explain why they made a decision,” Gervits said. “On the other hand, our approach is inherently explainable in that questions are generated based on a robot’s representation of its own knowledge and lack of knowledge. The DIARC architecture supports this kind of introspection and can even generate explanations about its decision-making. Such explainability is critical for tactical environments, which are fraught with potential ethical concerns.”
“I am optimistic that this research will lead to a technology that will be used in a variety of Army applications,” Gervits said. “It has the potential to enhance robot learning in all kinds of environments and can be used to improve adaptation and coordination in Soldier-robot teams.”
Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series.
Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals and regulators around the world envision, build and lead an inclusive, sustainable future.
A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries.
Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.