
WHY THIS MATTERS IN BRIEF

Humans and robots are still very different, but self-awareness has clear advantages, which is why researchers are stepping up their efforts in the field.


We’ve already seen robots becoming conscious, albeit in a very odd way, as well as robots being given their own inner voices to guide them. Now robots that can pass cognitive tests, such as recognising themselves in a mirror, and that can be programmed with a human sense of time are showing how machines are being shaped to become a bigger part of our everyday lives.

In 2016, for the first time ever, the number of robots in homes, the military, shops, and hospitals surpassed the number used in industry. Instead of being concentrated in factories, robots are a growing presence in people’s homes and lives – a trend that is likely to accelerate as they become more sophisticated and ‘sentient’.

“If we take out the robot from a factory and into a house, we want safety,” said Dr Pablo Lanillos, an assistant professor at Radboud University in the Netherlands.

And for machines to safely interact with people, they need to be more like humans, experts like Dr Lanillos say. To that end, he has designed an algorithm that enables robots to recognise themselves in a similar way to humans.

A major distinction between humans and robots is that our senses are faulty, feeding misleading information into our brains.

“We have really imprecise proprioception – awareness of our body’s position and movement. For example, our muscles have sensors that are not precise versus robots, which have very precise sensors,” he said.

The human brain takes this imprecise information to guide our movements and understanding of the world, and robots are not used to dealing with uncertainty in the same way.

“In real situations, there are errors, differences between the world and the model of the world that the robot has,” Dr Lanillos said. “The problem we have in robots is that when you change any condition, the robot starts to fail.”

At age two, humans can tell the difference between their bodies and other objects in the world. But this computation, which a two-year-old brain can perform, is very complicated for a machine, and that difficulty makes it hard for robots to navigate the world.

The algorithm that Dr Lanillos and colleagues developed in a project called SELFCEPTION enables three different robots to distinguish their ‘bodies’ from other objects.

Their test robots included one composed of arms covered with tactile skin, another with known sensory inaccuracies, and a commercial model. They wanted to see how the robots would respond, given their different ways of collecting ‘sensory’ information.

One test the algorithm-aided robots passed was the rubber hand illusion, originally used on humans.

“We put a plastic hand in front of you, cover your real hand, and then start to stimulate your covered hand and the fake hand that you can see,” Dr Lanillos said.

Within minutes, people begin to think that the fake hand is their hand.
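Computationally, this kind of visual ‘capture’ falls out of precision-weighted cue fusion, the Bayesian-style integration that predictive-processing accounts of body perception rely on. The sketch below is purely illustrative (the function name and the numbers are invented, not the project’s code): a precise visual cue pulls the fused hand-position estimate away from an imprecise proprioceptive one.

```python
def fuse_estimates(mu_proprio, var_proprio, mu_visual, var_visual):
    """Precision-weighted fusion of two noisy position estimates.

    Each cue contributes in proportion to its precision (1 / variance),
    so a sharp visual cue can drag the fused estimate away from a vague
    proprioceptive one -- the drift seen in the rubber hand illusion.
    """
    w_p = 1.0 / var_proprio   # precision of proprioception
    w_v = 1.0 / var_visual    # precision of vision
    mu = (w_p * mu_proprio + w_v * mu_visual) / (w_p + w_v)
    var = 1.0 / (w_p + w_v)   # fused estimate is more certain than either cue
    return mu, var

# Proprioception says the hand is at 0 cm but is imprecise (variance 9);
# vision of the fake hand at 15 cm is precise (variance 1).
mu, var = fuse_estimates(0.0, 9.0, 15.0, 1.0)
print(mu)  # 13.5 -- the felt hand position sits close to the fake hand
```

Because the visual cue is nine times more precise here, it dominates the fusion, which is one way to read why the illusion works on people with “really imprecise proprioception.”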

The goal was to deceive a robot with the same illusion that confuses humans. This is a measure of how well multiple sensors are integrated and how the robot is able to adapt to situations. Dr Lanillos and his colleagues made a robot experience the fake hand as its hand, similar to the way a human brain would.

The second test was the mirror test, which was originally proposed by primatologists. In this exercise, a red dot is put on an animal or person’s forehead, then they look at themselves in a mirror. Humans, and some animal subjects like monkeys, try to rub the red dot off of their face rather than off the mirror.

The test is a way to determine how self-aware an animal or person is. Human children are usually able to pass the test by their second birthday.

The team trained a robot to ‘recognise’ itself in the mirror by connecting the movement of limbs in the reflection with its own limbs. Now they are trying to get a robot to rub off the red dot.
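One simple way to implement that kind of self-recognition, sketched here with invented names and toy data rather than the team’s actual method, is to correlate the robot’s own motor commands with the motion it extracts from the mirror image: motion that tracks the commands gets classified as ‘self’.

```python
def looks_like_self(commands, observed, threshold=0.9):
    """Classify mirrored motion as 'self' if it correlates with commands.

    commands: the robot's own joint velocities over a window of steps.
    observed: motion extracted from the mirror image over the same steps.
    Returns True when the Pearson correlation exceeds the threshold.
    """
    n = len(commands)
    mc = sum(commands) / n
    mo = sum(observed) / n
    cov = sum((c - mc) * (o - mo) for c, o in zip(commands, observed))
    sd_c = sum((c - mc) ** 2 for c in commands) ** 0.5
    sd_o = sum((o - mo) ** 2 for o in observed) ** 0.5
    return cov / (sd_c * sd_o) > threshold

cmds = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5]
mirror = [c + 0.02 for c in cmds]             # reflection tracks the commands
stranger = [0.3, -0.8, 0.1, 0.9, -0.2, 0.4]   # unrelated motion

print(looks_like_self(cmds, mirror))    # True
print(looks_like_self(cmds, stranger))  # False
```

The same contingency test would let the robot single out the dot on its own forehead: only the limb (or head) whose observed motion is contingent on its commands belongs to its body.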

The next step in this research is to integrate more sensors in the robot – and increase the information it computes – to improve its perception of the world. A human has about 130 million receptors in their retina alone, and 3,000 touch receptors in each fingertip, says Dr Lanillos. Dealing with large quantities of data is one of the crucial challenges in robotics.

“Solving how to combine all this information in a meaningful way will improve body awareness and world understanding,” he said.

Improving the way robots perceive time can also help them operate in a more human way, allowing them to integrate more easily into people’s lives. This is particularly important for assistance robots, which will interact with people and have to co-operate with them to achieve tasks. These include service robots which have been suggested as a way to help care for the elderly.

“Humans’ behaviour, our interaction with the world, also depends on our perception of time,” said Anil Seth, co-director of the Sackler Centre for Consciousness Science at the University of Sussex, UK. “Having a good sense of time is important for any complex behaviour.”

Prof. Seth collaborated on a project called TimeStorm which examined how humans perceive time, and how to use this knowledge to give machines a sense of time, too.

Inserting a clock into a robot would not give it temporal awareness, according to Prof. Seth.

“Humans – or animals – don’t perceive time by having a clock in our heads,” he said. There are biases and distortions to how humans perceive time, he says.

Warrick Roseboom, a cognitive scientist also at the University of Sussex who spearheaded the university’s TimeStorm efforts, created a series of experiments to quantify how people experienced the passage of time.

“We asked humans to watch different videos of a few seconds up to about a minute and tell us how long they thought the video was,” Roseboom said. The videos were first-person perspectives of everyday tasks, such as walking around campus or sitting in a cafe. Subjects experienced time differently from the actual duration, depending on how busy the scene was.

Using this information, the researchers built a system based on deep learning that could mimic the human subjects’ perception of the video durations.
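The published system learned from the activations of a deep network; the toy sketch below (all names and thresholds are illustrative, not the project’s code) captures only the core intuition, that busier footage accumulates more subjective ‘ticks’ and therefore feels longer.

```python
def perceived_duration(frames, seconds_per_tick=0.5, change_threshold=10.0):
    """Toy subjective-duration estimator.

    Accumulate a 'tick' whenever the scene changes enough between
    consecutive frames, then convert ticks to seconds. Busy footage
    triggers more ticks, so it 'feels' longer -- mirroring the bias
    Roseboom's experiments measured in human viewers.

    frames: list of flattened pixel arrays (lists of numbers).
    """
    ticks = 0
    for prev, cur in zip(frames, frames[1:]):
        change = sum(abs(a - b) for a, b in zip(prev, cur))
        if change > change_threshold:
            ticks += 1
    return ticks * seconds_per_tick

# A busy scene (large frame-to-frame changes) vs a static cafe shot.
busy = [[i * 5.0] * 4 for i in range(10)]   # each step changes by 20
quiet = [[1.0] * 4 for _ in range(10)]      # nothing changes

print(perceived_duration(busy))   # 4.5 -- feels long
print(perceived_duration(quiet))  # 0.0 -- feels short
```

Both clips are the same objective length (ten frames), yet the estimator reports very different subjective durations, which is the effect the real model reproduced from human judgments.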

“It worked really well,” said Prof. Seth. “And we were able to predict quite accurately how humans would perceive duration in our system.”

A major focus of the project was to investigate and demonstrate machines and humans working alongside each other with the same expectations of time.

The researchers were able to do this by demonstrating robots assisting in meal preparation, such as serving food according to people’s preferences, a task that requires an understanding of human time perception, planning, and remembering what has already been done.

TimeStorm’s follow-up project, Entiment, created software that companies can use to programme robots with a sense of time for applications such as meal preparation and wiping down tables.

In the last 10 years, the field of robot awareness has made significant progress, Dr Lanillos says, and the next decade will see even more advances, with robots becoming increasingly self-aware.

“I’m not saying that the robot will be as aware as a human is aware, in a reflexive way, but it will be able to adapt its body to the world.”

About the author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
