An AI has learned to predict people’s moods from the way they walk


WHY THIS MATTERS IN BRIEF

When we think of biometrics, most people think of fingerprints and facial recognition, but the way you walk is a rich source of information too.

 


Recently I discussed how Artificial Intelligence (AI) is helping analyse and diagnose everything from cancer and depression to dementia and PTSD, among many other things, and even gauge a person’s personality and their criminal intent using nothing more than a clever app. Now, in a next step, excuse the pun, a team of researchers has figured out how to use AI to categorise people’s emotions from the way they walk, and the tech could be used to gauge everything from the mood of shoppers to the emotional state and mental health of an entire population. It’s also not the only tech that can do this – you might be surprised to learn that AI can also turn your home Wi-Fi router into a radar spy that can analyse the state of your emotions and your health.

 


 

We all know that the way you walk says a lot about how you’re feeling at any given moment. When you’re downtrodden or depressed, for example, you’re more likely to slump your shoulders than when you’re contented or upbeat. Leveraging this somatic lexicon, researchers at the University of North Carolina at Chapel Hill and the University of Maryland recently investigated a machine learning method that can identify a person’s perceived emotion, its valence, for example negative or positive, and its arousal, for example calm or energetic, from their gait alone. The researchers claim this approach, which they believe is the first of its kind, achieved 80.07 percent accuracy in preliminary experiments.

“Emotions play a large role in our lives, defining our experiences and shaping how we view the world and interact with other humans,” wrote the co-authors. “Because of the importance of perceived emotion in everyday life, automatic emotion recognition is a critical problem in many fields, such as games and entertainment, security and law enforcement, shopping, human-computer interaction, and human-robot interaction.”

 


 

The researchers selected four emotions — happy, sad, angry, and neutral — for their tendency to “last an extended period” and their “abundance” in walking activity. They then extracted gaits from multiple walking video corpora to identify affective features, and recovered poses using a 3D pose estimation technique. Finally, they tapped a long short-term memory (LSTM) model, which is capable of learning long-term dependencies, to obtain deep features from the pose sequences, and combined these with a random forest classifier, which outputs the mean prediction of several individual decision trees, to sort examples into the four emotion categories.
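For the technically curious, that pipeline boils down to encoding each walking clip as a sequence of 3D poses, summarising the sequence with an LSTM, and handing the result, fused with hand-crafted affective features, to a random forest. The Python sketch below is purely illustrative: the joint count, layer sizes, and function names are assumptions rather than the researchers’ actual code, and the LSTM here is an untrained stub.

```python
# Illustrative sketch only: an LSTM summarises a sequence of 3D poses into a deep
# feature vector, which is fused with hand-crafted affective features and classified
# by a random forest. Joint counts, layer sizes, and names are assumptions for the
# example, and the LSTM is an untrained stub (the paper trains it on labelled gaits).
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

NUM_JOINTS = 16           # assumed joint count from the 3D pose estimator
EMOTIONS = ["happy", "sad", "angry", "neutral"]

class GaitLSTM(nn.Module):
    """Encode a (frames, joints * 3) pose sequence into a single feature vector."""
    def __init__(self, pose_dim=NUM_JOINTS * 3, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, batch_first=True)

    def forward(self, poses):              # poses: (batch, frames, pose_dim)
        _, (h_n, _) = self.lstm(poses)     # final hidden state summarises the walk
        return h_n[-1]                     # shape: (batch, hidden)

def fit_emotion_classifier(pose_seqs, affective_feats, labels):
    """Fuse LSTM features with affective features and fit a random forest."""
    encoder = GaitLSTM().eval()
    with torch.no_grad():
        deep = encoder(torch.as_tensor(pose_seqs, dtype=torch.float32)).numpy()
    X = np.concatenate([deep, affective_feats], axis=1)    # deep + hand-crafted
    clf = RandomForestClassifier(n_estimators=100)          # mean vote over trees
    clf.fit(X, labels)                                       # labels drawn from EMOTIONS
    return clf
```

In the actual study the LSTM is trained end to end on labelled gait data; the point of the sketch is simply how learned sequence features can be combined with hand-crafted ones before classification.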

The features included things like shoulder posture, the distance between consecutive steps, and the area between the hands and neck. Head tilt angle was used to distinguish between happy and sad emotions, while more compact postures and “body expansion” identified negative and positive emotions, respectively. As for arousal, which the scientists note tends to correspond to increased movement, the model considered the magnitude of the velocity, acceleration, and “movement jerks” of the hand, foot, and head joints.
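To make those arousal cues concrete, here is a short sketch of how velocity, acceleration, and jerk magnitudes, along with a head tilt angle, could be computed from per-frame 3D joint positions. The frame rate, joint names, and exact definitions are assumptions made for illustration, not the paper’s own formulas.

```python
# Rough illustration of the movement and posture features described above.
# Frame rate, joint names, and exact definitions are assumptions for the example.
import numpy as np

def movement_features(joint_xyz, fps=30.0):
    """joint_xyz: (frames, 3) positions of a single joint, e.g. a hand or foot."""
    dt = 1.0 / fps
    velocity = np.diff(joint_xyz, axis=0) / dt         # first derivative of position
    acceleration = np.diff(velocity, axis=0) / dt      # second derivative
    jerk = np.diff(acceleration, axis=0) / dt          # third derivative ("movement jerk")
    # Summarise each derivative by its mean magnitude across the walking clip.
    return [float(np.linalg.norm(d, axis=1).mean()) for d in (velocity, acceleration, jerk)]

def head_tilt_angle(neck_xyz, head_xyz):
    """Per-frame angle (degrees) between the neck-to-head vector and vertical (y assumed up)."""
    v = head_xyz - neck_xyz
    cos = v[:, 1] / (np.linalg.norm(v, axis=1) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```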

 


 

The AI system processed samples from Emotion Walk, or EWalk, a novel data set containing 1,384 gaits extracted from videos of 24 subjects walking around a university campus, both indoors and outdoors. Roughly 700 participants from Amazon Mechanical Turk labelled emotions, and the researchers used these labels to determine valence and arousal level.

In tests, the team reports that their emotion detection approach offered a 13.85% improvement over state-of-the-art algorithms and a 24.60% improvement over “vanilla” LSTMs that don’t consider affective features. That isn’t to say it’s foolproof — its accuracy is largely dependent on the precision of the 3D human pose estimation and gait extraction. But despite these limitations, the team believes their method will provide a strong foundation for studies involving additional activities and other emotion identification algorithms.

“Our approach is also the first approach to provide a real-time pipeline for emotion identification from walking videos by leveraging state-of-the-art 3D human pose estimation,” wrote the co-authors. “As part of future work, we would like to collect more data sets and address [limitations].”
