
This new AI uses both sight and sound to estimate depression



As mental health issues become more pronounced and more prominent in society, researchers are looking for new ways to identify the people who suffer from them.



Detecting emotional arousal from the sound of someone’s voice is one thing — startups like Beyond Verbal, Affectiva, and MIT spinout Cogito are leveraging natural language processing to accomplish just that. But as robots and bots trained in psychology, such as Woebot, which has now helped millions of people, start appearing on the scene to help patients in new ways, there’s an argument to be made that speech alone isn’t enough to diagnose someone with depression, let alone judge its severity.

Enter new research from scientists at the Indian Institute of Technology Patna and the University of Caen Normandy, which examines how non-verbal signs and visuals can drastically improve estimations of depression level.


“The steadily increasing global burden of depression and mental illness acts as an impetus for the development of more advanced, personalized and automatic technologies that aid in its detection,” the paper’s authors wrote. “Depression detection is a challenging problem as many of its symptoms are covert.”

The researchers encoded seven modalities — things like downward angling of the head, eye gaze, the duration and intensity of smiles, and self-touches, along with text and verbal cues — which they fed to a machine learning model that fused them together into vectors, or mathematical representations. These fused vectors were then passed onto a second system that predicted the severity of depression based on the Personal Health Questionnaire Depression Scale (PHQ-8), a diagnostic test often employed in large clinical psychology studies.
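The paper itself doesn't publish code, but the two-stage idea described above — per-modality feature vectors fused into one representation, then a regressor that maps the fused vector to a PHQ-8 severity score — can be sketched in a few lines. The feature dimensions, the random toy data, and the simple least-squares head below are all illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # number of toy interview samples

# Toy per-sample feature vectors for three modalities (dimensions made up).
acoustic = rng.normal(size=(n, 16))   # e.g. prosodic/voice features
text     = rng.normal(size=(n, 32))   # e.g. transcript embeddings
visual   = rng.normal(size=(n, 24))   # e.g. head angle, gaze, smiles, self-touches

# Fusion step: concatenate the modality vectors into one representation.
fused = np.concatenate([acoustic, text, visual], axis=1)  # shape (n, 72)

# Second stage: predict a PHQ-8 score (0-24) from the fused vector.
# A least-squares linear head stands in for the paper's learned predictor.
phq8 = rng.integers(0, 25, size=n).astype(float)
X = np.hstack([fused, np.ones((n, 1))])            # add a bias column
weights, *_ = np.linalg.lstsq(X, phq8, rcond=None)
predictions = X @ weights                          # estimated severity per sample
```

In practice each modality would come from its own learned encoder and the fusion could be weighted or attention-based rather than plain concatenation; the sketch only shows where the vectors meet.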




To train the various systems, the researchers tapped DAIC-WOZ, a depression data set that’s part of a larger corpus — the Distress Analysis Interview Corpus — containing annotated audio snippets, video recordings, and questionnaire responses from 189 clinical interviews supporting the diagnosis of psychological conditions like anxiety, depression, and post-traumatic stress disorder. Each sample contains an enormous amount of data: a raw audio file; a file with the coordinates of 68 facial “landmarks” of the interviewee, complete with time stamps, confidence scores, and detection success flags; two files containing the participant’s head pose and eye gaze features; a transcript of the interview; and more.
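To make the landmark files concrete, here is a minimal parsing sketch. The column layout below (frame, timestamp, confidence, success flag, then x/y pairs) is an assumption based on the article's description, with only 2 of the 68 landmarks shown; real corpus files will differ in detail.

```python
import csv
import io

# Hypothetical excerpt of a facial-landmark file: per-frame timestamp,
# detector confidence, a success flag, then x/y coordinates per landmark.
sample = io.StringIO(
    "frame,timestamp,confidence,success,x0,y0,x1,y1\n"
    "1,0.033,0.98,1,201.4,310.2,205.7,312.9\n"
    "2,0.066,0.41,0,0.0,0.0,0.0,0.0\n"
)

rows = list(csv.DictReader(sample))

# Keep only frames where the face detector actually succeeded — a typical
# pre-processing step before extracting visual features.
good = [r for r in rows if r["success"] == "1"]

# Pull the (x, y) coordinates of the landmarks in the first good frame.
landmarks = [(float(good[0][f"x{i}"]), float(good[0][f"y{i}"])) for i in range(2)]
```

Filtering on the success flag matters because failed detections fill the coordinate columns with placeholder zeros that would otherwise corrupt any averaged visual feature.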

After several pre-processing steps and model training, the team compared the results of the AI systems using three metrics — Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Explained Variance Score (EVS). They report that fusing the three modalities — acoustic, text, and visual — gave the “most accurate” estimation of depression level, outperforming the previous state-of-the-art systems by 7.17% on RMSE and 8.08% on MAE.
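The three metrics above have standard definitions, shown here on a handful of made-up PHQ-8 predictions (the scores are illustrative, not from the paper):

```python
import numpy as np

y_true = np.array([4.0, 10.0, 17.0, 8.0, 21.0])   # actual PHQ-8 scores (toy data)
y_pred = np.array([5.0,  9.0, 15.0, 8.0, 18.0])   # model's estimates (toy data)

err = y_true - y_pred

# Root Mean Squared Error: penalizes large misses more heavily.
rmse = np.sqrt(np.mean(err ** 2))

# Mean Absolute Error: average size of a miss, in PHQ-8 points.
mae = np.mean(np.abs(err))

# Explained Variance Score: 1 - Var(residuals) / Var(targets);
# 1.0 is a perfect fit, 0.0 is no better than predicting the mean.
evs = 1.0 - np.var(err) / np.var(y_true)
```

Lower is better for RMSE and MAE, higher for EVS, which is why the fused model's 7.17% and 8.08% reductions on the first two are the headline result.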




In the future, the team plans to study recent multitask learning architectures and “dig deeper” into novel representations of text data. If their work bears fruit, it would be a promising development for the more than 300 million people now living with depression — a number that’s sadly on the rise.

Source: arXiv
