Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Your voice can tell people a lot about how you’re feeling, but now machine learning is taking diagnosis to the next level.
Over the past few years technology has produced more and more innovative ways to diagnose disease, and now it has a new party trick – disease detection by voice analysis – which is just what it sounds like, if you’ll excuse the pun. You talk, and it analyses your voice to screen for illness.
We’re all used to being able to tell when someone’s ill just by listening to their voice – on the phone, for example. If someone has a cold they sound nasal, so we can hear it and ask them if they have a cold. If they’re slurring, again we can hear it and ask them what’s wrong. But researchers have found that in many cases machine learning algorithms are excellent at picking up the sounds, tremors and quakes that we often can’t hear, and it’s these verbal cues that phones listen for when you talk into them.
Over the past few months researchers from a variety of medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
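The comparison described above – building a profile of healthy voices and measuring how far a new recording sits from it – can be sketched with a toy nearest-centroid classifier. Everything here is invented for illustration: the feature vectors (stand-ins for acoustic measures like pitch variability, jitter, or pause ratio) and the two tiny groups are hypothetical, not real patient data or any specific company’s method.

```python
import math

# Hypothetical acoustic feature vectors (e.g. pitch variability, jitter,
# pause ratio) for illustration only -- not real patient data.
healthy = [[0.21, 0.05, 0.12], [0.19, 0.06, 0.10], [0.23, 0.04, 0.13]]
patients = [[0.35, 0.11, 0.25], [0.33, 0.13, 0.22], [0.37, 0.10, 0.27]]

def centroid(vectors):
    """Mean feature vector of a group of recordings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, healthy_centroid, patient_centroid):
    """Label a new recording by whichever group centroid it sits closer to."""
    if distance(sample, healthy_centroid) <= distance(sample, patient_centroid):
        return "healthy"
    return "possible indicator"

h_c, p_c = centroid(healthy), centroid(patients)
print(classify([0.20, 0.05, 0.11], h_c, p_c))  # near the healthy group
print(classify([0.36, 0.12, 0.24], h_c, p_c))  # near the patient group
```

Real systems use far richer features and models, but the principle is the same: the software learns what “healthy” sounds like and flags recordings that drift away from it.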
In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyse voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second clips of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.
The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients and non-heart patients’ voices occurred when they talked about a negative experience.
Heart disease isn’t the only condition for which voice diagnosis shows promise. Researchers are also making headway with the conditions below.
- ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company’s founder gave speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables of less equal length.
- Post Traumatic Stress Disorder: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.
- Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.
- Parkinson’s: Parkinson’s disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analysing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.
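One of the cues above – the ADHD finding that affected children speak in syllables of less equal length – can be made concrete with a simple rhythm measure: the coefficient of variation of syllable durations. The durations below are invented for illustration, and this is only a sketch of the idea, not Audioprofiling’s actual method.

```python
import statistics

def rhythm_variability(syllable_durations):
    """Coefficient of variation of syllable lengths: higher values mean
    less equal syllables, the rhythm cue described for ADHD above."""
    mean = statistics.mean(syllable_durations)
    return statistics.pstdev(syllable_durations) / mean

# Invented syllable durations in seconds -- not clinical data.
even_speech = [0.20, 0.21, 0.19, 0.20, 0.22]    # near-equal syllables
uneven_speech = [0.12, 0.31, 0.18, 0.40, 0.15]  # highly unequal syllables

print(round(rhythm_variability(even_speech), 3))
print(round(rhythm_variability(uneven_speech), 3))
```

The uneven sample scores several times higher than the even one; a real system would compute many such measures from the audio and feed them to a trained model rather than thresholding a single number.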
Challenges remain before vocal disease diagnosis becomes truly viable and widespread, and there are privacy concerns over the personal health data identifiable in voice samples. But despite these hurdles, our voices appear to be on their way to becoming key players in our health.