Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify and track hundreds of game-changing emerging technologies, and explain their impact on global culture, industry and society, is unparalleled. Recognised for the past five years running as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, one of Europe's largest energy companies, and his recent work includes mentoring XPRIZE teams, building the first generation of biocomputers, helping the world's largest manufacturing companies envision the next five generations of smartphones and devices, and what comes next, and helping companies including Qualcomm envision the next twenty years of semiconductors. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BOA, Blackrock, Bloomberg, Booz Allen Hamilton, BCG, Bentley, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JPMorgan Chase, KPMG, Lloyds Banking Group, McKinsey, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
Your voice can tell people a lot about how you’re feeling, but now machine learning is taking diagnosis to the next level.
Over the past few years technology has produced more and more innovative ways to diagnose disease, and now it has a new party trick – disease detection by voice analysis – which is exactly what it sounds like, if you'll excuse the pun. You talk, and it analyses your voice and screens for illnesses.
We're all used to being able to tell when someone's ill just by listening to their voice – on the phone, for example. If someone has a cold they sound nasal, and we can hear it and ask them if they have a cold. If they're slurring their words, again we can hear it and ask them what's wrong. But researchers have found that machine learning algorithms are excellent at picking up the subtle sounds, tremors and quavers that we often can't hear, and it's these vocal cues that phones listen for when you talk into them.
Over the past few months researchers from a variety of medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
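The pattern-matching approach described above can be sketched in miniature. The snippet below is a purely illustrative toy, not any of the systems mentioned in this article: it generates synthetic "voices" (with a tremor simulated as a slow amplitude wobble, an invented stand-in for real vocal biomarkers), extracts two simple acoustic features, and labels a new recording by its nearest class centroid. Real systems use far richer features and models.

```python
import math

def extract_features(samples):
    """Two toy acoustic features: RMS energy and zero-crossing rate."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:])
              if (a < 0) != (b < 0)) / (n - 1)
    return (rms, zcr)

def make_voice(tremor, n=8000, rate=8000.0):
    """Synthetic one-second 'voice': a 150 Hz tone; a 'tremor' is
    simulated as a 5 Hz amplitude wobble (an invented biomarker)."""
    out = []
    for i in range(n):
        t = i / rate
        amp = 1.0 + (0.5 * math.sin(2 * math.pi * 5 * t) if tremor else 0.0)
        out.append(amp * math.sin(2 * math.pi * 150 * t))
    return out

def centroid(rows):
    """Mean feature vector of a list of feature tuples."""
    return tuple(sum(col) / len(rows) for col in zip(*rows))

def classify(features, centroids):
    """Nearest-centroid: return the label whose mean features are closest."""
    return min(centroids, key=lambda lbl: math.dist(features, centroids[lbl]))

# "Train" by averaging features over a tiny labelled corpus per class.
healthy = [extract_features(make_voice(tremor=False)) for _ in range(5)]
shaky = [extract_features(make_voice(tremor=True)) for _ in range(5)]
centroids = {"healthy": centroid(healthy), "tremor": centroid(shaky)}

print(classify(extract_features(make_voice(tremor=True)), centroids))  # → tremor
```

The design mirrors the workflow in the paragraph above: collect labelled recordings, reduce each to a feature vector, and compare new voices against the healthy baseline.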
In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyse voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.
The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients and non-heart patients’ voices occurred when they talked about a negative experience.
Heart disease isn’t the only illness that shows promise for voice diagnosis. Researchers are also making headway in the conditions below.
- ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company's founder cited speech rhythm as an example indicator of ADHD, saying children with the condition speak in syllables of less uniform length.
- Post Traumatic Stress Disorder: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members' moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.
- Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.
- Parkinson's: Parkinson's disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson's Voice Initiative is changing that by analysing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.
Challenges remain before vocal disease diagnosis becomes truly viable and widespread, and there are privacy concerns over the personal health data identifiable in voice samples. But despite these hurdles, our voices appear to be on their way to becoming key players in our health.