Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Getting millions of people tested for coronavirus has become increasingly problematic and contentious, but to take this new test all you need is a smartphone or a computer.
What do you get if you cross Artificial Intelligence (AI) with the sensors in your smartphone? That’s right – a Tricorder. AI combined with your smartphone’s camera can be used to detect everything from pancreatic cancer to skin cancer; combined with your smartphone’s accelerometer it can detect heart disease; and combined with your smartphone’s microphone it can detect the onset of ADHD, Alzheimer’s, dementia, depression, heart attacks, and even PTSD. And now, albeit with cautious optimism, this combination can also be used to detect whether or not you have the coronavirus, or COVID-19 – the pandemic disease that’s sweeping the world. It’s an innovation that might, just might, make the testing kits that are in short supply and disease-scanning Chinese drones obsolete, and even better, you can try it for yourself using the link below.
This week, following on from news that various governments are now turning to smartphones and apps to try to monitor and contain the spread of COVID-19, a team of researchers at Carnegie Mellon University and other institutions released an early version of an app that they claim can determine whether you might have COVID-19 just by analysing your voice patterns. And if you think that sounds like mumbo jumbo, then consider this: you can normally tell you’re getting ill when your voice turns gravelly, and it’s exactly this kind of pattern that the new app picks up and analyses.
“I’ve seen a lot of competition for the cheapest, fastest diagnosis you can have,” said Benjamin Striner, a Carnegie Mellon graduate student who worked on the project. “And there are some pretty good ones that are actually really cheap and pretty accurate, but nothing’s ever going to be as cheap and as easy as speaking into a phone.”
That’s a provocative claim in the face of the global coronavirus outbreak, and particularly the widespread shortages of testing kits. But Striner believes that the team’s algorithm, even though it’s still highly experimental, could be a valuable tool in tracking the spread of the virus, especially as the team continues to refine its accuracy by collecting more data.
You can use the COVID Voice Detector now to analyse your own voice for signs of infection, though it comes with a hefty disclaimer that it’s “not a diagnostic system,” and definitely not approved by the FDA or CDC, and as such the developers stress it shouldn’t be used as a substitute for a medical test or examination.
The researchers behind the project emphasize that the app is a work in progress.
“What we are attempting to do is to develop a voice-based solution, which, based on preliminary experiments and prior expertise, we believe is possible. The app’s results are preliminary and untested,” said Bhiksha Raj, a professor at Carnegie Mellon who also worked on the project. “The score the app currently shows is an indicator of how much the signatures in your voice match those of other COVID patients whose voices we have tested. This is not medical advice. The primary objective of our effort/website at this point of time is to collect large numbers of voice recordings that we could use to refine the algorithm into something we — and the medical community — are confident about.”
“If the app is to be put out as a public service, it, and our results, will have to be verified by medical professionals, and attested by an agency such as the CDC,” Raj added. “Until that happens, it’s still very much an experimental and untrustworthy system. I urge people not to make healthcare decisions based on the scores we give you. You could be endangering yourself and those around you.”
“In terms of diagnostics, of course, it’s never going to be as accurate as taking a swab and putting it on some agar and waiting for it to grow,” said Striner, who has been working around the clock to prepare the app for release. “But in terms of very easily monitoring a ton of people daily, weekly, whatever, monitoring on a very large scale, it gives you a way to handle and track health outbreaks.”
If you have a smartphone or a computer with a microphone, using the app is simple. Users are prompted to cough several times and record a number of vowel sounds, as well as reciting the alphabet. Then it provides a score, expressed as a download-style progress bar, representing how likely the algorithm believes it is that the user has COVID-19.
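The team hasn’t published how that score is computed, but Raj describes it as an indicator of how much the signatures in your voice match those of COVID patients the team has tested. As a purely illustrative sketch of that idea, one could compare a sample’s voice features against an averaged “COVID signature” and a healthy baseline; every function, feature name, and number below is a hypothetical stand-in, not the app’s actual algorithm:

```python
import math

def covid_voice_score(features, covid_signature, healthy_signature):
    """Toy sketch: score (0-100) how much a voice sample's features
    resemble an averaged COVID 'signature' relative to a healthy one.
    This is NOT the CMU app's actual method, just an illustration."""
    d_covid = math.dist(features, covid_signature)      # distance to COVID centroid
    d_healthy = math.dist(features, healthy_signature)  # distance to healthy centroid
    # The closer the sample is to the COVID signature (and the further
    # from the healthy one), the higher the score.
    return round(100 * d_healthy / (d_covid + d_healthy), 1)

# Hypothetical 3-dimensional features, e.g. (hoarseness, cough energy, jitter).
covid_sig = (0.8, 0.9, 0.7)
healthy_sig = (0.2, 0.1, 0.3)
sample = (0.7, 0.8, 0.6)
print(covid_voice_score(sample, covid_sig, healthy_sig))  # well above 50
```

A score like this would then be what drives the progress-bar display the app shows.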
Also working on the project is Rita Singh, a professor of computer science at Carnegie Mellon who for years has been creating algorithms that identify micro-signatures in the human voice that she believes reveal psychological, physiological, and even medical data about an individual subject.
“The cough of a COVID patient is very distinctive,” Singh said. “It affects the lungs so badly that breathing patterns and several other vital parameters are affected, and those are likely to have very strong signatures in voice.”
A challenge for Singh and Striner’s team of ten Carnegie Mellon researchers, all of whom have been working on the app from home since the campus shut down due to the pandemic, has been gathering enough audio from confirmed COVID-19 patients to train the algorithm.
To gather that data, the team reached out to colleagues around the world. Those colleagues didn’t just help them gather audio from COVID-19 patients, but also patients with other viruses, so that they could teach the algorithm to spot the differences. They even pored over news videos to find interviews with patients, and add those to the dataset as well.
“You have samples of people that are healthy, you have samples of people that might just have the flu,” Striner said. “And you have all those different recordings of all the different types of coughs, like what are all the coughs that are out there? And then that allows you to kind of spot the differences.”
It’s difficult to quantify the current version of the app’s accuracy, and both Striner and Singh reiterated that its output shouldn’t be treated as medical advice.
“Its accuracy cannot be tested currently because we don’t have the verified test instances we need,” Singh said, adding that the more people who use the app — healthy or otherwise — the more data they will have to better train the algorithm. “If it comes from a healthy person, we then have examples of what ‘healthy’ sounds like. If it comes from a person who has some known respiratory condition, we then know what that condition sounds like. The system will use all that data as counterexamples, and for disambiguating COVID signatures from those of other confusing conditions.”
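What Singh describes, using healthy and flu recordings as counterexamples to disambiguate COVID signatures, is a standard multi-class classification setup. A minimal sketch of the idea, assuming a nearest-centroid classifier over made-up two-dimensional features (the labels, features, and numbers are all hypothetical, not the team’s data or model):

```python
import math

def train_centroids(labelled_samples):
    """Average each condition's feature vectors into one centroid.
    Labels are conditions like 'healthy', 'flu', or 'covid'."""
    sums, counts = {}, {}
    for label, features in labelled_samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(features, centroids):
    """Pick the condition whose centroid is nearest to the sample."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Hypothetical 2-d features, e.g. (cough harshness, breathing irregularity).
training = [
    ("healthy", (0.1, 0.2)), ("healthy", (0.2, 0.1)),
    ("flu",     (0.5, 0.4)), ("flu",     (0.6, 0.5)),  # counterexamples
    ("covid",   (0.9, 0.8)), ("covid",   (0.8, 0.9)),
]
centroids = train_centroids(training)
print(classify((0.85, 0.9), centroids))  # prints "covid"
```

The point of the counterexamples is visible in the training set: without the flu recordings, any harsh cough would look COVID-like; with them, the classifier can place a merely flu-like cough in its own class.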
Ashwin Vasan, a professor at Columbia University Medical Center who was not involved in the Carnegie Mellon research, expressed reservations about releasing the app during a moment of global health crisis.
“Despite what could be a well-intentioned attempt by a bunch of engineers to help during this crisis, this is not exactly the messaging we want to be out there,” he cautioned. “That somehow there is a nifty new tool we can use to diagnose coronavirus, in the absence of the things we really need much more of: actual test kits, serologic testing, PPE for frontline healthcare workers, and ventilators for critically ill patients.”
“Let’s keep the focus on that, especially when our leaders in Washington seem unable to meet those most basic needs,” he added. “Anything else is just a distraction.”
For their part, the Carnegie Mellon team says they’re grappling with the public health implications of the app. Striner said that they’ve consulted with colleagues in the medical research community, and that they carefully considered how to fine-tune the app’s sensitivity.
“We would probably side more towards having some false positives than false negatives, if that makes sense,” Striner said. “If you give someone a false negative on COVID, then they walk around and get a bunch of people sick, versus a couple extra false positives, maybe some people get tests they don’t need.”
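The trade-off Striner describes is just a matter of where you set the decision threshold on the app’s score: lower it and you miss fewer real cases (false negatives) at the cost of flagging more healthy people (false positives). A small sketch with invented scores and ground-truth labels, purely to illustrate the dynamic:

```python
def confusion_counts(scored_samples, threshold):
    """Count false positives and false negatives when everyone scoring
    at or above `threshold` is flagged as a possible COVID case."""
    false_pos = false_neg = 0
    for score, actually_infected in scored_samples:
        flagged = score >= threshold
        if flagged and not actually_infected:
            false_pos += 1
        elif not flagged and actually_infected:
            false_neg += 1
    return false_pos, false_neg

# Hypothetical (score, ground truth) pairs from a validation set.
samples = [(90, True), (70, True), (55, True),
           (60, False), (40, False), (20, False)]
print(confusion_counts(samples, 75))  # prints (0, 2): strict, misses two real cases
print(confusion_counts(samples, 50))  # prints (1, 0): lenient, one unneeded flag
```

Tuning toward the lenient end is exactly the choice Striner describes: a few unnecessary tests beat sending an infected person out believing they are healthy.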