Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank working between 2020 and 2070, an award winning futurist, and the author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
Not everyone has access to a lab grade microscope, and if you’re a doctor in the middle of nowhere that’s an issue that will delay disease diagnosis and potentially put lives at risk. This breakthrough could revolutionise front line medicine.
Today we are on the verge of being able to give hundreds of millions of people around the world who still have no formal access to primary or secondary healthcare powerful smartphone based detection and diagnostic tools that help them detect the onset of everything from dementia and general disease, to heart conditions, inherited genetic disorders, pancreatic cancers, and skin cancer, with nothing more than a smartphone and a free app. Now, in another technology related breakthrough, researchers at the University of California, Los Angeles (UCLA) have, for the first time, demonstrated that Deep Learning, a powerful form of Artificial Intelligence (AI), can discern and enhance the tiniest microscopic details in photos taken by regular smartphones. The technique improves the resolution and colour detail of smartphone images so much that they approach the quality of images from laboratory grade microscopes, so the breakthrough could have big implications, especially for healthcare in remote regions, and even for environmental and pollution monitoring.
The breakthrough not only opens up the possibility of one day soon giving people in deprived, poor or remote areas, who don’t have access to labs or lab grade microscopy, an easy way to analyse environmental and medical samples, but it also feeds the trend of democratising access to primary and secondary healthcare, where today we can already use smartphones and apps to diagnose everything from general disease and dementia to pancreatic cancer and skin cancer, and so produce “better healthcare outcomes.”
The team’s technique is low cost and simple, using attachments that can be inexpensively produced with a 3D printer for less than $100 apiece, versus the thousands of dollars it would cost to buy lab grade equipment that produces images of similar quality.
Cameras on today’s smartphones are designed to photograph people and scenery, not to produce high resolution microscopic images, so the researchers had to develop an attachment that could be placed over the smartphone’s lens to increase the resolution and the visibility of tiny details in the photos it takes, down to a scale of approximately one millionth of a metre. But that only solved part of the challenge, because no attachment alone can compensate for the difference in quality between a smartphone camera’s image sensor and lenses and those of high end lab equipment, so the new technique closes that gap by using AI to reproduce the level of resolution and colour detail needed for laboratory analysis.
The research was led by Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering and Bioengineering at UCLA, and Yair Rivenson, a UCLA postdoctoral scholar. Ozcan’s research group has introduced several innovations in mobile microscopy and sensing, and it maintains a particular focus on developing field-portable medical diagnostics and sensors for resource poor areas.
“Using deep learning, we set out to bridge the gap in image quality between inexpensive mobile phone based microscopes and gold standard bench top microscopes that use high end lenses,” Ozcan said. “We believe that our approach is broadly applicable to other low cost microscopy systems that use, for example, inexpensive lenses or cameras, and could facilitate the replacement of high end bench top microscopes with cost effective, mobile alternatives.”
He added that the new technique could find numerous applications in global health, telemedicine and diagnostics.
The researchers shot images of lung tissue samples, blood and Pap smears, first using a standard laboratory grade microscope, and then with a smartphone fitted with the 3D printed microscope attachment. They then fed the pairs of corresponding images into a computer system that “learns” how to rapidly enhance the mobile phone images. The process relies on a deep learning based algorithm developed by the UCLA researchers.
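To make that pair-based training idea concrete, here is a toy, self-contained sketch — not the UCLA team’s actual network. We simulate a “phone” image as a degraded version of a “lab” image, then fit a simple correction (a single gain and bias over pixel values) by gradient descent on the corresponding pairs. It is the same degraded-to-reference learning setup described above, just with the deep network swapped for a two-parameter model:

```python
# Toy illustration of training on paired images (hypothetical data,
# not the paper's model): learn a mapping from degraded "phone"
# pixels back to reference "lab" pixels.
import random

random.seed(0)

# Reference "lab" pixel intensities in [0, 1].
lab = [random.random() for _ in range(200)]
# Simulated "phone" pixels: dimmer and offset -- the degradation to undo.
phone = [0.6 * x + 0.1 for x in lab]

# Model: enhanced = gain * phone_pixel + bias. Learn gain and bias
# by gradient descent on the mean squared error over all pairs.
gain, bias = 1.0, 0.0
lr = 0.5
for _ in range(2000):
    g_grad = b_grad = 0.0
    for p, t in zip(phone, lab):
        err = (gain * p + bias) - t
        g_grad += 2 * err * p / len(lab)
        b_grad += 2 * err / len(lab)
    gain -= lr * g_grad
    bias -= lr * b_grad

# The learned mapping inverts the simulated degradation:
# gain converges to 1/0.6 and bias to -0.1/0.6.
```

In the real system the learned mapping is a deep network rather than a linear correction, but the training signal — pairs of smartphone and lab grade images of the same sample — is the same.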
To see if their technique would work on other types of lower quality images, the researchers then used deep learning to successfully perform similar transformations with images that had lost some detail because they were compressed for either faster transmission over a computer network or more efficient storage.
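As a loose illustration of that second experiment — again only a sketch of our own, not the paper’s method — the snippet below “compresses” a smooth signal by coarse quantisation, losing detail, and then applies a simple restoration (a moving average) that brings it measurably closer to the original, the same degraded-to-restored mapping that the deep learning model learns from examples:

```python
# Toy sketch (our illustration, not the UCLA method): lossy
# "compression" by coarse quantisation, then a simple restoration
# that reduces the error on a smooth signal.
import math

# A smooth "image row": intensities in [0, 1].
signal = [0.5 + 0.5 * math.sin(i / 10) for i in range(200)]

# "Compress": quantise to 8 intensity levels, discarding fine detail.
levels = 8
compressed = [round(x * (levels - 1)) / (levels - 1) for x in signal]

def restore(xs, k=2):
    """Restore by smoothing: a (2k+1)-tap moving average that
    averages away the quantisation steps."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - k): i + k + 1]
        out.append(sum(window) / len(window))
    return out

restored = restore(compressed)

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# The restored signal is closer to the original than the compressed
# one: mse(restored, signal) < mse(compressed, signal).
```

A deep network can of course learn a far better restoration than a fixed moving average, because it is trained on examples of exactly how compression destroys detail in real images.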