Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Imagine a camera that can see and analyse every part of the visible and non-visible spectrum at the same time; now imagine it’s in a search and rescue drone, or a hunter-killer robot…
Hey, I know you like hearing about the grand things that could happen in the future, but don’t forget that sometimes it’s the little “boring” things, the ones that easily fly under the radar, that can have the greatest impact, and this is one of them. In this case, if you change how everything sees the world, including us, then arguably you could change everything… anyway, enough chatter.
Recent technological advancements and cost reductions in the specialised cameras and sensors that can see both the visible and non-visible parts of the electromagnetic spectrum, such as infrared (IR) and X-rays, are opening the floodgates to a host of new camera technologies. As the technology develops it will change how humans, and machines, see the world, and that’s big. Think of drones that use broad spectrum cameras to find people after an earthquake, self-driving cars that can see through blizzards and dense fog, inspectors or factory robots that can see and track the smallest cracks in products, airport cameras that can see through clothing and luggage, next generation home security systems, or satellites and autonomous hunter-killer drones and robots, like the ones the US Marines are thinking of deploying, that can see “everything.”
And yes, you’re right, we should all be excited about the prospect of HK robots that see everything, very, very excited.
That said, while these new sensors are available they generally produce low quality, fuzzy images, and cameras can only see one spectrum or another, not both at the same time. Last week, however, at the 23rd Symposium on Sensing, NEC and the Tokyo Institute of Technology announced they’ve found a way to stitch these different images together using Artificial Intelligence (AI) to create new, “universal” broad spectrum images that let cameras, and anything with a camera in it, such as a drone, see everything all at once. Within the next few years they believe they’ll be able to stream video using the new technology.
For now the two organisations believe the initial applications will centre around improving the quality and reliability of infrastructure and QA inspections, but as you can imagine there are many other possible use cases just waiting to spring out of the proverbial woodwork. And if it’s one day combined with Google’s AI-based RAISR image sharpening product, then anyone, and any company, that has anything to do with optics and imaging could suddenly find all their Christmases have arrived at once. Did you ruin the photo of your kid’s birthday when you shook the camera? Solved. Having problems seeing the cracks in the concrete of your new skyscraper? Solved… and whoever thought children’s parties and skyscrapers would have anything in common?
Traditionally, in order to analyse the non-visible and visible aspects of a scene, the images had to be viewed and compared separately, or manually combined by a trained expert, and some of the hazards present in the images could easily be missed – and let’s face it, that’s never a good thing.
The new technology works by using AI to carry out a detailed examination of each individual image, assessing the degree of visibility of each part, and then automatically extracting the best areas from each image, taking environmental characteristics such as brightness, the direction of light, and obstacles into consideration.
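NEC and Tokyo Tech haven’t published the details of their algorithm, but the core idea described above, scoring how clearly each region is seen in each band and then blending towards whichever band sees it best, can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration only: it uses local standard deviation as a stand-in “visibility” metric and fuses a visible and an infrared image pixel by pixel; the function names and the metric are assumptions, not the researchers’ method.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_visibility(img, win=7):
    """Score how clearly each pixel's neighbourhood is seen, using local
    standard deviation (a simple contrast measure) as a stand-in metric."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))  # shape (H, W, win, win)
    return windows.std(axis=(-1, -2))

def fuse(visible, infrared, win=7):
    """Blend the two bands pixel by pixel, weighting each by its visibility
    score so the better-seen band dominates in every region."""
    wv = local_visibility(visible, win)
    wi = local_visibility(infrared, win)
    return (wv * visible + wi * infrared) / (wv + wi + 1e-8)

# Toy scene: the visible band only sees the left half of the frame, the
# IR band only the right half; the fused image keeps the clear half of each.
np.random.seed(0)
vis = np.zeros((32, 32)); vis[:, :16] = np.random.rand(32, 16)
ir = np.zeros((32, 32)); ir[:, 16:] = np.random.rand(32, 16)
fused = fuse(vis, ir)
```

A production system would use a learned visibility model rather than raw contrast, which is presumably where the AI in the announced technology comes in, but the weighting-and-blending structure is the same.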
“Now that this technology enables instant visual clarification, even under harsh conditions, it allows users to make well informed evaluations. For example, it can be applied to monitoring systems to assist with night time observations, or to infrastructure inspection devices to improve the detection of interior and exterior abnormalities, such as cracking,” says Akio Yamada, General Manager, Data Science Research Laboratories, NEC Corporation.
“Until now, a specialist had to manually carry out complex conversion tasks in order to combine images taken by different types of cameras. This technology eliminates the need for such manual work, using AI to effectively and automatically combine images taken by different cameras. This also increases visibility by actively utilising the strong points of each visible image and non-visible image, even when the images are difficult to visualise,” added Professor Masatoshi Okutomi, School of Engineering, Tokyo Institute of Technology.
Aren’t cameras and sensors wonderful…