WHY THIS MATTERS IN BRIEF
Imagine a camera that can see and analyse every part of the visible and non-visible spectrum at the same time. Now imagine it’s in a search-and-rescue drone, or a hunter-killer robot…
Hey, I know you like hearing about the grand things that could happen in the future, but don’t forget that sometimes it’s the little “boring” things, the ones that easily fly under the radar, that have the greatest impact. This is one of those things. After all, if you change how everything, including us, sees the world, then arguably you change everything… anyway, enough chatter.
Recent technological advancements and cost reductions in the specialised cameras and sensors that can see both the visible and non-visible parts of the electromagnetic spectrum, such as infrared (IR) and X-rays, are opening the floodgates to a host of new camera technologies. And as the technology develops it will change how humans, and machines, see the world, and that’s big. Think of drones that use broad spectrum cameras to find people after an earthquake, self-driving cars that can see through blizzards and dense fog, inspectors and factory robots that can spot and track the smallest cracks in products, airport cameras that can see through clothing and luggage, next generation home security systems, or satellites and autonomous hunter-killer drones and robots, like the ones the US Marines are thinking of deploying, that can see “everything.”
And yes, you’re right, we should all be excited about the prospect of HK robots that see everything, very, very excited.
That said, though, while these new sensors are available, they generally produce low-quality, fuzzy images, and until now cameras could only see one spectrum or another, not both at the same time. But last week, at the 23rd Symposium on Sensing, NEC and the Tokyo Institute of Technology announced they’ve found a way to stitch these different images together using Artificial Intelligence (AI) to create new, “universal” broad spectrum images that let cameras, and anything with a camera in it, such as a drone, see everything all at once. And within the next few years they believe they’ll be able to stream video using the new technology.
For now though the two organisations believe the initial applications will centre on helping improve the quality and reliability of infrastructure and QA inspections, but as you can imagine there are many other possible use cases just waiting to spring out of the proverbial woodwork. And if one day it’s combined with Google’s AI-based RAISR image sharpening product, then anyone, and any company, that has anything to do with optics and imaging could suddenly find all their Christmases have arrived at once. Did you ruin the photo of your kid’s birthday when you shook the camera? Solved. Having problems seeing the cracks in the concrete of your new skyscraper? Solved… and whoever thought children’s parties and skyscrapers would have anything in common?
Traditionally, in order to analyse the non-visible and visible aspects of a scene, the images had to be viewed and compared separately, or manually combined by a trained expert, and even then some of the hazards present in the images could easily be missed – and let’s face it, that’s never a good thing.
The new technology works by using AI to carry out a detailed examination of each individual image and assess how visible each part of it is, then it automatically extracts the best areas from each image, taking environmental characteristics, such as brightness, the direction of light and obstacles, into consideration.
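Neither NEC nor Tokyo Tech has published the details of their algorithm, but here’s a minimal Python sketch of the general idea: score each region of two aligned frames, one visible and one infrared, using a crude visibility measure (local contrast, standing in for whatever learned metric the real system uses), then keep whichever image wins in each region. Every function and parameter name here is hypothetical.

```python
import numpy as np

def local_visibility(img, patch=16):
    """Score each patch by its local contrast (standard deviation of
    pixel intensities) - a crude stand-in for a learned visibility metric."""
    h, w = img.shape
    scores = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            scores[i, j] = block.std()
    return scores

def fuse(visible, infrared, patch=16):
    """Patch-wise fusion: for every region, keep whichever aligned image
    (visible or infrared) scores higher on local visibility."""
    assert visible.shape == infrared.shape, "frames must be pre-aligned"
    out = visible.copy()
    vis_scores = local_visibility(visible, patch)
    ir_scores = local_visibility(infrared, patch)
    for i in range(vis_scores.shape[0]):
        for j in range(vis_scores.shape[1]):
            if ir_scores[i, j] > vis_scores[i, j]:
                out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = \
                    infrared[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
    return out

# Demo with synthetic frames: the "visible" frame is washed out (flat, so
# low contrast) on its right half, the "infrared" frame on its left half.
rng = np.random.default_rng(0)
visible = rng.normal(128.0, 40.0, (128, 128))
infrared = rng.normal(128.0, 40.0, (128, 128))
visible[:, 64:] = 128.0   # right half unreadable in visible light
infrared[:, :64] = 128.0  # left half unreadable in infrared
fused = fuse(visible, infrared)  # keeps left from visible, right from IR
```

A production system would learn the visibility metric and handle registration between sensors with different resolutions and fields of view, but the select-the-best-region structure described above is the same.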
“Now that this technology enables instant visual clarification, even under harsh conditions, it allows users to make well informed evaluations. For example, it can be applied to monitoring systems to assist with night time observations, or to infrastructure inspection devices to improve the detection of interior and exterior abnormalities, such as cracking,” says Akio Yamada, General Manager, Data Science Research Laboratories, NEC Corporation.
“Until now, a specialist had to manually carry out complex conversion tasks in order to combine images taken by different types of cameras. This technology eliminates the need for such manual work, using AI to effectively and automatically combine images taken by different cameras. This also increases visibility by actively utilising the strong points of each visible image and non-visible image, even when the images are difficult to visualise,” added Professor Masatoshi Okutomi, School of Engineering, Tokyo Institute of Technology.
Aren’t cameras and sensors wonderful…