Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, an award-winning futurist, and the author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Being able to photograph and “see” objects that are out of sight has both military and civilian applications.
In an age dominated by megapixel counts, a research group from Xi’an Jiaotong University in China, led by Bin Bai, has designed a one megapixel camera that can photograph objects that aren’t in its direct line of sight, even taking photos around corners. Like traditional cameras, the new experimental shooter records light, but unlike most, it can capture reflected light and make out an object that’s around the corner.
The camera doesn’t need a direct line of sight to the object; it only needs to face light that is bouncing off of it.
Light, or more accurately photons, naturally bounces off every object all the time, and it’s a camera’s job to capture those photons. However, while the photons that hit a hidden object bounce off it just as they bounce off any other object, the likelihood of any individual photon then travelling on to hit the back of the camera’s CMOS sensor is tiny. And even when a few of those photons do hit the sensor, there are so few of them that the sensor isn’t sensitive enough to piece them together into an image. This is what the Chinese team, like a team from MIT last year, albeit one using lasers, has managed to overcome, reconstructing photos of objects from only a small number of photons.
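To get a feel for why a single exposure of stray, bounced photons can't form an image, here is a toy simulation, not the team's actual optical setup, which the article doesn't detail. The scene, image size, and photon rate below are all hypothetical; photon arrivals at each pixel are modelled as Poisson counts, the standard model for photon-limited imaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "scene": a bright 12x12 square on a dark 32x32 background.
scene = np.zeros((32, 32))
scene[10:22, 10:22] = 1.0

# In one short exposure, each pixel registers a Poisson-distributed number
# of photons. With a mean far below one photon per lit pixel, almost every
# pixel reads zero, so the square is unrecognisable.
mean_photons_per_exposure = 0.05
exposure = rng.poisson(scene * mean_photons_per_exposure)

print("photon hits in one exposure:", int(exposure.sum()))
```

With 144 lit pixels at 0.05 expected photons each, a single exposure averages only about seven scattered hits, which is exactly the sensitivity problem described above.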
In the Chinese case, the new one megapixel camera takes over 50,000 “photos” and stitches them all together to create a single high-definition photo. While the team were quiet on how long the process takes, capturing and stitching 50,000 exposures likely takes a long time, which also means the camera probably can’t photograph moving objects.
So who knows, one day your next HTC or iPhone might let you take photos you never thought possible, and for those of you who have relatives who love taking photos of themselves in front of walls, well, aren’t you lucky: soon their photos will include the walls around the corner too. On a more serious note, this new discovery could have applications in the transportation industry, giving autonomous vehicles greater awareness of their surroundings, as well as military and consumer applications.