Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
LiDAR sensing systems used to be incredibly large and incredibly expensive; now they’re in your phone.
This week Apple introduced its latest line-up of smartphones, including the iPhone 12 Pro and iPhone 12 Pro Max, both of which are equipped with a LiDAR scanner. The scanner will dramatically improve their Augmented Reality (AR) capabilities, help users experience true real-time AR, and even let them scan their environments to create Virtual Reality (VR) copies of them, like the ones that Samsung recently showed off.
The new phones are also a clear demonstration of what I discuss in my futurist keynotes – the fact that over time every technology gets faster, more performant, smaller, and cheaper. Bearing in mind that less than a decade ago the LiDAR sensors Apple is using would have cost more than $75,000 per unit, it’s a staggering demonstration of just how fast the technology has been developed and commercialised – something helped massively by automotive manufacturers’ demand for the technology, which is a key component in the self-driving cars that represent the next generation of vehicle mobility.
LiDAR is interesting from a futures perspective because it’s a so-called “time of flight” depth sensor, which measures how long it takes for light to bounce off objects in the scene and return to the sensor. With precise timing, that information is used to judge the depth of each point, and that’s what will make future augmented reality experiences much faster and more accurate – which, ultimately, will likely help boost their adoption.
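The time-of-flight principle described above is simple enough to sketch in a few lines. This is an illustrative calculation only, not Apple’s implementation: the sensor times a light pulse’s round trip, and the distance is half that time multiplied by the speed of light.

```python
# Illustrative sketch of the time-of-flight principle behind LiDAR.
# A light pulse is emitted, bounces off a surface, and returns; the
# round-trip time gives the distance to the surface.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a surface, given the pulse's round-trip time.

    The pulse travels to the object and back, so we halve the total
    path length to get the one-way distance.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0


# A pulse returning after roughly 33 nanoseconds bounced off something
# about 5 metres away.
print(tof_distance(33.356e-9))
```

Note the timescales involved: resolving centimetre-level depth means timing light pulses to fractions of a nanosecond, which is why these sensors were so expensive for so long.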
While existing iPhones are already capable of pretty good AR tracking, the current approach derives depth from machine vision techniques like SLAM (Simultaneous Localisation and Mapping), which tracks points in the scene over time to infer depth. Typically, this means that the system needs a few seconds and some movement from the camera before it can understand its frame of reference and begin to assess the depth of the scene.
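The geometry behind this motion-based approach can be illustrated with its simplest possible case – not real SLAM, which jointly estimates camera pose and scene structure, but the underlying idea: when the camera moves sideways by a known distance, the apparent shift (disparity) of a tracked point between two frames reveals its depth. The focal length and baseline below are made-up example values.

```python
# Simplified illustration of depth-from-motion: the two-view disparity
# case. Real SLAM generalises this across many frames and unknown
# camera motion; here both the motion (baseline) and the camera focal
# length are assumed known.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a tracked point from its pixel shift between two views.

    focal_px:     camera focal length, in pixels (assumed calibrated)
    baseline_m:   sideways camera motion between the frames, in metres
    disparity_px: how far the tracked point shifted in the image, pixels
    """
    if disparity_px <= 0:
        # No shift means no parallax, so depth cannot be inferred --
        # which is exactly why these systems need camera movement.
        raise ValueError("point must shift between views to infer depth")
    return focal_px * baseline_m / disparity_px


# A point that shifts 50 px when the camera moves 10 cm, seen through a
# 1000 px focal length, sits about 2 metres away.
print(depth_from_disparity(1000.0, 0.10, 50.0))
```

The zero-disparity failure case is the key point: without camera movement there is no parallax and no depth, which is why SLAM-based AR needs those first few seconds of waving the phone around.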
Apple says that LiDAR in the iPhone 12 Pro and 12 Pro Max means the phones will be capable of “instant AR.” That’s because LiDAR captures depth information in the equivalent of a ‘single photo’, without the need for any phone movement or the need to compare images across time.
One way to think about it is to think about the pixels in a photograph. When you take a picture, every pixel captures colour and brightness information. In a ‘LiDAR snapshot’, by contrast, every pixel captures a distance value. So rather than needing to wave your phone around for a few seconds before an AR app can establish accurate tracking, tracking can start immediately.
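That “every pixel is a distance” idea is what makes a LiDAR snapshot immediately useful: each depth pixel can be turned straight into a 3D point using the standard pinhole camera model. The sketch below uses made-up example intrinsics (real values come from camera calibration) and a tiny hand-written depth grid.

```python
# Turning a depth image into a 3D point cloud with the pinhole camera
# model. The intrinsics below are hypothetical example values; in
# practice they come from the device's camera calibration.

FX, FY = 500.0, 500.0  # focal lengths in pixels (example values)
CX, CY = 1.0, 1.0      # principal point, centred on our tiny image


def backproject(u: int, v: int, depth_m: float) -> tuple:
    """Map a depth pixel (u, v) to a 3D point in camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)


# A tiny 2x2 'LiDAR snapshot': each entry is a distance in metres,
# not a colour -- a flat surface about 2 m from the camera.
depth_image = [[2.0, 2.0],
               [2.1, 2.1]]

point_cloud = [backproject(u, v, depth_image[v][u])
               for v in range(2) for u in range(2)]
print(len(point_cloud))
```

A real sensor produces this for every pixel in a single capture, which is why no camera movement or frame-to-frame comparison is needed before tracking can begin.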
Of course, you can also compare LiDAR depth data over time, so that instead of a simple snapshot of depth you can build an entire depth-map of the scene. It’s this feature that, when combined with an AI like Samsung’s, will one day let people render their entire environment straight into VR – and that will change how you capture your family’s memories forever. For example, rather than taking photos or videos of your kids’ birthday parties, imagine capturing the entire party in VR so you can relive it as though you’re back in the room, over and over again…
That’s likely the real end game here. But for now LiDAR in a phone is good for AR too …