Why live with 2D imagery when you can live life in glorious 3D!?


Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

In 1943, Thomas Watson, the president of IBM, famously predicted the world market for computers would top out at “maybe five” of the machines. He was wrong – you likely have more than that in your own house, let’s face it – but at the time, it made sense. After all, if computers were still gigantic, vacuum-tube-powered addition machines, you probably wouldn’t want more than about five either.



It’s a similar story with holograms. Even back in the 1990s, more than 40 years after Dennis Gabor first came up with the idea of using wavefront interference to reconstruct images in three dimensions, science fiction was still assuming the need for entire decks and suites to power our holographic adventures. But now they can run on a smartphone.

Almost two years ago, researchers at MIT made a breakthrough – a technology they dubbed “tensor holography.” Since then, the project has continued to improve, and today, the team are working with a system they say is “fully automatic, robust to rendered and misaligned real-world inputs, produces realistic depth boundaries, and corrects vision aberrations.”

“We are amazed at how well it performs,” project co-author Wojciech Matusik commented back in 2021. And it’s economical, too: the amount of computer power and memory required for this real-time 3D holography is less than one megabyte.



“It’s [a] negligible [amount],” Matusik pointed out, “considering the tens and hundreds of gigabytes available on the latest cell phone.”

Holograms have come a long way since the first laser-generated static images of the 1960s. Even then, though, it was a complicated thing to set up: it required splitting a laser beam in two, with half the beam used to illuminate the subject and the other half serving as a reference for the light waves’ phase.
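That two-beam recording step can be sketched numerically. The minimal 1D simulation below interferes a tilted plane-wave reference with a spherical object wave and records the resulting intensity fringes; the wavelength, angle, and geometry are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical sketch of classic hologram recording: the plate records the
# intensity of an object wave interfering with a coherent reference wave.

wavelength = 633e-9            # illustrative HeNe laser wavelength, metres
k = 2 * np.pi / wavelength     # wavenumber

# 1D slice of the recording plane, 1 mm wide
x = np.linspace(-0.5e-3, 0.5e-3, 2000)

# Reference wave: a plane wave hitting the plate at a small tilt angle
theta = 0.01                               # radians
reference = np.exp(1j * k * np.sin(theta) * x)

# Object wave: spherical wave from a point source 10 cm behind the plate
z = 0.10
object_wave = np.exp(1j * k * np.sqrt(x**2 + z**2)) / np.sqrt(x**2 + z**2)
object_wave /= np.abs(object_wave).max()   # normalise the amplitude

# The plate records intensity only; the fringe pattern encodes the phase
# difference between the two beams
intensity = np.abs(reference + object_wave) ** 2
```

Re-illuminating a plate carrying such fringes with the reference beam alone is what reconstructs the original wavefront.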

Computers made this process easier – but they came with their own problems. Supercomputers designed to run physics-based simulations of the laser setup were imperfect and computationally expensive: “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” explained study lead Liang Shi. “That increases the complexity significantly.”
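Shi’s point about complexity can be made concrete with a brute-force sketch: because every scene point sits at its own depth, each one contributes its own depth-dependent spherical wave to every hologram pixel, so the cost grows as points × pixels. The grid, wavelength, and random scene below are invented for illustration and bear no relation to the MIT method:

```python
import numpy as np

# Illustrative brute-force hologram computation (not the tensor-holography
# approach): every scene point adds a depth-specific spherical wave to
# every pixel, so cost scales as O(num_points * num_pixels).

wavelength = 520e-9
k = 2 * np.pi / wavelength

# Hologram plane: a small grid of pixels
px = np.linspace(-1e-3, 1e-3, 128)
X, Y = np.meshgrid(px, px)

# A toy scene of points at different depths - this is what defeats any
# "same operation for all of them" shortcut: each depth needs its own phase
rng = np.random.default_rng(0)
points = rng.uniform([-1e-3, -1e-3, 0.05], [1e-3, 1e-3, 0.15], size=(50, 3))

field = np.zeros_like(X, dtype=complex)
for xs, ys, zs in points:                  # one full pass per scene point
    r = np.sqrt((X - xs) ** 2 + (Y - ys) ** 2 + zs**2)
    field += np.exp(1j * k * r) / r        # depth-dependent spherical wave

hologram = np.abs(field) ** 2              # recorded intensity pattern
```

Even this toy version does 50 × 128² complex evaluations; real scenes with millions of points and megapixel holograms are what pushed earlier systems onto supercomputers.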



So the team took a different tack entirely. This is the future, they reasoned, and we don’t always need to come up with our own solutions for problems – not when we can just set a computer to teach itself. They built a convolutional neural network and set it to match up 4,000 pairs of computer-generated images: one a 2D picture containing information on the color and depth of every individual pixel, the other a hologram of that image.
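As a rough sketch of that training setup, the toy code below pushes a 4-channel RGB-D input (colour plus per-pixel depth) through a single hand-rolled convolution to produce a 2-channel “hologram” (amplitude and phase) and computes the mean-squared error a trainer would minimise against the paired ground truth. The single layer, channel counts, and image sizes are all invented for illustration; the actual tensor-holography network is far deeper:

```python
import numpy as np

def conv2d(image, kernels):
    """Naive 'same'-padded 2D convolution: image (C_in, H, W) and
    kernels (C_out, C_in, kh, kw) -> output (C_out, H, W)."""
    c_out, c_in, kh, kw = kernels.shape
    _, h, w = image.shape
    padded = np.pad(image, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                patch = padded[:, i:i + kh, j:j + kw]
                out[o, i, j] = np.sum(patch * kernels[o])
    return out

rng = np.random.default_rng(1)
rgbd = rng.random((4, 16, 16))             # one tiny RGB-D training input
kernels = rng.standard_normal((2, 4, 3, 3)) * 0.1

pred = conv2d(rgbd, kernels)               # predicted amplitude + phase
target = rng.random((2, 16, 16))           # paired ground-truth hologram
loss = np.mean((pred - target) ** 2)       # error a real trainer minimises
```

Training repeats this forward pass over the thousands of image–hologram pairs, nudging the kernels to shrink the loss until the network produces holograms on its own.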

The result: a computer program so good at creating holograms that it surprised even the team themselves. “It’s a considerable leap that could completely change people’s attitudes toward holography,” Matusik said. “We feel like neural networks were born for this task.”

So where now for the developing technology? Some experts have pointed out the advantages of holograms over the occasionally nauseating and eye-straining virtual reality (VR) – perhaps the holodeck is a more likely future than the metaverse after all. Other applications include 3D printing, the team point out, as well as medical visualizations, microscopy, and materials science.



“Holographic 3D displays provide differentiating interactive experiences from cell phones or stereoscopic augmented reality (AR) and [VR] displays,” the team’s latest paper on the topic explains. “[Our work] runs in real-time on a consumer-grade GPU [graphics processing unit] and 5 FPS [frames per second] on an iPhone 13 Pro…promising real-time mobile performance in future-generation AR/VR headsets and glasses.”

The latest study is published in Light: Science & Applications.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
