
Researchers say you’ll soon be able to run holograms from your smartphone


WHY THIS MATTERS IN BRIEF

Why live with 2D imagery when you can live life in glorious 3D!?

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

In 1943, Thomas Watson, the president of IBM, famously predicted the world market for computers would top out at “maybe five” of the machines. He was wrong – you likely have more than that in your own house, let’s face it – but at the time, it made sense. After all, if computers were still gigantic, vacuum-tube-powered addition machines, you probably wouldn’t want more than about five either.

 


It’s a similar story with holograms. Even back in the 1990s, more than 40 years after Dennis Gabor first came up with the idea of using wavefront interference to reconstruct images in three dimensions, science fiction was still assuming the need for entire decks and suites to power our holographic adventures. But now they can run on a smartphone.

Almost two years ago, researchers at MIT made a breakthrough – a technology they dubbed “tensor holography.” Since then, the project has continued to improve, and today, the team are working with a system they say is “fully automatic, robust to rendered and misaligned real-world inputs, produces realistic depth boundaries, and corrects vision aberrations.”

“We are amazed at how well it performs,” project co-author Wojciech Matusik commented back in 2021. And it’s economical, too: the memory required for this real-time 3D holography is less than one megabyte.

 


“It’s [a] negligible [amount],” Matusik pointed out, “considering the tens and hundreds of gigabytes available on the latest cell phone.”

Holograms have come a long way since the first laser-generated static images of the 1960s. Even then, though, it was a complicated thing to set up: the process required splitting a laser beam in two, with half the beam used to illuminate the subject and the other half serving as a reference for the light waves’ phase.
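
To picture that classic setup, here is a minimal NumPy sketch – illustrative only, not the researchers’ code – of how an object wave and a reference wave interfere on a recording plane. The wavelength, off-axis angle, grid size, and point-source distance are all assumed values.

```python
import numpy as np

# Illustrative sketch of optical hologram recording: the interference
# of a reference beam and an object beam on a recording plane.
wavelength = 633e-9             # HeNe laser wavelength in metres (assumed)
k = 2 * np.pi / wavelength      # wavenumber

# Recording plane: a 512 x 512 grid with 10 micron pixels (assumed values)
n, pitch = 512, 10e-6
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

# Reference beam: a plane wave arriving at a small off-axis angle
theta = np.deg2rad(1.0)
reference = np.exp(1j * k * np.sin(theta) * X)

# Object beam: a spherical wave from a point source 0.1 m behind the plane
z = 0.1
r = np.sqrt(X**2 + Y**2 + z**2)
obj = np.exp(1j * k * r) / r

# The recorded fringe pattern is the intensity of the summed fields;
# its cross terms encode the object wave's phase against the reference.
hologram = np.abs(reference + obj) ** 2
```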

Computers made this process easier – but they came with their own problems. Physics-based simulations of the laser setup, even running on supercomputers, were imperfect and computationally expensive: “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” explained study lead Liang Shi. “That increases the complexity significantly.”
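
Shi is describing the classic point-source approach to computer-generated holography, in which every scene point contributes its own spherical wavelet across the entire hologram plane. A rough sketch of why that scales badly, using a hypothetical random point cloud and the same assumed grid as above:

```python
import numpy as np

# Point-source CGH sketch (illustrative): each scene point adds a
# spherical wavelet over the whole plane, so cost grows as points x pixels.
wavelength = 633e-9
k = 2 * np.pi / wavelength
n, pitch = 512, 10e-6
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

# A hypothetical scene: 1,000 random 3D points with per-point amplitudes.
rng = np.random.default_rng(0)
pts = rng.uniform([-1e-3, -1e-3, 0.05], [1e-3, 1e-3, 0.15], size=(1000, 3))
amps = rng.uniform(0.5, 1.0, size=len(pts))

field = np.zeros((n, n), dtype=np.complex128)
for (px, py, pz), a in zip(pts, amps):
    # Every point sits at a different depth, so each needs its own
    # distance map across all pixels -- the per-point work Shi says
    # "increases the complexity significantly".
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += a * np.exp(1j * k * r) / r

phase_hologram = np.angle(field)  # phase-only pattern for a display
```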

 


So the team took a different tack entirely. This is the future, they reasoned, and we don’t always need to come up with our own solutions to problems – not when we can set a computer to teach itself. They built a convolutional neural network and trained it on 4,000 pairs of computer-generated images, each pair matching a 2D picture – containing color and depth information for every individual pixel – with its corresponding hologram.
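
The published network is more sophisticated, but the core idea can be sketched in a few lines of PyTorch. Everything here – layer sizes, channel counts, the MSE loss, and the random stand-in data – is an assumption for illustration, not the team’s actual architecture: a fully convolutional network takes a four-channel color-plus-depth image and regresses a two-channel hologram (amplitude and phase).

```python
import torch
import torch.nn as nn

class RGBDToHologram(nn.Module):
    """Toy fully convolutional net: RGB-D in, amplitude + phase out.

    Illustrative only -- the MIT tensor-holography model is deeper and
    trained with physically motivated losses.
    """
    def __init__(self, width: int = 32, depth: int = 4):
        super().__init__()
        layers = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 2, 3, padding=1)]  # amplitude, phase
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return self.net(rgbd)

model = RGBDToHologram()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step over (RGB-D, hologram) pairs, standing in for the
# 4,000 computer-generated pairs described above.
rgbd = torch.rand(8, 4, 192, 192)    # color + per-pixel depth
target = torch.rand(8, 2, 192, 192)  # precomputed hologram (amp, phase)
loss = nn.functional.mse_loss(model(rgbd), target)
loss.backward()
opt.step()
```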

The result: a computer program so good at creating holograms that it surprised even the team themselves. “It’s a considerable leap that could completely change people’s attitudes toward holography,” Matusik said. “We feel like neural networks were born for this task.”

So where now for the developing technology? Some experts have pointed out the advantages of holograms over the occasionally nauseating and eye-straining virtual reality (VR) – perhaps the holodeck is a more likely future than the metaverse after all. Other applications include 3D printing, the team point out, as well as medical visualizations, microscopy, and materials science.

 


“Holographic 3D displays provide differentiating interactive experiences from cell phones or stereoscopic augmented reality (AR) and [VR] displays,” the team’s latest paper on the topic explains. “[Our work] runs in real-time on a consumer-grade GPU [graphics processing unit] and 5 FPS [frames per second] on an iPhone 13 Pro…promising real-time mobile performance in future-generation AR/VR headsets and glasses.”
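
As a rough illustration of how such throughput numbers are produced – this is not the paper’s benchmark – one can time repeated inference passes through a model like the toy network sketched above:

```python
import time
import torch

# Assumes the toy RGBDToHologram class from the earlier sketch; any
# small convolutional model would do for this timing illustration.
model = RGBDToHologram().eval()
frame = torch.rand(1, 4, 192, 192)   # one RGB-D frame

with torch.no_grad():
    model(frame)                     # warm-up pass
    t0 = time.perf_counter()
    for _ in range(50):
        model(frame)
    dt = time.perf_counter() - t0

print(f"~{50 / dt:.1f} frames per second on this machine")
```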

The latest study is published in Light: Science & Applications.
