
WHY THIS MATTERS IN BRIEF

Rendering high quality virtual worlds is resource intensive, and DeepFovea can eliminate up to 99 percent of those rendering demands, which will make AR and VR more accessible.

 


One of the biggest issues with Augmented Reality (AR) and Virtual Reality (VR) worlds today is the huge amount of computing power and network bandwidth they consume in order to give users a high quality experience, something that cloud based rendering and 5G will hopefully help overcome in the future. To tackle this issue Facebook just unveiled DeepFovea, a new lightweight Artificial Intelligence (AI) foveated rendering system that is specially designed to overcome both of these problems, and when you look at the quality of the VR images it can render using astonishingly low amounts of compute and network bandwidth the results, it has to be said, are quite stunning.

 

RELATED
You can now try the "world's most dangerous" fake news AI for yourself

 

This is also the first practical application of a Generative Adversarial Network (GAN), the same type of AI that helps generate everything from DeepFakes and Synthetic Content to innovative new products like NASA’s interplanetary landers, and, because of the power of the technology, it is able to generate natural looking high definition video from nothing more than an incredibly sparse input, as you can see from the video. In tests, DeepFovea can decrease the amount of compute resources needed for rendering by as much as 10 to 14x while keeping any image differences imperceptible to the human eye.

 

A high quality VR experience with just 10% of the data

 

When the human eye looks directly at an object, it sees it in great detail. Peripheral vision, on the other hand, is much lower quality, but because the brain infers the missing information, humans don’t notice. DeepFovea uses recent advances in GANs to similarly “in-hallucinate” missing peripheral details by generating content that is perceptually consistent.
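
To make the idea concrete, here is a minimal sketch, assuming nothing about Facebook’s actual implementation, of how a foveated sampling mask could be built: the probability of rendering a pixel falls off with its angular distance (eccentricity) from the gaze point, down to roughly 1 percent in the far periphery. The function name, falloff curve, and parameter values are illustrative assumptions.

```python
# A minimal sketch (not Facebook's DeepFovea code) of a foveated sampling mask:
# pixel sampling probability falls off with angular distance (eccentricity)
# from the gaze point, so the fovea stays dense while the periphery becomes
# extremely sparse.
import numpy as np

def foveated_mask(height, width, gaze_xy, fov_deg=(60.0, 40.0),
                  min_density=0.01, falloff=8.0, rng=None):
    """Return a boolean mask; True means the pixel is rendered/sampled.

    gaze_xy: (x, y) gaze position in pixels.
    fov_deg: assumed horizontal/vertical field of view in degrees.
    min_density: sampling probability at the far periphery (~1 percent here).
    falloff: how quickly density drops with eccentricity (an assumption).
    """
    rng = rng or np.random.default_rng(0)
    ys, xs = np.mgrid[0:height, 0:width]
    # Convert pixel offsets from the gaze point into approximate visual angles.
    deg_per_px_x = fov_deg[0] / width
    deg_per_px_y = fov_deg[1] / height
    ecc = np.hypot((xs - gaze_xy[0]) * deg_per_px_x,
                   (ys - gaze_xy[1]) * deg_per_px_y)
    # Density decays exponentially with eccentricity, floored at min_density.
    density = np.maximum(np.exp(-ecc / falloff), min_density)
    return rng.random((height, width)) < density

mask = foveated_mask(400, 600, gaze_xy=(300, 200))
print(f"sampled {mask.mean():.1%} of pixels")  # dense centre, ~1% far periphery
```

With these placeholder defaults the mask keeps the centre of gaze fully dense while sampling only about 1 percent of the far peripheral pixels, mirroring the roughly 99 percent reduction described below.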

 

RELATED
Epic Games revolutionary photo-realistic Unreal engine could finally make synthetic content mainstream

 

The system is trained by feeding it a large number of video sequences with dramatically decreased pixel density as input. The sparse input simulates peripheral image degradation, while the full density target frames help the network learn how to fill in the missing details based on statistics from all the videos it has seen. The result is natural looking video generated from a stream of sparse pixels whose density has been decreased by as much as 99 percent along the periphery of a 60×40 degree field of view. The system also keeps the level of flicker, aliasing, and other video artefacts in the periphery below the threshold that can be detected by the human eye.
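
The sketch below shows, under stated assumptions, what that kind of training loop can look like: a generator network reconstructs dense frames from the sparsified input, a discriminator judges how natural the reconstructions look, and the two are trained adversarially alongside a plain reconstruction loss. The tiny Generator and Discriminator networks, the loss weighting, and the data handling are placeholders, not the published DeepFovea architecture.

```python
# Hedged sketch of GAN-style foveated reconstruction training, not DeepFovea itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy network that maps a sparse (masked) frame to a dense one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy patch discriminator judging whether a frame looks natural."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(dense_frame, mask):
    """dense_frame: (B,3,H,W) target frame; mask: (B,1,H,W) foveated sampling mask."""
    sparse_frame = dense_frame * mask              # simulate peripheral degradation
    fake = G(sparse_frame)                         # reconstruct the dense frame

    # Discriminator: real frames vs reconstructed frames.
    real_logits = D(dense_frame)
    fake_logits = D(fake.detach())
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) +
        F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the dense target.
    adv_logits = D(fake)
    g_adv = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))
    g_rec = F.l1_loss(fake, dense_frame)
    g_loss = g_rec + 0.01 * g_adv                  # loss weighting is an assumption
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

A real system would feed whole video sequences rather than single frames and tune the losses so that peripheral flicker and aliasing stay below the detection threshold mentioned above; the per-frame setup here is only meant to illustrate the sparse-input, dense-target idea.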

 

RELATED
REK's phygital sports arenas prove the future of sports is mixed reality

 

High quality AR and VR experiences require high image resolution, high frame rates, and multiple views, all of which can be extremely resource intensive. So, in order to advance these systems and bring them to a wider range of audiences and devices, such as those with mobile chipsets and small, portable batteries, we’ll need to dramatically increase rendering efficiency, and that is exactly what DeepFovea helps achieve. DeepFovea also shows how deep learning can accomplish this task via foveated reconstruction, and because it is hardware-agnostic it is a promising tool for potential use in next-gen head-mounted display technologies.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
