
WHY THIS MATTERS IN BRIEF

As companies look for ways to easily render the real world into virtual worlds, this space is accelerating.

 


Hot on the heels of Nvidia, which recently released its own very impressive so-called video-to-video Artificial Intelligence (AI) system, Samsung researchers have unveiled an AI that generates realistic 3D renders of video scenes. Like Nvidia’s solution, these renders can then be edited and used in everything from the production of movies and video games through to helping future consumers relive special memories in Virtual Reality (VR). And for my ten cents it’s the latter use cases that Samsung will focus on developing, so keep an eye out for this feature appearing in Samsung’s consumer products within the next five years.

 


 

In a paper detailing the neural network behind the AI, the researchers explained how inefficient the process of creating virtual scenes is today:

 

From the real world to the digital one, in a flash. Courtesy: Samsung

 

“Creating virtual models of real scenes usually involves a lengthy pipeline of operations. Such modelling usually starts with a scanning process, where the photometric properties are captured using camera images and the raw scene geometry is captured using depth scanners or dense stereo matching,” they said, adding, “the latter process usually provides noisy and incomplete point cloud that needs to be further processed by applying certain surface reconstruction and meshing approaches. Given the mesh, the texturing and material estimation processes determine the photometric properties of surface fragments and store them in the form of 2D parameterized maps, such as texture maps, bump maps, view-dependent textures, surface lightfields. Finally, generating photorealistic views of the modelled scene involves computationally-heavy rendering process such as ray tracing and/or radiance transfer estimation.”
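To make that pipeline concrete, here is a minimal sketch of the conventional approach the researchers describe, using the open-source Open3D library. The file names and parameter values are illustrative assumptions rather than anything from Samsung’s work:

import open3d as o3d

# 1. Load the raw, noisy point cloud produced by a depth scanner or
#    dense stereo matching ("scan.ply" is a hypothetical input file).
pcd = o3d.io.read_point_cloud("scan.ply")

# 2. Clean up the scan and estimate normals, which surface
#    reconstruction requires.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd.estimate_normals()

# 3. Surface reconstruction and meshing (Poisson reconstruction here).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)

# 4. Texturing, material estimation, and ray-traced rendering would
#    follow, each a separate and computationally heavy stage.

Every one of those stages can fail on noisy scans, which is exactly the fragility the quoted passage describes, and it is these stages that the neural approach collapses.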

 


 

The video input is then converted into points that represent the geometry of the scene. These geometry points are then rendered into computer graphics using a neural network, vastly speeding up the process of producing a photorealistic 3D scene that can be translated into a VR world, and you can see the result in the video below.
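For a feel of how that works, here is a conceptual PyTorch sketch of neural point-based rendering in that spirit: each scene point carries a small learned descriptor, the points are rasterised into an image-sized feature map, and a convolutional network turns those features into an RGB frame. The shapes, layer sizes, and the toy point-to-pixel assignment are my own illustrative assumptions, not Samsung’s actual architecture:

import torch
import torch.nn as nn

class NeuralPointRenderer(nn.Module):
    def __init__(self, num_points: int, desc_dim: int = 8):
        super().__init__()
        # One learnable descriptor per 3D point, optimised jointly
        # with the rendering network during training.
        self.descriptors = nn.Parameter(torch.randn(num_points, desc_dim))
        # A deliberately tiny stand-in for a full rendering network.
        self.net = nn.Sequential(
            nn.Conv2d(desc_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pixel_ids: torch.Tensor) -> torch.Tensor:
        # pixel_ids: (H, W) tensor giving, per pixel, the index of the
        # nearest visible point under the camera (-1 = background).
        h, w = pixel_ids.shape
        feat = torch.zeros(h, w, self.descriptors.shape[1],
                           device=self.descriptors.device)
        mask = pixel_ids >= 0
        feat[mask] = self.descriptors[pixel_ids[mask]]
        # (H, W, C) -> (1, C, H, W) for the convolutional renderer.
        return self.net(feat.permute(2, 0, 1).unsqueeze(0))

# Usage: given a point-to-pixel assignment from a conventional
# z-buffer pass, one forward call yields an RGB frame.
renderer = NeuralPointRenderer(num_points=100_000)
ids = torch.randint(-1, 100_000, (256, 256))  # dummy rasterisation
image = renderer(ids)  # (1, 3, 256, 256)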

 

See the tech for yourself
 

As mentioned, this type of product could one day help speed up game development, especially for video game counterparts of movies that are already being filmed: footage from a film set could provide a replica 3D environment for game developers to create interactive experiences in. But one of the most exciting new use cases would be letting consumers relive their birthday parties, weddings, and other events by recording them and having the solution automatically convert those videos into photorealistic VR worlds. They could then relive those memories all over again using just a VR headset and perhaps even haptic clothing, like the Teslasuit, that helps them relive the moment not just through their eyes but through their other senses too.

 


 

Before such use cases are realised though, the tech still needs some refinement: current scenes can’t be altered or edited, and any large deviation from the original viewpoint results in distortions. Still, it’s a fascinating early insight into what could be possible in a not-so-distant future.

Source: arXiv and GitHub

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
