WHY THIS MATTERS IN BRIEF

Earth imagery and mapping matter for all sorts of reasons, and there are real-world consequences if they’re manipulated.

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

Have you seen the latest deepfake of Elon Musk or Tom Cruise? Yes? Well, how about the latest deepfake of London? And if that’s got you saying “Uh, what?” then you’re not alone. In the future you may well end up having to ask yourself: can you trust the map on your smartphone, or the satellite image on your computer screen?

As the technology advances it may only be a matter of time until the growing problem of deepfakes converges with Geographic Information Science (GIS). Researchers such as Associate Professor of Geography Chengbin Deng are doing what they can to get ahead of the problem.

Deng and four colleagues – Bo Zhao and Yifan Sun at the University of Washington, and Shaozeng Zhang and Chunxue Xu at Oregon State University – co-authored a recent article in Cartography and Geographic Information Science that explores the problem. In “Deep fake geography? When geospatial data encounter Artificial Intelligence,” they explore how false satellite images could potentially be constructed and detected. News of the research has been picked up by media outlets around the world, including in China, Japan, Germany and France.

“Honestly, we probably are the first to recognize this potential issue,” Deng said.

GIS underpins a whole host of applications, from national defense to autonomous vehicles, a technology still under development. Artificial Intelligence (AI) has made a positive impact on the discipline through the development of Geospatial Artificial Intelligence (GeoAI), which uses machine learning to extract and analyze geospatial data. But these same methods could potentially be used to fabricate GPS signals, fake locational information on social media posts, fabricate photographs of geographic environments, and more.

In short, the same technology that can change the face of an individual in a photo or video can also be used to make fake images of all types, including maps and satellite images.

“We need to keep all of this in accordance with ethics. But at the same time, we researchers also need to pay attention and find a way to differentiate or identify those fake images,” Deng said. “With a lot of data sets, these images can look real to the human eye.”

To figure out how to detect an artificially constructed image, first you need to construct one. To do so, the researchers used a technique common in the creation of deepfakes: Cycle-Consistent Adversarial Networks (CycleGAN), an unsupervised deep learning algorithm that can simulate synthetic media.
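
The cycle-consistency idea at the heart of CycleGAN can be sketched in a few lines. This is a hypothetical toy, not the researchers’ model: the two “generators” here are simple linear maps standing in for the neural networks that translate imagery between two style domains, and the loss measures how well a round trip recovers the original input.

```python
def g_ab(x):
    """Hypothetical generator, domain A -> B: here just a fixed linear map."""
    return [2.0 * v + 1.0 for v in x]

def g_ba(y):
    """Hypothetical generator, domain B -> A: the (approximate) inverse map."""
    return [(v - 1.0) / 2.0 for v in y]

def cycle_consistency_loss(x):
    """Mean absolute difference between x and its round trip A -> B -> A."""
    round_trip = g_ba(g_ab(x))
    return sum(abs(a - b) for a, b in zip(x, round_trip)) / len(x)

pixels = [0.1, 0.5, 0.9]  # a stand-in for one row of image pixels
print(cycle_consistency_loss(pixels))  # effectively 0.0: g_ba inverts g_ab
```

In a real CycleGAN both generators are trained jointly, with this cycle loss keeping translations faithful to the source while adversarial losses push them toward the target style.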

Generative Adversarial Networks (GANs) are a type of AI, but they require training samples – input – of whatever content they are programmed to produce. A black box on a map could, for example, represent any number of different factories or businesses; the various points of information fed into the network help determine the possibilities it can generate.
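
In sketch form, the adversarial setup behind a GAN looks like this. The scores below are hand-picked for illustration – in a real GAN both players are neural networks trained on samples – but the objectives are the standard ones: the discriminator is rewarded for scoring real inputs high and generated ones low, while the generator is rewarded when its output fools the discriminator.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: low when real samples score near 1 and fakes near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Low when the discriminator scores the generator's fake near 1 (is fooled)."""
    return -math.log(d_fake)

# Early in training the discriminator spots fakes easily, so the generator's
# loss is high; as the fakes improve, that loss falls.
print(generator_loss(0.1))  # ≈ 2.30
print(generator_loss(0.9))  # ≈ 0.11
```

Training alternates between the two objectives until the generator’s output is hard to tell from the real training samples.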

The researchers altered a satellite image of Tacoma, Washington, interspersing elements of Seattle and Beijing and making it look as real as possible. They are not encouraging anyone to try such a thing themselves – quite the opposite, in fact.

“It’s not about the technique; it’s about how human beings are using the technology,” Deng said. “We want to use technology for good, not for bad purposes.”

After creating the altered composite, they compared 26 different image metrics to determine whether there were statistical differences between the true and false images. Statistical differences registered on 20 of the 26 indicators – roughly 77 percent.
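
The paper’s 26 indicators aren’t enumerated here, but the kind of per-image comparison involved can be illustrated with two stand-in metrics – mean brightness and a crude edge-sharpness measure – applied to toy “images” (plain nested lists of greyscale values; the real study worked on full satellite rasters):

```python
def brightness(img):
    """Mean pixel value of a greyscale image given as a list of rows."""
    flat = [v for row in img for v in row]
    return sum(flat) / len(flat)

def edge_sharpness(img):
    """Mean absolute horizontal gradient: higher means harder edges."""
    diffs = [abs(row[i + 1] - row[i]) for row in img for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

# Toy stand-ins: a smooth, brighter "real" patch vs. a dimmer,
# high-contrast "fake" patch (values are illustrative only).
real = [[0.5, 0.6, 0.7], [0.5, 0.6, 0.7]]
fake = [[0.1, 0.9, 0.1], [0.9, 0.1, 0.9]]

print(brightness(real) > brightness(fake))          # True: the fake is dimmer
print(edge_sharpness(fake) > edge_sharpness(real))  # True: the fake has sharper edges
```

A statistical test over many such per-image metrics – the researchers used 26 of them – is what separates composites from genuine imagery.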

Some of the differences included, for example, the colour of roofs: while roof colours in each of the real images were uniform, they were mottled in the composite – something that could, for instance, affect governments’ ability to quantify the amount of solar power cities could produce. And there are millions more examples in store… The fake satellite image was also dimmer and less colourful, but had sharper edges. Those differences, however, depended on the inputs used to create the fake, Deng cautioned.

This research is just the beginning. In the future, geographers may track different types of neural networks to see how they generate false images and figure out ways to detect them. Ultimately, researchers will need to discover systematic ways to root out deepfakes and verify trustworthy information before fakes end up in public view.

“We all want the truth,” Deng said. But in the future the truth comes with caveats …

About the author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
