
Survey shows deepfake faces now look more real than genuine photos


WHY THIS MATTERS IN BRIEF

When deepfakes look more convincing and more real than genuine images of people, we change our behaviours, our opinions, and, most importantly, who and what we trust.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

With everyone talking about platforms like ChatGPT and MidJourney you might think that deepfakes have been taking a back seat, but while you’ve been looking elsewhere they’ve been getting better. So good, in fact, that even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.

 

A few years ago, for example, a fake LinkedIn profile with a computer-generated profile picture made the news after it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.

 

The Future of Synthetic Content, by keynote speaker Matthew Griffin

 

These deepfakes are becoming widespread in everyday culture which means people should be more aware of how they’re being used in marketing, advertising, and social media. The images are also being used for malicious purposes, such as political propaganda, discrediting people, espionage, and information warfare.

Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is “trained” by exposing it to increasingly large data sets of real faces.

In fact, two deep neural networks are set against each other: a generator that produces candidate images, and a discriminator that tries to tell them apart from real photos. The two compete to produce, and to catch, ever more realistic images, which is why the end products are dubbed GAN images, where GAN stands for Generative Adversarial Network. The process generates novel images that are statistically indistinguishable from the training images.
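
To make that adversarial setup concrete, here is a minimal, illustrative GAN training loop in PyTorch. This is not the system used to create the faces discussed in the study; the network sizes, image resolution, and hyperparameters are assumptions chosen purely for demonstration.

```python
# A minimal sketch of GAN training in PyTorch. Everything here (toy 64x64
# greyscale images, layer sizes, learning rates) is an assumption for
# illustration, not the architecture behind real deepfake faces.
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64      # toy greyscale "face" images, flattened

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: outputs a logit scoring how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real faces from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # don't backprop into the generator here
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into outputting "real".
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example: one step on a batch of random stand-in "real" images,
# scaled to [-1, 1] to match the generator's Tanh output range.
train_step(torch.rand(32, IMG_PIXELS) * 2 - 1)
```

As each side improves, the other is forced to improve too, which is exactly why the resulting images keep getting harder for humans, and machines, to catch.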

 

In a study published in iScience, researchers showed that a failure to distinguish these artificial faces from the real thing has implications for our online behavior. Their research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.

They also found that people perceived GAN faces as even more real-looking than genuine photos of actual people’s faces. It’s not yet clear why, but the finding highlights just how far the technology used to generate artificial images has come.

The researchers also found an interesting link to attractiveness: faces rated as less attractive were also rated as more real. Less attractive faces might be considered more typical, and the typical face may serve as a reference against which all faces are evaluated. These GAN faces would therefore look more real because they more closely resemble the mental templates people have built from everyday life.

But seeing these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people—a concept known as “social trust.”

 

We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of the same study, the researchers saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if those faces were artificially generated.

It is not surprising that people put more trust in faces they believe to be real. But the researchers also found that trust eroded once people were informed about the potential presence of artificial faces in online interactions: participants then showed lower levels of trust overall, independently of whether the faces were real or not, which could have significant societal consequences.

This outcome could be regarded as useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.

In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy. The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this “truth default” state, eventually eroding social trust.

 

The transition to a world where what’s real is indistinguishable from what’s not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger’s identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently—in ways we hadn’t expected to.

Psychologists use the term “reality monitoring” for the process by which we correctly identify whether something comes from the external world or from within our own minds. The advance of technologies that can produce fake, yet highly realistic, faces, images, and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.

 

It’s crucial for people to be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or a large number of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.
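
As a rough illustration of one such check, the sketch below uses the open-source ImageHash library to compare a suspect profile photo against a candidate match saved from a reverse image search. The file names and distance threshold are assumptions; a small perceptual-hash distance only suggests the photo was copied from somewhere else on the web, it proves nothing on its own and is not a deepfake detector.

```python
# A minimal sketch of a near-duplicate check via perceptual hashing.
# File paths and the threshold below are hypothetical placeholders.
from PIL import Image
import imagehash  # pip install ImageHash

def looks_like_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if two images are near-duplicates under a perceptual hash.

    Perceptual hashes change little under resizing or recompression, so a
    small Hamming distance suggests the profile photo was lifted from an
    existing image found by a reverse image search.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # '-' gives the Hamming distance

# Hypothetical usage with two locally saved images:
# print(looks_like_same_image("profile_photo.jpg", "reverse_search_hit.jpg"))
```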

The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us distinguish the real from the fake when it comes to new connections’ faces.
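
As a hedged sketch of what such a detector might look like under the hood, the example below fine-tunes a pretrained CNN as a binary real-versus-fake face classifier in PyTorch. The dataset layout ("faces/real" and "faces/fake") and all hyperparameters are assumptions; deployed detectors are far more sophisticated and must be retrained continually as generators improve.

```python
# A minimal sketch of a real-vs-GAN face classifier: fine-tune a pretrained
# ResNet-18 on a hypothetical labelled dataset. Folder layout and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),   # matching the pretrained weights
])

# Expects one folder per class, e.g. faces/real/*.jpg and faces/fake/*.jpg
dataset = datasets.ImageFolder("faces", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: real vs fake

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:          # one pass over the data, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```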
