Here’s how to spot AI-generated deepfake images


LONDON (AP) — AI fakery is quickly becoming one of the biggest problems we face online. Misleading images, videos and audio are proliferating due to the rise and misuse of generative artificial intelligence tools.

With AI deepfakes popping up almost every day, depicting everyone from Taylor Swift to Donald Trump to Katy Perry attending the Met Gala, it’s becoming increasingly difficult to distinguish what’s real from what’s not.

Video and image generators such as DALL-E, Midjourney and Sora from OpenAI make it easy for people without any technical skills to create deepfakes: just type a request and the system will spit it out.

These fake images may seem innocent. But they can be used for fraud, identity theft, propaganda and election manipulation.

(Photo illustration: highlights of a few notable flaws in an AI-generated deepfake of Pope Francis that went viral on social media.)

In the early days of deepfakes, the technology was far from perfect and often left clear signs of tampering. Fact-checkers have pointed out images with obvious errors, such as hands with six fingers or glasses with differently shaped lenses.

But as AI has improved, it has become a lot more difficult. Some commonly shared advice — such as looking for unnatural blinking patterns in people in deepfake videos — no longer applies, says Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert on generative AI.

Still, there are some things we need to look for, he said.

Many AI deepfake photos, especially of people, have an electronic sheen, “an aesthetic sort of smoothing effect” that leaves the skin “looking incredibly polished,” Ajder said.

However, he cautioned that creative prompting can sometimes eliminate these and many other signs of AI manipulation.
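The “smoothing effect” Ajder describes can be made concrete with a toy texture check: heavily airbrushed regions have little high-frequency detail, which a Laplacian filter picks up. This is a hypothetical illustration of the idea, not a real deepfake detector, and the threshold-free comparison below is purely for demonstration.

```python
import numpy as np

def smoothness_score(img: np.ndarray) -> float:
    """Crude proxy for an 'airbrushed' look: variance of a 3x3 Laplacian
    response over a grayscale patch. Lower values mean flatter, more
    polished texture. Toy heuristic only -- not a deepfake detector."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode 2D convolution written with array slices (kernel is symmetric)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float(out.var())

rng = np.random.default_rng(0)
noisy = rng.random((64, 64))     # texture-rich patch (e.g. real skin pores)
smooth = np.full((64, 64), 0.5)  # perfectly flat patch (extreme smoothing)
print(smoothness_score(noisy) > smoothness_score(smooth))  # True
```

A real analysis would compare a face region against the rest of the frame rather than against a synthetic flat patch, and modern generators can add plausible texture back in, which is exactly why such simple cues are unreliable on their own.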

Check the consistency of shadows and lighting. Often the subject is clearly in focus and appears convincingly lifelike, but elements in the background may not be as realistic or polished.

Face swapping is one of the most common deepfake methods. Experts advise looking closely at the edges of the face. Does the facial skin tone match the rest of the head or body? Are the edges of the face sharp or blurry?

If you suspect that a video of a person speaking has been manipulated, look at their mouth. Do the lip movements match the audio perfectly?

Ajder suggests looking at the teeth. Are they clear, or are they blurry and somehow inconsistent with how teeth look in real life?

Cybersecurity company Norton says algorithms may not yet be sophisticated enough to generate individual teeth, so a lack of contours for individual teeth could be a clue.

THINK OF THE BIG PICTURE

Sometimes context matters. Take a moment to consider whether what you see is plausible.

The journalism website Poynter advises that if you see a public figure doing something that seems “exaggerated, unrealistic, or out of character,” it could be a deepfake.

For example, would the Pope really be wearing a luxury puffer jacket, as depicted in a notorious fake photo? If he were, wouldn’t additional photos or videos be published by legitimate sources?

At the Met Gala, over-the-top costumes are the whole point, adding to the confusion. But such major events are usually captured by officially licensed photographers who take plenty of photos that can help with verification. One clue that the Perry photo was a fake is the carpeting on the stairs, which some eagle-eyed social media users noticed was from the 2018 event.

USING AI TO FIND THE FAKES

Another approach is to use AI to combat AI.

OpenAI said Tuesday it is releasing a tool to detect content created with DALL-E 3, the latest version of its AI image generator. Microsoft has developed an authentication tool that can analyze photos or videos to give a confidence score on whether they have been manipulated. Chipmaker Intel’s FakeCatcher uses algorithms to analyze an image’s pixels to determine whether it is real or fake.

There are online tools that promise to detect fakes if you upload a file or paste a link to the suspicious material. But some, like OpenAI’s tool and Microsoft’s authenticator, are only available to select partners and not to the public. That’s partly because researchers don’t want to tip off bad actors and give them a bigger advantage in the deepfake arms race.

Open access to detection tools could also give people the impression that they are “divine technologies that can outsource critical thinking for us,” when instead we should be aware of their limitations, Ajder said.

THE OBSTACLES TO FINDING FAKES

All that said, artificial intelligence has been developing rapidly, and AI models are being trained on internet data to produce increasingly high-quality content with fewer flaws.

This means that there is no guarantee that this advice will still be valid even a year from now.

Experts say it could even be dangerous to put the burden on ordinary people to become digital Sherlocks, as it could give them a false sense of confidence as deepfakes become increasingly difficult to spot, even for trained eyes.

Swenson reported from New York.