Artificial intelligence (AI) is well and truly here – which means you can’t necessarily trust every image you see online.
AI editing tools can produce uncannily realistic images – often called ‘deepfakes’ when they involve manipulating someone’s likeness. Some of these features are built right into smartphones, like Google’s Add Me feature on the Pixel 9, which lets the person taking the photo be inserted into the shot.
Platforms like Midjourney make AI image generation simple, and the results can be strikingly realistic – just think of the picture of the Pope in a white Balenciaga puffer jacket that went viral last year, fooling plenty of people.
If you want to get better at spotting images that aren’t quite right, you’re in the right place. We’ve gathered some top tips for spotting deepfake images, so you can avoid being fooled by too-good-to-be-true photos.
Zoom in on details
Whether an image is completely AI-generated or simply heavily edited, most deepfakes still exhibit some telltale signs. Zooming in on things like people’s eyes and the edges of their faces can often reveal inconsistencies or blurriness that should be a red flag.
For AI-generated imagery, hands and fingers are still frequent trouble spots, so any weirdness in these areas can be a giveaway. If a deepfake has been made with a face replacement, you’ll often see slight blurriness around the edge of the whole face. In videos, lips might not sync properly with the words supposedly being spoken, which can also clue you in.
Think emotionally
One thing that many face-swap apps or deepfake programmes can struggle with is particularly complex emotions – expressions on real faces are unbelievably precise and complicated, after all.
So, if you’re looking at a beaming smile that seems a little too rigid, or a face that’s unnaturally neutral, that could be another clue that something’s up. This is even easier to spot in videos, where an expression that doesn’t quite match what someone is saying can stand out more.
Look at the overall picture
This one might seem hard to pin down, but most AI generators still default to images that don’t quite look real because they’re almost too perfect. That might mean a group photo where everyone is lit in exactly the same way, without any obscuring shadows or variation, or simply a plasticky, wax-like aesthetic that feels a little ‘off’.
While it might not always be easy to pin down a single pixel that concerns you, if you think a photo looks a little unreal then you should probably take the time to do more research into it.
Don’t ignore the background
Especially in photos of people, it can be tempting to focus on things like someone’s face or hair to try to figure out if they’re real, but the background of an image can often be just as obviously wrong. AI-generated backgrounds will sometimes have physical contradictions or architecture that doesn’t actually make any sense, for instance.
The same goes for objects and devices that are out of focus or sit away from the centre of a shot – oddities there are a great way to tell that an image has been AI-generated. This won’t work as well on a deepfake where only the face has been swapped out, though.
Research real-world context
This tactic takes things outside the realm of technical expertise and microscopic analysis: it’s simply a reminder that you can always do a web search to see whether the image you’re looking at lines up with real-world context. This is particularly useful for a purported photo or video of a public figure such as a politician, as it’s usually easy to establish where they were on a given date and whether there’s any reporting about what the image shows.
If it’s a social media image of someone less well known, that might be harder, but it still pays to be a little careful and think things through. After all, leaping to conclusions based on an image that turns out not to be genuine is potentially rather embarrassing.