
How to spot AI-generated images and online content during the 2026 primary elections

Wednesday, February 18, 2026, 10:42 AM, in News - AP National
Source: texastribune.org
A supercomputer at the University of Florida on Oct. 14, 2025. Exa Moseley/University of Florida via REUTERS

Misinformation and disinformation are especially common during election times, and the spread of AI-created content — such as the AI-generated video of Jasmine Crockett and John Cornyn dancing — has only made it harder to discern fact from fiction.

At The Texas Tribune, we do not use AI to create news content. Our AI policy prohibits publishing news photographs or videos created by or manipulated by generative AI. In cases where AI-generated images are the newsworthy subject of a story, we clearly label them as such with a watermark and caption. Social media platforms do not have such strict rules.

Here’s what you need to know if you spot a suspicious image, video or audio about elections, particularly on social networks.

Check the source and the context

There is no one solution for identifying false media content, and sometimes we can’t be 100% certain something was generated by AI, but some well-established methods of information verification still hold true.

If authenticity is in doubt, a good first step is to look for the source.

That leads to some questions:

  • Is the photo or video being shared with credit to a photographer or news agency?
  • Is it from a credible news source?
  • For videos, are there multiple angles of the event or similar footage from different news stations?
  • Has the image or video been verified by experts?

If you don’t know where an image originated, you can run a reverse image search on Google. This shows whether it was previously published and whether reputable sources have confirmed its authenticity.

For videos, you can run a reverse image search on a single frame in the sequence. Take a screenshot of the frame and use it in a reverse image search, just as you would with a regular image.
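For readers comfortable with a little scripting, here is a minimal sketch of that step. It assumes Python with the opencv-python package installed (an assumption; any frame grabber would work) and uses placeholder file names. It simply saves one frame from a local video so the frame can be uploaded to a reverse image search.

```python
# A minimal sketch (assumes Python and the opencv-python package are installed):
# save a single frame from a local video so it can be uploaded to a reverse image search.
import cv2

def save_frame(video_path: str, second: float, out_path: str) -> bool:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(second * fps))
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(out_path, frame)
    cap.release()
    return ok

# Example: save the frame at the 5-second mark, then upload frame_5s.png to Google Lens.
save_frame("suspicious_video.mp4", 5.0, "frame_5s.png")
```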

There are two ways to run a reverse image search. For an image published online, right-click it and select “Search image with Google Lens.” You can also upload a photo directly from your computer by going to Google.com and clicking on the camera icon.

Google Lens also lets you search the entire image or just a portion of it. You can even copy text or translate a phrase directly from a photograph.

Once a search is run, you can filter the results to see visually similar images or exact matches.

This can help determine if the image was previously published on another website, if it was taken out of context, or if a real image was altered using AI. It also can help identify the date the image originally appeared.

When reviewing results, it’s crucial to verify the legitimacy of all the sites that published the image or video. Some content may have been manipulated or taken out of context years ago and has been circulating ever since.

Context also matters. Were there news reports about the event depicted in the image or video? Is there a transcript of the audio published by a verified source? Would the politician publicly say what you heard in the viral audio?

If you can’t find a reliable source to back up the image, audio or video, the recommendation is simple: don’t share the content.

How to verify whether an image was generated by AI

Even though AI has become much better at generating content since its explosion in 2023 and deepfakes are becoming more difficult to identify, the technology still has flaws that can be detected.

Early AI models often failed to render hands and fingers correctly; newer ones usually get them right. Still, check whether hands have five fingers and whether their contours look clear. If the hands are holding an object, does the grip seem natural?

Eyes are another key detail. The MIT Media Lab, a research laboratory within the School of Architecture and Planning at the Massachusetts Institute of Technology, recommends paying close attention to this part of the body, especially if the person in the image is wearing glasses.

It’s useful to analyze whether the shadows and reflections on the lenses make sense given the surrounding context. AI also sometimes gets the perspective of objects wrong.

If the person in the photo appears to have a vacant stare or an unusual facial expression, that can also be a sign it was generated by AI.

Faces and skin texture can look particularly artificial.

Fact-checkers also recommend looking closely at teeth (if there are too many or they appear shrunken, stretched or misaligned) and hair (which may blend unnaturally into the background, look overly rigid, or appear stuck to the shape of the eyebrows).

One sign that an image may have been generated by AI is duplicated people or objects in the background.

Finally, you can turn to specialized tools designed to help detect AI-generated content. But don’t rely solely on AI-detection tools; they are not foolproof.

  • Tools such as Hive Moderation, AI or Not, Image Whisperer or AI detectors from Hugging Face are free and accessible, allowing users to upload a suspicious image to receive an estimate of the likelihood that it was created using AI (a rough sketch of how one of these detectors can be run appears after this list).
  • InVID-WeVerify is useful for searching the origin of content. It can be added as a Chrome extension.
  • You can also ask Gemini, Google’s AI chatbot, whether a piece was generated using Google’s AI technology. When Google creates AI-generated content — such as images, videos, audio or text — it adds an imperceptible watermark that can be detected using SynthID technology. Gemini can identify content made with Google’s tools, but not AI generated by other companies.
  • Additionally, Google’s Fact Check Tools can help show whether any fact-checking organizations — such as Politifact, Snopes or Factchequeado — have already verified the content.
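As a rough illustration of the first bullet above, the sketch below uses the image-classification pipeline from the Hugging Face transformers library. The model id and file name are placeholders, not recommendations; any AI-image-detector model published on the Hugging Face Hub could be substituted, and the labels it returns depend on that model.

```python
# A rough sketch, not a recommendation of any particular detector.
# Assumes Python with the transformers, torch and pillow packages installed.
from transformers import pipeline

# Placeholder model id: substitute an AI-image-detector model from the Hugging Face Hub.
detector = pipeline("image-classification", model="some-org/ai-image-detector")

# Score a locally saved suspicious image; the labels depend on the chosen model.
for result in detector("suspicious_photo.jpg"):
    print(f"{result['label']}: {result['score']:.1%}")
```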

Even though they can be cropped out or edited, you can also look for visible watermarks in the image from tools such as Gemini, DALL-E and Midjourney.

How to check if a video was created with AI

You can also run a reverse image search on individual video frames to help locate the original content, either using InVID or by taking screenshots to upload to a reverse image search.

Pay close attention to the person’s face in the recording and ask whether the skin looks too smooth or too wrinkled, or if the aging of the skin matches the hair and eyes.

Look closely at the eyes, eyebrows, glasses and lighting. Check whether shadows fall where they should and whether the person blinks too much or too little. Does the body move in a natural way? Are the people in the video acting like real humans?

Observe lip movements to see if they are properly synchronized with the voice.

If the video shows a politician, research and compare other videos to determine whether the person typically appears that way.

Finally, check for duplicated people or missing body parts in the background, since AI still tends to clone elements in images and video frames. Also note whether the background is unusually blurry or moves oddly, and whether words on signs are legible.

You can also run the video through Hive Moderation, InVID, AI or Not and Gemini. With Hive Moderation, upload a video shorter than 20 seconds; longer videos will have to be split into several clips. Sometimes only part of a video is AI-generated, so it’s necessary to test different sections.
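One way to handle that 20-second limit is to split a longer video into short clips before uploading them. The sketch below is only an illustration: it assumes Python plus a local installation of the ffmpeg command-line tool, and the file names are placeholders.

```python
# A rough sketch, assuming the ffmpeg command-line tool is installed.
# Splits a longer video into roughly 20-second clips so each one can be
# uploaded to a detection tool separately. With stream copy (-c copy),
# cuts land on keyframes, so clip lengths are approximate.
import subprocess

def split_video(path: str, clip_seconds: int = 20) -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", path,
            "-c", "copy", "-map", "0",
            "-f", "segment", "-segment_time", str(clip_seconds),
            "-reset_timestamps", "1",
            "clip_%03d.mp4",
        ],
        check=True,
    )

split_video("suspicious_video.mp4")
```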

How to tell if an audio clip is AI-generated

Audio is the hardest format in which to detect AI generation, but tips include listening carefully for unusual pauses, intonation, diction or repeated words.

If the audio is low quality or you can’t clearly hear what the person is saying, this could be a sign that it is fake or has been manipulated.

AI often struggles to generate natural details, such as breathing during long sentences or varied intonation between words. It is also difficult for AI to accurately replicate accents.

If the audio appears to be from a politician, compare it to other public and verified videos of that person to see if it truly matches.

Beyond searching for a full transcript on Google, pay attention to sudden changes in tone or unusual emphasis on words that a real person wouldn’t stress, which can give the clip a robotic intonation.

  • You can also run the audio through Hive Moderation, AI or Not, and the ElevenLabs AI speech classifier. Like Gemini, ElevenLabs can only tell whether an audio clip was created using ElevenLabs technology.
  • Hiya Deepfake Voice Detector analyzes audio in real time to determine if you’re hearing a real human voice or something cooked up by AI.

Disclosure: Google has been a financial supporter of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune's journalism. Find a complete list of them here.

This article first appeared on The Texas Tribune.
