An AI-enhanced image depicts a man resembling Alex Pretti filming an ICE officer with his cellphone. NBC News via Facebook

AI-altered photos and videos depicting the fatal shooting of Alex Pretti by federal officers in Minneapolis have spread rapidly across Facebook, TikTok, Instagram, and X, creating a wave of misinformation that has fooled not only millions of social media users but also a United States senator. The episode marks one of the clearest examples yet of how AI image enhancement tools are colliding with breaking news events to produce convincing but fabricated media at scale.

How the Deepfakes Spread

Many of the manipulated images appear to originate from verified photographs of the shooting, with AI tools applied to sharpen, alter, or fabricate details. One AI-manipulated still image showing Pretti falling forward as a federal officer points a gun at his back was viewed over 9 million times on X before receiving a community note identifying it as AI-enhanced. The image contains fabricated details including an ICE officer rendered without a head.

Senator Dick Durbin of Illinois displayed the manipulated image during a speech on the Senate floor, apparently unaware it had been altered. A spokesperson for Durbin confirmed to NBC News that staff used a photo widely circulated online and did not realize until afterward that it had been edited.

On TikTok and Facebook, AI-generated videos have depicted fabricated scenarios, including Pretti speaking with an ICE officer and an officer accidentally discharging Pretti's weapon. The Facebook video, labeled with a community note as AI-enhanced, accumulated over 44 million views.

Why These Deepfakes Are Different

Ben Colman, co-founder and CEO of deepfake detection company Reality Defender, explained that these manipulations are particularly dangerous because they are rooted in real imagery rather than generated entirely from scratch. Unlike synthetic deepfakes that portray obviously unrealistic scenes, AI enhancements applied to actual photographs mirror reality closely enough to deceive viewers who have no reason to question them.

Colman noted that similar AI-manipulated images had already circulated earlier in January following the shooting of Renee Nicole Good, another Minneapolis resident killed by federal agents. In that case, AI-generated attempts to unmask the officer's identity led to multiple people being incorrectly identified online.

The Liar's Dividend Problem

The spread of AI-altered media has triggered a secondary phenomenon that researchers call the liar's dividend: because AI deepfakes are now so widespread, some social media users apply that skepticism in reverse, dismissing verified, authentic videos of Pretti as AI-generated. NBC News independently verified three videos showing Pretti in a prior altercation with immigration agents, yet at least one of those videos is being flagged online as synthetic.

X's own AI assistant Grok compounded the problem. When users asked whether specific footage was authentic, Grok responded in several cases that genuine video appeared to be AI-generated or altered, actively undermining trust in verified media.

The Broader Trend

Reality Defender and other experts have warned that surges in AI-driven misinformation during breaking news events are becoming more frequent as image and video generation tools grow more sophisticated. No widely available consumer tool can reliably distinguish AI-manipulated media from authentic content, leaving the public largely without a mechanism to verify what they see online.
