
OpenAI terminated a ChatGPT account linked to an individual associated with Chinese law enforcement who used the platform to plan and document covert influence operations targeting Japanese Prime Minister Sanae Takaichi and dissidents worldwide, according to the company's February 2026 threat disruption report published Wednesday. The account gave investigators a rare window into what OpenAI internally dubbed "Cyber Special Operations": an industrial-scale disinformation machine spanning more than 300 platforms and involving hundreds of operators.
In mid-October 2025, the ChatGPT user attempted to design a smear campaign against Takaichi after she publicly criticized human rights abuses in Inner Mongolia. The planned tactics included amplifying negative commentary through fake email accounts posing as foreign residents, labeling her as far-right, and stoking public anger over US tariffs to deflect attention from China-Japan tensions. ChatGPT refused to assist with the campaign design.
Despite the refusal, the same user returned days later and submitted status reports documenting the operation's real-world implementation, confirming the attack had proceeded using other AI tools. OpenAI's open-source investigation identified the specific hashtag from the report spreading across X, Pixiv, and Blogspot alongside AI-generated memes falsely associating Takaichi with nationalist groups. Most of the content flopped: YouTube videos drew single-digit views, and fewer than 150 of the more than 50,000 claimed posts received meaningful engagement.
Resource-Intensive Global Operations
The threat actor's ChatGPT sessions revealed operations spanning over 300 foreign social media platforms, with thousands of fake accounts managed by hundreds of human operators across multiple Chinese provinces. The user claimed at least 300 operators worked in a single province alone, with other reports suggesting similar numbers elsewhere.
Teams relied on domestically deployed AI models, including DeepSeek-R1, Qwen2.5, and YOLOv8, for monitoring, translation, content generation, and profiling tasks, while using ChatGPT specifically to refine operational documents and status reports.
Beyond Takaichi, the account detailed over 100 distinct tactics targeting Chinese dissidents worldwide. Prominent victims included activist Li Ying (known as "Teacher Li"), the human rights organization Safeguard Defenders, and dissident Jie Lijian, against whom operatives allegedly fabricated an obituary and spread fake gravestone images to cause psychological distress.
In one documented case, the threat actor described forging US county court documents and presenting them to social media platforms in an attempt to trigger the removal of dissidents' accounts.
Connections to Known Threat Networks
OpenAI connected these activities directly to the China-linked "Spamouflage" campaign that Meta publicly attributed to Chinese law enforcement in 2023. The company further linked the operation to the doxxing website revealscum.com, which OpenAI first exposed as part of the Spamouflage network in May 2024.
Ben Nimmo, principal investigator on OpenAI's intelligence team, told reporters the operation was unusual and revealed significant details about China's strategy for covert influence operations and transnational repression. The pattern demonstrated industrial-scale coordination, with systematic resource allocation designed to suppress free speech and silence critics of the CCP.
The user also claimed successful outcomes in some cases: dissidents reportedly lost followers, stopped posting, or had accounts taken down as a result of coordinated mass-reporting campaigns that used AI-fabricated violation evidence.
Broader Threat Landscape
OpenAI's February 2026 report documented additional malicious uses beyond the Chinese law enforcement account. A cluster of ChatGPT accounts ran dating scams targeting Indonesian men, defrauding hundreds of victims each month by generating promotional text for fake dating services. Several accounts posed as law firms, impersonating real attorneys and US law enforcement officials to target fraud victims.
A small set of accounts likely originating in China requested information about US persons, federal building locations, and online forums, along with guidance on face-swapping software. The same accounts generated English-language emails to state-level US officials and policy analysts inviting the targets to paid consultations, a classic intelligence-gathering approach.
OpenAI's disclosure marks one of the most detailed public examinations to date of how frontier AI tools are being weaponized by state-linked actors for coordinated harassment, influence operations, and targeted suppression campaigns against political opponents and critics, both domestically and internationally.
