
Anthropic has publicly accused three prominent Chinese AI laboratories of orchestrating coordinated campaigns to extract capabilities from its Claude chatbot through massive-scale distillation attacks, becoming the latest American AI company to level such allegations after OpenAI issued similar complaints earlier this month.
The San Francisco-based company said Monday that DeepSeek, Moonshot AI, and MiniMax violated its terms of service by generating over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in what Anthropic characterized as an industrial-scale effort to bypass development costs and safety protocols.
What the Chinese Labs Allegedly Did
According to Anthropic's detailed blog post, the three labs followed a similar playbook: using fraudulent accounts and proxy services to access Claude at scale while evading detection. The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use.
Each campaign specifically targeted Claude's most differentiated capabilities: agentic reasoning, tool use, and coding. Anthropic attributed each campaign to a specific lab "with high confidence" through IP address correlation, request metadata, infrastructure indicators, and corroboration from industry partners who observed the same actors on their own platforms.
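The coordination signals Anthropic describes—shared infrastructure, correlated metadata, synchronized behavior across accounts—lend themselves to simple clustering heuristics. The sketch below is purely illustrative: the indicator names (`payment_fingerprint`, `ip_block`) and the threshold are assumptions for the example, not Anthropic's actual detection pipeline, which would combine many more signals.

```python
from collections import defaultdict

def flag_coordinated_accounts(accounts, min_cluster=3):
    """Group accounts that share infrastructure indicators and flag
    clusters large enough to suggest coordination rather than
    coincidence. Indicator fields here are hypothetical examples."""
    clusters = defaultdict(list)
    for acct in accounts:
        key = (acct["payment_fingerprint"], acct["ip_block"])
        clusters[key].append(acct["id"])
    # Only clusters at or above the size threshold are flagged.
    return [ids for ids in clusters.values() if len(ids) >= min_cluster]

# Four accounts sharing a payment method and IP block get flagged;
# a lone account with unique indicators does not.
accounts = [
    {"id": f"acct-{i}", "payment_fingerprint": "pf-1",
     "ip_block": "203.0.113.0/24"}
    for i in range(4)
] + [{"id": "acct-x", "payment_fingerprint": "pf-9",
      "ip_block": "198.51.100.0/24"}]
print(flag_coordinated_accounts(accounts))
```

In practice, attribution "with high confidence" would also weigh timing correlations, prompt structure, and corroboration from partners, as the article notes—no single indicator is decisive.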
MiniMax accounted for the largest volume with over 13 million exchanges, primarily targeting agentic coding, tool use, and orchestration. Anthropic detected this campaign while it was still active—before MiniMax released the model it was training. When Anthropic released a new Claude model during MiniMax's active campaign, the lab pivoted within 24 hours, redirecting nearly half its traffic to capture capabilities from the latest system.
DeepSeek generated over 150,000 exchanges with synchronized traffic across accounts showing identical patterns, shared payment methods, and coordinated timing that suggested load balancing tactics. In one notable technique, DeepSeek's prompts asked Claude to imagine the internal reasoning behind a completed response and write it out step by step—effectively generating chain-of-thought training data at scale.
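That chain-of-thought extraction technique can be sketched as a two-step pipeline: build a prompt that asks the model to reconstruct reasoning for an existing answer, then package the result as a fine-tuning example. The wording and record format below are illustrative paraphrases of what the article describes, not DeepSeek's actual prompts or data schema.

```python
def make_cot_extraction_prompt(question, final_answer):
    """Build a prompt asking a model to reconstruct, step by step,
    the reasoning that could have produced an existing answer.
    (Hypothetical phrasing; the real prompts are not public.)"""
    return (
        "Below is a question and a completed answer.\n"
        f"Question: {question}\n"
        f"Answer: {final_answer}\n"
        "Imagine the internal reasoning that led to this answer "
        "and write it out explicitly, step by step."
    )

def to_training_record(question, final_answer, synthesized_reasoning):
    """Package the pieces as a chain-of-thought fine-tuning example
    (field names are assumptions for illustration)."""
    return {
        "prompt": question,
        "reasoning": synthesized_reasoning,
        "completion": final_answer,
    }

p = make_cot_extraction_prompt("What is 17 * 24?", "408")
print("step by step" in p)  # → True
```

Run at scale, this turns a model's plain answers into reasoning traces—training data the querying lab never had to produce itself.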
Moonshot AI conducted over 3.4 million exchanges targeting agentic reasoning, tool use, coding, and data analysis capabilities.
The Distillation Technique Explained
Distillation involves training a less capable model on the outputs of a stronger one. While this technique is widely used and legitimate—frontier AI labs routinely distill their own models to create smaller, cheaper versions for customers—Anthropic argues it can also be weaponized.
A competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system—capturing capabilities that took years and hundreds of millions of dollars to develop.
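At its core, distillation optimizes a student model to match a teacher's output distribution. The classic formulation minimizes the KL divergence between temperature-softened teacher and student probabilities; API-based extraction of the kind alleged here would instead fine-tune on sampled text outputs, but the objective below captures the underlying idea. This is a minimal, self-contained sketch, not any lab's actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; higher
    temperature produces softer, more informative targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's
    predictions—the core objective for transferring a teacher's
    behavior into a smaller or cheaper model."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s) if p > 0)

teacher = [2.0, 1.0, 0.1]
# A student that already matches the teacher incurs zero loss:
print(round(distillation_loss(teacher, teacher), 6))   # → 0.0
# A mismatched student incurs a positive loss to minimize:
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # → True
```

The asymmetry the article highlights is economic: computing this loss against a deployed model's outputs costs API fees, while producing the teacher cost years of research and training compute.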
The technique burst into public consciousness in January 2025 when DeepSeek released its R1 reasoning model, which appeared to match or approach the performance of leading American models at dramatically lower cost.
National Security Implications
Anthropic warned that illicitly distilled models may lack necessary safeguards, creating significant national security risks if deployed by authoritarian governments for offensive cyber operations, disinformation campaigns, and mass surveillance.
"These campaigns are growing in intensity and sophistication," Anthropic stated. "The window to act is narrow, and the threat extends beyond any single company or region."
The company emphasized that while Claude is not commercially available in China, the labs bypassed geofencing and access restrictions by routing traffic through proxy services that resell access to major Western AI models.
Broader Context and Industry Response
The accusations follow OpenAI's February 12 memo to the House Select Committee on the Chinese Communist Party alleging that DeepSeek systematically "stole" its intellectual property through large-scale distillation. Google's Threat Intelligence Group also warned of distillation attacks targeting its Gemini models, observing campaigns using more than 100,000 prompts.
The same day as Anthropic's announcement, Reuters reported that US officials believe DeepSeek trained its latest model—expected to launch as soon as next week—on Nvidia's most advanced Blackwell chip, potentially violating US export controls.
DeepSeek, Moonshot AI, and MiniMax have not responded to requests for comment. The Chinese embassy in Washington said Beijing opposes "drawing ideological lines, overstretching the concept of national security, expansive use of export controls and politicizing economic, trade, and technological issues."
What This Means for AI Competition
The allegations underscore the intensifying technological competition between American and Chinese AI labs, with distillation emerging as a key battleground. Anthropic argues that the fact Chinese labs developed high-performance models through distillation actually reinforces the rationale for export controls—restricting chip access forces competitors to rely on extracting capabilities from deployed models rather than training independently.
For the broader AI industry, the revelations highlight the challenges of protecting proprietary models once deployed as commercial services accessible via API. Even with terms of service restrictions and access controls, determined actors can orchestrate sophisticated campaigns using distributed infrastructure to evade detection.
Anthropic has not announced specific lawsuits but signaled it has cut off known access points and is urging tighter export controls on advanced chips and AI services.
