
Three of the most competitive companies in AI - OpenAI, Anthropic, and Google - have begun sharing intelligence about a specific threat: Chinese competitors systematically extracting the outputs of American frontier models to train cheaper, cloned versions that could undercut US labs on price while siphoning off their customers.
The rivals are sharing information through the Frontier Model Forum, an industry nonprofit that the three companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, The Japan Times reported.
Adversarial distillation is effectively industrial espionage conducted through the target model's own API: feed the frontier model vast quantities of prompts, collect its outputs, then use those outputs to train a competing model. The result is a cheaper system trained on the expensive work of a much more capable one.
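To make the mechanics concrete, here is a minimal Python sketch of the collection step described above. Everything in it - the query_model stub, the JSONL output format, the function names - is an illustrative assumption, not any lab's actual pipeline.

```python
import json

# Hypothetical stand-in for a frontier-model API call; a real pipeline
# would use a chat-completions client here.
def query_model(prompt: str) -> str:
    raise NotImplementedError("replace with an actual API client call")

def collect_distillation_data(prompts: list[str],
                              out_path: str = "distill.jsonl") -> None:
    """Feed the target model prompts and record its outputs as
    supervised fine-tuning pairs for a cheaper student model."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            completion = query_model(prompt)
            f.write(json.dumps({"prompt": prompt,
                                "completion": completion}) + "\n")
```

The defensive difficulty is that, at the API level, this loop looks like ordinary high-volume usage, which is why detection depends on analyzing usage patterns rather than individual requests.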
Why This Threat Is Serious
The rare collaboration underscores the severity of a concern raised by U.S. AI companies: some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk.
The economic threat and the national security threat are connected. If Chinese labs can train competitive models by distilling American frontier systems, the US investment advantage in compute and R&D does not translate into a durable technical lead.
The Stanford Context
This story lands against the backdrop of the Stanford AI Index finding that China's best model now trails America's top model by just 2.7%. If adversarial distillation is contributing to that compression, remediation is urgent - not just for the companies involved but for the policy and regulatory environment around AI access.
What This Means for Your Business
For enterprise AI buyers and CIOs, the Frontier Model Forum's collaboration signals that the major US AI platforms are treating model integrity as a shared infrastructure problem. This is one of the few areas where competitive pressure gives way to collective defense. The practical implication: API access controls, usage monitoring, and terms of service enforcement are likely to tighten across all three platforms. If your enterprise AI deployments involve high-volume API usage, expect more compliance requirements in your access agreements over the coming months.
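As an illustration of what tighter usage monitoring might look like, here is a minimal Python sketch of a heuristic that flags extraction-like access patterns. The signals and thresholds (request volume, prompt diversity, output length) are assumptions chosen for illustration, not any provider's actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class UsageStats:
    requests_per_day: int      # total API calls from one account
    unique_prompts: int        # distinct prompts among those calls
    mean_output_tokens: float  # average completion length

def flags_extraction_pattern(stats: UsageStats,
                             volume_threshold: int = 50_000,
                             diversity_ratio: float = 0.95,
                             output_floor: float = 500.0) -> bool:
    """Heuristic: sustained high volume, near-zero prompt reuse, and
    long outputs together resemble bulk output harvesting rather than
    a normal application workload, which tends to reuse prompt
    templates."""
    high_volume = stats.requests_per_day > volume_threshold
    high_diversity = (stats.unique_prompts / max(stats.requests_per_day, 1)
                      > diversity_ratio)
    long_outputs = stats.mean_output_tokens > output_floor
    return high_volume and high_diversity and long_outputs
```

For enterprise teams, the takeaway is less the specific thresholds than the shape of the signal: legitimate high-volume applications usually repeat prompt templates, so documenting your usage patterns now may smooth future compliance reviews.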