
There are now two contradictory federal court orders governing the same dispute - and the question of which one prevails will shape how the US government and AI companies navigate conflicts over military use of commercial AI.
The US Court of Appeals for the District of Columbia Circuit on April 8 rejected Anthropic's request to block the Department of War from maintaining its supply chain risk designation against the company. The DC Circuit ruled that "the equitable balance here cuts in favor of the government," citing the active military conflict and warning against "judicial management of how, and through whom, the Department of War secures vital AI technology." The court denied the stay but acknowledged that Anthropic "raises substantial challenges" and "will likely suffer some irreparable harm," and it ordered expedited review on the merits.
That ruling directly conflicts with a March preliminary injunction from US District Judge Rita Lin in the Northern District of California, which blocked the government from implementing the War Department's moves against Anthropic. The California order explicitly restored the pre-February 27 status quo and prohibited the designation from taking effect.
How This Dispute Started
The conflict began in January when the War Department requested "unrestricted use" of Anthropic's technology for "all lawful purposes," including applications the company said it would not support. Anthropic drew two specific red lines: it would not allow its technology to be used for domestic surveillance or for lethal autonomous weapons systems.
The administration framed those limits as corporate insubordination. War Secretary Pete Hegseth designated Anthropic a supply chain risk to national security in February. President Trump followed with a February 27 Truth Social post directing every federal agency to cease use of Anthropic's technology immediately, with a six-month phase-out period for the War Department. Trump called Anthropic a "radical left, woke company" attempting to "dictate how our great military fights and wins wars."
The Pentagon separately stated it "has no interest in using AI to conduct mass surveillance of Americans" and does not want "AI to develop autonomous weapons that operate without human involvement" - a notable moment where the department effectively agreed with Anthropic's stated positions while still pursuing the blacklisting.
Where Things Stand
Anthropic's response to the DC Circuit ruling was measured. A company spokesperson said they were "grateful the court recognized these issues need to be resolved quickly" and remain confident "the courts will ultimately agree that these supply chain designations were unlawful."
Acting US Attorney General Todd Blanche called the DC Circuit ruling "a resounding victory for military readiness," arguing that "military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."
For enterprise leaders evaluating AI vendor relationships, this case is worth watching closely. It is the first major legal test of whether commercial AI companies can refuse specific use cases demanded by the federal government without facing contract termination and national security designations. The outcome - which now moves toward expedited merits review - will clarify the boundaries that every major AI provider will have to navigate with government customers.