
OpenAI CEO Sam Altman announced late Friday that his company had reached an agreement with the Pentagon to deploy ChatGPT on classified military networks, just hours after the Defense Department blacklisted rival Anthropic for refusing to remove identical safeguards. The deal raised immediate questions about why the Pentagon accepted from OpenAI the same restrictions it had called "ideological" and "woke" when Anthropic demanded them.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," Altman wrote on X Friday night. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Those are the exact two red lines that led Defense Secretary Pete Hegseth to give Anthropic CEO Dario Amodei a 5:01 PM Friday deadline and ultimately designate the startup a "supply chain risk" when Amodei refused. A senior Pentagon official told Axios earlier this week: "The problem with Dario is, with him, it's ideological. We know who we're dealing with."
"More Guardrails Than Anthropic's"
In a Saturday blog post, OpenAI claimed its agreement "has more guardrails than any previous agreement for classified AI deployments, including Anthropic's." The company outlined three firm red lines: no mass domestic surveillance, no direction of autonomous weapons systems, and no high-stakes automated decisions such as social credit scoring.
"In our agreement, we protect our red lines through a more expansive, multi-layered approach," OpenAI stated. "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in US law."
The contract language specifies: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker."
The Critical Difference
The distinction appears to be one of trust, not substance. OpenAI's approach relies on existing US law and Pentagon policies to prevent misuse, supplemented by technical safeguards and oversight by cleared OpenAI personnel. Anthropic wanted explicit contractual prohibitions beyond just "lawful purposes" language, arguing that AI capabilities are evolving faster than law.
"Anthropic contends the law has not caught up with AI and worries AI can supercharge the legal collection of publicly available data, from social media posts to geolocation," Axios reported. This philosophical difference—whether to trust that existing law prevents mass surveillance or demand additional contractual restrictions—became the breaking point.
Critics immediately pounced on perceived contradictions. Techdirt's Mike Masnick argued the OpenAI deal "absolutely does allow for domestic surveillance" because compliance with Executive Order 12333 permits NSA collection of communications outside the US even when they involve Americans—what he described as "how the NSA hides its domestic surveillance."
Rushed Deal, Painful Optics
Altman acknowledged the agreement was "definitely rushed" and admitted "the optics don't look good" in responses on X Saturday. The backlash was immediate—Anthropic's Claude app overtook ChatGPT in Apple's App Store rankings by Saturday afternoon, suggesting consumers viewed OpenAI as capitulating where Anthropic stood firm.
"We really wanted to de-escalate things, and we thought the deal on offer was good," Altman explained. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful."
In an internal memo to employees Thursday—before Anthropic's Friday deadline—Altman had written: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines. We would largely follow Anthropic's approach if we were in the same position."
Twenty-four hours later, OpenAI signed a deal on terms the Pentagon would not extend to Anthropic, raising questions about whether principle or political calculation drove the decision.
OpenAI stated it asked the Pentagon to offer the same terms to all AI companies and expressed hope that Anthropic "and more labs will consider it." The company also said it told the government that Anthropic should not be designated a supply chain risk.
The Pentagon has now secured agreements with both OpenAI and xAI for classified deployments as it works to replace Claude across military systems within six months.