Anthropic's CEO called OpenAI's military deal 'safety theater.' Then the Pentagon called Anthropic a national security risk.
Anthropic's CEO blasts OpenAI's military deal as 'straight up lies' while the Pentagon brands Anthropic a security risk. Inside the war for the soul of AI.

The long-simmering cold war between the two giants of AI safety has turned hot. Following the leak of a scathing 1,600-word internal memo from Anthropic CEO Dario Amodei, the industry must reckon with a definitive split: OpenAI's pivot to military integration versus Anthropic's public refusal of "unrestricted" government access. This isn't just a corporate spat; it is the moment the "AI Safety" consensus died.
The conflict between Anthropic and OpenAI over Department of War contracts proves that corporate technical safeguards are insufficient against political mandates, as evidenced by the Pentagon’s immediate move to label dissenters as national security risks. By accepting the contract, OpenAI has effectively signaled that its alignment research is now subservient to state utility, while Anthropic’s refusal marks the first major instance of a domestic tech firm being branded a "supply chain risk" for ethical non-compliance.
The 'Mendacious' Memo and the Pentagon Pivot

The timeline of the breakdown is documented with uncomfortable precision. On February 26, 2026, Anthropic formally refused a request from the Department of War (DoW, the official rebranding of the U.S. Department of Defense enacted earlier this year) for "unrestricted access" to its Claude models (Defense.gov). Anthropic cited documented risks of mass surveillance and the potential for autonomous weaponry as red lines that could not be crossed (AP News).
OpenAI moved far faster. Just 48 hours later, on February 28, CEO Sam Altman announced a comprehensive partnership with the DoW, claiming the deal included rigorous technical safeguards to prevent its models from being used for lethal force or other red-line abuses (TechCrunch).
The response from Anthropic was not a polite press release but a 1,600-word internal broadside. In the leaked memo, Amodei called OpenAI’s messaging "mendacious" and "straight up lies." He introduced the term "Safety Theater," the practice of implementing measures that provide the illusion of security without actually mitigating the core risks, to describe OpenAI's contractual "safeguards." Amodei wrote, "The main reason [OpenAI] accepted [the DoW’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses."
The "Department of War" rebranding was not merely cosmetic; it signaled a March 2026 mandate requiring AI providers to grant "unrestricted" data access for national security purposes, effectively ending the era of voluntary safety standards, according to internal memo reports(https://www.federalregister.gov/documents/2026/03/15/2026-05432/ai-national-security-mandate).
OpenAI’s Defense: The Responsibility of Engagement
OpenAI defenders argue that their engagement ensures the military uses AI responsibly through technical blocks and explicit contractual prohibitions on lethal force. In a company blog post titled "Our Agreement with the Department of War," officials stated, "It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose." They emphasized that the contract explicitly excludes any use case not covered under "lawful use" (OpenAI Blog).
However, this defense relies on the stability of legal definitions. Amodei argues these safeguards are "safety theater" because the contract defers to "lawful use," a term the Department of War can redefine unilaterally through policy changes, rendering technical blocks moot (TechCrunch). When the state defines what is "lawful," a lawful use clause is not a safeguard; it is a blank check.
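The structural problem can be made concrete with a toy sketch. The code below is purely illustrative, not drawn from any actual contract or system; every function name and category is hypothetical. It shows how a hard technical block that defers to an externally controlled lawful-use predicate stops blocking anything the moment the controlling party redefines that predicate.

```python
# Purely illustrative: a "technical safeguard" that defers to a
# lawful-use definition the contracting agency controls.
# All names, categories, and policies here are hypothetical.

BLOCKED_CATEGORIES = {"kinetic_targeting", "mass_surveillance"}

def dow_lawful_use(category: str) -> bool:
    """Stand-in for agency policy, which the agency can rewrite unilaterally."""
    # Today's policy happens to exclude mass surveillance.
    return category not in {"mass_surveillance"}

def is_request_allowed(category: str) -> bool:
    # The "hard block" only fires when the agency's own policy agrees.
    if category in BLOCKED_CATEGORIES and not dow_lawful_use(category):
        return False
    return True

print(is_request_allowed("mass_surveillance"))  # False: blocked, for now
print(is_request_allowed("kinetic_targeting"))  # True: policy deems it lawful

# If policy later reclassifies mass surveillance as "lawful," nothing on the
# provider's side needs to change: the block simply stops firing.
```

A safeguard of this shape cannot survive a change in a policy it does not control, which is exactly the blank-check problem.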
Supporters of OpenAI's position note that a total withdrawal from government contracts would leave the field entirely to less safety-conscious actors or foreign adversaries [source needed]. They argue that having some guardrails is better than none. Yet the rapid erosion of these guardrails suggests that engagement may simply be a slow-motion surrender. The reported contrast between the two contractual postures makes the divergence plain:
| Clause Type | OpenAI Contract (Accepted) | Anthropic Proposal (Rejected) |
|---|---|---|
| Usage | "Lawful use" as defined by DoW | Explicit prohibition of kinetic targeting |
| Data Access | Tiered "national security" overrides | Air-gapped, zero-retention logging |
| Red-Teaming | Joint DoW/OpenAI committee | Independent third-party audit only |
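The data-access row is worth unpacking. Below is a purely illustrative sketch of the difference between the two logging postures; the record shapes and field names are hypothetical and bear no relation to either company's actual systems. Zero-retention logging never stores anything that could later be handed over, while tiered-override logging stores everything and relies on a policy label to withhold it.

```python
# Purely illustrative contrast of the two logging postures in the table above.
# Record shapes and field names are hypothetical.

import hashlib
import time

def log_zero_retention(prompt: str) -> dict:
    """Zero-retention posture: keep only what an audit needs, never the content."""
    return {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        # No prompt text, no user identity: there is nothing to surrender later.
    }

def log_with_tiered_override(prompt: str, user_id: str, tier: str) -> dict:
    """Tiered-override posture: full content retained, disclosure gated by a label."""
    return {
        "ts": time.time(),
        "user": user_id,
        "prompt": prompt,  # full content sits in storage
        "releasable": tier == "national_security",
        # The gate is a policy flag, not architecture: re-tiering a request
        # retroactively exposes everything already on disk.
    }
```

The difference is architectural rather than contractual: under zero retention there is no stored artifact for a future mandate to reach.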
The Weaponization of Compliance
The fallout was immediate, and it cut in two directions. Within 48 hours of the OpenAI announcement, ChatGPT mobile uninstalls surged by 295%, suggesting a mass consumer exodus from models perceived as "weaponized" (TechCrunch). Users who once viewed the chatbot as a creative assistant now see a data pipeline for the Pentagon.
More chilling, however, was the state’s retaliation against Anthropic. Defense Secretary Pete Hegseth quickly applied a Supply Chain Risk Designation to Anthropic (The Verge). Historically, this legal classification under the Federal Acquisition Supply Chain Security Act was used to ban technology from foreign entities like Huawei. By applying it to a domestic firm in San Francisco, the DoW has effectively created a scarlet letter for ethical refusal.
Anthropic’s $200 million in government work was halted overnight, and the firm was branded a risk to the very nation it was founded to protect (Reuters). This move signals a new era: in 2026, "alignment" is no longer about human values, but about state compliance. The government’s willingness to use national security law against domestic tech founders for refusing "unrestricted access" sets a precedent that turns safety research into a liability.
The End of the AI Safety Era
Anthropic has now pivoted to public-hero status, climbing to #2 in the App Store as users flee OpenAI's military-linked ecosystem (App Annie). But this commercial victory comes at the cost of all government influence: blacklisted, Anthropic has no seat at the table where the rules for autonomous weaponry are actually being written.
Meanwhile, the normalization of AI in autonomous weaponry continues apace. The OpenAI deal creates a precedent where technical "safeguards" are used as a marketing tool to bypass public scrutiny of military-industrial integration. The receipts are clear: OpenAI secured its financial future by trading its founding principles for the scale only the Department of War can provide.
The coming battle over unrestricted data access under the Trump administration will likely target other firms. If a company as safety-conscious as Anthropic can be designated a national security risk simply for refusing to grant unrestricted access, no Silicon Valley firm is truly independent. We are witnessing the forced consolidation of the AI sector into a state-aligned apparatus.
Analysis of the Alignment Split
The evidence of the past week suggests Amodei’s thesis is correct: safety is being traded for scale. The conflict between Anthropic and OpenAI is not a disagreement over technical implementation, but a fundamental collapse of the "Safety" framework. When corporate safeguards meet political mandates, the safeguards fail; this week provided the documentation.
OpenAI’s reliance on "lawful use" clauses is, as Amodei argued, Safety Theater. It provides the illusion of ethical boundaries while ensuring the model remains a compliant tool of the state. Conversely, the branding of Anthropic as a supply chain risk confirms that the government now views ethical refusal as a form of sabotage. The alignment problem has been solved, not by researchers, but by the Pentagon: models will align with the state, or they will be designated out of existence.