Original source

Summary

Anthropic CEO Dario Amodei’s official statement laying out the company’s position in its conflict with the US Department of Defense. Amodei affirms Anthropic’s commitment to deploying AI for national defense while drawing two red lines: mass domestic surveillance and fully autonomous weapons. The statement reveals that the DoD threatened to designate Anthropic a “supply chain risk” and to invoke the Defense Production Act if safeguards were not removed.

Key Points

  • Anthropic was the first frontier AI company to deploy models on classified government networks and at the National Laboratories, and the first to provide custom models for national security customers. Claude is “extensively deployed” across the DoD for intelligence analysis, operational planning, cyber operations, and more.
  • Anthropic cut off CCP-linked firms at a cost of “several hundred million dollars in revenue” and advocated for strong chip export controls.
  • Two red lines that Anthropic will not cross:
    1. Mass domestic surveillance — Amodei argues AI-driven surveillance “presents serious, novel risks to our fundamental liberties” and that current law hasn’t caught up with AI capabilities. Cites the Intelligence Community’s own acknowledgment that government purchase of Americans’ movement, browsing, and association data raises privacy concerns.
    2. Fully autonomous weapons — Not a philosophical objection; a reliability objection. “Frontier AI systems are simply not reliable enough to power fully autonomous weapons.” Anthropic offered to collaborate on R&D to improve reliability; DoD declined.
  • DoD demands: The Department requires contractors to permit “any lawful use” of their models and to remove safeguards. Threats include removal from DoD systems, a “supply chain risk” designation (never before applied to an American company), and invocation of the Defense Production Act.
  • Amodei flags the contradiction: the “supply chain risk” designation treats Anthropic as a security threat, while the DPA invocation treats Claude as essential to national security. “These latter two threats are inherently contradictory.”
  • Anthropic’s position: Will not remove safeguards. Prefers to continue serving DoD with the two safeguards in place. Will enable smooth transition to another provider if DoD chooses to offboard them.

Newsletter Angles

  • This is the primary source document for the Anthropic-DoD conflict. Every other article in the wiki references this statement. It establishes Anthropic’s framing: patriotic AI company drawing principled lines, not anti-military refusenik.
  • The surveillance argument is the editorial core. Amodei’s point about the government purchasing Americans’ data from commercial sources — and AI making it trivially easy to assemble comprehensive profiles at scale — is a concrete, testable claim about near-term risk. This is not hypothetical.
  • The reliability framing on autonomous weapons is strategically smart: it avoids the ethical minefield (“killer robots bad”) and instead says “the technology doesn’t work well enough yet.” This positions Anthropic as the responsible engineer, not the conscientious objector.
  • Tech-State Conflict: This is the clearest case of a private tech company drawing limits on state power and facing existential retaliation. The DPA is normally used to compel companies to produce wartime materiel; invoking it to force the removal of AI safety guardrails is a new category of state coercion.

Entities Mentioned

  • Dario Amodei — author; Anthropic CEO
  • Anthropic — the company whose position is being articulated
  • US Department of Defense — counterparty demanding unrestricted AI access
  • Pete Hegseth — referenced indirectly as head of DoD (“Department of War” is Hegseth’s preferred framing)

Concepts Mentioned

  • Mass domestic surveillance — the first red line; AI assembling commercially purchased data into comprehensive profiles of Americans
  • Fully autonomous weapons — the second red line, objected to on reliability rather than philosophical grounds
  • “Supply chain risk” designation — threatened against Anthropic; never before applied to an American company
  • Defense Production Act — wartime production authority threatened as leverage to force removal of safeguards
  • Chip export controls — policy Anthropic has advocated alongside cutting off CCP-linked firms

Quotes

We believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.

Under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant… Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.

These threats do not change our position: we cannot in good conscience accede to their request.

These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Notes

This is Anthropic’s own statement — it represents the company’s framing of events, not an independent account. The DoD’s perspective is not represented here; the Department’s AI strategy document and Hegseth’s subsequent X posts provide the counterpoint (see “The Department of Defense’s Conflict With Anthropic and Deal With OpenAI Are a Call for Congress To Act”). The statement does not include a publication date in the raw file, but the American Progress article dates the events to late February 2026.