Overview

Anthropic is a US AI safety company founded in 2021 and the creator of the Claude AI assistant. In early 2026, it was blacklisted by the US Department of Defense as a “national-security supply-chain risk” after refusing to allow Claude to be used for military surveillance or autonomous weapons. The case represents the first major direct confrontation between a large AI company and the US executive branch over military AI use.

Key Facts

  • Creator of Claude AI chatbot; Claude Code is a key enterprise revenue driver
  • Refused to allow Claude to be used for US military surveillance or autonomous weapons
  • Trump directed federal agencies to cease Anthropic use: February 27, 2026
  • Pentagon designated Anthropic a supply-chain risk: March 5, 2026
  • US judge temporarily blocked the blacklisting: March 26, 2026; second lawsuit pending
  • Britain (Keir Starmer’s government) actively recruiting Anthropic to expand to the UK; proposals include a London office + dual stock listing
  • CEO: Dario Amodei; UK visit planned for late May 2026
  • Valuation: $183B (Series F, $13B raised, October 2025)
  • Revenue: ~$7B ARR as of October 2025; projected $9B ARR by year-end 2025; 2026 target: up to $26B
  • Actual booked revenue as of July 2025: ~$1.5B (per Zitron analysis) — significantly below ARR figures
  • Burns $5B+/year; not profitable
  • Holds $200M DoD contract and GSA App Store access despite political conflict with Trump administration
  • Offered Claude to all government customers for $1/year (2025)
  • Politically at odds with Trump: opposed the 10-year state AI regulation moratorium in the “Big Beautiful Bill”; endorsed California SB 53 (AI transparency); hired senior Biden-era officials for government relations
  • Dario Amodei compared Trump to a “feudal warlord” during 2024 election; publicly supported Kamala Harris; excluded from White House dinners attended by OpenAI, Meta, and Nvidia leaders
  • Co-founder Jack Clark’s essay “Technological Optimism and Appropriate Fear” (October 2025) triggered David Sacks’s “regulatory capture through fear-mongering” accusation

Newsletter Relevance

Power: Anthropic is the clearest current example of a private technology company refusing state demands to militarize its product and being punished with regulatory destruction. The Pentagon’s “supply-chain risk” designation is a new kind of weapon — regulatory rather than legal or military.

DePIN bridge: Anthropic’s situation is an AI-domain preview of pressures DePIN networks will face. If a DePIN energy or communications network becomes strategically important, the state will want to co-opt or control it. Anthropic’s refusal, and the blacklist that followed, is a model for what that confrontation looks like.

AI Sovereignty: The UK’s aggressive recruitment of Anthropic shows how geopolitical competition for AI infrastructure works in practice — US pressure creates openings for allies/rivals.

Connections

Source Appearances

Open Questions

  • What specifically did the DoD ask Anthropic to enable — targeted surveillance? Drone targeting? Autonomous kill decisions?
  • What is the status of the two lawsuits? What legal theory is Anthropic using?
  • Does Anthropic have meaningful DoD/government revenue at stake, or is this about precedent?
  • What does a “dual stock listing” in the UK actually offer Anthropic?
  • Are other AI companies (OpenAI, Google DeepMind) complying with DoD requests that Anthropic refused?