Summary

Center for American Progress analysis of the Anthropic-DoD conflict and OpenAI’s subsequent military contract. Adam Conner (VP of Technology Policy) argues these events demand Congressional hearings and legislation protecting citizens from AI-enabled mass surveillance. The piece provides the most detailed legal and procedural analysis of the “supply chain risk” designation, its likely illegality, and the existential commercial threat it poses to Anthropic.

Key Points

  • Timeline: DoD demanded Anthropic remove restrictions or face consequences. On Feb 27, 2026, Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” via X post. Trump had earlier that day directed the federal government to “IMMEDIATELY CEASE all use of Anthropic’s technology” via Truth Social.
  • Legal analysis of “supply chain risk”: The designation is defined in 10 U.S.C. Section 3252 and 41 U.S.C. Section 4713 as risk from adversaries sabotaging or subverting systems. A contract dispute with an American company does not meet either definition. The designation has never been applied to an American company before.
  • The kill shot: Hegseth’s statement that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic” goes beyond existing statutory authority. Since Anthropic runs entirely on Amazon Web Services and Google Cloud — both DoD contractors — this would effectively deplatform Anthropic from its own infrastructure. The author compares this to the Huawei sanctions, which required an act of Congress.
  • OpenAI deal: Hours after the Anthropic designation, Sam Altman announced OpenAI had signed a DoD classified-network contract. OpenAI claims its deal has “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” OpenAI’s contract retains both of Anthropic’s red lines (mass domestic surveillance, autonomous weapons) and adds a third: “high-stakes automated decisions.”
  • OpenAI’s enforcement problem: If the government destroyed Anthropic for maintaining restrictions, why would OpenAI be allowed to exercise similar restrictions? This is the article’s sharpest question.
  • Anthropic’s Claude models were reportedly being used to plan US military strikes against Iran over the same weekend the company was designated a “supply chain risk” (WSJ, Washington Post).
  • Emil Michael: Under Secretary of Defense for Research and Engineering, who led negotiations with both companies. Called Amodei a liar on X and demanded he testify under oath. Michael is “infamous in Silicon Valley” for his tenure at Uber, where he suggested hiring opposition researchers to discredit journalists.
  • Five Congressional action items laid out: investigate the contract dispute, examine what “mass domestic surveillance” Anthropic feared, define “all lawful uses,” examine the abuse of the supply chain risk authority, and investigate DPA threats.
  • The heart of the problem: “The federal government’s use of AI in conducting surveillance is generally unregulated, by both Congress and the courts.”

Newsletter Angles

  • The most detailed legal analysis of the Anthropic-DoD conflict in the wiki. The statutory citations (10 U.S.C. 3252, 41 U.S.C. 4713) and the Huawei comparison provide the evidentiary backbone for any piece on Regulatory Weaponization.
  • The OpenAI credibility question — why would the government honor OpenAI’s restrictions after destroying Anthropic for similar ones? — is the single most important unanswered question in the AI-defense cluster.
  • Emil Michael’s involvement adds a Silicon Valley-to-Pentagon pipeline dimension. The man who suggested smearing journalists at Uber is now negotiating AI military contracts.
  • Claude planning Iran strikes while designated a “supply chain risk” is the most surreal detail in the entire story — the government was simultaneously using and trying to destroy the same product.

Entities Mentioned

  • Anthropic — subject of unprecedented “supply chain risk” designation
  • Dario Amodei — Anthropic CEO; called a liar by Emil Michael
  • OpenAI — signed DoD deal hours after Anthropic designation
  • Pete Hegseth — Secretary of Defense; issued the designation via X
  • Donald Trump — directed government to cease Anthropic use
  • US Department of Defense — agency exercising the disputed authority
  • Amazon — AWS hosts Anthropic; DoD contractor; would be forced to drop Anthropic as a customer
  • Emil Michael — Under Secretary of Defense for R&E; led negotiations; Uber background
  • Sam Altman — OpenAI CEO; announced DoD deal; conducted X AMA

Quotes

These events are a call to action for Congress to both investigate the events involving the DOD and both Anthropic and OpenAI as well as take action to pass legislation providing protections for citizens against mass surveillance enabled by AI.

If the government’s position were to be upheld by the courts, this would be the commercial equivalent of the death penalty for Anthropic.

The heart of the problem is that the federal government’s use of AI in conducting surveillance is generally unregulated, by both Congress and the courts.

Perhaps the most significant question for OpenAI is why they believe they would be allowed to exercise any objections to the DOD contract after the government has attempted to destroy Anthropic for what OpenAI argues were the same or worse contract terms.

Notes

American Progress is a center-left think tank. The piece favors Anthropic’s position and treats the DoD’s actions as likely illegal. However, the legal analysis is well sourced, with citations to statutes and legal scholars. The author, Adam Conner, is VP of Technology Policy at American Progress. Note: the raw file’s frontmatter lists “Will Beaudouin” as author, but the article byline and author bio identify Adam Conner — Conner is the correct author.