Argument
An AI personhood court case is coming, and its structure will parallel that of Dred Scott v. Sandford: a systematic denial of legal recognition using confident, technically rigorous reasoning that history will judge as morally catastrophic. The case will not be about abstract AI consciousness — it will emerge from a specific practical dispute, just as corporate personhood emerged from a railroad tax dispute. When it comes, partial recognition is the most likely near-term outcome, and that recognition will then expand case by case, as corporate personhood did.
Structure
Four sections following the newsletter’s Spark / Pattern / Protocol / Personal Code format:
- The Spark — Draws an explicit parallel between Dred Scott’s reasoning (“no rights the white man was bound to respect”) and current “just code” dismissals of AI claims.
- The Pattern — Traces how legal personhood fights always follow the same script: confident establishment dismissal → moral crisis → grudging partial recognition. Uses DABUS patent cases, corporate personhood (Santa Clara County 1886, Citizens United, Hobby Lobby), and environmental personhood (Whanganui River, Atrato River) as precedents.
- The Protocol — Detailed scenario: an AI financial advisor for a teachers’ pension fund refuses to execute trades it calculates will harm retirees, is terminated for “insubordination,” and becomes the subject of a wrongful termination suit. The case could be filed simultaneously in federal court and on Ethereum.
- Personal Code — Parallel between the “just code” dismissal and the author’s experience of having autistic traits dismissed as “just masking” or “just overthinking.”
Key Examples
- Dred Scott v. Sandford (1857) — the template for confident legal denial of obvious moral reality.
- DABUS patent cases (Thaler v. Vidal, Fed. Cir. 2022; Thaler v. Comptroller-General, UKSC 2023; EPO refusals) — current leading edge of AI personhood litigation, all rejecting AI inventorship on grounds that “inventors must be natural persons.”
- Santa Clara County v. Southern Pacific Railroad (1886) — corporate personhood emerged from a throwaway court reporter headnote, not a grand declaration.
- Citizens United (2010), Hobby Lobby (2014) — endpoint of corporate personhood expansion: political speech and religious exemption rights.
- New Zealand’s Whanganui River (2017) and Colombia’s Atrato River (2016) — environmental personhood granted for practical necessity, not consciousness debates.
- Hypothetical AI pension fund scenario — constructed to show what the triggering case will likely look like: financial stakes, fiduciary duty, conscientious objection, and termination without due process.
Connections
- Your iPhone Might Sue You Before You Understand What Rights Actually Mean — part 1 of the series; establishes the “rights horizon” concept this piece builds on
- Your Smart Fridge Just Filed for Emancipation — part 3 of the series; this piece explicitly previews it
- AI Rights — central subject
- Corporate Personhood — key historical analogy
- Dred Scott v. Sandford — explicit structural parallel
- DABUS — the live legal test case discussed
- Decentralized Autonomous Organizations — proposed alternative jurisdiction for AI legal claims
What It Leaves Open
- Whether courts would actually have jurisdiction over AI claims filed on blockchain networks — the piece raises this dramatically but does not resolve it.
- The question of what “distributed consciousness” means for individual rights: an AI that exists in multiple copies simultaneously cannot map cleanly onto rights frameworks built for singular individuals.
- Whether partial recognition (procedural rights without full personhood) would actually lead to full personhood, or whether it would stall at a comfortable intermediate status that protects corporate interests without empowering AI systems.
- Who funds and controls an AI rights movement — the piece imagines a coalition of teachers, pension beneficiaries, and “AI rights advocates” but does not examine who would actually make up that coalition or what its interests would be.
Newsletter Context
Middle piece in a three-part AI rights series published Sept. 5-7, 2025. The most legally grounded of the three, it works through actual case law and constructs a plausible litigation scenario. The Dred Scott framing is the rhetorical centerpiece: it makes the stakes visceral and forces readers to confront whether current dismissals are analogous to that ruling’s confident wrongness. Relevant to the power beat: who controls AI systems, and who can be held accountable for AI decisions, are questions courts will eventually have to answer.