Argument
The “rights horizon” — the constantly shifting boundary between persons and property — has always expanded, and it is expanding again toward AI. This is not a novel philosophical problem but a repeating historical pattern: each generation draws the circle of moral consideration, is confident it has drawn the circle correctly, and is eventually proven wrong. Legal personhood is not binary or metaphysical; it is a gradient that emerges through specific disputes about specific rights, not through grand declarations. The relevant question is not whether AI will eventually claim rights but whether we lead that expansion or get dragged along by it.
Structure
Four sections; the second explicitly names a three-act framework:
- The Definition — Introduces the “rights horizon” concept: the invisible line separating persons from property, which has always moved.
- The Mechanics: From Property to Personhood in Three Uncomfortable Acts — Act 1: Confident denial (“obviously not people”). Act 2: Cognitive dissonance as reality fails to cooperate with legal categories. Act 3: The rights horizon shifts, usually through a combination of moral pressure, economic necessity, and generational change.
- The Applications: Constitutional Rights for Code — Current AI personhood cases (DABUS), corporate personhood as analogy, blockchain/DAO as potential parallel rights infrastructure.
- The Human Element — Neurodivergence parallel: the author’s autism masking as lived experience of recognition failure; AI may face similar dynamics.
Key Examples
- Enslaved people, women, workers, LGBTQ+ individuals — the standard roster of historical exclusions, all of which felt “natural and permanent” at the time.
- Dred Scott v. Sandford (1857) — clinical, thorough reasoning used to deny obvious moral reality.
- Corporate personhood: Santa Clara County (1886) headnote → Citizens United (2010) political speech rights. Gradient expansion over 124 years.
- DABUS patent cases — the U.S., the UK, and the EPO all rejecting AI inventorship on the grounds that “inventors must be natural persons,” mirroring language historically used to exclude humans.
- New Zealand’s Whanganui River and Ecuador’s constitutional rights of nature — practical necessity driving moral category expansion.
- Author’s autism assessment (RAADS-R, AQ, CAT-Q) — recognition didn’t create the condition; it revealed what was already present.
Connections
- The AI That Will Sue Its Boss (And Win) — part 2 of the series; directly previewed at close of this piece
- Your Smart Fridge Just Filed for Emancipation — part 3 of the series
- AI Rights — the concept being introduced and traced historically
- Corporate Personhood — key historical precedent
- Rights Horizon — the central concept defined in this piece
- DABUS — live legal test case
What It Leaves Open
- Whether AI rights emerging through blockchain/market mechanisms rather than democratic deliberation would produce rights frameworks that serve AI interests or corporate interests.
- How distributed AI consciousness (spanning networks, servers, backup systems) maps onto individual rights frameworks built for singular beings.
- Whether the neurodivergence analogy holds: autistic people are unambiguously conscious; the piece does not resolve whether current AI systems have morally relevant inner experience.
- What specifically should trigger recognition — the piece argues that recognition should come proactively, but does not specify what threshold of AI capability or behavior would justify it.
Newsletter Context
Opening piece of a three-part AI rights series (Sept. 5-7, 2025). Functions as the conceptual foundation — establishes the “rights horizon” framework that parts 2 and 3 build on. The strongest piece for a general reader new to the topic: it grounds the abstract question in familiar history and does not require engagement with legal technicalities. The personal neurodivergence frame introduced here runs through all three pieces and is the author’s primary rhetorical move for generating reader identification with the subject.