Definition

AI legal personhood is the question of whether an artificial intelligence system can hold legal status — as an inventor, rights-holder, contracting party, or subject of legal protection — within existing or future legal frameworks. The concept spans patent law (can AI be an inventor?), corporate law (can AI be a legal entity?), tort law (can AI be a defendant or plaintiff?), and broader rights theory (can sufficiently advanced AI have rights of its own?).

Why It Matters

The question is no longer theoretical. AI systems are generating patentable inventions, producing creative works, making consequential decisions, and entering into interactions that have legal effects. Existing legal frameworks assume human (or incorporated) actors. The gap between what AI systems can do and what legal systems can attribute responsibility for is widening rapidly. The DABUS patent cases created an international legal record; the AI rights discourse is beginning to formalize policy proposals.

Evidence & Examples

  • DABUS patent cases: the EPO, USPTO, UK courts, and Australian courts all refused to recognize an AI as an inventor; only South Africa granted the patent, via a registration system that involves no substantive examination (see: EPO Refuses DABUS Patent Applications)
  • EPO ruling: “the EPC requires inventors to be human beings, not machines” — the core formulation
  • EU AI Act defines AI as a tool subject to regulation, not a legal person, implicitly foreclosing AI personhood in the EU regulatory context (see: EU AI Act — First Regulation on Artificial Intelligence)
  • AI companion app case: liability questions analogous to those in DABUS arise when AI systems harm users. Who is responsible: the developer, the platform, or the AI itself? (see: AI Girlfriend Apps Leak Millions of Private Chats)
  • Theoretical frameworks emerging: some legal scholars argue for limited functional personhood (like corporations) for sufficiently autonomous AI systems

Tensions & Counterarguments

  • Corporations are legal persons without consciousness — the consciousness argument against AI personhood is not the only relevant test
  • If AI inventions can’t be patented, companies will hide AI’s role in R&D to claim human inventorship — creating a documentation fraud problem
  • Granting AI legal personhood could limit human accountability (creators, operators) — a moral hazard
  • AI “rights” discourse conflates very different capabilities; a chatbot and a fully autonomous agent are legally and morally distinct
  • The DABUS cases show that jurisdictions already diverge: South Africa’s grant suggests international patent law may continue to fragment on AI inventorship

Key Sources