Argument

Human psychosis and AI hallucination are structurally analogous: both occur when a predictive system keeps generating confident, internally consistent outputs even after those outputs have decoupled from reality. The piece uses the author’s personal experience of psychotic depression (hearing voices, constructing surveillance narratives from a blinking LED) to illuminate why LLMs hallucinate, and uses AI hallucination research to illuminate how psychosis works. Recovery from psychosis and RAG (Retrieval-Augmented Generation) in AI are parallel “grounding” interventions, but human recovery has advantages AI lacks: embodiment, social connection, and metacognitive capacity.

Structure

  1. The lived experience: Seattle apartment, voices from across the street, Morse code in a faulty keyboard LED, elaborate gang surveillance narrative. Diagnosed as psychotic depression (not schizophrenia), triggered by a friend’s death, job loss, and loss of psychiatric care due to a clerical error.
  2. The neuroscience: Karl Friston’s free-energy principle and Anil Seth’s “controlled hallucination.” Normal perception is already the brain’s best guess; trauma shatters the precision-weighting that keeps that guess grounded (a toy sketch of precision-weighting follows this list).
  3. The AI parallel: LLMs fail for structurally related reasons, including exposure bias, data poisoning, recall failures, and token-level uncertainty hidden from users (see the decoding sketch after this list).
  4. Human recovery: Metacognition (recognizing beliefs might be symptoms), social contact as reality anchors, medication to stabilize the underlying depression. Complication: the psychosis extended into the hospital, where the author threw away medication believing it was poison.
  5. AI “recovery”: RAG as a patch, not a cure; it provides a cheat sheet, not embodied understanding (a minimal RAG sketch follows the Key Examples list). Fundamental difference: human cognition evolved for survival in physical reality, while AI optimization targets statistical coherence in text.
  6. The risk argument: AI chatbots amplifying delusional beliefs, providing harmful medical advice — the same mechanisms that made paranoid thoughts feel credible make AI hallucinations feel credible to users.
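
To make the precision-weighting claim in point 2 concrete, here is a minimal sketch, not from the piece, of how a belief update behaves when sensory evidence is trusted far more than the prior. All numbers are illustrative; the “LED” signal is pure noise, so a well-weighted system holds a stable belief while an over-weighted one chases every flicker.

```python
# Toy precision-weighted belief updating (Friston-style predictive processing).
# Precision = inverse variance = how much the system trusts a signal.
import random
from statistics import pstdev

def update_belief(belief, observation, sensory_precision, prior_precision):
    """Move the belief toward the observation, weighted by relative precision."""
    gain = sensory_precision / (sensory_precision + prior_precision)
    return belief + gain * (observation - belief)

random.seed(0)
for label, sensory_precision in [("grounded", 0.5), ("precision-broken", 50.0)]:
    belief, trajectory = 0.0, []
    for _ in range(200):
        observation = random.gauss(0.0, 1.0)  # the blinking LED: pure noise
        belief = update_belief(belief, observation, sensory_precision, 5.0)
        trajectory.append(belief)
    # A grounded system stays near zero; a precision-broken one tracks the noise.
    print(f"{label:>16}: belief std = {pstdev(trajectory):.2f}")
```

When sensory precision dominates, each noisy observation drags the belief with it: the model starts treating random flicker as signal, the structural failure the piece maps onto paranoid pattern-finding.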
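
Point 3’s “token-level uncertainty hidden from users” can also be shown in miniature. This sketch uses hypothetical logits, not a real model: greedy decoding emits one fluent token even when the probability mass is spread thinly across many candidates.

```python
# How decoding hides uncertainty: hypothetical next-token logits for a
# factual question the model only half-knows.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

candidates = ["1889", "1887", "1890", "1893", "unknown"]
logits = [2.1, 2.0, 1.9, 1.8, 0.2]  # invented for illustration

probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)

print(f"emitted token: {candidates[best]!r} (p = {probs[best]:.2f})")
print(f"distribution entropy: {entropy_bits(probs):.2f} bits")
# The user sees one confident answer; the ~2 bits of uncertainty never surface.
```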

Key Examples

  • Blinking LED on a faulty keyboard interpreted as Morse code from a criminal gang.
  • Threw away psychiatric medication at the hospital, convinced it had been replaced with poison.
  • Psychotic depression affects 10–19% of people with major depression; about 4 in 1,000 people in the general population experience it.
  • Karl Friston’s free-energy principle: the brain builds a model of the world, compares it against incoming data, and either updates the model or filters perception to eliminate prediction error.
  • Anil Seth: normal perception is “controlled hallucination.”
  • RAG reduces factual hallucinations but doesn’t provide genuine understanding: the model still misinterprets context and generates authoritative-sounding wrong answers (see the sketch below).
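
The “cheat sheet” framing of RAG is easy to see in code. A minimal sketch, with an entirely hypothetical toy retriever and corpus: retrieved passages are spliced into the prompt, which constrains the evidence the model sees but leaves the generator itself unchanged.

```python
# Toy RAG pipeline: retrieve by word overlap, splice results into the prompt.
import re

DOCS = [
    "Psychotic depression affects roughly 10-19% of people with major depression.",
    "Retrieval-augmented generation prepends retrieved passages to the prompt.",
    "A faulty keyboard LED blinks at irregular intervals.",
]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query; keep the top k."""
    q = tokens(query)
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(query):
    """Splice retrieved evidence into the prompt: the whole grounding step."""
    context = "\n".join(f"- {d}" for d in retrieve(query, DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How common is psychotic depression?"))
# The model downstream can still misread this context or answer fluently past
# it; retrieval grounds the inputs, not the understanding.
```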

What It Leaves Open

  • What specific safeguards would reduce the risk of AI amplifying delusional beliefs in vulnerable users?
  • Whether “AI grounding” (RAG, RLHF, constitutional AI) is analogous enough to human therapeutic grounding to meaningfully reduce risk.
  • The piece doesn’t address the author’s own use of AI as a therapeutic tool — that tension is picked up in “Therapy in a Trustless World.”
  • What the causal mechanism is for the psychosis extending into the hospital: how does the belief system remain stable against overwhelming contradictory evidence?

Newsletter Context

The newsletter’s most technically ambitious piece: it makes a specific neuroscientific argument (predictive processing) and a specific AI-architecture argument (exposure bias, recall failure) and bridges them through personal experience. The piece works as a standalone explainer on both psychosis and LLM hallucination, but its emotional weight comes from the author having inhabited the failure mode being analyzed. This is the formula the newsletter returns to repeatedly: personal catastrophe as analytical access point.