Summary
The European Parliament’s overview of the EU AI Act — the world’s first comprehensive AI regulation, adopted June 2024. The Act uses a risk-based classification system: banned applications, high-risk systems requiring conformity assessment before market entry, and general-purpose AI (like ChatGPT) subject to transparency requirements. Compliance timelines are staggered through 2027.
Key Points
- Adopted June 2024; world’s first comprehensive AI law
- Risk-based framework: Unacceptable → Banned; High Risk → Regulated; Limited/Minimal Risk → Transparency requirements only
- Banned AI applications: cognitive behavioral manipulation of people or specific vulnerable groups, social scoring, biometric categorization based on sensitive characteristics, real-time remote biometric identification (e.g. facial recognition) in publicly accessible spaces (narrow law enforcement exceptions)
- High risk categories: critical infrastructure, education, employment, essential services, law enforcement, migration/border control, legal interpretation
- General-purpose AI (ChatGPT, etc.): transparency requirements — must disclose AI generation, prevent illegal content, publish training data summaries
- High-impact GPAI (GPT-4 level): must undergo evaluations for systemic risk; serious incidents must be reported to the European Commission
- Deepfakes and other AI-generated content must be labeled
- Compliance timeline: ban on unacceptable-risk systems from Feb 2, 2025; GPAI transparency obligations from Aug 2, 2025; most high-risk obligations from Aug 2026, with a longer transition (to 2027) for AI embedded in regulated products
- EU AI Office established to oversee implementation
Newsletter Angles
- The facial recognition ban is significant: EU citizens have legal protection against real-time biometric surveillance in public spaces that Americans do not. This is a concrete political difference with real implications for how AI policing tools get deployed.
- Social scoring as banned: China’s social credit system is explicitly the model being legislated against. The EU is drawing a hard line — the US has not.
- The “general-purpose AI” category is the interesting boundary: ChatGPT is not classified as high-risk, but it must comply with transparency rules. This creates a major enforcement challenge — how does a regulator audit a model’s training data?
Entities Mentioned
- European Union — legislating body
- European Commission — enforcement and AI Office administrator
- OpenAI — ChatGPT cited as canonical general-purpose AI subject to transparency rules
Concepts Mentioned
- Platform Antitrust — adjacent; DMA and AI Act together form EU’s digital regulation package
- AI Legal Personhood — adjacent; the Act defines AI as a tool, not a person
- Tech-State Conflict — EU regulation as a form of state assertion over AI industry
Quotes
“Parliament’s priority was to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.”
“AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
Notes
Source is the European Parliament’s own explainer — authoritative on what the law says, inherently favorable toward the legislation. Does not address enforcement capacity questions or industry compliance costs. Useful as the primary legal reference for EU AI regulation.