Summary
This is Google’s official September 2025 edition of the Search Quality Evaluator Guidelines — the rubric that approximately 16,000 human “raters” use to evaluate search results, and whose judgments are in turn used to evaluate and tune Google’s ranking systems. The document runs to hundreds of pages and covers page-quality rating, needs-met rating, YMYL (“Your Money or Your Life”) topics, and explicit instructions on AI-generated content, scaled content abuse, and the E-E-A-T framework (Experience, Expertise, Authoritativeness, Trust).
Key Points
- Primary source document (not an analysis): this is the rubric itself, not commentary about the rubric
- Structure: Three parts — Page Quality Rating, Understanding User Needs (Needs Met), and Rating Using the Mobile Device
- E-E-A-T framework: Experience, Expertise, Authoritativeness, Trust — the core dimensions raters use to assess quality, with Trust elevated as the most important factor
- YMYL topics: pages that can significantly affect a person’s health, finances, safety, or access to civic information (“Your Money or Your Life”) are held to the most stringent quality standards
- AI content guidance: rubric instructs raters to assign the lowest ratings to pages where “all or nearly all main content is auto-generated with little added value” — scaled content abuse is explicitly flagged
- Reputation: raters are instructed to research the reputation of the website and of its content creators using independent external sources — such as customer reviews and news coverage — rather than the site’s own claims about itself
- September 2025 version: the most current edition at the time of ingest; Google revises the document roughly annually
Newsletter Angles
- Platform governance as de facto policy: This document is effectively the constitution of Google Search. It shapes what billions of users see about politics, health, civic questions, and money. Its ranking of “Trust” above “Expertise” and “Authoritativeness” is a civic policy choice made by a private company.
- The AI content crackdown is explicit: Google has publicly committed — in its own rubric — to deranking bulk AI-generated content. This is directly load-bearing for any analysis of the post-LLM content economy, SEO farming, and the economics of journalism.
- Raters as invisible labor: 16,000 contractors evaluating search results is a significant workforce whose judgments train a system used by billions. The Mechanical Turk Pattern applies: human labor hidden behind an “algorithmic” facade.
- Primary source hygiene: The current wiki has a secondary analysis (Google Search Quality Rater Guidelines — Key Insights About AI Use via Originality.ai). Having Google’s own text in the wiki lets future claims trace back to the source rubric rather than to third-party summaries.
Entities Mentioned
- Google — publishing entity; the rubric is Google’s own document
- Alphabet — parent company (if page exists; otherwise implicit)
Concepts Mentioned
- Algorithmic Incentives — the rubric is the clearest public window into what Google’s ranking system actually rewards
- Attention Economy — ranking decisions determine which content reaches attention at scale
- Mechanical Turk Pattern — 16,000 human raters whose judgment trains the “AI” ranking system
- Misinformation Economy — scaled AI content abuse is explicitly addressed
Notes
This is a 176+ page PDF — the wiki captures the top-line framework and analytical hooks, not a section-by-section summary. For specific claims (e.g., the exact wording on AI content, the YMYL examples, the E-E-A-T definitions), consult the raw file directly.
Pair with Google Search Quality Rater Guidelines — Key Insights About AI Use (Originality.ai’s commentary on the January 2025 edition) for contextual framing; note that this September 2025 edition supersedes the January 2025 edition that commentary covers.
The document is publicly hosted by Google at guidelines.raterhub.com — it is not a leaked document. Google publishes it deliberately as a transparency gesture, though critics note a persistent gap between the stated rubric and observed ranking behavior.