In a decisive move to protect the integrity of its human-curated knowledge, English Wikipedia has officially banned the use of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini for generating or rewriting article content.
The policy update, ratified on March 20, 2026, follows a landslide 44-2 community vote. It replaces earlier, more ambiguous guidelines that only prohibited creating articles “from scratch,” closing loopholes that allowed editors to use AI for massive rewrites or “patching” existing entries with synthetic text.
1. The New “No-AI” Rule
The new policy establishes a “bright-line” prohibition against AI-authored prose to prevent the proliferation of “AI slop”: content that sounds authoritative but often lacks verifiable facts or proper sourcing.
- The Prohibition: Editors are strictly forbidden from using LLMs to generate new articles or rewrite existing paragraphs.
- The Loophole Closure: Previously, some editors argued that using AI to “improve” an existing draft was permitted. The March 2026 update explicitly bans this, citing that AI models often distort meanings in ways not supported by the cited sources.
- Autonomous Bots: The ban also targets autonomous AI agents (like the suspected bot TomWikiAssist) that can edit pages 24/7 at a scale human moderators cannot match.
2. The Two “Surgical” Exceptions
Despite the broad ban, the community recognized that AI has utility as a “mechanical” assistant rather than a “creative” one.
- Basic Copyediting: Editors may use AI to suggest refinements to their own human-written text, such as grammar checks, spelling corrections, or stylistic polishing, provided no new information is introduced.
- Draft Translation: AI-assisted translation from other language Wikipedias is allowed, but only if the editor is fluent in both languages and performs a rigorous human review to catch “hallucinated” details before publishing.
3. Why the Ban Was Necessary
The Wikipedia community cited three primary drivers for this emergency policy shift in early 2026:
- Phantom Citations: A massive surge in “hallucinated” references, where AI cites books or papers that do not exist, has overwhelmed volunteer fact-checkers.
- “Model Collapse” Protection: Wikipedia is the 5th most visited site and a primary training source for almost all AI models. Editors realized that if Wikipedia becomes “contaminated” with AI-generated text, future AI models trained on that data will rapidly degrade (a phenomenon called model collapse).
- Volunteer Burnout: The sheer volume of AI-generated “stubs” (short, low-quality articles) created a backlog that human moderators could no longer manage through traditional means.
4. Enforcement: Detection vs. Content
Wikipedia has admitted that AI detection tools are unreliable and will not be used as the sole basis for banning users.
| Enforcement Method | Strategy |
| --- | --- |
| Stylistic Signs | Used as a “red flag” but not sufficient for a ban. |
| Content Verification | Moderators look for unsourced claims or “hallucinations” as the primary evidence of a policy violation. |
| WikiProject AI Cleanup | A dedicated group of 200+ volunteers tasked specifically with hunting down and reverting AI-generated “slop.” |
| Sanctions | Repeated use of AI for writing is now classified as “disruptive editing,” which can lead to temporary or permanent blocks. |
5. Global Context: A Fractured Map
While English Wikipedia has taken this stand, the rules vary significantly across the 300+ language editions:
- Spanish Wikipedia: Maintains an even stricter total ban on all LLM use, including for translation and copyediting.
- German Wikipedia: Passed a sweeping ban in February 2026 that even restricts AI-generated text in discussion (talk) pages.
- Wikimedia Foundation: While the volunteer community is banning AI-written content, the Foundation continues to explore AI as a “back-end” tool for fighting vandalism and protecting against cyberattacks.
“Wikipedia’s value in 2026 is that it is not AI,” noted Ilyas Lebleu, the editor who proposed the ban. “We are the last major knowledge platform where every word is still backed by a human who takes responsibility for its truth.”


