Wikipedia and the Problem with Banning AI

Last week, Wikipedia announced a new policy regarding the use of large language models (LLMs) when working on Wikipedia articles. The policy itself is straightforward: volunteers cannot use LLMs to generate or rewrite article content. The only exceptions are the use of AI for “basic copyedits” and to generate some translations. However, in both cases, human review is required, and anyone using AI to perform a translation needs to be skilled enough in both languages to spot any issues.
The new policy follows a 40-2 vote among the site’s editors on March 20 to place heavy restrictions on LLM usage. The policy was first reported by 404 Media, and the feedback has been broadly positive, with Frank Landymore at Futurism saying that this will make Wikipedia a “refuge against AI slop.” The change has been in development and debate since at least November 2025, with much of the focus on strengthening the policy and ensuring that it applies to all Wikipedia content, not just new articles.
But now that the policy is here, it has a massive problem: How do you enforce it? To be clear, the challenge is not unique to Wikipedia. However, Wikipedia exceeds nearly every other website in size, scope, and stakes. If they can’t keep AI-generated text at bay, there’s almost no hope for anyone else.

A Bit of History

This isn’t Wikipedia’s first major update targeting unoriginal content.
In October 2015, Wikipedia announced a partnership with Turnitin to detect user-uploaded content that was copied (and likely plagiarized) from other sources. That partnership continues today under Wikipedia’s CopyPatrol tool. It’s a system that makes it easy for Wikipedia editors to see what text is overlapping and either fix the issue or flag it as “no action needed.” The tool has both ethical and practical importance. Ethically, plagiarism is bad and antithetical to Wikipedia’s stated goals.
From a practical standpoint, Wikipedia wants to ensure that its content can be licensed under a Creative Commons license. It can’t do that if the text is not original and is owned by someone else. With AI, there is a similar issue. Since, right now, purely AI-generated content cannot be protected by copyright, it would be impossible to enforce the requirements of Wikipedia’s license, namely attribution and the requirement that derivative works be shared under the same license.
So, Wikipedia has many of the same motivations to block AI-generated content as it does traditional copy-and-paste plagiarism. However, this is going to be a much bigger challenge.

The AI Problem

One of the things I found interesting about Wikipedia’s new policy was the final paragraph:

Some editors may have similar writing styles to LLMs. More evidence than just stylistic or linguistic signs is needed to justify sanctions, and it is best to consider the text’s compliance with core content policies and recent edits by the editor in question.
Wikipedia’s “Writing articles with large language models” policy

In short, they acknowledge that having a writing style similar to an LLM’s is not evidence enough. That is more than reasonable, but it raises a bigger question: What evidence is adequate? Turnitin does offer AI detection, though it does not appear to be in use at this time. But, while AI detection has improved, it still struggles with short passages, which describes the bulk of Wikipedia edits.
In short, even with the best tools, it’s going to be incredibly difficult to detect AI unless there’s a larger amount of text to check. However, even if you do manage to detect AI usage, there’s a major problem with proving it. The CopyPatrol tool highlights this issue perfectly. Turnitin detects copied text and helps editors compare the new text to the original and make a decision. AI detectors, on the other hand, are black boxes.
There’s nothing for humans to review; it’s just an educated guess as to whether something is AI-generated or not. Humans can’t check its work. In a moment of great irony, just today an AI agent was banned from Wikipedia after it admitted to being AI. However, most AI users aren’t going to be that forthcoming, and that poses a real problem. A policy is only as good as its enforcement, and it is doubtful that Wikipedia will actually be able to enforce this policy, no matter how hard it tries.
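To make that contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, the toy overlap check, and the hard-coded score are my own assumptions, not how CopyPatrol, Turnitin, or any real AI detector actually works. The point is that overlap detection produces spans a human can verify against a source, while an AI detector produces only a number.

# A minimal, illustrative sketch (assumed names and values, not any real tool's API)
# contrasting reviewable overlap detection with a black-box AI-likelihood score.

from difflib import SequenceMatcher


def find_overlaps(new_text, source_text, min_length=20):
    # Return matching spans an editor could inspect side by side with the source.
    matcher = SequenceMatcher(None, new_text, source_text)
    return [
        new_text[block.a:block.a + block.size]
        for block in matcher.get_matching_blocks()
        if block.size >= min_length
    ]


def ai_likelihood_score(text):
    # Stand-in for a black-box detector: it emits a single number and no evidence.
    return 0.87  # placeholder value for illustration only


new_edit = "The treaty was signed in 1848, ending the war between the two nations."
suspected_source = "The treaty was signed in 1848, ending the war and ceding territory."

print(find_overlaps(new_edit, suspected_source))  # concrete, checkable evidence
print(ai_likelihood_score(new_edit))              # just a score; nothing to verify

In the first case, an editor can compare the flagged span against the source and decide for themselves; in the second, the only option is to trust or distrust the score.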
Bottom Line

To be clear, I like the policy in principle. AI-generated articles are antithetical to Wikipedia’s stated goals, and the site needed a strong policy against them. This policy is a solid one, and it does leave room for grammar/spell checking and other non-generative AI uses with proper safeguards. I’ve not always been a huge fan of Wikipedia or its policies, but this one is ostensibly good. Unfortunately, enforcing it is going to prove to be a challenge.
It’s hard for me to imagine a path forward, at this time, where Wikipedia can effectively enforce this policy without overextending themselves. Wikipedia is not alone in this. I said much the same two years ago when Medium passed their policy on AI-generated writing. However, Medium should have an easier time, as their site focuses on long-form content rather than short edits. Wikipedia’s challenge is greater in terms of both difficulty and scale. It’s also a very high-stakes battle. Wikipedia, for better or worse, has become core to the internet.
We’re already seeing other stalwarts like YouTube and social media either embracing or at least broadly allowing AI. If Wikipedia can somehow avoid that fate, both they and the internet at large will be better for it. I just wish I saw a clearer way to do it.