Wikipedia editors have restricted contributors paid by the Open Knowledge Association (OKA) for using AI to translate articles, after discovering that the AI-generated translations introduced factual errors and "hallucinations" into the content. The incident highlights the ongoing challenge of maintaining Wikipedia's reliability in the face of error-prone generative AI, even when the technology is used with good intentions to expand coverage.
Investigations revealed that OKA relied on contractors who were instructed to copy and paste article text into AI models such as ChatGPT, Gemini, and Grok for translation, with minimal human review, leading to widespread problems. The case demonstrates how Wikipedia's open governance model enables volunteer editors to identify and remediate such systemic issues.
The main topics covered are: Wikipedia's policy enforcement against AI misuse, the specific problems caused by AI-translated articles, OKA's operational methods, and the role of Wikipedia's volunteer community in quality control.