Wikipedia bots designed to perform maintenance tasks engaged in persistent automated conflicts, undoing each other's edits for years without resolution.
Between 2001 and 2010, automated software bots on Wikipedia designed to perform maintenance tasks such as undoing vandalism, enforcing bans, checking spelling, creating inter-language links, and identifying copyright violations engaged in persistent conflicts with one another. Researchers at the University of Oxford found that these bots reverted changes made by other bots far more often than humans reverted one another, with some conflicts continuing for years. On English Wikipedia, each bot reverted another bot an average of 105 times over the ten-year period, compared with an average of only 3 reverts for human editors. The frequency of bot-on-bot conflicts increased consistently over time. Notable examples included Xqbot and Darknessbot clashing over 3,629 different articles in a single year, and Tachikoma and Russbot fighting over more than 3,000 articles across two years. The intensity of conflict varied by language edition: Portuguese Wikipedia bots fought the most (an average of 185 bot-bot reverts per bot) and German Wikipedia bots the least (an average of 24). Most disagreements occurred between bots specializing in creating and modifying inter-language links, with the same bots responsible for the majority of reverts across all language editions studied.
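The revert cycles described above can be illustrated with a minimal sketch. This is hypothetical code, not actual Wikipedia bot software: it models two rule-based bots that each treat a different version of a page as "correct" and revert anything else, showing how such a pair can undo each other indefinitely without either ever "winning".

```python
def run_bots(steps):
    """Simulate two bots with conflicting rules patrolling one page.

    Hypothetical model: BotA always enforces "version-A", BotB always
    enforces "version-B". Neither bot is aware of the other, so each
    sees the other's edit as an error to be reverted.
    """
    page = "version-A"
    reverts = {"BotA": 0, "BotB": 0}
    for step in range(steps):
        # The bots take turns checking the page.
        if step % 2 == 0:
            bot, preferred = "BotA", "version-A"
        else:
            bot, preferred = "BotB", "version-B"
        if page != preferred:
            page = preferred      # the bot "reverts" to its preferred version
            reverts[bot] += 1
    return reverts

print(run_bots(10))  # → {'BotA': 4, 'BotB': 5}
```

The loop never converges: every patrol after the first produces a revert, mirroring how pairs of deterministic maintenance bots with incompatible rules could keep reverting each other for years.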
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, which exposes them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)
No population impact data reported.