Content Moderation Case Studies: Using AI To Detect Problematic Edits On Wikipedia (2015)
https://www.techdirt.com/articles/20201030/15153945624/content-moderation-case-studies-using-ai-to-detect-problematic-edits-wikipedia-2015.shtml
Summary: Wikipedia is well known as an online encyclopedia that anyone can edit. This has enabled the creation of a massive corpus of knowledge that has achieved high marks for accuracy, with the understanding that at any given moment some content may be inaccurate, since anyone can make changes at any time. Indeed, one of the key struggles Wikipedia has dealt with over the years is so-called “vandals,” who change a page not to improve the quality of an entry but to deliberately degrade it.
In late 2015, the Wikimedia Foundation, which runs Wikipedia, announced an artificial intelligence tool called ORES (Objective Revision Evaluation Service), which it hoped could pre-score edits for the various volunteer editors so they could catch vandalism more quickly.
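To make the "pre-scoring" idea concrete, here is a minimal sketch of how a client might ask ORES how likely a given edit is to be damaging and use that score to queue the edit for human review. The endpoint path, query parameters, and response shape below follow the publicly documented ORES v3 REST API rather than anything stated in the article, and may have changed since; the revision ID and the 0.8 threshold are purely illustrative.

```python
# Sketch: query the ORES scoring service for an edit's "damaging" probability.
# Endpoint and response structure are assumptions based on the ORES v3 API docs.
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/{wiki}/"


def score_revision(rev_id, wiki="enwiki"):
    """Return the probability that the given revision is damaging, per ORES."""
    resp = requests.get(
        ORES_URL.format(wiki=wiki),
        params={"models": "damaging", "revids": rev_id},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Navigate the nested response: wiki -> scores -> revision -> model -> score
    score = data[wiki]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"]


if __name__ == "__main__":
    rev_id = 123456789  # hypothetical revision ID, for illustration only
    p_damaging = score_revision(rev_id)
    if p_damaging > 0.8:  # example threshold; real patrolling tools tune this
        print(f"Revision {rev_id} looks suspicious (p={p_damaging:.2f}); queue for review.")
    else:
        print(f"Revision {rev_id} looks fine (p={p_damaging:.2f}).")
```

The point of a score like this is not to revert edits automatically, but to rank incoming changes so volunteer patrollers can look at the most suspicious ones first.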