Content Moderation Case Studies: Using AI To Detect Problematic Edits On Wikipedia (2015)

https://www.techdirt.com/articles/20201030/15153945624/content-moderation-case-studies-using-ai-to-detect-problematic-edits-wikipedia-2015.shtml

Summary: Wikipedia is well known as an online encyclopedia that anyone can edit. That openness has produced a massive corpus of knowledge that has earned high marks for accuracy, with the understanding that at any given moment some content may be inaccurate, since anyone can have made recent changes. Indeed, one of the key struggles Wikipedia has dealt with over the years is so-called “vandals” who change a page not to improve an entry, but to deliberately degrade its quality.

In late 2015, the Wikimedia Foundation, which runs Wikipedia, announced an artificial intelligence tool called ORES (Objective Revision Evaluation Service), which it hoped would pre-score edits for the various volunteer editors so they could catch vandalism more quickly.
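ORES was exposed as a public scoring service that patrolling tools could query per revision. The Python sketch below shows roughly how a client might request the "damaging" and "goodfaith" model scores for a single English Wikipedia edit; the endpoint path, response shape, and the 0.7 review threshold are assumptions based on the publicly documented ORES v3 API, not details taken from this article.

```python
import requests

# Assumed ORES v3 endpoint for English Wikipedia scores.
ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"


def score_revision(rev_id: int) -> dict:
    """Fetch ORES model scores for one revision (response shape assumed)."""
    params = {"models": "damaging|goodfaith", "revids": rev_id}
    resp = requests.get(ORES_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["enwiki"]["scores"][str(rev_id)]


if __name__ == "__main__":
    scores = score_revision(1234567)  # hypothetical revision ID
    damaging_prob = scores["damaging"]["score"]["probability"]["true"]
    # A patrolling tool might surface high-scoring edits for human review
    # rather than reverting them automatically; 0.7 is an illustrative cutoff.
    if damaging_prob > 0.7:
        print(f"Flag for review (damaging probability {damaging_prob:.2f})")
    else:
        print(f"Likely fine (damaging probability {damaging_prob:.2f})")
```

The point of such a setup is triage, not automation: the score only orders the review queue, and a volunteer editor still makes the final call on each edit.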
