Content Moderation Case Study: Detecting Sarcasm Is Not Easy (2018)


https://www.techdirt.com/articles/20200910/12184145281/content-moderation-case-study-detecting-sarcasm-is-not-easy-2018.shtml

Summary: Content moderation becomes even more difficult when you realize that words or phrases may carry meaning beyond their most literal interpretation. One very clear example is sarcasm, in which a word or phrase is used to mean the opposite of its literal sense, or is greatly exaggerated, in order to express humor.

In March of 2018, facing increasing criticism regarding certain content appearing on Twitter, the company did a mass purge of accounts, including many popular accounts accused of simply copying and retweeting jokes and memes that others had created. Part of the accusation against those that were shut down was that they formed a network of accounts (referred to as “Tweetdeckers” for their use of the Twitter application Tweetdeck) who would agree to mass retweet some of those jokes and memes. Twitter suggested that these retweet brigades were inauthentic and thus banned them from the platform.