Why Tech Platforms Don’t Treat All Terrorism the Same
https://www.wired.com/story/why-tech-platforms-dont-treat-all-terrorism-same/
In January 2018, the top policy executives from YouTube, Facebook, and Twitter testified at a Senate hearing on terrorism and social media, touting their companies’ use of artificial intelligence to detect and remove terrorist content from groups like ISIS and Al Qaeda. After the hearing, Muslim Advocates, a civil rights group that has worked with tech companies for five or six years, told executives in an open letter that it was alarmed to hear “almost no mention about violent actions by white supremacists,” calling the omission “particularly striking” in light of the murder of Heather Heyer at a white supremacist rally in Charlottesville, Virginia, and similar events.
More than a year later, Muslim Advocates has yet to receive a formal response to its letter. But concerns that Big Tech expends more effort to curb the spread of terrorist content from high-profile foreign groups, while applying fewer resources and less urgency to terrorist content from white supremacists, resurfaced last week after the shootings at two mosques in Christchurch, New Zealand, an attack Prime Minister Jacinda Ardern called “the worst act of terrorism on our shores.”