Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Facebook's Moderation Of Terrorist Content Results In The Removal Of Journalists' And Activists' Accounts (June 2020)

from the context-matters dept

Summary: In almost every country in which it offers its service, Facebook has been asked — sometimes via direct regulation — to limit the spread of “terrorist” content.

But moderating this content has proven difficult. It appears the more aggressively Facebook approaches the problem, the more collateral damage it causes to journalists, activists, and others studying and reporting on terrorist activity.

Because documenting and reporting on terrorist activity necessitates posting of content considered to be “extremist,” journalists and activists are being swept up in Facebook’s attempts to purge its website of content considered to be a violation of terms of service, if not actually illegal.

This has happened repeatedly in countries frequently targeted by terrorist attacks.

In the space of one day, more than 50 Palestinian journalists and activists had their profile pages deleted by Facebook, alongside a notification saying their pages had been deactivated for “not following our Community Standards.”

“We have already reviewed this decision and it can’t be reversed,” the message continued, prompting users to read more about Facebook’s Community Standards.

There appears to be no easy solution to Facebook’s over-moderation of terrorist content. With algorithms doing most of the work, it’s left up to human moderators to judge the context of the posts to see if they’re glorifying terrorists or simply providing information about terrorist activities.
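
To make that workflow concrete, here is a minimal, hypothetical sketch in Python of how an automated score might route a post to removal, human review, or no action. The function, thresholds, and scores are illustrative assumptions, not a description of Facebook’s actual pipeline; the point is simply that everything in the uncertain middle band depends on a human judging context.

```python
# Hypothetical sketch (not Facebook's actual system): an automated classifier
# scores each post, and only an uncertain middle band goes to a human moderator
# who judges context. All names, thresholds, and scores are illustrative.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain "extremist" content
HUMAN_REVIEW_THRESHOLD = 0.60  # possibly extremist; context needed


def route_post(post_text: str, classifier_score: float) -> str:
    """Decide what happens to a post given an automated 'extremism' score."""
    if classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"
    if classifier_score >= HUMAN_REVIEW_THRESHOLD:
        # A human moderator decides whether the post glorifies terrorism or
        # merely documents and reports on it (journalism, activism, evidence).
        return "human-review"
    return "leave-up"


if __name__ == "__main__":
    print(route_post("News report on yesterday's attack", 0.72))  # human-review
    print(route_post("Recruitment propaganda clip", 0.99))        # auto-remove
```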

Decisions to be made by Facebook:

  • How do you define “terrorist” or “extremist” content?
  • Does allowing terrorist content to stay up in the context of journalism or activism increase the risk it will be shared by those sympathetic/supportive of terrorists?
  • Should moderated accounts be allowed to challenge takedowns of terrorist content or the deactivation of their accounts?
  • Does aggressive moderation of terrorist content result in additional unintended harms, like the removal of war crime evidence?

Questions and policy implications to consider:

  • Would providing more avenues for removal challenges and/or additional transparency about moderation decisions result in increased government scrutiny of moderation decisions?
  • Can this collateral damage be leveraged to push back against government demands for harsher moderation policies by demonstrating the real world harms of over-moderation?
  • Does this aggressive moderation allow the terrorists to “win” by silencing the journalists and activists who are exposing their atrocities?
  • Could Facebook face sanctions/fines for harming journalists and activists and their efforts to report on acts of terror?

Resolution: Facebook continues to struggle to eliminate terrorist-linked content from its platform. It appears to have no plan in place to reduce the collateral damage caused by its less-than-nuanced approach to a problem that appears — at least at this point — unsolvable. In fact, its own algorithms have produced extremist content of their own, auto-generating “year in review” videos from “terrorist” content that users uploaded but Facebook apparently never removed.

Facebook’s ongoing efforts with the Global Internet Forum to Counter Terrorism (GIFCT) probably aren’t going to limit the collateral damage to activists and journalists. Hashes of content designated “extremist” are uploaded to GIFCT’s database, making it easier for algorithmic moderation to detect and remove unwanted content. But utilizing hashes and automatic moderation won’t solve the problem facing Facebook and others: the moderation of extremist content uploaded by extremists and similar content uploaded by users who are reporting on extremist activity. The company continues to address the issue, but it seems likely this collateral damage will continue until more nuanced moderation options are created and put in place.
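
As an illustration of why hash-sharing helps with detection but not with context, here is a minimal sketch of the matching step. It is a deliberate simplification: GIFCT’s database relies on perceptual hashes (such as PDQ for images) so that near-duplicates can match, while this example substitutes a plain SHA-256 digest and invented data, and a match says nothing about whether the uploader is promoting the content or reporting on it.

```python
# Minimal sketch of hash-based matching in the spirit of GIFCT's hash-sharing
# database. GIFCT relies on perceptual hashes (e.g. PDQ for images) so that
# near-duplicates match; this illustration substitutes a plain SHA-256 digest,
# which only catches exact copies. All names and data here are hypothetical.

import hashlib

# Hashes contributed by member companies for content designated "extremist."
SHARED_HASH_DATABASE = {
    hashlib.sha256(b"known propaganda video bytes").hexdigest(),
}


def matches_shared_database(uploaded_content: bytes) -> bool:
    """Return True if the upload's hash appears in the shared database."""
    return hashlib.sha256(uploaded_content).hexdigest() in SHARED_HASH_DATABASE


if __name__ == "__main__":
    # The same bytes match regardless of who uploads them or why: the hash
    # can't tell a recruiter's post from a journalist quoting the material.
    print(matches_shared_database(b"known propaganda video bytes"))       # True
    # And any modified or re-encoded copy slips past an exact-hash check.
    print(matches_shared_database(b"re-encoded copy of the same video"))  # False
```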

Companies: facebook


Comments on “Content Moderation Case Study: Facebook's Moderation Of Terrorist Content Results In The Removal Of Journalists' And Activists' Accounts (June 2020)”

2 Comments
Anonymous Coward says:

Your points about moderation are interesting when applied to free content distribution, but very problematic when it comes to paid advertising. Facebook is profiting from all of its ads, including billions of dollars of political advertising. It keeps ads cheap by refusing to allocate adequate resources even to monitor the ads that it’s taking payment to target on behalf of the advertisers. It should definitely be liable for any harm done by these ads, both civilly and criminally. Calling it a mere "platform" when it is totally opaque about its targeting practices and ad promotion technologies is absurd. It cannot meaningfully assert neutrality if it hides the information that would allow independent evaluation of its practices. It should therefore be legally liable for all advertising on the platform.

Imagine a cottage industry of lawyers suing this incredibly wealthy corporation for all the damages it’s causing by paid advertising. Users should be able to sue for harm caused by failure to apply FB’s own terms of service as a breach of contract. With tens of billions of dollars in profits to protect, you’d better believe that a few really large punitive-damages judgments in egregious cases would get FB right on this problem of content moderation for the ad side of the biz. With one of the world’s greatest technical teams able to work wonders in targeting ads (i.e., matching content to thousands of variables to maximize impact), who knows what they could do if they took seriously the need to apply legal standards and their own terms of service to ads? I have every confidence in the capability of this company, but not its will to do anything meaningful about content moderation.

Instead, we continue to have no real pressure on FB to do a better job of moderating content. And apologists for the incredible harm FB has done to our political system and social fabric keep saying: poor FB, with all their billions of dollars in profit, how can anyone be expected to do a good job of moderation at scale? Well, how indeed, if there are no real monetary incentives to do it?

Stephen T. Stone (profile) says:

Re:

Aside from “nerd harder” or “throw more people at the problem”, what the hell else could Facebook do to make you happy with its moderation abilities? Because as much as people like you seem to think it’s possible with those first two ideas, content moderation doesn’t scale. And speaking as someone who knows about content moderation from both sides of the user/moderator dichotomy, Facebook will never get it 100% right even if Facebook employees nerd harder and Zuckerberg throws more people at the problem.
