Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Senator Asks YouTube To Block Al Qaeda Videos (2008)

from the is-that-constitutional? dept

Summary: In 2008, the Senate Homeland Security and Government Affairs Committee, chaired by then-Senator Joe Lieberman, produced a report entitled "Violent Islamist Extremism: The Internet, and the Homegrown Terrorist Threat." The report mentions a rap video called "Dirty Kuffar" (Kuffar meaning "nonbeliever") that, according to the report, praises Osama bin Laden and the attacks of 9/11.

A few days after the report came out, Lieberman sent a letter to then-Google CEO Eric Schmidt pointing to the report and asking the company to remove terrorist content from YouTube, including videos like the one named in the report.

As some quickly pointed out, the one video named in the report was hardly espousing terrorism or hate speech. It may have been mildly offensive, but it was clearly protected political speech.

The letter that Lieberman sent was accompanied by a list of other videos that Lieberman's staff claimed promoted terrorist content, and Lieberman asked YouTube not only to remove the specific videos that violated its policies, but also to shut down the accounts of those who posted the videos in the first place.

Decisions for YouTube:

  • How do you determine which content is actually promoting terrorism, as opposed to content that is merely discussing it, reporting on it, or documenting terrorist attacks?
  • How do you distinguish political speech from terrorist content?
  • How do you respond to a sitting US Senator demanding the removal of content?
  • If an account has violated policies against terrorist content once, should the entire account be shut down?

Questions and policy implications to consider:

  • Because an elected US official can potentially change the law, takedown requests from such officials are likely to be taken more seriously. If those requests seek the removal of speech protected by the 1st Amendment, do they raise 1st Amendment issues?
  • Is it possible to readily distinguish content that is promoting terrorism from that which is documenting terrorism and war crimes for the historical record?

Resolution: YouTube chose to push back on Senator Lieberman's request, putting up a blog post saying that it wished to have a dialogue with the Senator. The company said it did remove some of the videos Lieberman's staff highlighted, where they were found to violate its policies, but it would not remove them all, nor would it shut down all of the accounts mentioned.

Senator Lieberman’s staff identified numerous videos that they believed violated YouTube’s Community Guidelines. In response to his concerns, we examined and ended up removing a number of videos from the site, primarily because they depicted gratuitous violence, advocated violence, or used hate speech. Most of the videos, which did not contain violent or hate speech content, were not removed because they do not violate our Community Guidelines.

Senator Lieberman stated his belief, in a letter sent today, that all videos mentioning or featuring these groups should be removed from YouTube — even legal nonviolent or non-hate speech videos. While we respect and understand his views, YouTube encourages free speech and defends everyone’s right to express unpopular points of view. We believe that YouTube is a richer and more relevant platform for users precisely because it hosts a diverse range of views, and rather than stifle debate we allow our users to view all acceptable content and make up their own minds. Of course, users are always free to express their disagreement with a particular video on the site, by leaving comments or their own response video. That debate is healthy.

Senator Lieberman continued to pressure Google over these policies in the following years, including writing a similar letter a few years later to complain about "terrorist content" on the company's Blogger platform. Since then, YouTube has ramped up its efforts to block "terrorist" content on the platform, but has also been accused many, many times of going too far and actually deleting content from human rights groups that were trying to document war crimes and other atrocities.

Originally posted to the Trust & Safety Foundation website.

Companies: google, youtube
