Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Moderation Of Racist Content Leads To Removal Of Non-Racist Pages & Posts (2020)

from the moderation-mistakes dept

Summary: Social media platforms are constantly seeking to remove racist, bigoted, or hateful content. Unfortunately, these efforts can cause unintended collateral damage to users who bear surface similarities to hate groups, even though many of these users take a firmly anti-racist stance.

A recent attempt by Facebook to remove hundreds of pages associated with bigoted groups resulted in the unintended deactivation of accounts belonging to historically anti-racist groups and public figures.

The unintentional removal of non-racist pages occurred shortly after Facebook engaged in a large-scale deletion of accounts linked to white supremacists, as reported by OneZero:

Hundreds of anti-racist skinheads are reporting that Facebook has purged their accounts for allegedly violating its community standards. This week, members of ska, reggae, and SHARP (Skinheads Against Racial Prejudice) communities that oppose white supremacy are accusing the platform of wrongfully targeting them. Many believe that Facebook has mistakenly conflated their subculture with neo-Nazi groups because of the term "skinhead."

The suspensions occurred days after Facebook removed 200 accounts connected to white supremacist groups and as Mark Zuckerberg continues to be scrutinized for his selective moderation of hate speech.

Dozens of Facebook users from around the world reported having their accounts locked or their pages disabled due to their association with the “skinhead” subculture. This subculture dates back to the 1960s and predates the racist/fascist tendencies now commonly associated with that term.

Facebook's policies have long forbidden the posting of racist or hateful content. Its ban on "hate speech" encompasses the white supremacist groups it targeted during this purge. The removal of accounts linked not to racism but merely to the term "skinhead" was accidental, presumably triggered by a term now commonly associated with hate groups.

Questions to consider:

  • How should a site handle the removal of racist groups and content?
  • Should a site use terms commonly associated with hate groups to search for content/accounts to remove?
  • If certain terms are used to target accounts, should moderators be made aware of alternate uses that may not relate to hateful activity?
  • Should moderators be asked to consider the context surrounding targeted terms when seeking to remove pages or content?
  • Should Facebook provide users whose accounts are disabled with more information as to why this has happened? (Multiple users reported receiving nothing more than a blanket statement about pages/accounts “not following Community Standards.”)
  • If context or more information is provided, should Facebook allow users to remove the content (or challenge the moderation decision) prior to disabling their accounts or pages?

Resolution: Facebook's response was nearly immediate. The company apologized to users shortly after OneZero reported the apparently erroneous deletion of non-racist pages. Guy Rosen, Facebook's VP of Integrity, also apologized on Twitter to the author of the OneZero post, saying the company had removed these pages in error during its mass deletion of white supremacist pages and accounts and that it was looking into the mistake.

Companies: facebook


Comments on “Moderation Of Racist Content Leads To Removal Of Non-Racist Pages & Posts (2020)”

45 Comments
Anonymous Coward says:

Yes, they should look at context. Just like DMCA moderation should look at context and… whether or not a notice is even accurate or valid.

They should also supply context to users when accounts or posts are moderated.

They should also have a reasonable appeals system, not one that depends upon such egregious mis-moderation that it makes the news somewhere that they notice it.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

DMCA moderation should look at context and… whether or not a notice is even accurate or valid.

It should, but it legally can’t. Anyone who receives a DMCA takedown notice must take down or disable access to the content in question or risk losing their “safe harbor” protections. Companies often automate DMCA takedown systems for that exact reason. And yes, that means those systems will honor bogus takedowns. It is what it is.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

"Yes, they should look at context."

They should. Now, do you have an automated method and/or other scalable solution that can do that accurately for the number of posts they get every minute? That's the problem.

"They should also have a reasonable appeals system"

Define "reasonable" in a way that’s actionable.

Anonymous Coward says:

It would be so much simpler for individual contributors to just … be polite!
Learn what kind of manners a particular online community expects. Learn by being moderated, if you’re incapable of taking gentler hits. Be patient: nobody wants to know how much you want to blame someone else for your own stupidity or obstinacy. If you’re having trouble communicating online, try communicating face to face. Then come back online when you’ve gotten that figured out.

There! Problem solved.

This comment has been deemed insightful by the community.
PaulT (profile) says:

That's the problem: since it's impossible to moderate on a granular level at massive scale, purges like this will create collateral damage. At least they seem to have reacted correctly and quickly when notified of the error.

"so they didn’t even look at his profile"

Because no racist has ever used a fake profile to avoid consequences. /s

The moderation could have been better, but you’re hopelessly naive if you think that all social media accounts contain accurate personal information and real photos of the person they claim to be.

Anon E Mouse says:

What makes a bad term bad

This touches on a related phenomenon I’ve been wondering about recently. Words or expressions being labelled as bad, unwelcome and usually racist terms, even in contexts where I find them perfectly appropriate. In this case it’s the word "skinhead" being used in a very narrow fashion, but others I’ve wondered about include but are not limited to the word color, the ok sign, and frogs. Skinhead picking up a negative meaning from guilt by association I can sort of understand, but the others are mysteries to me.

How do words and concepts go from everyday use to being practically banned? I’ve seen the term dogwhistling used to explain some cases, but now the word dogwhistling’s on the bad word list too so this only confuses me more. I have no idea how this stuff works, but watching it happen in real time is very interesting.

PaulT (profile) says:

Re: What makes a bad term bad

"How do words and concepts go from everyday use to being practically banned?"

Language is fluid. Talk to someone 100 years ago about them being gay compared to someone now. You’ll be talking about 2 totally different things even though you’re using the same word.

"I’ve seen the term dogwhistling used to explain some cases, but now the word dogwhistling’s on the bad word list too so this only confuses me more"

If that term is on a list, whoever made the list is confused.

To explain, we've made enough progress in society now that being outwardly bigoted or racist is not acceptable. People who would use the n word constantly 50 years ago now have to keep that in check, as even the most ape-like knuckledragger understands that they can't use it around non-racists. So, they have to hide their bigotry by using words that don't necessarily mean anything racist on their own but give a knowing wink to other bigots when used in a certain context.

"others I’ve wondered about include but are not limited to the word color, the ok sign, and frogs"

Colour isn't a negative word, really, but I suppose it depends on context. The other 2 are simple examples of dogwhistling and co-opting. The OK sign thing started out as a prank, but white supremacists have decided to start using it to provide plausible deniability. If they're signalling to other Nazis and get caught flashing the OK sign, they can pretend they were using it in a way other than the one they really intended.

As for "frogs", it’s not all frogs but a specific meme called Pepe The Frog, which was co-opted by white supremacists in their online communications. It sucks for the original creator who despises that stuff, but the association is now as clear as when the swastika was taken from its spiritualist roots and used as a symbol of the Third Reich.

Anon E Mouse says:

Re: Re: What makes a bad term bad

What do the not-actually-an-OK-sign OK-sign and the cartoon frog actually mean? Are they slurs that rightfully deserve to be shunned? Are they just things a certain subset of people use? Should things be banned because someone you don’t like used them? Taking things to their ridiculous extreme, should oxygen be banned because bad people sorry no i can’t write this with a straight face never mind.

There’s clearly some lines to this which I’m still in the process of discovering. It’s all very fascinating.

Now, for the amphibians. I’m aware of Pepe’s more recent symbology and the ties to the Kekistan thing, so I can understand that being on the bad things list. But for some reason I thought it had spread to actual frogs too, but I can’t figure out why I thought so and there’s nothing on Google either. I must’ve dreamed that part up, sorry about that.

PaulT (profile) says:

Re: Re: Re: What makes a bad term bad

"Should things be banned because someone you don’t like used them?"

Did I say they should be banned? No, I didn’t.

I'm simply pointing out that otherwise innocent things are being co-opted by white supremacists, and this historically has had the effect of making even innocent uses of those things questionable. The meaning of things can change over time, and words and symbols that were once wholesome are now offensive.

Put it this way – assuming you're not in Germany, it's likely that flying a swastika flag is not illegal. It will, however, get you labelled a Nazi, even if you're flying a flag with the original Sanskrit version that existed before the Nazis co-opted it. It's not illegal for you to fly such a thing, but you will get negative reactions no matter what your original intention was.

" But for some reason I thought it had spread to actual frogs too"

Alex Jones once went on a hilarious rant about chemicals turning frogs gay that went viral, and the sort of dull-witted person who actually takes Infowars seriously might have used that seriously in some way, though I’m not aware of any specific examples.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

"…others I’ve wondered about include but are not limited to the word color, the ok sign, and frogs."

Apparently frogs are a sensitive issue, being religiously considered by a certain type of alt-right nut to be the hapless subjects of what appears to be chemically induced reversed gay "therapy". Just ask Alex Jones, whose genius brainchild might be the reason frogs have been included among controversial keywords.

"How do words and concepts go from everyday use to being practically banned?"

Two reasons, mainly. One is where a conspiracy theory involving <innocent concept X> actually being the key revelation for <global conspiracy Y> goes viral; since nine out of ten new online mentions of, for example, "frogs" may be the start of a flame war about "gay frogs" and Alex Jones, canny moderators eager to nip the trolling in the bud include that specific word among the verba non grata for a while.

The second reason is where the innocent word in question has, a few times too often in recent times, been adopted by a certain type of bigot in lieu of using, for instance, the N-word or other ethnic, gender-based or LGBT-phobic slur.

And it’s all because the angry and upset bigot really wants to be able to publicly espouse his opinion about the lesser beings with which he has to share the planet, and thus he leapfrogs from one word to the next, never quite realizing that it is the way he uses those words which eventually give those words a bad online reputation.

Anonymous Coward says:

Re: Re:

"And it’s all because the angry and upset bigot really wants to be able to publicly espouse his opinion about the lesser beings with which he has to share the planet, and thus he leapfrogs from one word to the next, never quite realizing that it is the way he uses those words which eventually give those words a bad online reputation."

And then we go and overreact banning those words, no matter the context or even if they were used as meme before.

This isn't too different from how terrorists harm our society the moment it overreacts to them by taking away our own liberties and fundamental rights.

And thus, the racists win. Sure, it’s some stupid words and memes, but they got their win the moment they took something away from us.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re:

banning those words

A society can’t “ban” a word — American society in particular (thanks, First Amendment). It can “regulate” the use of that word through social shaming and altering social norms around that word, though. And anyone can refuse to associate with people who use that word, including the owners and operators of online services such as Twitter.

This isn’t too different as to how terrorists harm our society the moment it overreacts to them taking away our liberties and fundamental rights.

I don’t know if you’ve noticed, but nobody has made illegal the carrying of a Confederate flag. Sure, lots of people refuse to associate with someone who flies/wears/defends the Stars ’n Bars. But that doesn’t make those people “terrorists”. It makes them people who don’t like someone who flies/wears/defends a flag associated with a failed state that betrayed and fought a war with the United States over the institution of slavery.

they got their win the moment they took something away from us

Which is why we see the “reclaiming” of words and images. “Queer”, for example, was (and kinda still is) a slur against LGBT people. But plenty of LGBT people reclaimed the word as a self-descriptor and a broad descriptor of people who aren’t straight, cisgender, heteroromantic, or any combination thereof (e.g., asexual people, non-binary people). Yes, not every LGBT person agrees with that reclaiming. But enough of them have reclaimed the word that it isn’t odd to see people use “queer” as a shorthand for their queerness when a situation dictates such usage.

Racists can “take” words and images from us. Some we can take back, some we can’t. (You can’t reclaim the swastika, for example.) But it’s not “terrorism” — it’s tribalistic idiocy.

nasch (profile) says:

Re: Re: Re: Re:

I don’t know if you’ve noticed, but nobody has made illegal the carrying of a Confederate flag.

I think what he’s talking about is how the 9/11 attacks, while awful, really did not do all that much damage directly when measured against the scale of the entire US. However our overreaction to them has been immensely harmful to ourselves.

Anonymous Coward says:

Re: Re: Re:2 Re:

Thanks nasch, it still baffles me how he managed to twist my whole post into saying that I'm calling those who fight against racism "terrorists", when I was associating the supremacists with its effects, not those who fight them.

Stephen, not everyone who disagrees with the finer points is one of Trump’s lackeys, you know.

For me, the end doesn’t justify the means even if the cause is noble as fuck, and I’m not going to shoot on my own foot to make "nigger" disappear from the dictionary.

Racism, among other things, is one of the stupidest things ever, but I’m not going to fight stupid with stupid.

I’m not going to support shit that I rejected in the "war against terrorism".

Scary Devil Monastery (profile) says:

Re: Re: Re:

"And thus, the racists win. Sure, it’s some stupid words and memes, but they got their win the moment they took something away from us."

True enough. Here, you remember the swastika? That old Germanic solar wheel connoting fortune, light and prosperity? (as it is still used today in Asia, where the symbol hasn't become synonymous with genocide).

The word negro is basically portuguese/spanish for dark. It is an offensive word only when it is used in English – because that’s when it refers to skin color. It gets truly offensive when it is pronounced in southern fashion, with an i and two g’s.

George W Bush Jr. did his level best to remove the word french from any itemry considered positive -anyone recall "freedom fries"?

And apparently the declaration of independence is now seen as Anti-Trump by Trump adherents.

There is, unfortunately, no part of language – from single words to whole declarations – whose meaning does not change depending on individual perception and pre-understanding.

PaulT (profile) says:

Re: Re: Re: Re:

"The word negro is basically portuguese/spanish for dark."

Actually it means the colour black. Oscuro would be the better translation for dark in Spanish. But, yes, the imported word has certainly been co-opted in English by racists.

"George W Bush Jr. did his level best to remove the word french from any itemry considered positive -anyone recall "freedom fries"?"

To be fair… that was more likely redneck morons reacting to the French daring to (correctly it turns out) object to unnecessary war against a country that had not attacked the US. I doubt this was W.’s personal doing, especially since the idiots were calling for things like a ban on French’s mustard (which is, of course, a purely American company with no relation to France). Plus, of course "French fries" are likely Belgian in their original version…

Your central point stands, but it’s worth making sure the criticisms are factually as well as logically accurate 🙂

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"Your central point stands, but it’s worth making sure the criticisms are factually as well as logically accurate :)"

Mea Culpa.

In my defense so much stupid and moron was spilled by GWB, Rumsfeld, Ashcroft and Cheney it’s, by now, hard to remember the specific stupid they weren’t directly responsible for.

You are right, however, I shouldn’t blame poor dubya for the idiocy perpetrated by the people now wearing MAGA hats…

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Mods

"Moderation fails every time"

Except in the thousands of examples I can think of where it actually works. For example – do you honestly think that your workplace isn't moderated, assuming you have one, and that this usually works out better for everyone than if it weren't moderated?

Anonymous Coward says:

Re: Re: Re:2 Mods

It’s not surprising. restless is one of the dumbasses who thinks vaccination is poisoning your children and would rather everyone got shingles and smallpox all over again. The fact that Facebook et al are moderating away his anti-vaccine celebrity posts is something that makes his anti-vaccine erection sad.

I wonder if he realizes that the anti-vaccine/Republican/conservative response to Twitter, Parler, is in itself a very heavily moderated platform for a solution that claims to be "American".

Scary Devil Monastery (profile) says:

Re: Mods

"Moderation is un-American."

Someone ought to tell Americans that. Neither party has ever believed in free speech. It has to be said that only one of the parties considers free speech to be an abomination, however.

""Racist" content is as ridiculous and false as "hate" speech."

Duly noted that you are upset you can’t advocate lynching "niggers" anymore on FaceBook.

"Who moderates the moderators? "

On private property? Only the property owner, who is of course free to set the rules. I realize this must suck for you in particular, given your historical adherence to alt-right Stormfront/Alex Jones echo chamber narratives.

In the public space? No one. Feel free to enjoy as much of the public space as the rest of society. I realize it must suck if your views are so hideously repugnant to so many people you will end up getting heckled by all the people making use of their free speech to inform you what an asshat you are.

Once again for the benighted lackwits among the bigots and racists who still don’t understand what even small children easily realize – if you are free to speak in public then so is everyone else. In private the host and owner of the premises sets the rules.


