Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Removes 'Verified' Badge In Response To Policy Violations (2017)

from the verified-or-a-stamp-of-approval? dept

Summary: Many social networks let users adopt a pseudonym as their identity on the platform. Because users can pick whatever name they want, they can also pretend to be someone else, which creates certain challenges for those platforms: how does a site that allows pseudonyms tell the actual person from a mere impostor? Some companies, such as Facebook, went the route of requiring users to use their real names. Twitter went another way, allowing pseudonyms.

But what can a company do when there are multiple accounts of the same, often famous, person?

In 2009, Twitter began experimenting with a program to “verify” celebrities.

The initial intent of this program was to identify which Twitter account actually belongs to the person or organization of that Twitter handle (or name). Twitter’s announcement of this feature explains it in straightforward terms:

With this feature, you can easily see which accounts we know are ‘real’ and authentic. That means we’ve been in contact with the person or entity the account is representing and verified that it is approved. (This does not mean we have verified who, exactly, is writing the tweets.)

This also does not mean that accounts without the ‘Verified Account’ badge are fake. The vast majority of accounts on the system are not impersonators, and we don’t have the ability to check 100% of them. For now, we’ve only verified a handful of accounts to help with cases of mistaken identity or impersonation.

From the start, Twitter denoted “verified” accounts with what is now the industry-standard “blue checkmark.” In the initial announcement, Twitter noted that this was experimental and that the company did not have time to verify everyone who wanted to be verified. It was not until 2016 that Twitter first opened up an application process for anyone to get verified.

A year later, in late 2017, the company closed the application process, noting that people were interpreting “verification” as a stamp of endorsement, which it had not intended. Recognizing this unintended perception, Twitter began removing verification checkmarks from accounts that violated certain policies, starting with high-profile white supremacists.

While this policy received some criticism for “blurring” the line between speakers and speech, it was a recognition of concerns that the checkmark was seen as an “endorsement” of someone whose views and actions (even those off of Twitter) were not ones Twitter wished to endorse. In that way, removing verification became a content moderation tool in its own right: a subtle form of negative endorsement.

Even though those users were “verified” as authentic, Twitter recognized that being verified was a privilege and that removing it was a tool in the content moderation toolbox. Rather than suspending or terminating accounts, the company said that it would also consider removing the verification on accounts that violated its new hateful conduct and abusive behavior policies.
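To make this concrete, below is a minimal, purely illustrative sketch in Python of how a trust and safety system might treat de-verification as one rung on an enforcement ladder, below suspension. It does not reflect Twitter's actual systems; the policy names, thresholds, and helpers (Account, choose_enforcement) are hypothetical.

```python
# Hypothetical sketch: de-verification as a graduated enforcement action.
# Policy names, thresholds, and data structures are illustrative only.

from dataclasses import dataclass, field
from enum import Enum, auto


class Action(Enum):
    NO_ACTION = auto()
    REMOVE_VERIFICATION = auto()  # lesser sanction: account stays, badge goes
    SUSPEND_ACCOUNT = auto()      # harsher sanction


@dataclass
class Account:
    handle: str
    verified: bool = True
    violations: list = field(default_factory=list)


def choose_enforcement(account: Account, policy_violated: str) -> Action:
    """Pick an enforcement action for a confirmed violation (illustrative)."""
    account.violations.append(policy_violated)
    if policy_violated in {"hateful_conduct", "abusive_behavior"}:
        if len(account.violations) >= 3:      # repeat offenders escalate
            return Action.SUSPEND_ACCOUNT
        if account.verified:                  # first resort: pull the badge
            return Action.REMOVE_VERIFICATION
    return Action.NO_ACTION


def apply_action(account: Account, action: Action) -> None:
    if action is Action.REMOVE_VERIFICATION:
        account.verified = False


if __name__ == "__main__":
    acct = Account(handle="@example")
    action = choose_enforcement(acct, "hateful_conduct")
    apply_action(acct, action)
    print(acct.handle, action.name, "verified:", acct.verified)
```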

Company Considerations:

  • What is the purpose of a verification system on social media? Should it just be to prove that a person is who they say they are, or should it also signal some kind of endorsement? How should the company develop a verification system to match that purpose? 
  • If the public views verification as a form of endorsement, how important is it for a company to reflect that in its verification program? Are there any realistic ways to have the program not be considered an endorsement?
  • Under what conditions does it make sense to use removal of verification as a content moderation tool? Is removing verification an effective content moderation tool? If not, are there ways to make it more effective?

Issue Considerations:

  • What are the consequences of using the verification (and de-verification) process as a content moderation tool to “punish” rule violators?
  • What are both the risks and benefits of embracing verification as a form of endorsement?
  • Are there other subtle forms of content moderation similar to the removal of privileges like the blue checkmark, and how effective can they be?

Resolution: It took many years for Twitter to reopen its verification system, and even then it did so only in a very limited manner. The system has already run into problems, as journalists discovered multiple fake accounts that had been verified.

However, a larger concern over the new verification rules is that they allow for significant subjective decision-making by the company over how the rules are applied. Activist Albert Fox Cahn explained how the new program makes it “too easy” for journalists to get verified but “too difficult” for activists, showing the challenging nature of any such program.

“When Angela Lang, founder and executive director of the Milwaukee-based civic engagement group BLOC, decided to get a checkmark, she thought, ‘I’ve done enough. Let’s check out how to be verified.’ Despite Lang and BLOC’s nationally recognized work on Black civic engagement, she found herself shut out. When Detroit-based activist and Data 4 Black Lives national organizing director Tawana Petty applied, her request was promptly rejected. Posting on the platform that refused to verify her, Petty said, ‘Unbelievable that creating a popular hashtag would even be a requirement. This process totally misses the point of why so many of us want to be verified.’ Petty told me, ‘I still live with the anxiety that my page might be duplicated and my contacts will be spammed.’ Previously, she was forced to shut down pages on other social media platforms to protect loved ones from this sort of abuse.

“According to anti-racist economist Kim Crayton, verification is important because ‘that blue check automatically means that what you have to say is of value, and without it, particularly if you’re on the front lines, particularly if you’re a Black woman, you’re questioned.’ As Lang says, ‘Having that verification is another way of elevating those voices as trusted messengers.’ According to Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, and author of Automating Inequality, ‘The blue check isn’t about social affirmation, it’s a safety issue. Someone cloning my account could leave my family or friends vulnerable and could leave potential sources open to manipulation.’” — Albert Fox Cahn

Originally published to the Trust & Safety Foundation website.

Companies: twitter


Comments on “Content Moderation Case Study: Twitter Removes 'Verified' Badge In Response To Policy Violations (2017)”

8 Comments
That Anonymous Coward (profile) says:

"creating a popular hashtag would even be a requirement."

Da Faq is this shit?
You can be an important person but since you never managed to make a hashtag trend we have no time to bother with you.

They missed the bus on checkmarks, they let people take them the wrong way for far too long & then kept playing keep away from some people while they approved fake accounts.

I mean I managed to get Tiff v Twitter to trend one day, does that mean I’m worthy of review to be verified?
How in the hell would they manage to verify I am the one true TAC anyways?
Is it like a secret society where other people with blue checkmarks can vouch for a person being the pseudonym they claim to be?

This comment has been flagged by the community.

Anonymous Coward says:

Bad implementation by Twitter leads to this:

  • “According to anti-racist economist Kim Crayton, verification is important because ‘that blue check automatically means that what you have to say is of value, and without it, particularly if you’re on the front lines, particularly if you’re a Black woman, you’re questioned.’ As Lang says, ‘Having that verification is another way of elevating those voices as trusted messengers.’ According to Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, and author of Automating Inequality, ‘The blue check isn’t about social affirmation, it’s a safety issue. Someone cloning my account could leave my family or friends vulnerable and could leave potential sources open to manipulation.’” — Albert Fox Cahn *

Look, these things are important, but the Blue Check Mark™ addresses none of them, nor should it. Especially a safety issue, wow. That’s what you’re counting on?

Seriously, if they weren’t into the whole brevity thing, they’d have a check mark or whatever with the text, "This person is who they say they is," or something unambiguously to that effect, and only that effect. That should be pretty darn clear.

Anonymous Coward says:

One of the greatest satirists of the 20th century, Jaroslav Hašek, wrote about taking back blue checkmarks back in the 1920s:

‘…he was the first in his regiment to have his leg torn off by a shell. He got an artificial leg and began to boast about his medal everywhere and to say he was the first and very first war cripple in the regiment. Once he came to the Apollo at Vinohrady and had a row with butchers from the slaughterhouse. In the end they tore off his artificial leg and clouted him over the head with it. The man who tore it off him didn’t know it was an artificial one and fainted with fright. At the police station they put Mlicko’s leg back again, but from then on he was furious with his Great Silver Medal for valour and went and pawned it in the pawnshop, where they seized him and the medal too. He had some unpleasantness as a result.

There was a kind of special court of honour for disabled soldiers and it sentenced him to be deprived of his Silver Medal and later of his leg as well . . .’

‘How do you mean?’

‘Awfully simple. One day a commission came to him and informed him that he was not worthy of having an artificial leg. Then they unscrewed it, took it off and carried it away.’
