Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Dealing With 'Cheap Fake' Modified Political Videos (2020)

from the political-moderation dept

Summary: For years, concerns have been raised about the possibility of "deep fake" videos impacting an election. Deep fakes are videos that have been digitally altered, often by inserting someone's face onto another person's body, to make it appear that they were somewhere they were not or did something they did not do. To date, the more sophisticated deep fake videos have been made mainly for entertainment purposes, but there has been concern that they could lead to faked accusations against politicians or other public figures. So far, however, there has been little evidence of deep fake videos being used in elections. This may be because the technology is not yet good enough, or because such videos have been easy to debunk with other evidence.

Meanwhile, there has been increasing concern about something slightly different: "cheap fake" or "shallow fake" videos, which are just slight modifications of real videos. They are less technically sophisticated, but also potentially harder to combat.

One of the most high-profile examples was a series of videos of House Speaker Nancy Pelosi that went viral on social media after being slowed to 75% of their original speed. The modified videos were spread with false claims that they showed Pelosi slurring her words, possibly indicating intoxication. Various media organizations fact-checked the claims, noting that the videos had been altered and therefore presented a highly inaccurate picture of Pelosi and her speech patterns.
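
To illustrate just how little technical skill a manipulation like this requires, here is a minimal sketch of the kind of edit involved. It assumes the open-source ffmpeg tool is installed, and the file names are hypothetical; it slows a clip to 75% of its original speed, the same basic adjustment made to the Pelosi footage.

```python
import subprocess

# A minimal sketch of the kind of edit behind the modified Pelosi clips:
# slow a video to 75% of its original speed. Assumes ffmpeg is installed
# and on the PATH; "original.mp4" and "slowed.mp4" are placeholder names.
subprocess.run(
    [
        "ffmpeg",
        "-i", "original.mp4",
        "-filter:v", "setpts=PTS/0.75",  # stretch video timestamps, i.e. slower playback
        "-filter:a", "atempo=0.75",      # slow the audio tempo to match
        "slowed.mp4",
    ],
    check=True,
)
```

The same result can be achieved in seconds with almost any consumer video editor, which is part of what makes cheap fakes so easy to produce and so difficult to police.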

Some, including Speaker Pelosi herself, urged social media companies to delete these videos; Pelosi argued that Facebook in particular should remove them. Both Facebook and Twitter refused to take the video down, saying that it did not violate their policies. YouTube removed the video.

In response to the concerns raised by Pelosi, some pointed out that social media platforms could not be expected to remove every misleading political statement that took an opponent's words out of context or presented them in a misleading way, while others suggested that there is a clear difference between manipulated video and manipulated text.

Others highlighted that it would be difficult to distinguish manipulated video from satire or other attempts at humor.

Company Considerations:

  • Where should companies draw the line between misleading political content and deliberate misinformation?
  • Under what conditions would a misleading cheap fake video separately violate other policies, such as harassment?
  • What should the standards be for removing political content that could be deemed misleading?
  • Does medium matter? Should there be different rules for manipulated videos as compared to other types of content, such as taking statements out of context? 
  • Should there be exceptions for parody/satire?
  • Are there effective ways to distinguish videos that are manipulated to mislead from those that are manipulated for humor or commentary?
  • Should the company have different standards if the subject of a cheap fake video was not a political or public figure? 
  • What are other alternative approaches, beyond blocking, that could be used to address manipulated political videos?

Issue Considerations:

  • What are the possible unintended consequences if all "manipulated video" is deemed a policy violation?
  • What, if any, is the value of not removing videos of political or public figures that are clearly misleading? Would there be any unintended consequences of such a policy?
  • What are the implications for democracy if manipulated political videos are allowed to remain on a platform, where they may spread virally?
  • Are misleading "cheap fake" videos about politicians considered political speech?
  • Who should decide when "cheap fake" political speech is inaccurate and inappropriate: social media platforms, a general public consensus, or a third party?
  • How might "cheap fake" videos be used for harassment and bullying of non-public figures, and what are the potential implications for real-world harm?
  • If a cheap fake video originated not from a public source (as the Pelosi video did) but from a private video, how could a company determine that it had been manipulated?

Resolution: After public demands that Twitter, YouTube, and Facebook do something about the modified Pelosi video, the three major social media platforms each responded differently. YouTube took the video down, saying that it violated its policies; it also noted that, unlike on the other platforms, the modified video did not appear to go viral or spread widely on YouTube. Facebook kept the video up but, in accordance with its fact-checking policies, down-ranked it in the News Feed once it was deemed "false," limiting its spread.

Twitter left the video up. However, by the time a very similar incident occurred a few months later, Twitter had announced a new plan for dealing with such content, saying that it would begin adding labels to "manipulated media" to give people who came across such videos the context to understand that they were not being shown in their original form. One of the first examples of Twitter applying this "manipulated media" label was a comedy segment by late-night TV host Jimmy Kimmel, who used manipulated video to make fun of then-Vice President Mike Pence.

Originally posted to the Trust & Safety Foundation website.

Companies: facebook, twitter, youtube


Comments on “Content Moderation Case Study: Dealing With 'Cheap Fake' Modified Political Videos (2020)”

9 Comments


Anonymous Coward says:

Re: All this "Copia" stuff is just Maz with another front.

That didn’t go in until "Resend". The "spam filter" is another lie here — kept vague and mysterious — because more likely explanation is blocked by Admin, then new IP address got around it. So in addition to malice, looks (to me, seeing what happens out of sight to you) like Techdirt is incompetent too, bad combination, except for amusement.

PaulT (profile) says:

Re: Re: All this "Copia" stuff is just Maz with another front.

"The "spam filter" is another lie here — kept vague and mysterious "

Vague and mysterious to you. Everyone else just rolls their eyes and laughs at you every time you do things to trigger it, followed by spamming the site to complain about your first bit of spam being filtered.

"then new IP address got around it"

Wow… so you post anonymously until the spam filter is triggered for your current IP. You then switch to another IP where it’s initially not blocked, then gets blocked after you’ve spammed from it a few times.

What could possibly be going on here… it’s a mystery lol

PaulT (profile) says:

Re: All this "Copia" stuff is just Maz with another front.

It’s propaganda! It’s also not published anything recently enough to make it relevant!

I’ll take a break from my usual question of what control you think the MacArthur Foundation and Automattic have, as I always do when you post the freely and publicly available image as if it’s proof of something nefarious, and laugh at the fact that even your own internal logic is less effective than normal.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re:

A fine idea… until you understand that the target audience for this laughable propaganda will reject the warnings as "fake news" and believe the fabrication.

This is what happened with Trump – when his tweets were left up and Twitter fact-checked them, the reaction wasn’t people reading the fact checks and learning from them. It was Trump and his cult whining about liberal bias because the fact checks appeared.

Leaving this stuff up only leaves the more gullible morons they’re aimed at in the first place being able to watch them.

Anonymous Coward says:

Re: Re: Re:

Yes, well, the problem with declaring the general public "gullible morons" that need some higher authority to do all the thinking for them is that most of us are inevitably members of the general public and there is no guarantee the authority that would make our decisions for us is going to agree with us.

This is why democracy, despite all its failings, is still the best form of governance ever implemented.
