Commercial Content Moderation And Worker Wellness: Challenges & Opportunities

from the challenges-and-opportunities dept

Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written essays about the questions that were discussed there. Between last week and this week, we’re publishing a bunch of these essays, including this one.

After a difficult few weeks of media attention and criticism surrounding the discovery of a spate of disturbing, exploitative videos either aimed at or featuring young children, YouTube’s CEO Susan Wojcicki released a blog post on December 4, 2017 that described the platform’s recent efforts to combat the problem. Central to the proposed plan to remedy the issue of unwanted user-generated content, or UGC, on YouTube was Wojcicki’s announcement of the large-scale hiring of additional human content moderation workers to complement those already working for Google, bringing the total number of such reviewers in Google’s employ to over 10,000. [1]

Wojcicki also pointed to the platform’s development of automated moderation mechanisms, artificial intelligence and machine learning as key to its plans to combat undesirable content, as it has in other cases when disturbing material was found on the platform in large quantities. Importantly, however, and as Wojcicki indicated, the work of those thousands of human content moderators would go directly toward building the artificial intelligence required to automate the moderation processes in the first place. In recent months, other major social media and UGC-reliant platforms have also offered up plans to hire new content moderators by the thousands in the wake of criticism around undesirable content proliferating within their products.
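To make that relationship concrete, the pattern Wojcicki describes is essentially supervised learning: each human review decision doubles as a labeled example from which a classifier can later learn. The sketch below is purely illustrative and hypothetical; it does not reflect YouTube’s or Google’s actual systems. The example data, the allow/remove labels, and the use of scikit-learn are assumptions made only for the sake of a minimal demonstration.

    # Hypothetical, minimal sketch: human moderation decisions treated as
    # labeled training data for an automated classifier. Names, labels and
    # data are invented and do not reflect any platform's real system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each record is a (content description, reviewer decision) pair. In a
    # real pipeline the inputs would be richer (metadata, transcripts, video
    # frames) and the label taxonomy far more detailed than allow/remove.
    reviewed_items = [
        ("family vlog with cooking tips", "allow"),
        ("graphic violence compilation", "remove"),
        ("cartoon aimed at kids with disturbing imagery", "remove"),
        ("music video, official artist upload", "allow"),
    ]
    texts = [text for text, _ in reviewed_items]
    labels = [label for _, label in reviewed_items]

    # The reviewers' accumulated judgments are exactly the labeled examples
    # a supervised model needs.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)

    # The trained model can then pre-screen new uploads, with low-confidence
    # predictions routed back to human reviewers.
    print(model.predict(["prank video aimed at kids with violence"]))

In a deployment of this general kind, the loop would presumably run continuously: the model pre-screens new uploads, uncertain cases are routed back to human reviewers, and those fresh decisions become additional training data, which is precisely why the human work described in this essay does not disappear when automation arrives.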

These recent public responses that invoke large-scale content moderator hires suggest that social media firms that trade in UGC still need human workers to review content, now and into the foreseeable future, whether that activity is to the end of deleting objectionable material, training AI to do it, or some combination thereof. As most anyone familiar with the production cycle of social media content would likely stipulate, these commercial content moderation (CCM) workers – paid, professional workers who screen at scale following internal norms and policies – perform critical functions of brand protection and shield platform and user alike from harm. CCM workers are critical to social media’s operations and yet, until very recently, have often gone unknown to the vast majority of the world’s social media users.

The specifics of the policies and procedures that they follow have also been mostly inaccessible (beyond simplified public-facing community guidelines) and treated as trade secrets. There are several reasons for this, not the least of which is the very real concern on the part of platforms that detailed information about what constitutes allowable, versus disallowed, content would give people intent on gaming the platform using its own rules ample ammunition to do so. There may be other reasons, however, including the fact that the problem of disturbing content circulating online is one that most mainstream platforms do not care to discuss openly, due to the distasteful nature of the issue and the subsequent questions that could be asked about whether, how, and by whom such material is addressed.

CCM work is both rote and complex. It requires engagement with repetitive tasks that workers can come to perform in a routinized way, but often under challenging productivity metrics that demand a high degree of both speed and accuracy in decision-making. The job calls on sophistication that cannot currently be fully matched by machines, and so a complex array of human cognitive functions (e.g., linguistic and cultural competencies; quick recognition and application of appropriate policies; recognition of symbols or latent meanings) is needed. A further challenge to CCM workers and the platforms that engage them is the fact that the job, by its very nature, exposes workers to potentially disturbing imagery, videos and text that mainstream platforms wish to shield their users from and remove from circulation, the latter of which may even be a legal requirement in many cases.

In other words, in order to make their platforms safe for users and advertisers, platforms must expose their CCM workers to the very content they consider unsuitable for anyone else. It is a paradox that illustrates a primary motivation behind the automation of moderation practices: under the status quo, CCM workers put their own sensitivity, skills and psyches on the line to catch, view and delete material that may include images of pornography, violence against children, violence against animals, child sexual exploitation, gratuitously graphic or vulgar material, hate speech or imagery, and so on. It is work that can lead to emotional difficulty for those on the job, even long after some have moved on.

To this end, industry has responded in a variety of ways. Some workplaces have offered on-site counseling services to employees. The availability of such counseling is important, particularly when CCM workers are employed as contractors who may lack health insurance plans or might find mental health services cost-prohibitive. Challenges arise, however, when cultural barriers or concerns over privacy impede workers from taking full advantage of such services. [2]

When it comes to CCM worker wellness, firms have been largely self-guiding. Several major industry leaders [3] have come together to form the self-funded “Technology Coalition,” whose major project relates to fighting child sexual exploitation online. In addition to this key work, they have produced the “Employee Resilience Guidebook,” now in its second version, intended to support workers who are exposed to child sexual exploitation material. It includes information on mandatory reporting and legal obligations (mostly US-focused) around encountering such material, but also provides important guidance on how to support employees who can reasonably be expected to contend with the emotional impact of their exposure. Key to the recommendations is beginning the process of building a resilient employee at the point of hiring. The guidebook also draws heavily on information from the National Center for Missing and Exploited Children (NCMEC), whose expertise in this area is built upon years of working with and supporting law enforcement personnel and their own staff.

The Employee Resilience Guidebook is a start toward the formation of industry-wide best practices, but in its current implementation it focuses narrowly on the specifics of child sexual exploitation material and is not intended to serve the broader needs of a generalist CCM worker and the range of material for which he or she may need support. Unlike members of law enforcement, who can call on their own professional identities and social capital for much-needed support, moderators often lack this layer of social structure and indeed are often unable to discuss the nature of their work due to non-disclosure agreements (NDAs) and stigma around the work they do. The relative geographic diffusion and industrial stratification of CCM work can also make it difficult for workers to find community with each other outside of their immediate local teams, and no contracting or subcontracting firm is represented in the current makeup of the Technology Coalition, yet many CCM workers are employed through these channels.

Finally, I know of no long-term, publicly available study that has established psychological baselines for CCM workers or tracked their wellness over their period of employment or beyond. The Technology Coalition’s guidebook references a 2006 study on secondary trauma among NCMEC staff. To be sure, that study contains important results that can be instructive in this case, but it cannot substitute for a psychological study conducted specifically on the CCM worker population in the current moment. Ethical and access concerns make such an endeavor complicated. Yet, as the ongoing lawsuit filed in Washington state in 2016 on behalf of two Microsoft content moderators suggests, there may be liability for firms and platforms that do not take sufficient measures to shield their CCM workers from damaging content whenever possible and to offer them adequate psychological support when it is not.

Although the factors described above currently serve as challenges to effectively serving the needs of CCM workers in the arena of their health and well-being, they are also clear indicators of where opportunities lie for industry and potential partners (e.g., academics; health care professionals; labor advocates) to improve upon the efficacy, well-being and work-life conditions of those who undertake the difficult and critical role of cleaning up the internet on behalf of us all. Opening the dialog about both challenges and opportunities is a key step toward this goal.

[1] It is not clear what status the moderators at Google, or at most other firms, hold – whether they are full employees of the platforms and parent companies, contractors, outsourced workers, or some combination thereof. This information was not given in the blog post and is usually not offered up by social media firms when announcing numbers of CCM workers they have on staff or when describing a new mass hire.

[2] In interviews that I have conducted with CCM workers, some reported feeling that others, including co-workers and supervisors, might judge their engagement with an on-site counselor as evidence of an inability to perform requisite job tasks. So although a counselor would come to the worksite, these workers opted out of using the services when others would see them doing so.

[3] As of late 2017, firms participating in the Technology Coalition include: Apple; Dropbox; Facebook; GoDaddy; Google; Kik; LinkedIn; Microsoft Corporation; PayPal; Snap; Twitter; Yahoo! Inc.; Oath; and Adobe. More information can be found here: http://www.technologycoalition.org/coalition-projects/

Sarah Roberts, PhD, is an Assistant Professor at UCLA in the Department of Information Studies

Companies: facebook, google, twitter, youtube


Comments on “Commercial Content Moderation And Worker Wellness: Challenges & Opportunities”

fairuse (profile) says:

Moderation: Forum era had information locked. Media era has central information unlocked.

The policies in the article are needed for the people that swim in the sewage so the general population does not have to.

Currently, if a user, me or you, trips over a photo/video that is obviously physical child abuse or sexual assault on _tube or _search, it can be flagged as objectionable (_tube has it on the page). The community may not want to click that button — a click means the user’s identity is sent along too. Most people on any platform are not going to shield their “being a good citizen” act by sanitizing the box. Police are not going to give a damn; all the gear the user has can be taken and picked apart as part of the investigation. (If said photo is in a cache somewhere, you/me are in handcuffs.)

I used an example from the user’s POV because police have no reason to treat the flagger as a victim; they treat everyone as a perpetrator until proven innocent. I can’t quote actual cases because I am just a cat using a human to type for me. OK, bad humor is bad humor.

All the backend work described is needed so users of a platform don’t have to make the no-win choice to flag or not-flag the most horrendous thing ever seen.

The people that have to look at all the insanity must see Doctor.Shrink on a schedule without having the option to pass — sign here at hiring, failure is dismissal.

I watched a Human Trafficking Panel on C-SPAN last week and was shocked at the PR nature of it. Much was said about how victims need medical help in a timely fashion and how police need to realize the girls are not “just another illegal junky whore”. The panel means well, but it didn’t even address the international level. It was like picking a simple part of the problem and deferring the hard stuff. I could have misunderstood what the panel was doing. See the C-SPAN video archive.

My point is that the people on _tube or _search monitoring the platform for the worst media that can be dreamed up, as you said, need support. This function should be a black box to the staff not doing the page flipping — they should not even be aware that humans are the filter. So, HR can generate the legal stuff. The tech part is in an isolated area with no admittance by other employees. I have been in such a structure (the nuclear power plant control simulator floor of a (..) company).

All the questions about getting the odd stare and whispering by other staff in the open areas are eliminated if “Special Ops” is invisible. Fried emotions, sick-n-tired-of-seeing-this depression, and home problems are Doctor.Shrink’s problem.

I tried to make it 12 hours filtering legal porn URLs at OpenDNS. Didn’t make it, even though I switched back and forth between mis-tagged kittens & G-rated video and really hard legal porn.

I pinned this so I can check back. Oh, I worked in DC at a couple of letter-soup places — retired now. Until policy-making folks can say the words “child porn” without tossing a tactical nuke at companies, I will not bet against ridiculous laws being passed.
