Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Roblox Tries To Deal With Adult Content On A Platform Used By Many Kids (2020)

from the for-the-children dept

Summary: Roblox is an incredibly popular online platform for games, especially among younger users. In 2020, it was reported that two-thirds of all US kids between 9 and 12 years old used Roblox, as did one-third of all Americans under the age of 16. Games on the platform can be developed by anyone: Roblox provides an accessible development environment built around the scripting language Lua, and many of the games are created by Roblox's own young users.
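To give a sense of how low that barrier to entry is, here is a minimal sketch of the kind of Lua script a beginner might write on Roblox. It is a hypothetical example, not taken from any actual game, though it sticks to standard Roblox APIs (script.Parent, the Touched event, Players:GetPlayerFromCharacter):

    -- Hypothetical beginner script: print a message whenever a player
    -- touches the part this script is attached to.
    local part = script.Parent

    part.Touched:Connect(function(hit)
        -- Touched fires for any colliding object; check whether it
        -- belongs to a player's character.
        local player = game.Players:GetPlayerFromCharacter(hit.Parent)
        if player then
            print(player.Name .. " touched the part!")
        end
    end)

Because a Roblox game is running code rather than a fixed set of assets, what it shows players can change at any time, which is part of what makes moderating games harder than moderating static content.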

Given Roblox's target market, the company has put in place a fairly robust content moderation program designed to stop content that the company deems inappropriate. This covers profanity and "inappropriate" language of all kinds, as well as any talk of "dating," let alone sexual innuendo. The company also does not allow users to share personally identifiable information.
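On the developer side, Roblox exposes this filtering through its TextService API: games are expected to pass any user-generated text through the filter before displaying it. A minimal sketch of that flow follows; the helper function and error handling are illustrative, not lifted from Roblox's documentation:

    -- Illustrative sketch of server-side text filtering on Roblox.
    -- FilterStringAsync returns a TextFilterResult; the broadcast variant
    -- below replaces disallowed words and personal information with hashtags.
    local TextService = game:GetService("TextService")

    local function filterForBroadcast(rawText, fromUserId)
        local ok, result = pcall(function()
            return TextService:FilterStringAsync(rawText, fromUserId)
        end)
        if not ok then
            return nil -- filtering failed; the safe choice is to show nothing
        end
        return result:GetNonChatStringForBroadcastAsync()
    end

A filter like this is necessarily conservative: the same machinery that blocks innuendo and personal information also mangles plenty of innocent text, which feeds the developer frustration described below.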

This moderation extends not just to players on the Roblox platform, but also to the many developers who create and release games on it. Roblox apparently uses AI moderation from a company called Community Sift as well as human moderators from iEnergizer. Recent reports say that Roblox has a team of 2,300 content moderators.

Given these competing interests and incentives, there are both widespread reports of adult content being easily available (including to children) and complaints from developers about having perfectly reasonable content, projects, and accounts shut down, leading many to describe the moderation system as completely arbitrary.

Roblox is then left trying to figure out how to better deal with such adult content while simultaneously not upsetting its developers, or angering parents who don't want their children exposed to adult content while playing games.

Decisions to be made by Roblox:

  • How do you monitor such a large volume of content to make sure that adult content does not get through, and that kids are not exposed to it?
  • If the moderation systems are too aggressive, will that drive developers (and possibly some users) away?
  • Should all games go through a human review process before they can be offered through Roblox?
  • Are there better ways to communicate how and why content is moderated?

Questions and policy implications to consider:

  • Which is the more important constituency: the kids and families using Roblox, or the developers who produce content for it? Is deciding how aggressive content moderation should be a matter of finding a balance between those two groups?
  • Is it worth "overbanning" if it means families feel safer using Roblox?

Resolution: Roblox has continued to evolve and try to improve its content moderation practices. As this case study was being written, the company announced plans to start a content rating system for games, to better inform parents which games may be more appropriate (or inappropriate) for children. However, the company has been promising to improve its efforts to stop adult content from reaching children for many years — and every few months more reports pour in. At the same time, developers who feel that their content has been blocked for no clear reason continue to take to forums to complain about the lack of clarity and transparency regarding the moderation system.

Originally published on the Trust & Safety Foundation website.

Companies: roblox
