Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Kik Tries To Get Abuse Under Control (2017)

from the kids-will-be-kids dept

Summary: The messaging service Kik was founded in 2009 and has gone through multiple iterations over the years. It built a large following for mostly anonymous communication, allowing users to create multiple usernames not linked to a phone number and to establish private connections via those usernames. This privacy feature has been applauded by some as important for journalists, activists, and at-risk populations.

However, the service has also been decried by many as dangerous and prone to abuse. NetNanny ranks it as the number one most dangerous messaging app for kids, saying that it "has had a problem with child exploitation" and highlighting the many "inappropriate chat rooms" accessible to kids on the app. Others have said that, while the service is used by many teenagers, many of them feel it is unsafe and full of sexual content and harassment.

Indeed, in 2017, a Forbes report detailed Kik's huge "child exploitation problem." It described multiple cases of child exploitation its reporters found on the app, and claimed that the company did not appear to be doing much to deal with the problem, which seemed especially concerning given that over half of its user base was under 24 years of age.

Soon after that article, Kik announced changes to its content moderation efforts. It teamed up with Microsoft to improve its moderation practices. It also announced a $10 million effort to improve safety on the site and named several high-profile individuals to its new Safety Advisory Board.

A few months later the company announced updated community standards, with a focus on safety, and a partnership with Crisis Text Line. That appeared to do little to stem the concerns, however. A report later in 2018 said that, among law enforcement, the app that concerned them most was Kik, with nearly all saying that they had come across child exploitation cases on the app and that the company was difficult to deal with.

In response, the company argued that while it was constantly improving its trust & safety practices, it also wanted to protect the privacy of its users.

Decisions to be made by Kik:

  • How can a company that promotes the privacy-protective nature of its messaging also limit and prevent serious and dangerous abusive practices?
  • How closely should Kik work with law enforcement when it finds evidence of crimes on the platform?
  • Are there additional tools and features that can be implemented that would discourage those looking to use the platform in abusive ways?

Questions and policy implications to consider:

  • Are there ways to retain the benefits for journalists, activists, and at-risk groups that do not put others — especially children — at risk?
  • What are the tradeoffs between enabling useful private communications and making sure such tools are not used in abusive or dangerous ways?

Resolution: Despite Kik's claims that it was improving its efforts to crack down on abuse, reports have continued to suggest that little has changed on the platform. A detailed report from early 2020 — years after Kik said it was investing millions in improving the platform — suggested that it was still a haven for sketchy content, even noting that merely posting a Kik address publicly (on Twitter) resulted in near-immediate abuse.

Despite an announcement in late 2019 that the company was going to shut down the messaging service to focus on a new cryptocurrency plan, it reversed course soon after and sold the messenger product to a new owner. In the year and a half since the sale, Kik has not added any new content to its safety portal, and more recent articles still highlight how frequently child predators are found on the site.

Originally published on the Trust & Safety Foundation website.

