The Scale Of Moderating Facebook: It Turns Off 1 Million Accounts Every Single Day

from the not-an-easy-issue dept

For years now, we’ve discussed why it’s problematic that people are demanding internet platforms moderate more and more speech. We should be quite wary of internet platforms taking on the role of the internet’s police. First, they’re really bad at it. As we noted in a recent post, platforms are horrendously bad at distinguishing abusive content from content documenting that abuse, and that creates all sorts of unfortunate and bizarre results, with those targeted by harassing content often having their own accounts shut down. On top of that, the only way to actually moderate content at scale is with a set of rules, and any such set of rules, as applied, will create hysterically bad results. And that’s because the scale of the problem is so massive. It’s difficult for most people to even begin to comprehend the scale involved here. As a former Facebook employee who worked on this stuff once told me, “Facebook needs to make one million decisions each day — one million today, one million tomorrow, one million the next day.” The idea that they won’t make errors (both Type I false positives and Type II false negatives) is laughable.

And it appears that the scale is only growing. Facebook has now admitted that it shuts off 1 million accounts every single day — which means that earlier number I heard is way low. If it’s killing one million accounts every day, that means it’s making decisions on way more accounts than that. And, the company knows that it gets things wrong:

Still, the sheer number of interactions among its 2 billion global users means it can’t catch all “threat actors,” and it sometimes removes text posts and videos that it later finds didn’t break Facebook rules, says Facebook security chief Alex Stamos.

“When you’re dealing with millions and millions of interactions, you can’t create these rules and enforce them without (getting some) false positives,” Stamos said during an onstage discussion at an event in San Francisco on Wednesday evening.

That should be obvious, but too many people think that the answer is to just put even more pressure on Facebook — often through laws requiring it to moderate content, take down content and kill accounts. And, when you do that, you actually make the false positive problem that much worse. Assuming, for the sake of argument, that Facebook ends up killing 10% of all the accounts it reviews, that means it’s reviewing 10 million accounts every day. If the punishment for taking down content that should have been left up is public shame/ridicule, that acts as at least some check, pushing Facebook to be somewhat careful about not taking down stuff that it shouldn’t. But, on the flip side, if you add a law (such as the new one in Germany) that threatens social media companies with massive fines for leaving up content the government wants taken down, you’ve changed the equation.

Now the choice isn’t between “public ridicule vs. a bad actor staying on our platform”; it’s “public ridicule vs. massive fines.” So the incentive for Facebook and other platforms changes such that they’re now encouraged to kill a hell of a lot more accounts, just in case. So suddenly the number of “false positives” is going to skyrocket. That’s not a very good solution — especially if you want platforms to support free speech. Again, platforms have every right to moderate content on their platforms, but we should be greatly concerned when governments force them to moderate in a way that may have widespread consequences for how people speak, and where those policies can tilt the scales in often dangerous ways.
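
To make those numbers concrete, here’s a rough back-of-the-envelope sketch. Apart from the 1 million removals per day reported in the article, every figure in it is an assumption for illustration; the point is simply how quickly even small error rates, and small shifts in incentives, turn into huge absolute numbers of wrongly closed accounts.

    # Illustrative only: the 1 million removals/day comes from the article;
    # everything else (review volume, error rates) is an assumption, not Facebook data.
    removals_per_day = 1_000_000
    reviews_per_day = removals_per_day * 10      # if removals are ~10% of accounts reviewed
    false_positive_rate = 0.01                   # assume 99% of removals are correct

    wrongly_closed_per_day = removals_per_day * false_positive_rate
    print(f"Accounts reviewed per day:        {reviews_per_day:,}")                       # 10,000,000
    print(f"Wrongly closed accounts per day:  {wrongly_closed_per_day:,.0f}")             # 10,000
    print(f"Wrongly closed accounts per year: {wrongly_closed_per_day * 365:,.0f}")       # 3,650,000

    # Now add legal pressure to err on the side of removal: if borderline cases
    # start getting taken down "just in case" and the error rate merely doubles...
    print(f"Wrongly closed per day at a 2% error rate: {removals_per_day * 0.02:,.0f}")   # 20,000

Double the assumed error rate under that kind of pressure and the wrongful closures double right along with it; that’s the incentive problem in miniature.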



Comments on “The Scale Of Moderating Facebook: It Turns Off 1 Million Accounts Every Single Day”

SirWired (profile) says:

Per usual, more than a bit of context left out

Most (in fact, nearly all) of those 1M accounts per day are not closed for abuse, censorship, hate speech, or what-have-you; most are just spammers and other robo-accounts, and, just like their creation, their deletion requires no human interaction.

Figuring out what social media platforms should do about unpleasant content is an important question to ask, but neither the question nor the answer has anything to do with that 1M number.

Designerfx (profile) says:

Re: Per usual, more than a bit of context left out

This is nobody’s fault but Facebook’s. There are lots of anti-bot measures they could be taking that they naturally have no interest in taking (because Facebook’s total user count would plummet).

I wish I could find the posts analyzing how much of FB is bots, but it is a significant amount.

I’d hope people would realize that lack of action on preventing bots goes hand in hand with lack of quality in understanding things like filtering/free speech.

Anonymous Coward says:

Re: Re: Per usual, more than a bit of context left out

Bot detection methods have been known for about 15 years and VERY well understood for 10. There’s no excuse for Facebook allowing those accounts to exist in the first place — except that (a) they’re ignorant newbies who have no idea how to run a service even 1% the size of Facebook, (b) they’re lazy and cheap, and (c) the user count inflates Facebook’s stock price.

I have no sympathy for them and explicitly reject the argument that gosh, it’s soooo hard. They should have never built something beyond their capabilities — but they chose to, because they’re greedy assholes who only care about profit and don’t give a damn about the impact on the Internet, its users, and the real world.

The most ethical course of action for them right now would be to shut the whole thing off and apologize for their hubris. They won’t, of course: sociopathic monster Mark Zuckerberg will see to that.

Scote (profile) says:

Re: Re: Re: Per usual, more than a bit of context left out

“Bot detection methods have been known for about 15 years and VERY well understood for 10. There’s no excuse for Facebook allowing those accounts to exist in the first place”

They are using bot detection. How do you think they delete 1,000,000 accounts per day?

FB is likely the biggest bot account target on the internet, so bot detection isn’t going to be perfect, especially when many of the fake accounts may have human farms doing some of the sign up.

Anonymous Coward says:

Re: Re: Re:2 Per usual, more than a bit of context left out

My point is that they shouldn’t have to delete 1M accounts/day: they should never have allowed them to be created. Those of us who’ve been paying attention learned a long time ago that reactive measures are too little, too late, and that proactive measures are the ones with a fighting chance.

Cdaragorn (profile) says:

Re: Re: Re: Per usual, more than a bit of context left out

You act like bot detection is a solved problem. That is sooooo far from fact.

In fact, one of the most damning pieces of evidence that it is not solved is that most bot detection today REQUIRES HUMAN INTERACTION. Good luck automating that so-called “solved problem”.

The fact that you want to slap ridiculous, unrealistic expectations on them does not mean they built something “beyond their capabilities”. It means you’re being unrealistic and ridiculous.

Anonymous Coward says:

Re: Re: Re:2 Per usual, more than a bit of context left out

Bot detection IS, for the most part, a solved problem. If you don’t know this, then you’re out-of-touch with the contemporary security environment and should probably avoid commenting on things beyond your inadequate understanding.

Yes, there are edge cases that are tough: we’re working on those. But the overwhelming majority are not only identifiable, they’re EASILY identifiable.

And here’s the kicker: the bigger the operation you run, the easier this gets. (Why? Because small operations only have visibility into sparse data sets. Large operations can see enormous ones and exploit that to identify bots more accurately and faster.) So this is a case where FB’s scale works highly in their favor — if only they weren’t too pathetically stupid and too lazy and too cheap to exploit it.

Anonymous Coward says:

Re: Re: Re:3 Per usual, more than a bit of context left out

Bot detection IS, for the most part, a solved problem.

The solution being what?

Once the account has existed for a while, they can see whether it matches "normal" patterns. During creation there’s not a lot of obvious difference between real and fake users, especially because many "fake" ones aren’t entirely fake (CAPTCHAs can be farmed out to actual people).

But the overwhelming majority are not only identifiable, they’re EASILY identifiable.

How do you know they’re not blocking 99 million creation attempts per day?
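
For what it’s worth, the kind of after-the-fact behavioral check described just above (once an account has existed for a while, see whether it matches “normal” patterns) can be sketched very simply. This is a toy heuristic with invented thresholds and invented field names, not anything Facebook is known to run; real systems combine far more signals (IP reputation, device fingerprints, social-graph structure, and so on).

    # Toy behavioral bot check: flag accounts whose activity looks automated.
    # Thresholds and field names are invented for illustration; a real system
    # would learn them from labeled data and combine many more signals.
    def looks_like_bot(account):
        posts_per_hour = account["posts"] / max(account["hours_active"], 1)
        accept_ratio = account["friends_accepted"] / max(account["friend_requests_sent"], 1)
        signals = 0
        if posts_per_hour > 20:                  # posting far faster than a human could
            signals += 1
        if accept_ratio < 0.05:                  # almost nobody accepts its friend requests
            signals += 1
        if account["identical_posts"] > 50:      # mass-reposting the same content
            signals += 1
        return signals >= 2                      # require multiple signals to limit false positives

    # Example: a spam-like account trips all three checks.
    suspect = {"posts": 3000, "hours_active": 48, "friend_requests_sent": 1000,
               "friends_accepted": 10, "identical_posts": 400}
    print(looks_like_bot(suspect))  # True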

Anonymous Coward says:

Re: Re: Per usual, more than a bit of context left out

I’d hope people would realize that lack of action on preventing bots goes hand in hand with lack of quality in understanding things like filtering/free speech.

Really? So all we need to do is get all of our judges/lawyers training on bot-prevention technology and suddenly they would all agree on free speech? Wow, think of how much court time we could save with this new method. Not that bot prevention technology means much, seeing as there is almost nowhere on the internet that’s actually free of bots.

Or maybe the fact is that if we can’t even get widespread agreement on free speech within the US court system, then a company which operates in substantially every country in the world might have a wee bit of difficulty with the problem. After all, free speech in Germany and free speech in the US are vastly different animals.

Anonymous Coward says:

Re: Per usual, more than a bit of context left out

Figuring out what social media platforms should do about unpleasant content is an important question to ask, but neither the question nor the answer has anything to do with that 1M number.

The number that should be considered is how many posts are made on Facebook every day, as some of those are what trigger shutdowns. It is guaranteed that those are well beyond Facebook’s ability to examine individually.

Anonymous Coward says:

Re: Re: Per usual, more than a bit of context left out

Errr… yes, I have a citation. Literally the very first sentence in the linked article to CNBC. “Facebook closes more than 1 million accounts every day, with most of those created by spammers and fraudsters, security chief Alex Stamos says.”

PaulT (profile) says:

Re: Re: Re: Per usual, more than a bit of context left out

“Literally the very first sentence in the linked article to CNBC.”

Ah, my apologies, I did miss that for some reason.

But I still don’t see actual citations for the claim that they’re mostly spammers. I do see a caveat that it’s impossible to avoid kicking off legit users, and complaints that it’s both too strict and too lax.

Given the actual visible evidence, I don’t see why the assertions in the article are incorrect.

OrwellsPast says:

The future

Well think about it this way, censoring the internet is providing jobs.

The way things are going with net neutrality and pulling information off the internet, it won’t be long until each of us is locked in a walled garden…

In that garden only the garden tenders can push things over the fence to feed the masses…

The garden tenders will be AI and won’t know how to differentiate between what’s healthy for consumption and what will cause shock to the garden’s roots…

Eventually the garden tenders will logically decide that we can only grow within our own garden to prevent infestation of ideas.

Shane (profile) says:

Self Regulation (Twitter Sucks Worse)

The technology is already there to simply let us filter who we want. Why can’t I selectively block whoever I want, selectively remove people’s posts from MY threads, and so forth?

What’s really odd to me is, why has no one DONE this already?

P.S. I know several people who still manage to keep their FB pages that have been banned from Twitter. And I have seen full blown porn on Twitter, so what exactly offends these people?
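
On Shane’s first question (purely user-side filtering), here’s a minimal sketch of what a client-side mute filter could look like, assuming you could get at the posts in your feed as plain data. The field names ("author", "text") and the whole setup are hypothetical, not any real Facebook API.

    # Hypothetical client-side feed filter: hide posts from muted users or
    # containing muted phrases. Nothing here is a real Facebook API; the post
    # fields ("author", "text") are made up for the example.
    muted_users = {"spammy_page", "annoying_acquaintance"}
    muted_phrases = {"miracle cure", "you won't believe"}

    def filter_feed(posts):
        for post in posts:
            if post["author"] in muted_users:
                continue
            if any(phrase in post["text"].lower() for phrase in muted_phrases):
                continue
            yield post

    feed = [
        {"author": "friend", "text": "Holiday photos!"},
        {"author": "spammy_page", "text": "This miracle cure shocked doctors"},
    ]
    print([p["text"] for p in filter_feed(feed)])  # ['Holiday photos!']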

Narcissus (profile) says:

Selfgovernance

I’m kind of confused why Facebook would be responsible for what appears on my feed, except for the ads obviously.

I am the one who agrees to follow people, and I can always unfriend or unfollow them. If somebody is posting stuff I find offensive, I unfollow them; it’s two seconds’ work.

I guess they could make a more obvious way to flag stuff, and for me it would be enough if they just hid flagged stuff behind a link (like TD does). I also think some kind of algorithm could be made so that if a post collects a certain number of flags in a short time, it gets put into review, since it seems something serious is going on (a rough sketch of that idea is below).

For the rest I can manage just fine on my own thank you.
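
Here’s a rough sketch of the flag-threshold idea suggested above. The threshold and time window are arbitrary illustrative choices, not anything any platform actually uses.

    # Toy version of the idea above: if a post collects enough flags in a short
    # window, hide it behind a click-through and queue it for human review.
    # The threshold and window are arbitrary illustrative numbers.
    from collections import defaultdict, deque

    FLAG_THRESHOLD = 10      # flags needed to trigger review
    WINDOW_SECONDS = 3600    # ...within one hour

    recent_flags = defaultdict(deque)   # post_id -> timestamps of recent flags

    def record_flag(post_id, now):
        q = recent_flags[post_id]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:   # forget flags outside the window
            q.popleft()
        if len(q) == FLAG_THRESHOLD:               # fires once, when the threshold is first hit
            print(f"{post_id}: hidden behind a click-through and queued for human review")

    # Example: ten flags arrive a minute apart, so the post goes to review.
    for minute in range(10):
        record_flag("post-42", now=minute * 60)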

Anonymous Coward says:

facebook policies are stupid

My brother posted a “Titty Sprinkles” image of a woman with candy sprinkles all over her bared breasts (to the point that you can’t even see skin). Someone complained, and he got banned for a week for that. He re-shared the same picture a year later because it popped up in his “what you were doing a year ago” automatic Facebook popup. He got banned for a month this time.

The picture is fairly innocuous, too. If you want to see it, here it is on another site. https://cdn1.lockerdomecdn.com/uploads/cfb6620dfbb1a11cc26d4c21a352b86d3d49404740190907000cee55f43ee202_large

Yet they allow all kinds of worse things to remain.
