Facebook Releases First-Ever Content Moderation Report

As part of its pledge to be more transparent with users, Facebook released its first-ever content moderation report this week. In it, the company shared how much spam, hate speech, violent content, and nudity it removed over the past two quarters.

The report contains many interesting tidbits, including the fact that Facebook removed an astounding 1.3 billion fake accounts during that period. According to the company, most of these accounts were created “with the intent of spreading spam or conducting illicit activities such as scams.” Facebook also said it took action on 837 million spam posts, up 15 percent from the previous quarter.

While it’s a good thing that Facebook is sharing more of its internal processes with users, many experts are still concerned that the company has so much power over global communications.

“Facebook has grown to a size and scale that significant harms are in the offing to some proportion of its users no matter what approach it takes to moderating content,” Blake Reid, a professor at Colorado Law, told WIRED. “Every tweak it makes has the ability to influence elections, spread propaganda, effectively suppress expression, or cause other effects of similar magnitude.”

In the end, it doesn’t really matter why Facebook decides to remove content — the issue is that it has the power to do so in the first place.