Facebook knows it needs to be more transparent with users about how it tackles controversial issues. That’s why the site introduced its “Hard Questions” blog series, addressing everything from terrorism and censorship to the site’s influence on democracy itself. The second installment of the series was published this week, and it dealt with how the site does – and does not – deal with hate speech.
Facebook says it deletes 66,000 posts per week; unsurprisingly, moderating a platform of 2 billion users is no small task.
With such a massive user base spread around the world, Facebook explained, it is difficult to craft a universal definition of hate speech. That’s why the site tries to examine context and intent before removing content.
“People who live in the same country – or next door – often have different levels of tolerance for speech about protected characteristics,” wrote Richard Allan, Facebook’s VP of Public Policy for Europe, the Middle East and Asia. “Sometimes… there isn’t a clear consensus – because the words themselves are ambiguous, the intent behind them is unknown or the context around them is unclear.”
Even though Facebook is genuinely trying to remove hateful content, the task it set for itself is pretty much impossible – and the site won’t get any sympathy from users. After all, this worldwide reach is exactly what the social media giant asked for.