Facebook May Be Stepping Up Censorship Of Extremist Content

Facebook has been accused of not doing enough to help fight terrorism — and in one case, the site was even sued for it. However, the world’s largest social network may be stepping up its efforts to censor extremism in a big way. According to an exclusive Reuters report, Facebook and Google may be using automated technology to find and delete extremist content.

The technology was originally developed to remove copyright-protected material from these sites. It works by matching uploads against videos that have already been flagged as unacceptable, allowing sites to remove repeat uploads of the same content automatically.
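The report does not describe the matching technology in detail, but the general approach is fingerprint comparison: compute a fingerprint of each upload and check it against fingerprints of material that reviewers have already flagged. The sketch below is a toy illustration of that idea only — the function names are hypothetical, and real systems use robust perceptual fingerprints rather than the exact SHA-256 hash used here for simplicity.

```python
# Toy sketch of fingerprint matching against previously flagged uploads.
# A production system would use a perceptual fingerprint that survives
# re-encoding and trimming; SHA-256 here is purely illustrative.
import hashlib

# Hypothetical blocklist: fingerprints of videos already reviewed and flagged.
FLAGGED_FINGERPRINTS: set[str] = set()


def fingerprint(data: bytes) -> str:
    """Return a fingerprint of the uploaded content (toy: exact SHA-256)."""
    return hashlib.sha256(data).hexdigest()


def flag_video(data: bytes) -> None:
    """Record a video that review has deemed unacceptable."""
    FLAGGED_FINGERPRINTS.add(fingerprint(data))


def is_repeat_upload(data: bytes) -> bool:
    """True if this upload matches content that was previously flagged."""
    return fingerprint(data) in FLAGGED_FINGERPRINTS


if __name__ == "__main__":
    original = b"...bytes of a video flagged by reviewers..."
    flag_video(original)
    print(is_repeat_upload(original))            # True: can be removed automatically
    print(is_repeat_upload(b"...a new video...")) # False: goes through normal review
```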

Facebook and Google have not publicly acknowledged using these tools, nor is it clear to what extent they are hunting down and blocking terrorist content. It’s likely that both want to stay out of the political fray, and according to sources, they are hesitant to announce the effort for fear that terrorists could learn how to manipulate the system.

“There’s no upside in these companies talking about it,” Matthew Prince, chief executive of content distribution company CloudFlare, told Reuters. “Why would they brag about censorship?”

However, the move to fight terrorism makes a ton of sense. Facebook has faced political pressure around the world to police itself for extremism, and relying on automated systems could be a smart and simple way to do that. The problem, as the report discusses, is that what counts as extremist content is subjective, and different companies draw the line in different places.