Facebook and other major tech companies were called on the carpet by U.S. lawmakers this week to explain what steps they’re taking to combat hate and extremism online.
While Facebook, YouTube and Twitter faced tough questions from the Senate committee, lawmakers also acknowledged the difficulty these tech giants face in respecting user privacy and freedom while also policing and censoring content.
“[The tech companies] have a very difficult task: preserving the environment of openness upon which their platforms have thrived, while seeking to responsibly manage and thwart the actions of those who would use their services for evil,” Committee Chairman Sen. John Thune said in his opening statement.
Much of Facebook’s testimony focused on how the company uses artificial intelligence to automatically remove terrorist posts. According to Facebook, it removes about 99 percent of these posts before they’re even flagged. Perhaps more surprisingly, Facebook also revealed that it creates counterpropaganda of its own in an attempt to change hearts and minds.
“We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence. That’s why we support a variety of counterspeech efforts,” Monika Bickert, Facebook’s head of global policy management, said in her testimony.
It’s a little creepy that Facebook is creating propaganda of its own, and that it holds so much sway over how users think and feel — but in this case, at least, the company is using its power for good.