Facebook increasingly relies on artificial intelligence to detect and remove hate speech, malware and other kinds of offensive content. However, leaning on automation can also backfire in a big way. This week, a report from the Associated Press detailed how Facebook is still auto-generating pages and videos for terrorist groups, despite becoming aware of the problem months ago.
These details emerged when the National Whistleblower Center updated a complaint to the SEC regarding Facebook’s content policies. According to the filing, there are almost 200 auto-generated pages on Facebook that reference the Islamic State, Al-Qaida and other terrorist groups. What’s worse, Facebook hasn’t made much headway on the issue in months. In June, U.S. lawmakers blasted Facebook for the very same problem.
“Instead of preventing terrorist content from spreading on their platform, Facebook has been making videos and promoting terrorist content on its own system,” U.S. Rep. Max Rose said at the time. “For instance, an al-Qaida-linked terrorist group has an autogenerated Facebook page that has nearly 4,500 likes. This case… serves as yet another glaring example of Facebook’s inability to police itself. But what is even more striking is… I checked and this profile is still up there.”
If these pages were flagged to Facebook in June and remain online now, it raises an obvious question: what will it take to motivate the social media giant to remove them?