Report: Facebook Failed To Block Ads Containing Death Threats To Election Workers

Facebook explicitly bans threats of violence in its Community Standards. However, according to a report released this week by watchdog group Global Witness and NYU, the social media giant doesn’t always enforce its own rules. As a test, researchers from the two organizations submitted 20 ads featuring violent content and calls to “lynch,” “murder,” and “execute” election workers around Election Day this year. Of those 20 ads, Facebook approved 15.

“It was really quite shocking to see the results,” one of the researchers said. “I thought a really simple keyword search would have flagged this for manual review.”

For its part, Facebook said that these ads are a “small sample size,” and that its ability to deal with this problem “effectively exceeds” that of other social media platforms. However, the researchers behind the report questioned this claim, pointing out that other platforms detected their fake ads and removed them.

“The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible,” they continued.

No matter what Facebook says in its defense, it’s clear the platform has a serious problem with content like this slipping through its moderation system.
