Facebook has long struggled to contain the spread of hate speech on its platform, particularly when the content is written in languages other than English. The social media giant recently failed yet another crucial test of this ability when two nonprofit groups submitted 14 blatantly hateful ads targeting users in Ethiopia to see whether Facebook would detect them. Not only did Facebook fail to flag the ads, it approved all of them for publication.
In response, Facebook pointed to its work “building our capacity to catch hateful and inflammatory content in the most widely spoken languages.” However, as the nonprofit groups behind the test pointed out, the ads were so obviously hateful that if Facebook couldn’t detect them, it’s unclear what it actually can detect.
“We picked out the worst cases we could think of,” Global Witness campaigner Rosie Sharpe told PBS. “The ones that ought to be the easiest for Facebook to detect. They weren’t coded language. They weren’t dog whistles. They were explicit statements saying that this type of person is not a human or these type of people should be starved to death.”
It isn’t surprising that Facebook is still struggling with content moderation. However, the company should be more humble about its ability to fix the problem, and it should take more meaningful steps to actually do so.