In recent years, Facebook has committed itself to stopping the spread of bullying and harassment on its platform — and according to the company’s community standards enforcement report, it’s doing a better job. Released this week, the Facebook report revealed that the company took action on 6.3 million pieces of content in the fourth quarter of last year, up from 3.5 million the previous quarter.
Facebook attributes much of this success to its revamped artificial intelligence technology. However, Facebook Chief Technology Officer Mike Schroepfer was quick to note that the AI still has a lot of room for improvement, especially in the way it operates around the world.
“There is still so much to be done, despite these encouraging improvements,” Schroepfer wrote in a blog post. “One particular area of focus is getting AI even better at viewing content in context across languages, cultures, and geographies. The same words can often be interpreted as either benign or hateful, depending on where they’re published and who is reading them, and training machines to capture this nuance is especially challenging.”
Facebook will likely never come close to perfect when it comes to content moderation. Still, the company has invested in the problem and is making measurable strides toward protecting its users.