When it comes to content moderation, Facebook relies heavily on artificial intelligence to detect, flag, and label posts correctly. However, when the technology backfires, it can have embarrassing consequences. That’s what happened last week when a year-old video featuring Black men resurfaced on the site, with Facebook’s algorithm asking users if they “want to see more videos about primates.”
A former Facebook employee first noticed the error last week, and it was then reported by The New York Times. For its part, the social media giant apologized profusely for the mistake and said that its AI still has a lot of room for improvement.
“This was clearly an unacceptable error,” a Facebook spokesperson said. “As we have said, while we have made improvements to our AI we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.” The spokesperson also went on to say that the company is investigating the cause of this issue to prevent it from happening again.
Of course, it’s obvious at this point that Facebook’s AI needs to be improved. However, a major question remains: what caused the company’s technology to make such an offensive error in the first place?