It’s no secret that Facebook relies heavily on artificial intelligence to detect and delete malicious content. Automated systems are the only practical way to police a platform with over two billion active users. However, Facebook’s AI has proven controversial, sometimes flagging harmless content while allowing questionable posts to remain up. That’s why Facebook has added thousands of real people to its review process. According to a report from Reuters this week, though, employing real people to look at content has created a privacy downside for users.
The report details the activities of 260 Facebook contract workers, who are paid to comb through posts and annotate them according to set criteria in order to train the platform’s AI. These criteria include whether an image is a selfie, whether it depicts a major life event, and what the author’s intention in posting it might have been. However, this labeling takes place without users’ permission.
Facebook defended itself by arguing there’s no more effective way to train its systems.
“It’s a core part of what you need,” Nipun Mathur, Facebook’s director of product management for AI, told Reuters. “I don’t see the need going away.”
However, given that Facebook is already under the microscope for its privacy practices, this revelation will only add to the social media giant’s headaches.