Facebook Removed 8.7 Million Photos Of Child Exploitation In Three Months

Because it’s completely open and free to use, Facebook is a platform that’s easy for cybercriminals, hackers and scammers to exploit. That means Facebook has its hands full when it comes to hunting down and eliminating offensive content. For example, the company said this week that its moderators have removed 8.7 million images of child nudity in the past three months alone.

According to the company, it detected the images using previously undisclosed artificial intelligence software that has been rolled out over the past year. The AI has been trained to recognize both images of children and nudity (though it’s programmed with an exception for art and history), and because it’s built around a machine learning model, it gets better the more cases it processes. The company has also created a tool to recognize inappropriate interactions between adults and minors on the platform.
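The setup described above — separate detectors for children and for nudity, combined with an art/history exception — could be sketched roughly as follows. This is purely illustrative: the function names, scores, and threshold are assumptions, not details of Facebook’s actual system.

```python
# Illustrative sketch only: a rule that flags an image when two independent
# classifier scores are both high and no art/history exception applies.
# All names and the threshold value are hypothetical assumptions.

def flag_image(child_score: float, nudity_score: float,
               is_art_or_history: bool, threshold: float = 0.9) -> bool:
    """Flag an image only when both classifiers are confident and the
    art/history exception does not apply."""
    if is_art_or_history:
        return False
    return child_score >= threshold and nudity_score >= threshold

# Both detectors confident, no exception -> flagged
print(flag_image(0.95, 0.97, False))   # True
# Same scores, but covered by the art/history exception -> not flagged
print(flag_image(0.95, 0.97, True))    # False
```

In practice such scores would come from trained image classifiers, and flagged content would go to human moderators rather than being acted on automatically.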

Thankfully, the company also has a large network at its disposal for reporting its findings to law enforcement.

“Our job is then to make that [Facebook] report available to the appropriate law enforcement agency,” Michelle DeLaune, chief operating officer of the National Center for Missing and Exploited Children, told NBC News. “At this point we have the ability to transfer the information to more than 100 law enforcement agencies around the globe.”

It’s encouraging that Facebook has developed this AI, and it seems to be working. But the sad fact is that even the best AI on earth can’t catch every piece of offensive content. That’s why users need to pitch in and report it when they see it, too.