When Facebook launched its live streaming video platform in 2015, the company never could have imagined how much controversy it would cause. In the years since, Facebook Live has been used to stream everything from graphic violence and sexual content to acts of terrorism, as in the case of the tragic mosque shooting in New Zealand last week. Following this most recent horrific act, Facebook has vowed to improve its AI technology to better detect such videos in the future.
“AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect,” Facebook’s VP of Product Management Guy Rosen wrote in a blog post. “People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us.”
According to Facebook, its AI did not catch the shooting video because it had not received enough "training data" — that is, videos of other shootings — to recognize it. Thankfully, incidents like this are rare, so Facebook will have to find another way to train its technology. Additionally, Facebook says it did not receive a single user report during the live stream, and the company relies heavily on users self-policing content to catch bad actors. The video was also copied and shared so quickly, across so many different platforms, that Facebook struggled to keep up.
All of this is not to make excuses for Facebook. The company created this video platform, and now it has to deal with the consequences. But it does show just how much Facebook has its work cut out for it in cleaning up the platform.