Facebook Rolls Out Tool To Detect Suicidal Posts Before They’re Even Reported

Facebook is often accused of behaving like Big Brother, closely monitoring and tracking user behavior across the web. However, that creepy watchfulness can have positive ramifications too. For instance, the company announced this week that its suicide prevention AI technology has advanced to the point where it can detect suicidal posts from users even before they’re reported.

This “proactive detection” technology could save lives by shaving valuable seconds off the response time to posts from users threatening self-harm. The system works by identifying a post or Facebook Live broadcast using a set of keywords, then routing the concerning content to specially trained Facebook reviewers. From there, reviewers can contact first responders and send help — all before the post has even been reported by the user’s friends.
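
To make the flow concrete, here is a minimal sketch of what keyword-based flagging and routing might look like, assuming a simple match-and-queue design; this is an illustration only, not Facebook’s actual system, and every name in it (RISK_KEYWORDS, Post, ReviewQueue, flag_for_review) is hypothetical.

```python
# Hypothetical sketch of keyword-based flagging and routing to human review.
# Not Facebook's implementation; all names and keywords are invented for illustration.

from dataclasses import dataclass, field
from typing import List

# Hypothetical phrases a proactive-detection pass might look for.
RISK_KEYWORDS = ["want to end it", "can't go on", "no reason to live"]


@dataclass
class Post:
    author: str
    text: str


@dataclass
class ReviewQueue:
    """Holds posts awaiting a specially trained human reviewer."""
    items: List[Post] = field(default_factory=list)

    def enqueue(self, post: Post) -> None:
        self.items.append(post)


def flag_for_review(post: Post, queue: ReviewQueue) -> bool:
    """Route a post to human review if it matches any risk keyword."""
    text = post.text.lower()
    if any(keyword in text for keyword in RISK_KEYWORDS):
        queue.enqueue(post)  # a human reviewer decides whether to contact first responders
        return True
    return False


if __name__ == "__main__":
    queue = ReviewQueue()
    post = Post(author="example_user", text="I feel like I can't go on anymore")
    print(flag_for_review(post, queue))  # True -> post is queued for human review
```

The key design point the article describes is that the automated pass only triages: it surfaces concerning content faster, while the decision to escalate to first responders stays with trained human reviewers.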

Perhaps anticipating privacy concerns, a Facebook executive took to Twitter this week to explain the company’s thought process behind the detection technology.

“The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in,” Facebook chief security officer Alex Stamos wrote.

Indeed, if Facebook is going to possess this much information about us, it might as well use it for a positive goal.