Facebook introduced new suicide prevention tools this week, aimed both at users who may be considering suicide and at their concerned friends and family. The site also said it wants to use artificial intelligence (AI) to help detect potential suicide risks.
With the new tools, users watching a concerning video stream on Facebook Live will be able to report it if they think the person in it is at risk for “suicide or self-injury.” The user who makes the report will then receive a link to resources to help the broadcaster, and the broadcaster will receive a message asking if they want to contact a friend or helpline.

Facebook also said it wants to use AI to identify potential suicide posts, using pattern recognition trained on posts that have previously been flagged. Facebook founder and CEO Mark Zuckerberg himself acknowledged the company needed to do more about the issue in an essay published last month.
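Facebook has not published how its pattern recognition works. As a loose illustration of the general idea only, a toy approach might score a new post by its textual similarity to previously flagged posts; everything below (the function names, the bag-of-words method, the example posts) is a hypothetical sketch, not Facebook's system:

```python
from collections import Counter
import math

def bag_of_words(text):
    """Lowercase the text, split on whitespace, and count word frequencies."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors (0.0 to 1.0)."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def risk_score(post, flagged_posts):
    """Score a post by its highest similarity to any previously flagged post."""
    vec = bag_of_words(post)
    return max(
        (cosine_similarity(vec, bag_of_words(f)) for f in flagged_posts),
        default=0.0,
    )

# Hypothetical examples of previously flagged posts.
flagged = ["i feel so alone and hopeless", "nobody would miss me"]

high = risk_score("i feel alone and hopeless tonight", flagged)
low = risk_score("great game last night", flagged)
```

A real system would be far more sophisticated (context, behavioral signals, human review), but the core idea of matching new content against known flagged examples is the same.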
“There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” Zuckerberg wrote. “To prevent harm, we can build social infrastructure to help our community identify problems before they happen.”
It’s admirable that Facebook is stepping up to do something about this problem. But it’s nonetheless troubling that Facebook believes its AI is powerful enough to identify such events before they even happen. Facebook might be Big Brother, but in this case, it’s Big Brother looking out for the common good.