Facebook originally created its Safety Check feature for people in the midst of natural disasters, letting them tell family and friends they were safe. The tool eventually expanded to cover human-made disasters like terrorist attacks, and, like so many of Facebook’s other features, Safety Check’s activation became automated by an algorithm. That became painfully obvious this week when the feature was triggered during protests in Charlotte, North Carolina, over the fatal police shooting of an African American man.
Of course, what makes this so problematic is that a protest over racial and political issues was equated, in Facebook’s eyes, with a disaster. According to Facebook itself, Safety Check can be triggered if enough people “post about a specific incident” in a “crisis area.” In the wake of the Safety Check activation in Charlotte, many journalists and experts criticized the tool and the many ways its automation could backfire.
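Facebook has never published the actual trigger logic, but the description above suggests a simple volume threshold. A hypothetical sketch of such a heuristic (the threshold value, function names, and data shape are all assumptions, not Facebook's code) shows why it can misfire:

```python
from collections import defaultdict

# Hypothetical sketch only: Facebook has not disclosed Safety Check's
# real algorithm. This toy version activates when enough distinct users
# in a designated "crisis area" post about the same incident.
ACTIVATION_THRESHOLD = 3  # assumed value for illustration


def should_activate(posts, crisis_area, threshold=ACTIVATION_THRESHOLD):
    """posts: iterable of (user_id, area, incident_tag) tuples."""
    users_per_incident = defaultdict(set)
    for user_id, area, incident in posts:
        if area == crisis_area:
            # Count distinct users, so one person posting repeatedly
            # can't trip the threshold alone.
            users_per_incident[incident].add(user_id)
    return any(len(users) >= threshold for users in users_per_incident.values())


posts = [
    ("u1", "charlotte", "protest"),
    ("u2", "charlotte", "protest"),
    ("u3", "charlotte", "protest"),
    ("u4", "raleigh", "protest"),
]
print(should_activate(posts, "charlotte"))  # True
print(should_activate(posts, "raleigh"))   # False
```

The flaw is visible in the sketch itself: a post-volume threshold measures how much people are talking, not what is happening, so it cannot tell a protest from a disaster.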
“This is a move now typical of Facebook: An attempt to wash its hands of editorial and political judgment, to hand off all such responsibility to an opaque ‘algorithm’ we’re supposed to trust as impartial and democratic,” Sam Biddle wrote in The Intercept.
In its rush to automate everything, Facebook seems to have forgotten something important: the human touch can go a long way toward preventing senseless or callous incidents like this.