Facebook launched its “Safety Check” feature in fall 2014, allowing users to check in with friends and family in the wake of natural disasters, shootings or other dangerous incidents in their geographic area. The tool has come under some scrutiny for the kinds of calamities it does and does not recognize, and one strange incident occurred last week when Facebook issued a “violent crime” safety check notification to some users in Chicago.
According to a Facebook spokesperson, the notification was a community-driven message generated by an algorithm that detected multiple people in one area posting about the same shooting incident. Facebook then sent the safety alert to people who posted about the shooting or were tagged by someone who had received the safety check.
However, one Chicago-area expert alleged that the safety alert was targeted specifically at African-American users, asking on his Facebook page for white users to notify him if they had received it. Not many had.
“We already know Facebook segregates people by their political views, even among their own friends,” digital media consultant Brady Chalmers said. “But the idea that they can sort us into these boxes where you get different information based on your race…that’s slightly terrifying.”
There’s no doubt that Safety Check is a useful and well-intentioned feature, but when it comes to automating it, Facebook may have some kinks to work out first.