Facebook increasingly relies on artificial intelligence to weed out scams, fake news and offensive content. However, as Forbes writer Kalev Leetaru pointed out in a recent column, the social media giant refuses to say just how accurate these algorithms really are.
For instance, the company recently claimed that it detects and removes 99 percent of terrorist content on the platform. But Facebook didn’t say how often its algorithms get it wrong before they get it right: there is no way to know how much legitimate content the site falsely flags or removes, because Facebook isn’t releasing those numbers.
“For algorithms that shape the flow of information from and to more than a quarter of the earth’s population and growing, that’s an awful lot of blind trust to place in one company,” Leetaru wrote in his column. “For a company that relentlessly pours forth a deluge of statistics and numbers regarding every aspect of its operations, it is concerning indeed that it has yet to utter a single word about whether the AI future it has bet the company on actually works.”
Facebook talks up its commitment to transparency and honesty, but there is a great deal we don’t know about how the company actually operates. That’s concerning for a platform with more than 2 billion active users around the world.