Two years ago, Facebook received widespread condemnation for allowing its platform to be weaponized and abused in the country of Myanmar. The government of the small Asian nation used the social media giant as a tool to repress its Rohingya minority population. Now, Facebook says it is ramping up its efforts to detect hate speech and misinformation in the country ahead of an election on November 8 — a potential preview of policies that the company could roll out around the world.
According to Facebook, it will work with local partners on the ground to remove any misinformation or rumors that could “suppress the vote or damage the integrity of the electoral process.” That includes misleading or false images that Facebook says it will partially detect using artificial intelligence.
“Out-of-context images are often used to deceive, confuse and cause harm. With this product, users will be shown a message when they attempt to share specific types of images, including photos that are over a year old and that may come close to violating Facebook’s guidelines on violent content,” the company announced. It added that the warning shown to people before they share a potentially harmful or misleading image “will be triggered using a combination of AI and human review.”
Of course, critics of Facebook will say these changes come far too late to protect vulnerable populations within Myanmar. But if they work, they could serve as a test case for how Facebook curbs hate speech and misinformation around the world.