Without a doubt, the spread of fake news is one of the biggest problems Facebook must grapple with on a daily basis. Some of the worst examples of this are what happens when fake medical info goes viral — including posts about the recent coronavirus outbreak. Thankfully, the social media giant announced this week that it will take aggressive steps to fight this misinformation across both Facebook and Instagram.
Mere hours after the World Health Organization declared a global public health emergency over the coronavirus outbreak, Facebook said it will begin to combat fake posts about it.
“Our global network of third-party fact-checkers are continuing their work reviewing content and debunking false claims that are spreading related to the coronavirus,” Facebook announced in a blog post. “We will also start to remove content with false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities that could cause harm to people who believe them.”
In addition to limiting and removing content, Facebook also said that it will notify people if they have shared an inaccurate post. It will then point users toward accurate info through educational pop-ups and messages at the top of the news feed.
Of course, all of these steps are vitally important in the fight against fake news. But they raise a question: why can't Facebook do the same for the rest of the spammy content on its platform?