Despite all of Facebook’s advances in artificial intelligence technology, it’s still disturbingly easy for bad actors to spread harmful content. According to an investigation by the BBC, many of these users are skirting Facebook’s content moderation rules simply by including emojis in their posts to confuse the company’s algorithm.
According to experts, Facebook’s algorithm is trained primarily on text and has had very little exposure to emojis. And even if it had, these sneaky users (many of whom post about controversial topics such as COVID-19) substitute emojis for words at random, making the posts virtually impossible to detect.
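To see why this trick works, here is a minimal sketch (not Facebook’s actual system, and the blocklist word is hypothetical) of the kind of keyword matching a text-only filter approximates, and how swapping an emoji in for a word, or even into the middle of one, slips right past it:

```python
# A naive text-only content filter: flag a post if any blocklisted
# word appears in it. This is an illustrative stand-in, not how
# Facebook's moderation actually works.

BLOCKLIST = {"hoax"}  # hypothetical banned keyword


def is_flagged(post: str) -> bool:
    """Return True if any blocklisted word appears in the post's text."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)


print(is_flagged("the story is a hoax"))   # True  -- plain text is caught
print(is_flagged("the story is a 🧢"))     # False -- emoji stands in for the word
print(is_flagged("the story is a h🅾ax"))  # False -- emoji hidden inside the word
```

A human reader decodes all three posts identically, but to the filter the emoji variants are simply unknown tokens, which is the "new dialect" problem the quote below describes.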
“It’s a modern form of steganography: writing and hiding a message in plain sight, but such that unless you know where to look you don’t see it,” cybersecurity expert Alan Woodward told the BBC. “What all of this demonstrates is the futility of trying to automate moderation of content to prevent the sharing of ‘harmful’ material… At the very best you will be playing a game of whack-a-mole, as people develop new dialects with which to communicate.”
The only way to fix this problem would be for Facebook to dramatically increase its number of human content moderators. However, the company seems fully committed to AI, meaning this could remain an issue for the foreseeable future.