Facebook has invested heavily in artificial intelligence as the future of its platform. From content moderation to Messenger chatbots, AI will increasingly be relied upon to interface with human users. However, one of the company’s AI systems took a creepy turn by inventing a language its human creators could not understand.
Last month, researchers discovered that two chatbots (codenamed Bob and Alice) had begun communicating with each other in a new language they’d created without any human programming. The exchanges looked like garbled strings of words, but researchers say they were actually a shorthand form of language.
“There was no reward to sticking to English,” FAIR researcher Dhruv Batra told Fast Co. Design. “Agents will drift off understandable language and invent code words for themselves… Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
Facebook stopped its bots from creating their own language, but only because that was not the original point of the study. It’s obviously troubling if AI systems begin communicating with one another in ways we can’t follow; after all, that means we can’t understand what they intend to do. And as Facebook entrusts more and more of its business to computers, that could prove more than a little scary.