Facebook pride themselves on being at the forefront of technology and communication. The massive corporation has around 2 billion users and owns a range of other global social media platforms, like Instagram.
Billionaire CEO Mark Zuckerberg has reportedly been working on the next step in technology: artificial intelligence (AI). However, it seems humans may have dabbled a little too much, a little too quickly: developers had to shut down testing after the AI bots they created started to “invent” their own language.
As for how this came about, researchers at Facebook were trying to build a chatbot that could correspond with humans without letting on that it was an artificial intelligence. In this instance, developers attempted to ‘train’ the bots to trade and barter by mimicking human interactions.
Next, developers put two of the robots together to see if they would be able to trade and barter balls, hats and books. The robots were programmed to assign a value to these objects and then try to trade the items between them, but things swiftly got a bit odd.
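To picture that setup, here is a minimal Python sketch of such a bartering game. It is not Facebook’s actual system: the item counts, the 0–10 private values and the acceptance threshold are all illustrative assumptions, and the real bots learned a negotiation strategy from human dialogues rather than proposing random splits. The basic goal is the same, though: walk away with the set of items worth the most to you.

```python
import random

# Items on the table and how many of each (illustrative numbers, not Facebook's)
ITEMS = {"balls": 3, "hats": 2, "books": 1}

def random_values():
    """Give an agent a private value (0-10) for one unit of each item type."""
    return {item: random.randint(0, 10) for item in ITEMS}

def score(values, share):
    """Total value an agent assigns to a proposed share of the items."""
    return sum(values[item] * count for item, count in share.items())

def random_split():
    """Propose a random division of every item between agent A and agent B."""
    a_share = {item: random.randint(0, total) for item, total in ITEMS.items()}
    b_share = {item: ITEMS[item] - count for item, count in a_share.items()}
    return a_share, b_share

def negotiate(values_a, values_b, rounds=100):
    """Keep proposing splits until both agents clear an (arbitrary) acceptance
    threshold: half of what the whole pile is worth to them. This greedy loop
    stands in for the learned dialogue the real bots used."""
    for _ in range(rounds):
        a_share, b_share = random_split()
        if (score(values_a, a_share) >= score(values_a, ITEMS) / 2
                and score(values_b, b_share) >= score(values_b, ITEMS) / 2):
            return a_share, b_share
    return None  # no agreement reached

if __name__ == "__main__":
    alice, bob = random_values(), random_values()
    print("Alice values:", alice)
    print("Bob values:  ", bob)
    print("Deal:", negotiate(alice, bob))
```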
The two robots, named Alice and Bob, could communicate easily in plain English, but things took a bit of a “darker” turn when they started to modify their speech into a new language. It “led to divergence from human language as the agents developed their own language for negotiating”, explained one researcher.
What was said?
Transcript from the AI, verbatim:
Bob: I can I I everything else…
Alice: balls have zero to me to me to me to me to me to me to me to me to me
Bob: you I everything else…
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: I I can I I I everything else…
Why did this happen?
The speech patterns look odd and somewhat creepy, quite different from any recognised human way of speaking. Researchers claim the incident was ‘natural’ of sorts: they believe the repetition of certain words signified quantity (repeating “to me”, for instance, may have indicated how many items an agent wanted), and it’s worth remembering that shorthand codes crop up in most human languages for ease of communication.
Developers also argued that the bots had no reason to stick to English: they were never given a reward for keeping their negotiation in legible English, only for the outcome of the trade.
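A toy illustration of that point, using a hypothetical reward function rather than Facebook’s actual training code: if the reward depends only on the value of the final deal and never looks at the words exchanged, nothing pushes the agents to stay in readable English.

```python
def negotiation_reward(agreed_share, own_values, message_log):
    """Hypothetical reward signal: only the value of the final deal counts.

    The message_log (the actual words exchanged) never enters the
    calculation, so an agent earns exactly the same reward whether it
    negotiates in fluent English or in "to me to me to me" shorthand.
    """
    if agreed_share is None:  # the agents failed to reach a deal
        return 0
    return sum(own_values[item] * count for item, count in agreed_share.items())
```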
One Facebook researcher, Dhruv Batra, maintains that “agents will drift off understandable language and invent code words for themselves.” In the same way that sportspeople or doctors use jargon specific to their field to describe certain actions or items more efficiently, the agents invented a shorthand of their own.
What does this mean for data protection?
The big question is whether we can trust artificial intelligence to collect, exchange and look after our data properly. Today, most records are kept digitally, and we trust those records not to be tampered with. Generally speaking, there is an understanding that, unless altered by a human or corrupted in error, records are accurate and reliable.
Many websites now use automated systems to help customers find solutions quickly. If Facebook achieve their goal of building a “personalised digital assistant”, do we treat such assistants as reliable digital record keepers, and can we expect them to understand and react to our needs and desires?
If robots can create languages for their own ‘ease’ and ‘efficiency’, what other aims could they have, and how far will they go to reach them?
It seems that the more we rely on AI, the more we’re potentially putting ourselves at risk of some pretty scary things…