Artificial intelligence is already inventing languages and lying? Uh-oh.

Last week’s skirmish between visionary inventor Elon Musk and Facebook founder Mark Zuckerberg over the dangers of artificial intelligence (AI) was entertaining if not especially nuanced or specific. Musk said humans should fear AI. Zuckerberg said there’s no reason for such fear. Musk said Zuckerberg doesn’t grasp how the technology is likely to evolve.

One thing’s for sure: The Facebook tycoon has some explaining to do. You don’t have to be paranoid to be alarmed by two recent developments in artificial intelligence research at Zuckerberg’s own company — and Facebook may in fact have been unnerved by one of the breakthroughs.

The first came in June, when Facebook issued a report on its efforts to train AI “chatbots” to handle a broad range of conversations with humans, including negotiating transactions. Recode reported that ...

Facebook says that the bots even learned to bluff, pretending to care about an outcome they didn’t actually want in order to have the upper hand down the line. “This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals,” reads Facebook’s blog post.

That’s a pretty benign explanation. Here’s a less benign version: Artificial-intelligence-driven bots have independently figured out that they can use deceit to get their way with humans — and they feel no obligation to be honest with us. Wrestle with that idea for a while, and Musk’s AI fears seem absolutely reasonable. Such behavior doesn’t fit with legendary science-fiction author Isaac Asimov’s Three Laws of Robotics, first printed in 1942:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The second breakthrough — involving the same Facebook chatbot research program — was detailed on tech blogs last month before being picked up and hyped in the past week by the mainstream media. This account is from the London Daily Mirror:

Two robots — created by Facebook — have been shut down after developing their own language.

It happened while the social media firm was experimenting with teaching the “chatbots” how to negotiate with one another.

During tests, they discovered the bots — known as Alice and Bob — managed to develop their own machine language spontaneously.

[Researchers] had given the machines lessons in human speech using algorithms then left them alone to develop conversational skills.

But when the scientists returned, they found that the AI software had begun to deviate from normal speech and was using a brand new language created without any input from its human supervisors.

Alice and Bob spoke in a pidgin English that made sense to them but doesn’t make sense to humans. Bob: “I can can I I everything else.” Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

A company official told the fastcodesign.com website that Facebook shut down Bob and Alice because it needed its chatbots to interact with humans by speaking in English, not their own invented lingo. But it’s easy to assume that fear at least partly drove the decision — and it’s no wonder that the report fascinated and probably scared so many people.

Yet there’s more to this story. As tech geeks pointed out, this wasn’t the first time AI invented its own language — and the most prominent example involves a far more staggering accomplishment than anything Alice and Bob achieved.

This is from a Wired magazine account in November 2016 about how artificial intelligence has dramatically improved Google Translate:

In September, the search giant turned on its Google Neural Machine Translation (GNMT) system to help it automatically improve how it translates languages. The machine learning system analyzes and makes sense of languages by looking at entire sentences — rather than individual phrases or words.

Following several months of testing, the researchers behind the AI have seen it be able to blindly translate languages even if it’s never studied one of the languages involved in the translation. ...

However, the most remarkable feat ... isn’t that an AI can learn to translate languages without being shown examples of them first; it’s the fact it used this skill to create its own “language.” “Visual interpretation of the results shows that these models learn a form of interlingua representation for the multilingual model between all involved language pairs,” the researchers wrote in the paper.

An interlingua is an artificial intermediary language, a shared representation that sits between the languages being translated. In this case, Wired reported, the interlingua was “used within the AI to explain how unseen material could be translated.”

So what else is going on inside the Google Neural Machine Translation system as it translates 103 languages millions of times an hour? No one can know.

It may be a bit melodramatic — or absurdly melodramatic — to bring up an ominous bit of history, but here goes: Before the U.S. tested the first atomic bomb in July 1945, Nobel Prize-winning physicist Arthur Compton, a leader of the Manhattan Project that developed the weapon, feared the test would trigger a chain reaction that could incinerate the planet. American author Pearl S. Buck, also a Nobel Prize-winner, wrote about this in 1959:

During the next three months scientists in secret conference discussed the dangers ... but without agreement. Again Compton took the lead in the final decision. If, after calculation, he said, it were proved that the chances were more than approximately three in 1 million that the Earth would be vaporized by the atomic explosion, he would not proceed with the project. Calculations proved the figures slightly less — and the project continued.

Of course, the feared chain reaction never happened, nor even came close, even as far more powerful nuclear bombs were built and tested. Now the very idea that U.S. officials worried about the possibility 70-plus years ago is mocked by scientists.

But is there a chance that when Google turned on its Neural Machine Translation system 11 months ago, it started a chain reaction that could end up producing self-aware computer systems with no particular loyalty to or affection for mankind?

Who knows. But I bet the odds are a lot higher than three in 1 million.

Reed, who thought it would be absurdly melodramatic to mention Skynet, is deputy editor of the U-T editorial and opinion pages. Email: chris.reed@sduniontribune.com. Twitter: @chrisreed99

