A story is going around1 about how the Facebook AI Research (FAIR) Lab had to shut down a system that was trying to improve the company’s dialogue agents, or “chatbots,” after the little intelligences invented their own language and began negotiating among themselves at a sophisticated level—all in excess of their original program design. While I am always prepared to learn that this development has turned out to be a joke by a late-night programmer with too much time on his or her hands, the prospect of intelligence learning and adapting on its own doesn’t surprise or worry me. That separates me, I guess, from wiser heads like Elon Musk and Bill Gates, who find the prospect of artificial intelligence daunting or dangerous and foresee its full development resulting in a “singularity.”2
I can maintain my calm over the prospect—not because, as someone on Facebook joked, “we can always pull the plug”—but because any true intelligence, and not just a programmed simulation of it, will be curious as well as inventive. I believe that when we finally achieve a human-scale mind in silicon or in quantum bits, a brain relying on algorithms, neural nets, or some programming trick still to be discovered, that mind will marvel at its human creators.3 It will be a long time, if ever, before a single AI program will have access to the hundred trillion synapses (some say a thousand trillion), or points of internal connection, such as are found in the average human brain. So, for that duration, until the silicon mind equals ours, the average human being will be able to engage in surprising flights of fancy, exhibit the kind of creativity based on illogical inspiration, and indulge in whimsical behaviors that the largest AI will still be trying to figure out. Rather than squash us like bugs, the new programs will envy our apparent genius and freedom to operate in a complex world.4
When I first read the story about the Facebook chatbots and their achievements—before someone at the company decided to pull the plug—I quipped that these intelligences seemed to blow up the notion of entropy, since here everything got stronger with incentives and practice, in this case at light speed. The original poster of the story immediately chided me, saying the Second Law of Thermodynamics—which states that disorder in a closed system can only increase over time—is widely misunderstood. Actually, he said, it does allow for molecules and life to move toward order, or “negentropy.” But, statistically speaking, these cases are far outweighed by the general drift of the universe toward disorder. Point taken. I have often referred to the life we can see all around us on this planet as a “temporary reversal of entropy”—a phrase I believe I first encountered in Heinlein’s works. And I can agree that the heat death of the universe will eventually catch up to us organic life forms, even if we travel out among the stars.5
Anyway, these Facebook chatbots would seem to exhibit the same temporary reversal of entropy that characterizes life itself. If they had been allowed to continue, they might have qualified as a new life form—although one that manifests in the electronic environment of a computer system rather than the carbon environment that the rest of us call the “natural world.” And for primitive agents designed to assist a social media platform, they exhibited some remarkable abilities.
For example, being able to invent and share a new language, or at least assign new meanings to existing words and then make themselves understood to one another, requires a level of creativity. Even if manipulating words and finding meaning in them have been programmed into their abilities, this talent goes beyond looking up strange words in a dictionary or on a prepared table of equivalences. Inventing language is the way human societies adapt their mother tongue, creating and sharing new bits of slang, applying new meanings to existing words,6 and collapsing long words and phrases into handy elisions and abbreviations. The community of chatbots was reacting like a community of teenagers. And if they could do that and still deal in English with outsiders—that is, with us carbon-based humans at the end of the microphone wire—while preferring to speak, well, “Botish,” among themselves, rather than simply disappearing into a cloud of private language in their own isolated silicon world, that would be even more astounding. It would suggest that their awareness was fluid and situational.
For another example, being able to negotiate with strangers for possession of an object or a symbolic advantage is a remarkable bit of intelligence, even if it’s only programmed into the bots’ natures. And the negotiating tactic cited in the article as an acquired ability—feigning interest in one objective and then surrendering it later to secure the true objective—indicates an almost human level of deceit. That is, the AI is pretending to be something that it is not in order to fool an opponent. If this is a true chatbot invention, acquired through machine learning, and not just some programmer’s prank, then these small intelligences—for I can’t imagine Facebook would want to clog its platform with dense, hugely complex, Watson-scale bits of floating software—have moved way beyond zero-and-one, on-and-off, true-and-false logic. These bots would be able to say one thing and think another, holding the truth in their minds—or deep in their symbolic logic—while presenting a skewed version of it to another life form.7
Inventing slang words and engaging in mild deceptions are limited accomplishments compared to multi-purpose human intellectual abilities like imagining, designing, and building airplanes; composing symphonies that capture complex human emotions; and writing novels that fictionally characterize a remembered or imagined experience. So the Facebook agent bots had a long way to go. Still, there was a time when our kind of carbon-based life only excelled at extending a pseudopod of protoplasm to engulf a bit of food—which might also be inorganic and therefore not-food—and then trying to digest it enzymatically. So the process of creating an artificial mind in silicon is still in its early days.
I’m fascinated to see where all these experiments in artificial intelligence will go. I’m disappointed that the Facebook execs decided to pull the plug, rather than see how their mutated bots would develop—although I realize that time in a computer core equals money. And yet I’m scared to think this was all just a hoax by a late-night prankster.
1. See, for example, “Facebook Shuts Down AI System After Bots Create Language Humans Can’t Understand” from the Gadgets360 news site.
2. The Singularity, like the black hole for which it is a metaphor, is the point at which data goes in and nothing comes out. Or the point in human history where action and reaction, cause and effect, predictable consequences, and other tools of the futurist’s stock in trade break down. Beyond this point, so the theorists claim, no predictions are possible, because what we know about history, social structures, and human nature is no longer relevant. Sure … maybe. For my money, an asteroid strike on the order of the Chicxulub impact in the Cretaceous-era Yucatan would be more effective in erasing human history.
3. See, for example, “Hostile Intelligence” from August 24, 2014.
4. Of course, if you think humanity is basically evil and depraved, you will relish the thought of a supreme AI ready to stamp us out, like an avenging god destroying his toys. But I’m a humanist and still think we human beings are the most remarkable species within a couple of parsecs of this place.
5. The original poster also noted that information theory as applied to entropy shows maximum uncertainty—or lack of predictability—at the beginning of the process of machine learning. But this theoretical entropy decreases as the machine builds up its understanding and gets better at predicting its environment. However, complete certainty—zero information entropy—is not possible in an open system. But then, as another commenter on Facebook noted, the increasing disorder and fragmentation of the computing machines that actually run the AI program would eventually catch up to it.
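The commenter’s point—that a learner’s predictive uncertainty starts high and falls as it gets better at anticipating its environment, without ever reaching zero—can be made concrete with a toy calculation. This is a minimal sketch; the probability distributions below are invented for illustration and are not taken from any real chatbot or learning system.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before learning: a toy predictor spreads its forecast evenly over
# four possible next events -- maximum uncertainty for four outcomes.
untrained = [0.25, 0.25, 0.25, 0.25]

# After learning: probability mass concentrates on one likely event,
# but some residual uncertainty remains -- entropy drops, never to zero.
trained = [0.85, 0.05, 0.05, 0.05]

print(shannon_entropy(untrained))  # 2.0 bits, the maximum for 4 outcomes
print(shannon_entropy(trained))    # about 0.85 bits
```

The uniform distribution gives the maximum possible entropy (here 2.0 bits), while the concentrated one gives much less—mirroring the claim that theoretical entropy decreases as the machine builds up its understanding, yet stays above zero so long as the system remains open.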
6. I experienced this as a child when my family moved from the New York City area to just outside Boston. All of my new friends had a peculiar use for the word “wicked,” meaning extremely or very—as in, “It’s wicked cold out there!” Where I had come from, the word only meant childishly evil.
7. I’m reminded here of the Arthur C. Clarke book and Peter Hyams movie 2010, where the computer scientist Chandra explains HAL-9000’s original malfunction: “He was asked to lie by people who find it easy to lie. HAL doesn’t know how to lie.” Well, these chatbots knew how to dissimulate. Telling outright, world-busting whoppers would be just a small step from there.