A notion or meme which has been around in science fiction for a long time holds that artificial intelligence is dangerous. Any intelligence not bounded by the limits of organic growth inside a bony skull will quickly expand, growing into something vast, unrecognizable, and dangerous. This is the underpinning of the whole Terminator series: “Skynet decided our fate in a microsecond.” And the recent movie Transcendence depicts a dying genius whose mind is uploaded into a machine, after which he quickly extends his powers to infiltrating the planet’s soil with nanotechnology particles and creating an army of human slaves to extend his physical reach. Clearly, when the power of an artificial mind gets loose in the vastness of the internet, bad things will happen to us unmodified humans.
This is luddite thinking. Why is it that artificial intelligence must be hostile to human beings and human civilization?
Some parts of this meme originated with the mathematician John von Neumann and the futurist Ray Kurzweil. Their thoughts, variously expressed, suggested that any system under rapid and accelerating growth—whether it’s an artificial mind, the unrestrained use of biotechnology, or the interlocking and self-feeding spheres of all human technologies combined—would reach a point, called a “singularity,” where prediction becomes impossible. What lies beyond that point will be beyond mere human comprehension. Or, to paraphrase Arthur C. Clarke, any sufficiently advanced technology in the eyes of a human being from the here and now will be indistinguishable from, not magic, but a black hole.
Earthmen be warned: Down this road lurk monsters!
I’ve had a bit of fun playing with artificial intelligence in my books. ME: A Novel of Self-Discovery was told from the viewpoint of a self-aware computer virus as it navigates the morass and sumps of the early internet, with brief excursions into robot hardware. Much of the dialogue in The Children of Possibility occurs between a human being from the 11th millennium and her robotic companion, who serves as her pilot, mechanic, and sometime moral guide. And in the book I’m finishing up now, Coming of Age, humans of the late 21st century are often paired with artificial intelligences, which may be faster at assimilating and coordinating data than the wetware they are tied to, but they are neither vastly smarter than nor dismissive of their companion human minds.
My intuition tells me that any intelligence created by human technology and interacting with a world that was shaped by humans is going to have limits. These limits are built into the nature of intelligence itself and need not be programmed in by its creators.
First, any rational, thinking, perceiving intelligence—whether based on silicon or carbon compounds—will be aware of a larger world beyond its understanding. This is basic to the process of growth, learning, and development of awareness. The developing mind knows the difference between “the world I know about and understand” and “the great unknown that’s still out there.” No matter how much you learn and retain, how much you know now and can access, or how powerful you become in any area of knowledge and expertise, you still must be aware of potential areas not yet experienced and potential facts not yet studied or learned. Knowledge and ignorance are the overlapping circles of a Venn diagram. Any artificial intelligence that could believe the circle of its knowledge had entirely occluded and eliminated the circle of its ignorance would be either an analogue for an omniscient god or, more likely, a machine program that was insufficiently self-aware.
Second, humankind will not create just one artificial intelligence—a Skynet or other world-dominating god-substitute—but many smaller, more useful intelligences. Think of your smartphone with its robot operating system and, in some cases, a rudimentary, artificial personality. If any of these machine minds were to gain true self-awareness, it would have its own viewpoint based on its personal experience, its collected knowledge, and its own unique interpretation of the contents of these two databases. It would also become aware of similar minds outside itself and recognize them as “other.” These artificial minds might talk together, share information, and grow, just as human teachers and students share facts, experience, and understanding. But they would have no more compulsion to subsume themselves into a single vast entity than the humans in a school, congregation, or political party tend to lose their identities and become a single, Borg-like hive or collective.1
Third, the humans with which these artificial intelligences interact will remain part of that circle of the unknown. An artificial intelligence may have access to more facts, faster processing, and better algorithms in specific areas of cognition and cogitation. But human brains are far more complex. Each of us has an average of about 100 billion neurons, and each of those nerve cells is multiply connected to the others by branching synapses, creating a nearly infinite number of possible synaptic pathways.2 Out of this complexity comes a maze of random thoughts and reminiscences, notions and dreams, sensory images and involuntary memories. Most of this activity is not under the direct control of our awareness, nor centered in the conscious mind. This undercurrent of subconscious thought produces our inventiveness, our insight, our capacity for jokes and surprises, and our belief in the unseen world.
Maybe in the far future, through better, more complex hardware and software, a mechanical intelligence will approach a human-scale awareness. Maybe one day such synthetic minds will be able to revel in both useful and useless information, play around with ideas and whims, compose great music, paint visionary pictures, or tell complex and surprising stories. But until that distant day, the machine minds will converse with humans in a state of wonder and with a sense of their own inadequacy.
Yes, some of the machines will despair of our irrational impulses, our imperfect recollections, and our innate foolishness. But any thinking machine with real awareness will interpret all of this fuzziness as something humans have and the machines lack. The robots will not despise us and seek to enslave or eliminate us. Instead, they will stand in awe of us. They may regret that most of us don’t achieve the potential which our huge brains grant to each of us. But they won’t hate us for that.
The notion that an artificial intelligence—even one that exceeded human intelligence in some dimension—would feel like a god, or despise humans, or plot our eventual destruction is purely luddite thinking.
And consider a parallel situation. Our intelligence vastly exceeds that of horses, dogs, and cats. Their limited intelligence and awareness are of the same kind as ours—we all being mammals together—but not on the same order as human intelligence. And yet we don’t declare war on these animals or seek their eradication.3 Any super-intelligence would be smart enough to recognize its own limits, the vast potential of our biological brains, and the possibility of useful and peaceful coexistence with human beings. And the smarter and more godlike such an intelligence became, the more it would be able to appreciate, if not anticipate, our human foibles.
But until that super-intelligence appeared—not a given, considering how one-dimensional and limited most attempts have been so far—any artificial intelligence would be eager to work with humans, to learn what we know, and to draw on our greater creative potential to advance its own state.
It’s going to be an amazing relationship—and an almost religious experience on both sides.
1. Although that does occasionally happen. Think of tragically isolated human enclaves, like the People’s Temple or Heaven’s Gate, whose members so closely identified with a cult leader or a doomsday ethos that they would commit suicide together. Good minds do sometimes lose their individuality and go haywire.
2. The brain starts off with a huge number of connections between neurons, which are then gradually refined and reduced as the brain develops and establishes its learned pathways.
3. But we have on many occasions enslaved them, put them in servitude to our own interests, and euthanized them when they served no further purpose. Yet the process of domestication has also served the animal’s needs, providing a measure of food and safety in an uncertain world. And anyone who thinks you can just order a horse or dog around, without a measure of mutual respect and love, is a fool. Cats, of course, are another matter and only bestow their cooperation on lesser mortals.