Everyone wants to predict the future. They want to know what good things are in store, so they can anticipate them. More often, they want to know what bad things are coming, so they can prepare for them—or at least worry about them.
That’s why people take out insurance policies: so that at least they don’t have to worry too much about the bad things. The first policies were written in the 14th century in Genoa, a seafaring town, and presumably they covered cargoes in transit. The business really took off in Lloyd’s coffee house1 in London three hundred years later. Insurance was a way to get one up on the gods of misfortune, and it worked.
Insurance as a hedge against disaster has helped make the modern world. But that will be as nothing compared to the widespread use of computers, especially once artificial intelligence gets into the game. AI isn’t exactly a genie, and it’s not smart and sensitive like a generally intelligent person—or not yet. But it is good at looking over mountains of data, far more than any one human being can absorb in a day, a week, or a lifetime, and it does so without getting bored or distracted. AI is self-programming in the sense that you don’t have to ask specific questions with known parameters about your database. You just give the machine a general prompt—say, to look for trends, or find anomalies, or spot the most likely or least likely result of a certain choice—and the genie goes to work.
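To make that last idea concrete, here is a minimal sketch of what a “find anomalies” pass over business data can look like. The tool choice (scikit-learn’s IsolationForest) and the toy transaction ledger are my own illustration, not anything the essay or IBM specifies; the point is that you state only the general goal, “flag whatever looks odd,” and the machine works out from the data itself what “odd” means.

```python
# A minimal sketch of the "find anomalies" task described above.
# Assumptions: scikit-learn is installed, and the "mountain of data"
# is a table of transaction amounts; both are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Ordinary business: a thousand payments clustered around $120.
normal = rng.normal(loc=120.0, scale=15.0, size=(1000, 1))
# A few embezzlement-sized outliers mixed in.
outliers = rng.uniform(low=5000.0, high=9000.0, size=(5, 1))
amounts = np.vstack([normal, outliers])

# IsolationForest flags points that are easy to isolate from the rest,
# without being told in advance what a "suspicious" amount looks like.
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(amounts)  # 1 = normal, -1 = anomaly

flagged = amounts[labels == -1].ravel()
print(f"Flagged {flagged.size} suspicious amounts: {np.sort(flagged)}")
```

Note that no rule like “amounts over $1,000 are suspicious” appears anywhere in the code; the threshold emerges from the data, which is exactly the shift from specific, pre-parameterized queries to general prompts described above.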
Current uses of AI to write advertising copy, legal briefs, and term papers with existing language models, or to create fanciful images or amusing videos, again from existing sources—all that’s small potatoes. The real use of AI, which is still in development but peeking out at odd corners even now, is in analytics. IBM started this with their Watson platform. This was the computer that took to the air on the game show Jeopardy! and became a champion. As IBM’s CEO Arvind Krishna later explained, programming Watson took six months. They had to feed it popular culture, history, sports, music, word puzzles, and a host of other likely topics. Winning at a game show was a trivial exercise, but it taught them so much. IBM now offers Watson Analytics as a business tool.
That’s where the money in AI will be: automating the back office, the customer database, factory operations, inventory and supply chains, and every other part of the business with a superhuman intelligence that doesn’t get tired or bored, doesn’t blink … and also doesn’t get greedy and embezzle from you. It’s like having an absolutely honest wizard run your business. One that will predict the future, foresee the bad times, hedge your bets, and keep everything on track. Now and forever, amen.
Oh, and if it’s good for business, imagine what an analytical engine will do for government. Turn it loose on the tax base, the economic indicators, the money supply, court records, traffic and surveillance cameras, the prison population, and the general population. Put an AI on every node in the internet, looking for trends, anomalies, and any bad thing—“bad” in terms of whoever happens to be in control of the government, of course. Ask it to offer advice, correction, and eventually coercion. The dream of social control through “social credit” scores, rewards and punishments for adhering to or deviating from acceptable behavior, is just a few data centers, intelligent chips, and mouse clicks away.
Aside from the chilling notion of putting 1984 on steroids, think what this will do to people’s livelihoods. Right now, robots are taking over a lot of factories2—and that trend will grow as America “on-shores” the manufacturing that we once gave away to China and other low-cost, labor-intensive suppliers around the world. Human beings—the “blue collar” workers—are left to feed the machines and sweep up after them.3 With AI intruding on every business and government function, the work of managers and analysts—the “white collar” workers—likewise surrenders to the machines.
Where does this all end up? I don’t know, but I suspect nowhere good. Since we humans came down out of the trees and started scratching the dirt for a living, work has been a large part of people’s purpose in life. I’m not against making things easier for people, and certainly having robots and intelligences run the world, predict what every person needs, and make it for them would be easier. It would let us all relax in the sun, drink margaritas, write our poetry, and paint our pictures. Except not all of us have such talents and ambitions. And lying on the beach all day, every day, forever … gets boring after a while.
And the question still remains: who will be responsible—whom will we hold accountable—for the decisions, actions, and judgments of the artificially intelligent machines? The person who authorized execution of their decisions? The person who input the prompts? The people who wrote the code or loaded the platforms with this or that piece of data? But soon enough, the machines will be executing each other’s decisions, sending each other prompts, and writing and loading their own code. When this all comes together, it will be the Singularity that John von Neumann and others have warned us about. But it won’t be Skynet deciding our fate in a microsecond and starting World War III. Instead, it will be teams of machines playing pitch and catch with people’s lives, and no one knowing who did what, or how to control or stop it.
In the Dune series, an element that doesn’t get much play in the movies is the actual basis of the far future as it’s depicted: the development of human skills instead of technology. The result is the Mentats, human computers who conduct business operations and offer strategic insight; the Bene Tleilax, the amoral—in everyone else’s terms—and radical-thinking scientific innovators; the Bene Gesserit, adepts at physical and emotional manipulation and managers of the human bloodlines; and the Spacing Guild, which developed human prescience in order to find safe passage among the stars at superlight speeds. These “Great Schools” came about only after human beings almost went under because computers and robots took too good care of them, with debilitating physical and mental effects. Then an uprising against the machines, the Butlerian Jihad, saved humanity with the commandment “Thou shalt not make a machine in the likeness of a human mind.”
I’m thinking of starting such a movement myself.
1. My late wife Irene was a librarian at the Bancroft Library, U.C. Berkeley’s rare book and manuscript library. She put together the exhibits in their reading room, and one year she was showing off a collector’s rare books on the history of coffee and tea. It turns out the habit of drinking coffee and tea didn’t come to Europe until the 17th century, with regular trade routes to the Far East. Before then, people drank mostly small beer and wine during the day, because the alcohol content killed off the bacteria in their water supply. Nobody drank plain water because it made you sick—something about putting the wells too close to and downhill from the privies. So, it was sip, sip, sip all day long, from breakfast to bedtime, and this explains a lot of Shakespeare. But with coffee and tea, the water is boiled, which also kills the bacteria. And while the caffeine boosts energy and alertness, reducing everybody’s daily dose of alcohol explains a lot about the Enlightenment. This was also the time when Lloyd’s coffee house was a burgeoning center of commercial activity.
2. Just to be clear: robotics is not only the machine that makes the product, but the design and manufacturability of the product itself. Remember when cars had dashboards with separate dials mounted in different holes in front of the driver? Robotics as an art form is not just having a machine drill the holes in the metal and place the gauges, but redesigning the instrument system in the first place into a module that’s made and tested elsewhere, can be plugged into the driver’s position with one click and a multi-connector—and eventually will be replaced by an AI that controls all the functions of the vehicle itself. New manufacturing systems imply new design choices, and so the technology moves ahead.
In the same way, most processed foods these days incorporate packaging into the manufacturing stream. Nobody bakes up a million Oreo cookies, joins them with the filling, and then puts them in cold storage until it’s time to sell them. No, the ingredients go from mixing to ovens to filling to tray to airtight sleeve to cardboard box to shipping carton, all in one streamlined process. Oh, and in case you wonder why the cookies don’t go bad for six months or a year, that process includes not only making the food under sterile conditions but also hitting the packaged goods with a hard dose of radiation—usually gamma rays—which kills any bacteria. What a fascinating age we live in!
3. Don’t believe me? Watch any episode of the Canadian documentary series How It’s Made.