Buddhist prayer wheels
Note Well: It was a powerful temptation to turn this week’s blog into a meditation on the 2016 presidential election, which has been marked by an unbelievable pair of candidates and an historic last-minute upheaval. Those who know me well know my position and have already heard my arguments. For the rest, what has been done is now done, and let’s move forward.
In a recent post1 I considered the ways that two systems, a human being and a robot, would approach the task of hitting a baseball. At the most basic level, both would observe the pitcher’s release and the flight of the ball and then apply either a learned response or an algorithm to interpret the ball’s actual trajectory and select the ideal swing. The difference is that the robot would wait patiently to perform this task, while the human being—with so much else going on in his or her body and mind—would fidget, glance around, take practice swings, and remain physically and mentally ready for so much more to happen than simply meeting the oncoming ball with the barrel of the bat.
Having just observed the major league playoffs and the World Series, with their ups and downs,2 I could see another difference between humans and machines—or the artificial intelligence that will run them. Humans have an excess of spirit that no analytical intelligence has yet attained. We express this spirit in terms of expectations, beliefs, hopes and fears, confidence and insecurity—all of which take known or discoverable facts into account and yet sometimes cause us to think and believe otherwise.
This comes up most strongly in differences between the commentary from the announcers and the action on the field. The men and women in the broadcast booth today have instant access to a fantastic computer memory. They not only know, and can tell you, which teams have met before and what the outcomes were. No, that’s just the sort of statistic an old-time radio announcer could look up in a sports almanac. Today’s broadcaster can tell you how many times and when each batter has faced each pitcher, how many balls and strikes the pitcher has thrown against him, and how many hits for how many bases, or runs batted in, or home runs the batter has made. And these statistics go back for years and across the player’s affiliation with every team in his career. If a batter makes an unusual home run—or an outfielder makes an unusual diving catch—the announcer can find a similar instance from play earlier in the season, or even from years ago, and run a video clip of it before the next player comes to bat.
All of this reminds me of Han Solo in the Star Wars movies: “Never tell me the odds!” The past is only prelude. And, as the financial disclaimers say, “Past performance is not a predictor of future results.” Insurance actuaries, baseball announcers, and robots might live and die by statistical nuance. Human beings almost never do. “I can win this one!” “I can make that jump!” “I can beat that guy!” “This time will be different!” This is the spirit that the healthy human mind and the instinct for survival generate when faced with daunting, difficult situations and long odds.
I imagine that, to achieve something like this with an artificial brain, the designers would have to insert a counterfactual circuit that kicks in whenever the algorithm produces negative or undesirable outcomes. Such a circuit would amend or ignore previous experience, or accentuate only certain aspects of that experience that would tend to support a positive outcome. “Yes, eight times out of ten I have struck out against this pitcher, but twice I got a hit—and one of them was a home run.” It would not do to change the performance algorithm itself, because then all sorts of unexpected actions might result, and the system might never find its way back into equilibrium. No, the adjustment would come in the decision-making process: to go ahead and try when the algorithm and previous experience predict a negative outcome.
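A minimal sketch of what such a circuit might look like (every name, weight, and threshold here is hypothetical, my illustration rather than any real design): the performance algorithm still reports the honest odds from the full record, while a separate decision layer overweights the favorable slices of past experience before deciding whether to try at all.

```python
def predicted_success_rate(history):
    """The plain algorithm: estimate the odds from the full record.

    history is a list of past outcomes, 1 for success and 0 for failure.
    """
    return sum(history) / len(history)


def hopeful_decision(history, weight_on_wins=5.0, threshold=0.5):
    """The counterfactual layer: overweight past successes, then decide.

    The underlying estimate is left untouched; only the go/no-go
    decision is biased toward the outcomes that support success.
    """
    wins = sum(history)
    losses = len(history) - wins
    optimistic_odds = (wins * weight_on_wins) / (wins * weight_on_wins + losses)
    return optimistic_odds >= threshold


# Eight strikeouts and two hits against this pitcher:
history = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
print(predicted_success_rate(history))  # 0.2 -- the sober estimate
print(hopeful_decision(history))        # True -- swing anyway
```

Note that the counterfactual layer never rewrites the record or the estimator; it only changes the decision made on top of them, which is exactly why the rest of the system can stay in equilibrium.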
Computer programmers would be loath to design and install such a circuit. Right now, artificial intelligences are designed for maximum reliability and caution. You want the program that routes your request through the bowels of Amazon.com’s order system to read the tag, make the selection, send the bill, and ship the product. If the product is out of stock, on back order, or no longer available, you don’t want the computer system to engage some kind of I-Can-Do-This! circuit and make an unauthorized substitution. The system is supposed to flag anomalies and put them aside for decision either by a human being or a higher-level system that will query the customer for a preferred choice.
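In outline, that flag-and-escalate behavior might look like this (a toy sketch of my own; nothing here reflects any real order system or API):

```python
def route_order(sku, inventory):
    """Ship when the item is in stock; otherwise flag the anomaly.

    The system never engages an I-Can-Do-This! substitution on its own;
    it sets the order aside for a human or a higher-level system.
    """
    if inventory.get(sku, 0) > 0:
        return {"action": "ship", "sku": sku}
    return {"action": "escalate", "sku": sku, "reason": "out of stock"}


inventory = {"A100": 4, "B200": 0}
print(route_order("A100", inventory))  # shipped as ordered
print(route_order("B200", inventory))  # set aside for a human decision
```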
You don’t want the expert system that is reading your blood tests and biometrics, consulting its database of symptoms linked to causes and disease types, and making a diagnosis to suddenly engage an It’s-All-For-the-Best! circuit and opt for diagnosing a rare but essentially benign condition when the patient is staring a fully developed, stage 4, metastasized cancer in the face. If there is hope to offer, you want the expert system to display and rank all the possibilities, then let a human doctor or a higher-level system explain their meanings and the correct odds to the patient.
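The display-and-rank approach is simple to state in code (an illustration only; the condition names and probabilities are invented for the example): return every candidate with its estimated probability, in order, and commit to none of them.

```python
def rank_possibilities(candidates):
    """candidates maps condition name -> estimated probability.

    Return every possibility, ranked from most to least likely,
    rather than a single machine-chosen verdict.
    """
    return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)


findings = {
    "benign cyst": 0.05,
    "chronic infection": 0.15,
    "metastasized cancer": 0.80,
}
for name, p in rank_possibilities(findings):
    print(f"{p:.0%}  {name}")
```

The human doctor, not the circuit, then decides how to frame the hope.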
You don’t want a self-driving car to look at a gap in traffic that’s just millimeters wider than the car’s fenders and, ignoring deceleration rates, cross winds, and tire traction, switch to the We-Can-Make-This! circuit and lunge for the gap. Not ever—and not even as a possible option that the system would present to the human driver, who might suddenly want to put his or her hands on the wheel and make a wild and death-defying correction. When a ton or two of moving metal is involved, and multiple lives are at stake, you always want the system to err on the side of caution and safety.
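Erring on the side of caution here amounts to demanding a margin, not a bare fit. A sketch, with margin values that are purely illustrative rather than anything drawn from a real autonomous-driving stack:

```python
def gap_is_safe(gap_width_m, car_width_m,
                crosswind_margin_m=0.3, traction_margin_m=0.2):
    """Refuse any maneuver whose clearance cannot absorb the uncertainties.

    The gap must exceed the car's width plus allowances for crosswind
    drift and tire slip; millimeters to spare is an automatic no.
    """
    required = car_width_m + crosswind_margin_m + traction_margin_m
    return gap_width_m >= required


print(gap_is_safe(1.81, 1.80))  # False -- the We-Can-Make-This! answer is refused
print(gap_is_safe(2.40, 1.80))  # True -- clearance covers the margins
```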
Perhaps human beings, when left to operate the order system, make the expert diagnosis, or take the steering wheel, will put hope before either experience or caution and then select the substitute product, offer the most cheerful guidance, or lunge for the gap. But human society has also instituted programs of training and ethics to temper an excess of spirit. We expect human professionals to react more like machines: rules based, odds driven, and cautious. And we expected that of ourselves long before anyone thought of turning complex operations and decisions over to mechanical systems.
But that is in dealings with other human beings, who put their trust in another person’s performance accuracy and decision power to achieve outcomes of life-and-death or even mere customer satisfaction. When dealing for our own sakes—when confronting the possibility of receiving a surprise package, or beating a cancer diagnosis, or squeezing into a narrow gap—we feel at liberty to err on the side of hope.
And we certainly expect our team, our players, and ourselves to express that excess spirit and make a gallant try when life and safety are not on the line. In a baseball game, the batter might know the odds of hitting against a tough pitcher, but who would expect him to pause, reflect on past performance, step out of the box, and refuse to even try? One team might have lost to the other a dozen times in the past, but no one expects them to give up and forfeit. Spirit, hope, and confidence in the face of long odds are what make the rest of us cheer harder when the batter makes a home run or our team wins against the moneyline bet. They let us forgive more easily when the past does indeed turn out to be a predictor of performance.
And when our own life and safety are on the line—when you must jump from the third floor or stay on the ledge and burn, when the gap between two trucks colliding ahead of you is no wider than your fenders, when the doctor pronounces a disease that has every chance of taking your life—then the excess of spirit, the can-do attitude, the refusal to follow the odds are survival traits. When death is likely but not certain, then it’s best to err on the side of hope and take action. We make up stories about this, and in every story the reader wants the hero to strive against the odds. He or she may not succeed—the actual outcome is left to fate and the author’s skilled hand. But for the hero to face reality and give up before the crisis point would not make a good story. Or it would be the story of a depressed or insecure person who is no sort of hero, no role model, who doesn’t deserve to be the focus of a story in the first place.
Excess of spirit is not just an oddity that we find in the human psyche; it’s something we expect from any healthy person.
1. See Excess Energy from July 24, 2016.
2. Yes, and my hometown Giants went down in the fourth game of the National League Division Series, when the bullpen collapsed in the top of the ninth inning. And we had such hopes.