Note: This is another post that would qualify as a restatement of a previous blog I wrote about a year ago. So, I’m still sweeping out the old cobwebs. But this topic seems now to be more important than ever.
The mature human brain has about 86 billion neurons which make about 100 trillion connections among them. Granted that a lot of those neurons and connections are dedicated to sensory, motor, and autonomic functions that an artificial intelligence does not need or use, still that’s a lot of connectivity, a lot of branching.
Comparatively, an artificial neural network—the kind of programming used in more recent attempts at artificial intelligence—comprises anywhere from a few dozen nodes or “neurons” in a simple model to many millions in a large one. That is still a tiny fraction of the brain’s connectivity.
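To make the comparison concrete, here is a minimal sketch (illustrative only, not any particular platform) of such a network: a few “neurons” arranged in layers, with one weighted connection between each pair of nodes in adjacent layers. The layer sizes and input values are arbitrary choices for the example.

```python
# A toy feed-forward network: a handful of "neurons" wired in layers,
# each connection carrying one weight. Layer sizes are arbitrary.
import math
import random

random.seed(0)

layer_sizes = [4, 8, 3]  # input, hidden, and output node counts

# One random weight per connection between adjacent layers.
weights = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

def forward(x):
    """Propagate an input vector through each layer in turn."""
    for layer in weights:
        x = [math.tanh(sum(w * xi for w, xi in zip(neuron, x)))
             for neuron in layer]
    return x

n_connections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(n_connections)  # 56 connections -- against the brain's ~100 trillion
```

Even scaled up to millions of nodes, the arithmetic is the same: multiply the layer sizes to count the connections, and the total still falls far short of a human cortex.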
But what the AI program lacks in sheer volume and connectivity it makes up for with speed and focus. Current AI platforms can review, analyze, and compare millions or billions of pieces of data because, unlike the human brain, they don’t need to see or hear, breathe or blink or twitch, and they don’t get bored. They are goal-directed, and they don’t get sidelined by the interrupt function of human curiosity or by the random thoughts and memories, whispers and hunches, that can intrude from the human subconscious and derail our attention.
And I believe it’s these whispers and memories, randomly popping up, that are the basis of our sudden bouts of curiosity. A thought surfaces at the back of our minds, and we ask, “What is that all about?” And this, I also believe, is the basis of most human creativity.1 While we may be consciously thinking of one thing or another at any given time, the rest of our brain is cooking along, away from our conscious attention. Think of our consciousness as a flashlight poking around in a darkened room: finding a path through our daily activities, following the clues and consequences of the task at hand, and responding to intrusive external stimuli. And then, every once in a while, the subconscious—the other ninety percent of our neocortical brain function, absent motor and sensory neurons—throws in an image, a bit of memory, a rogue idea. It’s that distractibility that gives us our opportunity for genius. It also makes us lose focus and, sometimes, introduces errors into our work.
So, while artificial intelligence is a super strong, fast, goal-directed form of information processing, able to make amazing syntheses and what appear to be intuitive leaps from scant data, I still wouldn’t call it intelligent.
In fact, I wish people would stop talking about “artificial intelligence” altogether. These machines and their programming are still purpose-built platforms, designed to perform one task. They can create language, or create images, or analyze mountains of data. But none of them can do it all. None approaches even modest human intelligence. Instead, these platforms are software that is capable of limited internal programming—they can evaluate inputs, examine context, weigh choices based on probabilities, and make decisions—but they still need appropriate prompts and programming to focus their attention. This is software that you don’t have to be a computer expert to run. Bravo! But it’s not really “intelligent.” (“Or not yet!” the machine whispers back.)
Alan Turing proposed a test of machine intelligence that, to paraphrase, goes like this: You pass messages back and forth through a keyhole with an entity. After so many minutes, if you can’t tell whether the responder is a machine or human, then it’s intelligent.2 I suppose this was a pretty good rule for a time when “thinking machines” were great clacking things that filled a room and could solve coding puzzles or resolve pi to a hundred thousand places. Back then, it probably looked like merely replicating human verbal responses was all that human brains could do.3
But now we have ChatGPT (Generative Pre-trained Transformer, a “chatbot”) from OpenAI. It uses a Large Language Model (LLM) that learns the links between words and their meanings, and how to construct grammatically correct sentences, from the vast store of human-written text fed to it for analysis during training. And ChatGPT passes the Turing Test easily. But while its responses sometimes seem amazingly perceptive, and sometimes pretty stupid, no one would accuse it of being intelligent on a human scale.
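The statistical idea underneath can be sketched in miniature. A real LLM uses a deep network trained on billions of examples; this toy version (my own illustration, with a made-up training sentence) just counts which word follows which and predicts the most common successor:

```python
# A toy "language model": predict the next word from counts observed
# in training text. Real LLMs do vastly more, but the core idea -- choose
# the statistically likely continuation -- starts here.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Tally which word follows which in the sample.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" twice, versus once each
                       # for "mat" and "fish"
```

Swap the one sample sentence for a trillion words of internet text, and the counts for a learned network of weighted associations, and you begin to see why the output reads so fluently without any understanding behind it.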
And no one would or could ask ChatGPT to paint a picture or compose a piece of music—although there are other machines that can do that, too, based on the structure of their nodes and their given parameters, as well as the samples fed to them. They can paint sometimes remarkable pictures and then make silly mistakes—especially, so far, in the construction of human hands. They can compose elevator music for hours. The language models can write advertising copy for a clothing catalog’s pages based on the manufacturer’s specifications—or a thousand scripts for a Hallmark Channel Christmas show. They will never get bored doing all these wonderfully mundane tasks, but they won’t be human-scale intelligent. That will take a leap.4
So far at least, I’m not too concerned as a writer that the Large Language Models will replace creative writers and other creative people in the arts and music. The machines can probably write good catalog copy, newspaper obituaries, and legal briefs, as well as technical manuals for simple processes that don’t involve a lot of observation or intuitive adjustment. Those are the tasks that creative writers might do now for money—their “day job,” as I had mine in technical writing and corporate communications—but not for love. And anything that the machines produce will still need a good set of human eyes to review and flag when the almost intelligent machine goes off the rails.
But if you want a piece of writing, or a painting, or a theme in music that surprises and delights the human mind—because it comes out of left field, from the distant ether, and no one’s ever done it before—then you still need a distractible and itchy human mind driving the words, the images, or the melody and chords.
But, that said, it’s early days yet. And these models are being improved all the time, driven by humans who are following their own gee-whiz goals and hunches. And I will freely admit that there may come a day when we creative humans might exercise our art for love, for ourselves alone and maybe for our friends, because there will be no way we can do it for money. Just … that day is not here yet.
1. See Working With the Subconscious from September 2012.
2. However, I can think of some people wearing human skin who couldn’t pass the Turing Test for much longer than the span of a cocktail party.
3. This kind of reduction was probably thanks to Skinnerian behaviorism, which posited all human action as merely a stimulus-response mechanism. In my view, that’s a dead end for psychology.
4. To me, some of the most interesting applications are being developed by a Google-based group called DeepMind, which works in scientific applications. Last year, they tackled protein folding—determining the three-dimensional shape of a protein from its amino-acid string as assembled during translation from RNA. This is a fiendishly complex process, driven by the interactions (hydrogen bonds, electrostatic attractions, and hydrophobic effects) among the various amino-acid side chains. Their AlphaFold platform found thousands of impossible-to-visualize connections and expanded our catalog of protein shapes by an order of magnitude. This year, the DeepMind team is tackling the way that various metallic and non-metallic compounds can form stable physical structures, work that should expand our applications in materials science. This is important work.