The Human Condition:

The Dollar Value of Technical Advances – May 5, 2013

I’ve written several times before that we’re all on an express escalator to the future—one that we may not even be able to imagine.1 Technology has a way of building on itself. Whatever one person may invent for one purpose, another may improve upon, adapt for some other purpose, or reinvent using different techniques and processes. This is how swords become plowshares, as anyone who has studied the history of technology since the Second World War will attest. All it takes is free minds prepared with the right education, an open flow of information, and the natural human desire to make something new—usually with a monetary reward attached to the invention.

How can we place a value upon this process? Not upon the single invention, but upon the whole chain of creation and re-creation that it may ignite? Well, we can’t—we can only dream.

When Thomas Edison, in competition with other creative thinkers, worked on the electric light2—finding a filament that would glow with electric current inside a glass bulb—he could pretty easily calculate the riches that awaited a successful product. Simply take the nation’s annual investment in whale oil, natural gas, and candles used for lighting purposes, beat the costs of these various technologies, and rake in the difference.

But Edison would have been thinking only of area lighting—for one or more rooms in the home, for the front porch, for the street beyond, possibly as some kind of beam to light the way ahead of a carriage at night, and for searchlights and lighthouses. At the time, he probably never imagined the thousands of other uses to which a switchable light might be put: advertising signs and marquees, projectors of still and moving images, or indicators signaling the state of circuits in complex technical environments like control panels. Some of those applications—like displays and signage, or the kerosene-burning “magic lantern” projector—already existed in other forms, and Edison or some later inventor might have foreseen the possibilities for development and expansion once a switchable light source was perfected. But other applications—like the flashing signal bulbs on a telephone switchboard or in a power plant control room—could not exist until the sealed and undying electric light bulb made them possible.3

The point is, Edison might have predicted fairly accurately the value of electric bulbs in house and street lighting, but he could only guess—and probably guess low—the value of a switchable source in all the other applications it has enabled over the years. Whole industries have grown up around the light bulb and its offshoots in the cathode ray tube and the light-emitting diode.

Consider, then, the potential value of the internet. It originated as a convenience for scientists working on government projects who wanted to share research findings. Rather than waiting to publish their papers in a journal somewhere, they could put their results out on the Advanced Research Projects Agency Network (ARPANet), which was initially funded by the U.S. Department of Defense. That was back in the 1960s, before the invention of the microprocessor,4 when scientists were the only hands-on users of computers. Linking their basements full of computing power to share a manageable amount of data—under some conditions of access control and secrecy—seemed like a good idea.

Anyone working on the protocols for connection, packet switching, transmission control, and other processes required to wire up an undefined number of separate computers and then trade files among them could clearly see the immediate benefits. This was a faster—virtually instantaneous—way to publish large amounts of information among a large population of users than printing the information in periodicals or bound books and maintaining them on the shelves of one or more libraries. The network was like printing one book and making it appear simultaneously on the shelf of every library in the network.

But did anyone, in those days when computers were data-hoarding dragons that lived in the basement, with core memories the size of walk-in closets and disk drives the size of washing machines, foresee the ultimate impact? Could they imagine that one day, when computers were the size of typewriters—or even the size of a bar of soap—their network would put the information resources of practically the entire planet at the fingertips of virtually every man, woman, and child? And more than a library function, making hard facts and electronic books available worldwide, did anyone guess that their network would ultimately support a planetwide commerce in soft goods like music, movies, novels, and newspapers; tickets to entertainment and sporting events; reservations for restaurants, hotels, airlines, and other modes of travel; transactions between banks and investment advisors and their customers, as well as direct trading in shares and commodities; and the advertising, display, sales, payment, and shipping of every other product and commodity? Did they understand that their network would link every person in a massive exchange of messages, confidences, whispers, and rumors? Did they guess that simply managing the flow of all this information and conversation with search engines like Google and social exchanges like Facebook and Twitter would launch some of the largest, most profitable enterprises on the planet? Or that their inventors and progenitors would, in many cases, be undergraduates still in college and working from their dorm rooms?

Did they understand that this network of computers, which started out by trading secret information on defense projects, had the power to change the face of commerce and bring down or force the transformation of old, established businesses? The internet has changed forever the dynamics and business models of telephone companies, television networks, newspaper, magazine, and book publishers, recording studios, banks, stock brokerages, and travel agencies. But aside from this “creative destruction,” could the ARPANet inventors have understood how much of a boost, how much pure acceleration, their infant network and its worldwide extension would give to all of these functions, and how many new business opportunities and new jobs—even new types of jobs, like “web designer,” “database engineer,” and “app developer”—would result from creating so much of this free connectivity?

Anyone trying to guess the value of a thing called “the internet” back in the 1960s would, like Edison trying to assess the value of a long-burning light bulb, have started with the world’s libraries and the books contained in them, plus the annual revenues of the world’s publishers and newspapers, come up with an astounding figure, and stopped right there.5 No one would have foreseen the business opportunities available today—nor a rising cottage industry in writing, directing, and filming three-minute movies for YouTube.

More important, at that time, you could not have sold this vision of the future to the average person. Try describing how, with an easy-to-use search engine like Google, anyone could have a personal librarian at his elbow providing instant access to thousands of libraries, repositories, databases, companies, and other resources. How, with social media like Facebook and Twitter, he could trade opinions, confidences, jokes, pictures, and home movies on the fly with friends, family, and thousands of barely recognized strangers. This level of access is something the average person would never have thought of wanting or paying for, because it’s something no billionaire, king, or president could have obtained even twenty years ago.

Consider the power built into a smartphone, which is both an outgrowth and an enabler of the internet. It is actually a computer in the palm of your hand. It connects wirelessly outward to the resources of the whole world through radio-based voice and data systems. It connects wirelessly with peripheral devices in your immediate vicinity, like earphones and keyboards, through another radio-based system called Bluetooth. It contains a video screen; software-based keyboards and other inputs through a touch screen; plus speakers, a microphone, a combined still and video camera, and a GPS receiver.

With its built-in software applications, it becomes an appointment book, messaging center, voice recorder and general secretary, to-do list keeper, stock ticker, weather channel, internet connection, plus a competent photo and video recording studio, a library of your favorite music, movies, books, and periodicals, a repository of your personal thoughts, photos, family videos, and other informational memorabilia, and a navigator both by established roads and cross-country. And it also serves its basic function as a telephone—but with your address book and the Yellow Pages embedded on speed dial.

More than that, with the right selection of software applications—usually available for a few dollars each6—this handheld device becomes a video arcade, card game player, pedometer, exercise trainer and diet coach, heart rate monitor,7 health statistician, and almost any other function that can be sensed, computed, and displayed. A dozen years ago, you would have paid hundreds of dollars for each of these applications—if they were even available. Moreover, you would have had to carry a dozen different devices to obtain them as single functions. Now, they all fit in your pocket in one slim device. What is that worth? Nothing if you didn’t ask for it. Everything if you think you need it. And no one, even ten years ago, would have foreseen a rising cottage industry in writing software to fling Angry Birds across a telephone’s touch screen.

What is the dollar value of all this connectivity? What is the dollar value of all this potential creativity and invention? A trillion dollars? Two trillion? Ten? It’s growing all the time. And twenty years ago the nature of this technological advance and its impact on our global society could hardly have been imagined, let alone estimated.

This rush of technology that I’ve described is taking place in only two small areas: the interconnection of computers via the internet, and the interconnection of people with new ideas, transactions, and entertainments via the internet and the personal devices that enable it. Consider now that in academic institutions and research centers across the world, scientists, engineers, and inventors are simultaneously at work in areas like chemistry, to improve the materials out of which our products and infrastructure are built; physics, to improve our use and storage of energy; the life sciences, to improve our general health and medicine; and data science, to improve our use of all those other sciences.

This vast human collaboration, which we variously call “science and technology” or “art and science,” is changing the world faster than we can imagine.

An ancient Roman, brought forward to about 1750, would have recognized most of the technology in use at the court of Louis XV of France. He would have marveled at various individual inventions, like the stirrups on a saddle and the leaf springs in a carriage. But about the only real surprise would have been the explosive power of gunpowder—and that bit of chemistry could have been explained to him in about an hour.8 Yet a French philosopher and scientist brought forward a mere two and a half centuries to 2013 would be hopelessly lost. To catch up and truly understand the underlying technologies of our modern age would take a lifetime of study among new concepts and established sciences. And one of today’s scientists, taken forward to about 2050, a mere thirty-seven years, would have to learn disciplines and techniques for which we don’t even have names yet.

No one can put a dollar value on all this. But, I promise you, the potential wealth and power that will be showered upon the average person of the future will be vast. It’s going to be a wonderful ride!

1. See Coming at You from October 24, 2010; The Language of Technology from July 29, 2012; and In the Palm of Your Hand from October 21, 2012.

2. Edison didn’t invent the idea of turning electricity into substitute daylight. Eighty years before Edison’s first successful bulb, the English scientist Humphry Davy created the electric arc by passing a current between carbon electrodes, and the process is still in use today when you need a really powerful light. Twenty years before Edison, another Englishman, Joseph Wilson Swan, made electric lamps with filaments of carbonized paper, which burned up too quickly. The prize that Edison chased was a filament that would not burn out and so would provide light indefinitely.

3. Or maybe Edison was dreaming of these and other applications all along. He certainly thought and invented widely across the spectra of physics and chemistry. Who at the time could say how much of our future was locked away in the imaginations of men like Thomas Edison and Nikola Tesla?

4. Microprocessors—“computers on a chip”—were originally conceived as a way to add distributed computing power to complex systems, or to add digital processing to existing electro-mechanical devices like elevator controls and automobile fuel injection. It was bright boys like Nolan Bushnell, Steve Wozniak, and Steve Jobs—our modern-day Edisons—who conceived of attaching these processing chips to memory chips, a cathode ray tube, and a keyboard to make the “personal computer.” And that led eventually to every person on the planet—who wanted one—having the computing power of an IBM 360 on the desktop.

5. In the late 1960s, while still an English major in college with an interest in science fiction, I began writing a story based on a far-future library that controlled all of the world’s knowledge with a vast laser storage and retrieval system. In my imagination, it worked something like a planetarium, which somehow wrote data on the interior surface of a great dome and then, in recall mode, could pick facts and folios off the ceiling—rather like a curved, interactive hologram. Along with everyone else at the time, I was thinking of a single huge repository, a vast installation, like a Fort Knox of the world’s information. Nobody in the paying public yet understood that huge size still had limits, that true power lay in networks of distributed and interconnected nodes, and that by chopping the information up and scattering it around—the earliest glimmer of “cloud computing”—humankind could achieve virtually unlimited storage.
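As a purely illustrative sketch of that chop-and-scatter idea, the following Python splits a document into chunks and assigns each chunk to a storage node by its hash. The node names, chunk size, and placement rule are my own assumptions for the sake of the example, not any real cloud system’s design.

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical storage nodes
    CHUNK_SIZE = 64  # bytes; real systems use chunks of megabytes

    def scatter(data: bytes) -> dict:
        """Split data into chunks and map each chunk to a node by its hash."""
        placement = {}
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            node = NODES[int(digest, 16) % len(NODES)]  # simple hash placement
            placement.setdefault(node, []).append((i, chunk))
        return placement

    def gather(placement: dict) -> bytes:
        """Reassemble the original data from its scattered chunks."""
        chunks = sorted(c for per_node in placement.values() for c in per_node)
        return b"".join(chunk for _, chunk in chunks)

    text = b"All the world's knowledge, chopped up and scattered around. " * 10
    assert gather(scatter(text)) == text  # the whole survives its own scattering

The point is the last line: once every chunk knows its place, the repository no longer needs to live in any one building.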

6. Remember when software used to cost a couple of hundred dollars? Some still does: the heavyweight, industrial-grade packages for office productivity, graphic studio work, or video editing. But the cost of a single-purpose application has dropped to next-to-nothing, or it’s free.

7. Through an app that uses the camera’s flash to light up a fingertip placed over the camera lens, while the imaging chip reads color changes in the skin as your heart beats. How cool is that for making clever use of available resources?
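For the curious, here is a minimal sketch, in Python, of the estimation step behind such an app: count the pulse peaks in the average brightness of the camera image over time. The frame rate and the signal (synthetic here) are my own assumptions; a real app would read live camera frames instead.

    import math

    FRAME_RATE = 30.0  # frames per second, typical for a phone camera
    TRUE_RATE = 72.0   # beats per minute, used only to generate the fake signal

    # Simulate 10 seconds of average-redness samples: a slow pulse wave.
    samples = [
        0.5 + 0.05 * math.sin(2 * math.pi * (TRUE_RATE / 60.0) * (i / FRAME_RATE))
        for i in range(int(10 * FRAME_RATE))
    ]

    # Count local maxima: each peak in redness is one heartbeat.
    peaks = sum(
        1 for i in range(1, len(samples) - 1)
        if samples[i - 1] < samples[i] > samples[i + 1]
    )

    minutes = len(samples) / FRAME_RATE / 60.0
    print(f"Estimated heart rate: {peaks / minutes:.0f} bpm")  # prints ~72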

8. Well, to really understand explosives, you need to grasp molecular bonding between atoms, which depends upon an entire realm of knowledge and theory in chemistry. But the ancient Chinese who mixed the first gunpowder and the Renaissance Europeans who used it didn’t have that knowledge either. They just knew that when you mixed certain materials in certain ways and then touched them with a spark, interesting things happened. An old Roman could learn that, too.