What 'New' Economy?
As the 20th century ends, legions of the powerful--politicians, intellectuals, journalists, business leaders, and visionaries--are embarking on what can only be called pilgrimages. They are traveling to an arid promised land between San Francisco Bay and the Pacific Ocean, some 40 miles south of San Francisco: Silicon Valley. They invariably return with visions of a technological and economic future full of endless promise. Their exuberance should give us pause.
There have been similar pilgrimages in the past. In the 1830s and 1840s, Alexis de Tocqueville, Benjamin Disraeli, and Friedrich Engels journeyed to Manchester, England, to see the shape of the future emerging from the factories (and the smog and the slums) of the rising British cotton textile industry. In the 1920s, another generation's seekers traveled to Detroit to see Henry Ford's assembly lines, which had unleashed such an extraordinary surge in productivity that automobiles, once luxuries available only to the rich, had become commodities that most Americans could afford. The mass production revolution that began in Detroit may have sparked a bigger improvement in the material conditions of life than the original Industrial Revolution in Manchester. In Brave New World (1932), Aldous Huxley wrote of a future in which people dated years from the production of the first Model T Ford, and in which the major problem facing governments was how to brainwash people into consuming enough of the bounty created by the miracle of mass production to keep the economy growing.
Today's pilgrims are very much like those of the past, convinced that new technologies are creating a fundamentally different future--a new society, a new culture, and a new economy. But what, exactly, is new about the "new economy" rising today from Silicon Valley?
Each of today's pilgrims seems to bring back a slightly different report. Some, lacking historical perspective, see patterns of entrepreneurship and enterprise that are in fact quite old. Others fail to understand the most important fact about economic growth: that ever since the Industrial Revolution there have always been dazzling new industries, inventions, and commodities. Still others misinterpret what our economic statistics are telling us about the impact of information technology.
Nevertheless, there is something to the idea that we live in a "new economy." What is new about it is not the rapid pace of invention and innovation nor the rise of living standards beneath the radar of official statistics. What is new is the potential of information goods to defy the very principles of scarcity and control over commodities that have convinced economists that the market is the one best system for guiding the production and distribution of goods. If that challenge materializes, we will indeed be confronted with a new economy, but one very different from the promised land of the pilgrims' dreams.
The first dimension of "newness" that the pilgrims hail--the one that strikes almost all observers immediately, and leads to breathless descriptions of technological revolution--is the sheer speed of technological progress in the semiconductor and semiconductor-derived industries. From his post at Wired magazine, executive editor Kevin Kelly writes of the new economy as a "tectonic upheaval . . . [driven by] two stellar bangs: the collapsing microcosm of chips and the exploding telecosm of connections. . . . tearing the old laws of wealth apart." Business Week editor-in-chief Stephen Shepard declares that information technology and the computer- and network-driven international integration of business constitute "the magic bullet--a way to return to the high-growth, low-inflation conditions of the 1950s and 1960s. Forget 2 percent real growth. We're talking 3 percent, or even 4 percent." Computers and telecommunications are "undermining . . . the old order" and triggering a "radical restructuring" that leaves traditional analysts of the economy "unable to explain what's going on . . . wedded to deeply flawed statistics and models."
Since the invention of the transistor in the late 1940s, the onrush of invention, innovation, and industrial expansion in information technology has been constant and rapid: transistors, integrated circuits, microprocessors, and now entire computers on a single chip and a high-speed worldwide network with the potential to link every computer within reach of a regular or cell phone. Year after year, Moore's Law (named for Intel Corporation cofounder Gordon Moore) continues to prove itself: the density of transistors etched on a single chip doubles (and thus the cost of silicon circuits to consumers is halved) roughly every 18 months. Moore's Law has been at work since the early 1960s. It will continue until--at least--the middle of the next decade.
Observers note the enormous fortunes made on the stock market by founders of start-up corporations that have never turned a profit: how Internet bookseller Amazon.com is seen by Wall Street as worth as much as established bookseller Barnes and Noble. They see how last year's high-end products become this year's mass-market products and then are sold for scrap two years later.
Hence this vision of this "new economy": a future of never-ending cost reductions driven by technological innovation, "learning curves," "economies of scale," "network externalities," and "tipping points." In the old economy, you made money by selling goods for more than they cost. In the new economy, you make money by selling goods for less than they cost--and relying on the learning curve to lower your costs next year. In the old economy, you boosted your company's stock price by selling valuable goods to customers. In the new economy, you boost your company's stock price by giving away your product (e.g., a Web browser) to customers--and relying on network externalities to boost the price you can charge for what you have to sell next year to people who are now committed to your product. In the old economy, the first entrant often made big mistakes that followers could learn from and avoid: it was dangerous to be first into a market. In the new economy, the first entrant to pass a "tipping point" of market share gains a nearly unassailable position in the market.
There are pieces of the world that fit this vision of the new economy very well. Think of the fortune Bill Gates made by beating Apple past the tipping point with Windows. Think of the rapid price declines of silicon chips. Think of the rocketlike Wall Street trajectory of new companies that did not exist a decade ago. Think of the rise of companies that did not exist three decades ago--such as Intel--to industrial prominence.
Yet, somewhat paradoxically, it is along this marveled-at dimension that our economy today is perhaps the least new. For what this particular set of returning pilgrims is describing is the standard economic dynamics of a "leading sector"--a relatively narrow set of industries that happen to be at the center of a particular decade's or a particular generation's technological advance. There have been such leading sectors for well over a hundred years. Manchester was the home of the leading sectors of the 1830s. It was the Silicon Valley of its day--and saw the same creation of large fortunes, the same plummeting product prices, and the same sudden growth of large enterprises. Every leading sector goes through a similar process. Consider the automobile. The average car purchased in 1906 cost perhaps $52,640 in 1993 dollars. By 1910 the price had dropped to $39,860, even as technical improvements had pushed the quality up by at least 31 percent. By the time the heroic, entrepreneurial age of the American automobile came to an end during World War I, an average car cost 53 percent less in inflation-adjusted dollars than a 1906-vintage vehicle, and its quality had doubled. Consumers were getting more than four times as much car for their (inflation-adjusted) dollar as a mere decade before.
The development of the automobile does not match the pace of innovation in semiconductors under Moore's Law, which generates at least a 32-fold, not a fourfold, increase in value over a decade. But it is in the same ballpark, especially if one allows for the fact that the tremendous improvements in semiconductors have not been matched by changes in the other components used in making microelectronics products.
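The two growth rates being compared here reduce to simple compounding, which a short sketch can check. The car figures are the article's own estimates; the 2-year doubling period is a deliberately conservative assumption that matches the article's "at least a 32-fold" phrasing, while the canonical 18-month period gives roughly a hundredfold.

```python
# Moore's Law as compounding: density doubles every `period` years.
def moore_factor(years, period=1.5):
    """Multiplicative increase in transistor density after `years`."""
    return 2 ** (years / period)

print(round(moore_factor(10, 2.0)))   # -> 32, a decade at a conservative 2-year doubling
print(round(moore_factor(10)))        # -> 102, a decade at an 18-month doubling

# The automobile, 1906 to the late 1910s: price down 53 percent in
# inflation-adjusted dollars while quality doubled (the article's figures).
car_factor = 2.0 / (1 - 0.53)
print(round(car_factor, 1))           # -> 4.3, "more than four times as much car"
```

So "in the same ballpark" is fair as orders of magnitude go: both are exponential improvements, differing in the exponent.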
Thus, a citizen of the late 19th century would not have had to wait for the arrival of our age in order to see the "new economy." A trip to Detroit would have done the job. During the 1920s, authors writing articles in popular magazines such as the Atlantic Monthly confidently declared that mass production made not just for greater efficiency or higher productivity but "a better world," and demanded the rapid creation of a "Fordized America."
The automobile industry is not alone: other industries have had similar transformations during their times as leading sectors. In our dining room, my wife and I have a four-bulb chandelier. If we go to Monterey for the weekend and leave the light on, we will have consumed as much in the way of artificial illumination as an average pre-1850 American household consumed in a year. Such consumption would have cost that household about five percent of its income in candles, tapers, and matches. But because of the technological revolutions that made possible the cheap generation and transmission of electricity, it makes no perceptible difference in our Pacific Gas and Electric bill.
Some of the Silicon pilgrims make an elementary mistake. Seeing autos and other goods of the Industrial Revolution in much the same form today as they existed in their own childhood decades ago, they assume that such "industrial" goods must have emerged almost fully formed, that the pace at which they changed must have always been glacial. But we have had a succession of productivity revolutions in leading sectors since the start of the Industrial Revolution, sweeping through everything from textiles to medical care. That's why we call it a revolution--it kicked off the process of staggered sector-by-sector economic transformation of which Silicon Valley is the most recent instance.
With a dose of realism and historical perspective, the pilgrims returning from Silicon Valley might change their vision in several ways.
First, they would recognize that in microelectronics, as in every leading sector, the "heroic" period of rapid technological progress will come to an end. Henry Ford perfects the Model T. Britain's Cable and Wireless company figures out how to properly insulate submarine telegraph cables. The first easy-to-find antibiotics, such as penicillin, are all discovered. Moore's Law exhausts itself. Thereafter, computers and communications will become a much more mature industry, with different focuses for research and development, different types of firms, and different types of competition.
Second, in every leading sector the true productivity revolution occurs before the heroic period has come to an end. The first railroads connected key points between which lots of bulky, heavy, expensive materials needed to move. Later railroads provided slightly cheaper substitutes for canals, or added redundant capacity to the system in the name of marginal economic advantages. The first three TV networks came amazingly close to sating Americans' taste for audiovisual entertainment. The first uses of modern telecommunications and computers--telephone service, music and news via radio, the first TV networks, Blockbuster Video, scientific and financial calculations, and large database searches--had the highest value. Thus, it is unwise to extrapolate the economic value added by semiconductors, computers, and telecommunications far into the future. Later uses will have lower value: if they were the most fruitful uses, with big payoffs, someone would have applied technology to them already. This is a version of the standard economist's argument that the $1,000 bill you think you see on the sidewalk can't really be a $1,000 bill: if it were, someone would have already picked it up. But this economist's argument is rarely false: there aren't many $1,000 bills to be found lying on sidewalks.
Third, after the heroic age, the form of competition changes. During the heroic age, technology alone is the driving force. After the heroic age, what matters is figuring out exactly what customers want and giving it to them. As long as automobile prices were falling and quality was rising rapidly, Henry Ford could do very, very well by making a leading-edge car and letting customers choose whatever color they wanted, in the famous phrase, as long as it was black. As long as computer prices are falling and quality is rising rapidly, Bill Gates likewise can do very, very well even if his software programs crash and show their users the Blue Screen of Death twice a day. After the 1920s, however, the Ford Motor Company was overwhelmed by Alfred P. Sloan's General Motors, which figured out how to retain most of Ford's economies of scale while offering consumers a wide variety of brands, models, styles, and colors--a worthwhile undertaking, but not the stuff of economic revolution. Before GM, no one knew what kind of options car buyers really wanted. Today, no one knows what kind of options computer and Internet access purchasers really want, but they know that there are fortunes to be made by the new GMs of the information age. The company that plays GM to Microsoft's Ford likely will succeed by providing access--to computing power, to research materials, to the yet-to-be-built high-bandwidth successor to the Internet, to information uniquely valuable to you. But no one yet knows exactly how to do this.
Finally, each leading sector does produce a technological revolution. It does leave us with previously unimagined capabilities. The railroad gave us the ability to cross the continent in a week instead of months. Electric power gave us the ability to light our houses and power appliances (and computers). Microelectronics has given us extraordinary intellectual vision. More than a generation ago, when economists William Sharpe, Merton Miller, and Harry Markowitz did the work on how a rational investor should diversify an asset portfolio that won them the 1990 Nobel Prize in economics, they assumed that their labors were of purely theoretical interest. The calculations required to implement their formulas were beyond the reach of humanity. Today, however, the computing power to carry out calculations many times more complicated resides on every desk on Wall Street.
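At its smallest scale, the once-intractable calculation now fits in a few lines. Below is a minimal sketch of mean-variance diversification, assuming just two assets and invented volatility and correlation figures; the Nobel-winning work covers portfolios of arbitrarily many assets, which is what once put it beyond reach.

```python
# Minimum-variance split between two assets: a toy instance of the
# portfolio mathematics described above. Inputs are illustrative.
def min_variance_weight(sigma1, sigma2, rho):
    """Weight on asset 1 that minimizes portfolio variance."""
    cov = rho * sigma1 * sigma2
    return (sigma2 ** 2 - cov) / (sigma1 ** 2 + sigma2 ** 2 - 2 * cov)

# A 20%-volatility stock fund and a 10%-volatility bond fund, correlation 0.3.
w1 = min_variance_weight(0.20, 0.10, 0.3)
print(round(w1, 3))   # -> 0.105, i.e. about a tenth in the riskier asset
```

The point is not the particular numbers but that a calculation of this family, scaled to thousands of assets, now runs instantly on any Wall Street desk.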
But a technological revolution is not an economic revolution. Just because microelectronics revolutionizes our capability to process information doesn't mean that it will dominate our economy. The economy, after all, focuses its attention on what is expensive--not on what is cheap. In every leading sector, the story has been the same. Once the exciting new product is squeezed into a relatively inexpensive commodity, economic energy flows elsewhere. Business Week does not run cover stories hailing electric lighting and exulting in its vast superiority over whale-oil lamps. Thus, as our capability grows, the salience of our expanded capability to the economy--which is, after all, the realm of things that are scarce--does not.
A second group of pilgrims, overlapping somewhat with the first, returns from Silicon Valley proclaiming that the American economy is poised to grow much faster than it has in a generation. They believe that the revolutions in microelectronics and telecommunications are producing a surge of productivity growth that could dramatically lift the American standard of living--if only the Federal Reserve and other government economic authorities would recognize what is going on.
These pilgrims hearken back to the postwar golden era of 1945-73. The drastic productivity slowdown that began in 1973 was a shock to America. It caused a deep slump in the stock market in the mid-1970s. It meant that government promises of future benefits that had been based on assumptions of steadily rising tax revenues (without rising tax rates) could not be fulfilled. It made false the basic American assumption that each generation would live significantly better than its parents' generation, with bigger houses, better jobs, and markedly easier lives. But all of that is behind us now, today's advocates of the new economy announce, or will be if policymakers recognize the potential not yet reflected in productivity statistics and other data. Business Week's Stephen Shepard writes that information technology is a "transcendent technology" that affects everything: it "boosts productivity, reduces costs, cuts inventories, facilitates electronic commerce." The "statistics are simply not capturing what's going on" because "we don't know how to measure output in a high-tech service economy."
Speaker of the House Newt Gingrich opines that technological progress should give us an economy capable of measured annual economic growth of four or five percent without rising inflation, instead of the 2.5 percent deemed possible by the tightfisted central bankers at the Federal Reserve. Many good things will follow: real wages and living standards will rise rapidly, the Federal Reserve will be able to cut interest rates and expand the money supply more quickly without boosting inflation, and the stock market will boom unto eternity.
As best I can tell, this group of returning pilgrims seems to have failed to recognize the importance of the word measured in the phrase "measured economic growth." They insist that true economic growth is greater than measured economic growth. And they are right: thanks to a large number of statistical and measurement problems that are built into our official economic statistics, "measured" economic growth understates real growth by one percentage point per year or so.
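A one-point wedge is small in any single year but compounds. As a rough illustration (the 30-year horizon and the exact one-point gap are assumptions for arithmetic's sake, not figures from the text):

```python
# If true growth runs one percentage point ahead of measured growth,
# the unmeasured gains compound year over year.
annual_wedge = 1.01          # true growth / measured growth, per year
years = 30                   # an illustrative generation-long horizon
cumulative = annual_wedge ** years
print(round(cumulative, 2))  # -> 1.35
```

Over a generation, in other words, true living standards would end up roughly a third higher than the official numbers show.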
But these pilgrims overlook a crucial fact: official data have always understated growth. For more than 50 years, the national-income accountants at the Commerce Department have known that their numbers don't capture all the economic benefits flowing from inventions and innovations in the economy's leading sectors. Yet they have continued to follow their established procedures, partly because they lack the information to do a better job and partly because they prefer to report numbers they can count reliably rather than numbers that are based on guesswork. The problems of measurement today are probably bigger than in the past, but not vastly bigger.
This means that the numbers we use in steering the economy have not suddenly developed huge distortions that require a change in navigation. The data have always been distorted, yet have supplied adequate guidance. How would we tell if they were not reasonably accurate? How would we tell if economic growth were too slow? One guide would be the rate of investment: are bad economic policies stealing capital that should be going to expand the productive capacity of the economy? While one can argue that the budget deficits of the 1980s hobbled economic growth by crowding out investment, the budget deficits of the Reagan and Bush administrations are now gone. But if, as the new economy enthusiasts insist, productive capacity were growing faster than production, businesses would be firing workers on a large scale: unemployment would be increasing as firms used technology to economize on labor. Yet nothing like that is happening. The unemployment rate is low and steady--a good indicator that the economy, now expanding at a measured rate of about 2.5 percent annually, is growing at the sustainable rate of its productive potential.
Nevertheless, despite the hype, delusion, and misunderstandings that surround the "new economy," it would be unwise to completely dismiss the concept. The pilgrims are not mad. They have seen something.
While it is true, for example, that the economic drama of the rising microelectronics and telecommunications industries resembles the stories of other leading sectors, the pace of productivity improvement today does appear to be faster than in most, if not all, cases in the past. And this productivity edge often does escape measurement.
For an example from telecommunications, look at the spread of network television throughout America, which began in the 1950s. In its heyday, network television dominated American culture--as, in some ways, it still does--occupying perhaps a fifth of the average American's leisure hours. But nobody ever paid a cent to receive network television. So its product received--and still receives--a value of zero in the national income and product accounts used to calculate the nation's gross domestic product (GDP).
The salaries and profits of the networks, of the production studios, of the actors, and of the advertising managers do appear--but they appear as a cost of the production of the goods being advertised, not as an increase in the economic value produced. In other words, the growth of broadcast television increased the size of the denominator in productivity calculations, but not the size of the numerator. Each worker who moved into the network television industry (broadly defined) thus decreased officially measured productivity.
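The accounting effect can be shown with toy numbers (entirely illustrative, not Commerce Department figures): productivity is counted output divided by workers, and moving workers into a sector whose output is valued at zero raises the denominator without raising the numerator.

```python
# Measured productivity = counted output / workers. Network TV's output
# was valued at zero in the national accounts, so workers moving into it
# lower the measured average. All figures are invented for illustration.
counted_output = 1000.0      # GDP units; broadcast TV contributes zero
workers = 100
print(counted_output / workers)                        # -> 10.0 before

workers_in_tv = 10           # ten workers move into "free" television
print(round(counted_output / (workers + workers_in_tv), 2))  # -> 9.09 after
```

Measured productivity falls about 9 percent even though real output, counting the uncounted entertainment, has risen.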
For a contemporary example, look at the Internet: a source of entertainment and information that does not (or does not yet) rival network television, yet is assessed the same way. Consumers pay a toll to telephone companies and to Internet service providers in order to access the network. But then the overwhelming bulk of information is free (and is likely to remain so in the future). Once again the national income accountants at the Department of Commerce are, when they estimate real GDP, subtracting one-tenth of a percent from American productivity for each one-tenth of a percent of the labor force employed in creating and maintaining the World Wide Web.
The pilgrims are also right insofar as this particular leading sector may indeed have broader consequences for the economy, at least over the very long run, than other leading sectors of the past. Other leading sectors have revolutionized conditions for relatively small groups of people. In the 19th century, the automatic loom bankrupted handloom weavers, who wove cloth in their homes, and transformed the weaving business from one in which lone entrepreneurs rode from village to village dropping off yarn and collecting cloth to an industry dominated by large factories and powered by steam engines. But it left the conditions of life of others largely unchanged, save for the significant fact that clothing became much cheaper. Today's leading sectors, however, might--but might not--radically change the conditions of life of nearly everyone: those who use information to direct enterprises (managers), who process information in their jobs (white-collar workers), and who use information to decide what to buy (consumers).
But why should the fact that today's leading sectors revolutionize the production and distribution of information make a difference? Why is information special? The new economy's advocates give a number of answers emphasizing the limitless possibilities of an economy dominated by goods that are almost impossibly cheap to produce and distribute. But there is one answer they don't give: it is special because the invisible hand of the market may do a much poorer job of arranging and controlling the economy when most of the value produced is in the form of information goods.
For the past 200 years, relying on competitive markets to produce economic growth and prosperity has, by and large, proven a good bet. But the invisible hand of the market does a good job only if the typical commodity meets three preconditions. The commodity must be excludable, so that its owner can easily and cheaply keep others from using or enjoying it without his or her permission. It must be rival, so that if I am using it now, you cannot be. And it must be transparent, so that purchasers know what they are buying.
Commodities that are not "information goods" take the form of a single physical object--hammers, cars, steaks--and are rival and excludable by nature. If I am using my hammer, you are not. Their transparency is straightforward: if I am buying this car at this showroom, I can see it, touch it, drive it, and kick it before writing a check.
But if a commodity is not excludable--if I, the owner, cannot block you from using it if you have not paid for it--then my relationship to you is not the relationship of seller to buyer, but much more like that of participants in a gift exchange: I give you something, and out of gratitude and reciprocity you give something back to me. Think of an economy run like a public radio pledge drive. It doesn't work very well--the revenue raised is a small fraction of the value gained by consumers--and the process of collecting the revenue is very annoying.
If a commodity is not rival, then the market will not set its price correctly. If my using it does not keep you from doing so, as is the case with software and other information goods, then there is a sense in which its price should be zero. But no producer can make a profit selling a commodity at a price of zero. Only a producer with substantial market power can keep the price up. So in a world of nonrival commodities, we could expect monopoly to become the rule rather than the exception.
The logic of nonrival goods provides a large part of the explanation for the rise of Microsoft. Only firms that establish a dominant position in their markets can charge enough to make even normal profits. Firms that don't do so plunge into a downward spiral: with low sales volume and costs of writing software code that remain the same no matter whether they sell one copy or one million, their cost per program shipped is high.
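The arithmetic of that spiral is simple: with a fixed development cost and a near-zero marginal cost per copy, average cost collapses with volume. A toy sketch, using an invented $10 million development cost:

```python
# Average cost per copy of an information good: a one-time development
# cost spread over N copies, plus a (near-zero) marginal cost per copy.
# The $10 million figure is invented for illustration.
def avg_cost(copies, fixed_cost=10_000_000, marginal_cost=0.0):
    return fixed_cost / copies + marginal_cost

print(avg_cost(1_000))       # -> 10000.0 per copy for a niche seller
print(avg_cost(1_000_000))   # -> 10.0 per copy for the dominant firm
```

The dominant firm can price below the niche seller's cost and still profit, which is why market share, once won, tends to snowball.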
If a commodity is not transparent, then markets may fail completely. If you don't know what's in that used car or health insurance policy you are considering, you don't know how much it is really worth. Sellers also need transparency. An insurer required to sell health insurance policies without knowing anything about its customers would face a nightmarish prospect. Worrying that all potential customers would already have costly illnesses, it would raise prices--until in fact only those who had costly illnesses would want to try to buy insurance. The market would break down. Yet information goods are highly nontransparent: in the case of many or most information goods, the entire point of buying something is to learn or see something new--and so you cannot know exactly what you are buying or how much you will like it until after the fact.
All three of these conditions--goods must be excludable, rival, and transparent--must be met if the invisible hand is to work well, and there are many reasons to be concerned that the new economy won't meet them.
Words distributed in electronic form (and, with improvements in scanner technology, words distributed in books and magazines as well) are becoming nonexcludable. Information goods are by definition nontransparent: if you know what the piece of information is that you are buying, you don't need to buy it. Software is becoming nontransparent as well: when you purchased Microsoft Word or WordPerfect for the first time, did you realize that you were committing yourself to a long-run path of upgrades and file-format revisions? Finally, computerized words, images, and programs are nonrival: a file doesn't know whether it is the second or the two-thousandth copy of itself.
How far will the breakdown of these preconditions of viable profit-making markets extend? Will it be confined to a relatively small set of e-goods, or will it expand to embrace the rest of the economy as well? We do not really know. But it is possible that we are moving toward an information age economy in which the gap between what our technologies could deliver and what our market economy produces will grow increasingly large as companies devote themselves increasingly to securing monopolies. It is possible--although how likely we do not know--that in an information age economy the businesses that enjoy the most success will not be those that focus on making better products but those that strive to find ways to induce consumers to pay for what they use. Some may succeed through superior advertising campaigns, others by persuading consumers to enter into a gift-exchange relationship: the public radio syndrome. Recently, after downloading a demonstration version of a software maker's flagship product, I received an e-mail from the company's marketing department. It said that while the program was billed as a time-limited demonstration version that would stop working after 60 days, it was in fact a complete and unencumbered working program. The company hoped that I would find it valuable enough to pay for and register. But even if I didn't, the message said, the company would be pleased if I would tell my friends how wonderful its program was.
Other companies will follow a different strategy. Rather than giving their product away in hopes of receiving payment in return, they will try to make money by suing everybody in sight. They will seek to use the law to create stronger legal controls over intellectual property--everything from software to films--and spend freely to track down those who are using their products without paying for them. From society's point of view, this is a wasteful path--driving up profits, dampening demand, and reducing consumer welfare.
If the information age economy winds up looking much like the one sketched here, the role of government, far from shrinking into near irrelevance, as many of today's pilgrims airily assume, might grow in importance. In such a world, the tasks of government regulators would become infinitely more difficult. The very nature of the commodities produced would be constantly undermining the supports the market economy needs in order to function well. It would then be the job of government to shore up these supports: to do whatever it could to create a simulacrum of market competition and to restore the profitability of useful innovation. The Antitrust Division of the Justice Department might become the most important branch of the federal government.
This vision of the future information age economy--if it should become reality--would certainly qualify as a new economy. But it would be a dark mirror image of the new economy we hear so much about today.
This article originally appeared in print.