Has Futurism Failed?

To be human is to ponder the future. From their very beginnings, human beings have tried to anticipate tomorrow. They noted the cycles of the seasons and fertility, the phases of the moon, and the changing of the tides. They looked for omens and portents, consulted seers and oracles, read entrails, and strove to find their fate in the stars. Many of these methods were, to put it mildly, suspect. In millennia of human existence, celestial calendars such as those erected at Stonehenge and New Mexico’s Chaco Canyon stand out as rare examples of methods that transcended superstition and guesswork.

A fundamental change in human thinking about the future began in the 18th century, as technological change accelerated to a point where its effects were easily visible in the course of a single lifetime, and terms such as progress and development entered human discourse. Today, with the human species beginning to change the earth on a vast scale—altering climate and genetic structures, harboring weapons that can annihilate the planet—we have forever forfeited our ability to duck responsibility for thinking about the long-term future. But the responsibility to think does not automatically bring with it the capacity to do so.

Speculation about the future became more common as human beings increasingly reshaped the world during the 19th and early 20th centuries, though it was seen largely as entertainment, a diversion from the often stark realities of everyday life. Yet some of that speculation proved surprisingly close to the mark. In preparation for the 1893 World’s Columbian Exposition in Chicago, for example, luminaries from across the United States were asked to share their predictions for the next 100 years. Among the developments they foresaw: “Each well-to-do man will have a telephone in his residence”; “We will navigate in the air”; and “The entire world will be open to trade.”

With the publication of his best-selling Anticipations of the Reaction of Mechanical and Scientific Progress Upon Human Life and Thought in 1901, H. G. Wells became one of the first writers to examine seriously the social consequences of technological change (he was particularly acute in anticipating the pathology of urban sprawl). In 1927, Austrian-born filmmaker Fritz Lang gave the world Metropolis, one of the first feature-length science-fiction films. Set in the year 2026, Lang’s masterpiece imagines the possible outcome of a century of industrial progress: a profoundly inequitable and mechanized world, in which hordes of workers labor in a subterranean city to maintain the pleasant existence of their masters in light-filled Metropolis.

It is difficult to mark with precision when studying the future became a serious business, but the change can be set somewhere soon after the end of World War II. In 1945, The Atlantic Monthly published an article that was, in retrospect, stunning in its scope and prescience. Written by Vannevar Bush, then director of the wartime Office of Scientific Research and Development, the essay was titled simply “As We May Think.” Bush portrayed—two years before the invention of the transistor—the coming information revolution, describing everything from a forerunner of the personal computer, which he dubbed the “memex,” to hypertext, digital imaging, and search engines. Here was the future as seen through the eyes not of a journalist, novelist, or huckster but of a scientist and government bureaucrat—who happened to be an adviser to the president of the United States. Though Bush’s predictions were largely in the realm of technology, his overarching message concerned the need to organize the growing scientific enterprise and apply newfound knowledge to an ever-expanding set of national needs. This focus on planning coincided with a realization that the development of the atomic bomb had created the first truly existential threat to the entire planet.

The increasing power of government and the experience of totalitarianism provided fodder for a new generation of negative futures that blended technological forecasting with the dark underside of geopolitics, most famously in George Orwell’s 1984 (1949). With its powerful image of a world in which people find themselves under constant surveillance, the novel is still unfailingly disconcerting to those living in today’s digital panopticon. In the classic 1951 science fiction movie The Day the Earth Stood Still, the alien Klaatu delivers a stark choice to Earth’s Cold War leaders: “If you threaten to extend your violence, this Earth of yours will be reduced to a burned-out cinder.” Many movies and comic books turned on the theme of government’s failure to control atomic weapons and other new technologies, which inevitably fell into the hands of evildoers.

With the existential threat of nuclear weapons and the growing perception of superior Soviet science and planning after the launching of Sputnik in 1957, nervousness about America’s place in the world spread beyond public-sector technocrats. In the late 1950s, The New York Times, in association with Life magazine, tried to stimulate a discussion on “national purpose” with a series of articles about the need for a clear national mission and long-term resolve in the face of the growing communist threat. Yet much of the public discussion about the American future was still based on the informed speculation of elites and intellectuals rather than on any rigorous quantitative analysis of trends.

The demand for greater clarity about the future after World War II arose just as new tools for quantitative and qualitative forecasting were becoming available. The complex technological challenges of the war had jump-started whole new fields of inquiry, such as systems analysis, operations research, and cybernetics, and the onset of the Cold War stimulated the need for further strategic planning on a large scale. Military and civilian planners were contemplating new weapons systems with such long development horizons that they needed new methods for assessing the capabilities of potential enemies decades into the future. One response came from the U.S. Air Force, which created a new think tank called, simply, RAND (for Research and Development).

A key member of the early RAND staff was Herman Kahn, a man whose enormous intellect was nearly matched by his impressive physical proportions. Kahn stressed the need to bring together multiple disciplines to examine the future, a process he dubbed “interactive speculation.” In his work exploring the possibilities of the use of fusion-based superweapons such as the hydrogen bomb, famously summarized in his 1960 book On Thermonuclear War, Kahn developed and applied “scenarios”—plausible stories of the future designed to tease out the assumptions of military planners and confront them with the possible outcomes of their decisions. (Kahn is often said to have been one of the models for director Stanley Kubrick’s alarming Dr. Strangelove.) A new methodology was born, the first of many to emerge from RAND. In 1964, RAND researchers Theodore Gordon and Olaf Helmer introduced a second methodology, called the Delphi technique, with the publication of a study of the future based on the carefully assembled conclusions of more than 100 experts in areas such as space exploration, scientific breakthroughs, and weapons technology.

RAND continued to shape futures research when key staff members, believing that their methods could be more broadly applied for the good of society as a whole, left to form other organizations—the Institute for the Future, in the San Francisco Bay Area; the Futures Group, in Connecticut; and Kahn’s own Hudson Institute, in the suburbs of New York City. These and other groups brought new techniques to bear on problems of increasing technological and managerial complexity.

In retrospect, we can see that there was a certain amount of arrogance and overselling of these approaches in the early days—as, for example, when a small group of RAND “whiz kids” migrated to Washington to work for Robert McNamara in the Department of Defense during the early 1960s. They wove together a number of systems-analysis and cost-benefit techniques to create the Pentagon’s short-lived planning-programming-budgeting system and gave us the Vietnam War’s obsession with “body counts.” The limitations of these quantitative methods became even more obvious when they were applied to messy social problems. As historian Hugh Thomson observed, the systems-analysis enthusiasts learned during the era of Lyndon Johnson’s Great Society that analyzing America’s national defense needs was a lot easier than trying to solve ordinary urban problems in the city of Philadelphia.

More eclectic methods for exploring the future emerged between the mid-1960s and early 1970s, ranging from computer modeling to approaches drawn from the social sciences. At the Stanford Research Institute in California, Willis Harman developed methods combining systems theory with insights from academic disciplines such as sociology and the intuitions of some of the era’s great minds. (Anthropologist Margaret Mead and Joseph Campbell, the noted student of myth, were among the celebrity intellectuals Harman persuaded to meditate, literally, on the future in the quiet chambers of his institute.)

The growing futures movement found a foothold in the private sector, initially through the activities of a group of thinkers working at Royal Dutch Shell in the late 1960s who brought Kahn’s scenario planning to the corporation. Scenario planning is not designed to produce a single prediction but, rather, to prepare an organization for a number of plausible futures. No scenario can anticipate tomorrow’s circumstances exactly, but by thinking through the consequences of different possibilities, a corporation (or a person or society) can be better prepared to meet the unexpected. One member of the group described the process as “planning as learning.”

The Royal Dutch Shell team’s experience illustrated one of the truisms of futures work, in the public sphere as well as the private sector: Devising scenarios and forecasts is perhaps the easiest part of futurists’ work. Persuading others of the need to prepare systematically for the future is a much harder task. At Royal Dutch Shell, top executives slowly adopted the idea of mentally “practicing” for events they hadn’t thought about and putting themselves in a better position to recognize early signals of such events as they approached—and the company was well served. As a result of its scenario exercises, for example, it faced up to the possibility of disruptions in the supply of oil from the Middle East and diversified its sources before the 1973–74 OPEC oil embargo. (Later, Shell was better prepared than its competitors to deal with the collapse of oil prices in the 1980s.)

By the beginning of the 1970s, the futures movement was attracting a good deal of public attention. “Future shock,” the idea embodied in Alvin Toffler’s best-selling 1970 book of that name, became a household term. Sociologist Daniel Bell’s more scholarly The Coming of Post-Industrial Society (1973) reinforced the movement’s academic legitimacy. Frightening predictions, such as those in The Population Bomb (1968), by Stanford University biologist Paul Ehrlich, stirred public controversy. Despite the war in Vietnam, it was a time of general optimism in the social sciences: Economists aspired to engineer uninterrupted prosperity; sociologists hoped to address the root causes of poverty. In this intellectual climate, dozens of futurist courses and a number of degree programs on the future were created in colleges and universities around the country.

At the same time, the federal government began in earnest to embrace long-term thinking in fields beyond defense. Three government institutions began to devote serious resources to looking ahead: the Congressional Office of Technology Assessment (OTA), the Congressional Research Service (CRS), and the Congressional Budget Office (CBO), which produces the long-term federal budget projections that guide much of our political debate. By 1975, CRS had a Futures Research Group, with five analysts dedicated to helping Congress deal with longer-term issues. OTA produced assessments of emerging technologies in areas ranging from aging, agriculture, and alternative fuels to waste management. In 1978, Edward Cornish, the president of the World Future Society, declared that “Congress is definitely out ahead of the rest of the government in its futures activities. . . . Congressmen and their staff are searching for new ways to make government more anticipatory.” Congress was not the only arm of government with an interest in future studies. The National Science Foundation, for example, commissioned an overview of the emerging field under its Research Applied to National Needs Program.

In 1977, President Jimmy Carter asked the White House Council on Environmental Quality and the State Department to prepare a report on “probable changes in the world’s population, natural resources, and environment through the end of the century.” Published just before Carter’s defeat in the 1980 election, the sobering Global 2000 Report to the President fed shredders in the Reagan White House yet went on to become one of the most popular reports ever produced by the U.S. government, appearing in seven foreign languages and selling 1.5 million copies.

The futures movement reached what was arguably its high-water mark in the United States in 1980. As The Global 2000 Report circulated among policy elites, Toffler’s The Third Wave, a compelling sketch of the information revolution’s social and economic ramifications, brought futures thinking to a mass audience. Images of an emerging “information society” were appearing in every future-oriented publication, and a general assembly of the World Future Society set an attendance record that has never been broken.

Yet a reaction against futures thinking was already under way. Critics could point to failed prophecies (whatever happened to the “leisure society” that Bell and others had predicted as a result of growing automation in industry?), conflicting forecasts (growth versus eco-catastrophe), and many examples of studies that lacked methodological rigor. Perhaps more important, many people were disturbed by some of the field’s images of the future. Economists, business leaders, and politicians had no problem with Herman Kahn’s optimistic scenarios of rapid worldwide economic growth, but most of them rejected the growing gloom-and-doomism in some futures work, such as the famous 1972 report to the Club of Rome, The Limits to Growth, with its headline-grabbing declaration: “If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next 100 years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity.”

Bell and other thinkers had once hoped for the rise of a disinterested discipline of future studies, but critics increasingly complained of pop futurism and of a politicization that yielded predictions suiting their authors’ existing policy preferences a little too conveniently. One such critic dismissed Carter’s Global 2000 report as “globaloney.”

Frontal political attacks on the size and role of government, crystallized in the election of Ronald Reagan as president in 1980, reduced public confidence in the government’s ability to plan for and shape the future. The growing enthusiasm for market-based solutions undercut the very premises of public-sector long-term planning. The Futures Research Group at CRS was eliminated in the early 1980s, and Congress put OTA out of business in 1995, acting on a suspicion that its studies had a liberal bias and that its version of technology assessment was really about “technology arrestment.” In 1989, a former director of the Congressional Clearinghouse for the Future told an interviewer, “I think most people in the Reagan administration believed you didn’t really need to think through future problems if you didn’t see the government as being one of the big players in solving them.”

Another cause of decline in futures thinking has been the passing of many leading figures. The first generation of people to explore the future seriously included a high proportion of brilliant men and women who were eminent in their own disciplines but were attracted to the field because it allowed them to think on a larger scale. The loss of people such as Kahn, Mead, Harman, John McHale, Donella Meadows, Kenneth Boulding, and Buckminster Fuller lowered the IQ level, visibility, and legitimacy of the whole field.

Then came the roaring 1990s. American capitalism was vindicated, globalization was in full swing, inflation was down, and the only trend that mattered was the direction of the NASDAQ. The touchstone year 2000 had been the subject of countless prognostications, from Edward Bellamy’s 1888 novel Looking Backward to Herman Kahn’s The Year 2000 and The Global 2000 Report. Yet when 2000 actually arrived, long-term thinking in the United States was in sharp decline, and Americans were preoccupied with immediate problems. It did not help the case for a more forward-looking orientation that the biggest future issue in the public eye was Y2K, a widely predicted meltdown of the world’s data systems as calendars turned over to the new millennium. The world held its breath, and nothing happened.

Though myopic hedonism had American culture and politics in its grip throughout much of the 1990s, important developments were under way that would deeply affect thinking about the future. The epicenter of methodological innovation left the think tanks on the two coasts and shifted to a brilliant group of eccentrics in the New Mexico desert, at the Santa Fe Institute. Drawing on lessons from phenomena as diverse as ant colonies, Internet traffic, and life at Irish pubs, they began to develop theories and tools to take on the most critical gap in our understanding of our evolving world: complexity itself.

The Santa Fe Institute attracted top-level people in many different fields, from neuroscience to meteorology. Their shared focus was an effort to understand the common underlying structural and behavioral features of complex systems that display properties such as self-organization. “We are trying to understand how patterns emerge from total randomness,” then-president Ellen Goldberg explained a few years ago.

This work on complexity has not solved the intrinsic difficulties in looking ahead, but it has brought something important to the effort: a sense of humility and awe before the difficulty of the task, and a better understanding of the limits of human cognition. It has highlighted the inability of trend extrapolation and mechanistic models of the world to capture the inherent uncertainties of open, nonlinear systems with complex feedback loops, in which small perturbations can sometimes cause large and unpredictable effects.
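
The point can be made concrete with a small numerical sketch, offered here only as an illustration and not drawn from the Santa Fe Institute’s own models: the logistic map, a one-line nonlinear feedback rule, iterated from two starting values that differ by roughly one part in a billion. Within a few dozen steps the two trajectories bear no resemblance to each other, which is precisely the sense in which trend extrapolation fails for such systems.

```python
# Illustrative sketch (hypothetical example, not from the article): sensitivity
# to tiny perturbations in a simple nonlinear feedback system, the logistic map.

def logistic_map(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the map from x0 and return the full sequence of states."""
    states = [x0]
    for _ in range(steps):
        states.append(logistic_map(states[-1], r))
    return states

if __name__ == "__main__":
    baseline = trajectory(0.400000000, steps=50)
    perturbed = trajectory(0.400000001, steps=50)  # shifted by one part in a billion
    for step in (0, 10, 20, 30, 40, 50):
        gap = abs(baseline[step] - perturbed[step])
        print(f"step {step:2d}: baseline={baseline[step]:.6f} "
              f"perturbed={perturbed[step]:.6f} gap={gap:.6f}")
```

By roughly the thirtieth iteration the gap is as large as the values themselves, even though the rule governing each step is perfectly known; the uncertainty comes from the feedback, not from ignorance of the mechanism.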

While it dampens hopes that prediction will ever achieve a high degree of accuracy, complexity theory points to better approaches in dealing with surprise, disruption, and uncertainty. We must both prepare for the unexpected, in part by constantly revising our “situational” awareness of the present, and work toward creating the kinds of long-term outcomes we want by crafting well-considered images of the future.

Simply being more attuned to the world around you is one of the best insurance policies against a surprise-filled future. Karl Weick, a professor of organizational behavior and psychology at the University of Michigan, has studied organizations that do a good job of “managing the unexpected” and found that they share a number of traits that have little to do with traditional notions of futures research. These “high-reliability organizations,” as he calls them, focus on failures and learn from them, do not simplify the complex, are hyperaware of their operations and surroundings, build in resilience to keep errors from cascading out of control, and distribute decision-making down and around, making sure that experts get heard, not just the boss. These characteristics make an organization “mindful” and better able to detect surprises when they are new, small, and seemingly insignificant—before they become five-alarm fires.

A recurrent theme in efforts to view social systems through the lens of complexity is that seemingly small perturbations in widely shared images of the future can sometimes open up large new realms of possible behavior, creating chain reactions of self-organizing change. This insight actually emerged in some of the early work in future studies. The economist Kenneth Boulding put the matter clearly: “The human condition can almost be summed up in the observation that, whereas all experiences are of the past, all decisions are about the future. The image of the future, therefore, is the key to all choice-oriented behavior. The character and quality of the images of the future which prevail in a society are therefore the most important clue to its overall dynamics.”

The Dutch historian Frederick Polak, one of the founders of the futures movement in Europe, argues in his intellectual history of Western civilization, The Image of the Future (1973), that the heights of classical civilization, Judaic culture, Islamic culture, the Renaissance, the Enlightenment, and the early industrial era were all preceded by daring imaginative leaps toward new visions of human possibility. Turning to the present age, however, Polak offers a terrifying depiction of modern cultures that repress fears of what tomorrow may bring, their imaginative capacities crippled by pervasive cynicism, lacking any compelling vision of human possibilities beyond riches and technological power. Polak argues that the only hope for cultural revitalization lies in rekindling the social imagination and once again exploring the possibilities of a better society.

If what Boulding calls the “character” of our images of the future needs to be more positive and inspirational, what he calls the “quality” of those images needs to be realistic. Research by psychologists such as Nobel laureate Daniel Kahneman at Princeton University and Martin Seligman at the University of Pennsylvania has shown that optimists often believe that they have much more control over events than they actually do. They tend to underestimate (often by orders of magnitude) the costs and effort needed to accomplish longer-term objectives. A willingness to dare is an indispensable quality, but in a nation of optimists the cautionary understanding of Kahneman and his colleagues is a useful tonic. It underscores the need to combine strongly positive images of the future with a willingness to check one’s convictions and perceptions constantly against reality.

At the beginning of a new millennium, the future’s opportunities and dangers are calling, but we are largely deaf to them. We pay less attention to the long run today than we did in the 1970s. Michael Marien, who edits Future Survey, the leading review of books and articles related to the future, estimates that roughly half as many writings on the future are being published today as in the mid-1970s.

But this is not the whole picture. While formal study of the future declined in the United States, dozens of other countries launched elaborate foresight exercises to examine their futures in the post–Cold War order. These countries included Norway (Norway 2030), Germany (Futur), Great Britain (UK Foresight Project), Finland, Australia (Australia 2013), New Zealand (The Foresight Project), the European Commission (Europe 2010), Poland, and Kenya (Kenya Scenarios Project). The future is also being seriously explored through work on other topics, such as “sustainable development”—but again, more outside the United States than within.

These efforts have surprising parallels in the private sector. While long-range planning in the public sector is frequently denigrated in the United States, many corporations are intensely interested in thinking about the future. Management schools and professional journals are full of discussions about the need to create “learning organizations” and other means to institutionalize constant adaptation to change. Businesses devote enormous resources to efforts to anticipate new markets, products, and technologies, and they are avid consumers of traditional economic and demographic forecasts. Many of the best-run transnational corporations have been developing sophisticated efforts in such fields as environmental scanning, issues management, and scenario-based planning.

Another hopeful development is the emergence of images of the future that appear to be both positive and realistic and that transcend many of the divisions and arguments of the past. The shift is visible in the many conferences organized by the World Future Society between 1971 and 2005. The earlier conferences were wracked by stormy debates: growth vs. no-growth, high tech vs. appropriate technology, conventional health care vs. holistic health, the political Left vs. the Right, and so on. Later conferences focused on more integrative and hopeful topics: “sustainable development” strategies to promote economic, environmental, and social well-being over the long run; an “environmental revolution in technology” that applies leading-edge scientific knowledge to develop environmentally advanced technologies; “complementary and alternative medicine”; and a “radical middle” politics that takes a long-term perspective, faces up to major challenges ahead, and seeks to find a higher common ground that integrates the best insights from the Left, the Right, and everywhere in between.

Perhaps the most important lesson for thinking about the future was summed up by Alan Kay, whose pioneering work on the graphical user interface at Xerox PARC became the model for the first Apple Macintosh and, later, the basis for Windows. “The best way to predict the future,” Kay said, “is to invent it.”
