No-Brainer?

Two recent books on the future of media go against the grain of their authors’ professions. Nicholas Carr is a journalist who has written mostly for business and technology publications but has courageously challenged some of his readers’ most cherished assumptions. In _Does IT Matter?_ (2004), he argued that the transformative power of corporate computing is overrated. In _The Shallows_ he goes further, questioning the faith of many computer industry leaders that the Web can enhance thinking and accelerate learning.

Clay Shirky, on the other hand, is a tenured professor at a major private research university, yet his heart is clearly with the amateur upstarts who doubt the need for scholarly hierarchy. While Carr does not address Shirky’s earlier book _Here Comes Everybody_ (2008) directly, he does cite a blog post in which Shirky dismisses the reverence for literary classics such as _War and Peace_ and _In Search of Lost Time_ as the “side-effect of living in an environment of impoverished access,” before today’s digital abundance. Carr fears that Shirky’s remark reflects not just a provocative pose but an emerging “postliterary mind.”

The conflict exemplified by these two authors is, in Internet time, already old. It can be traced back at least as far as Bill Gates’s _The Road Ahead_ (1995), in which the Microsoft chairman predicted that the Web would revolutionize reading and dedicated his millions in royalties to educational technology. The classic opposition salvo, Sven Birkerts’s _The Gutenberg Elegies,_ appeared even earlier, in 1994. Gates is still financing electronic learning, and Birkerts is still lamenting it.

Of the books at hand, _The Shallows_ is the longer and more earnest. The center of Carr’s argument is that the current media environment is destroying the ideal and practice of rich, contemplative reading—not always realized, but a norm of Western education—with a steady diet of electronic distraction. Carr turns the early enthusiasm for the Internet on its head. Hypertext, with its ability to jump to new pages when a reader clicks a mouse on highlighted words, appeared ready to fulfill the dream of engineer visionaries such as Vannevar Bush: linking all knowledge. But in Carr’s analysis, the ability to navigate away from conventional text to richer but more distracting resources turns out to be a bug, not a feature.

Carr has assembled a formidable body of scientific studies on the negative consequences of new media. At the core of this research is neuroplasticity, the brain’s seemingly endless ability to reconfigure itself in response to new stimuli, as established in more than 30 years of experiments by the neuroscientist Michael Merzenich, whose work Carr deeply and rightly admires. Heavy use of the Internet, according to Merzenich and the neuropsychiatrist Gary Small, strengthens some of the brain’s processes and weakens others, as neurons and synapses are shifted to the functions in greatest demand.

Magnetic resonance imaging of people while they are using the Internet shows that intensive users of Google, for example, activate a zone of the brain called the dorsolateral prefrontal cortex, little used by Web novices. The speed of the brain’s adaptation is also remarkable. After only five days of one-hour Web-surfing sessions, beginners come to use the same area as Internet veterans. MRIs performed on people while they read books show that they use regions linked to language, memory, and vision; surfers call on prefrontal sites of decision making and problem solving.

The cognitive neuroscientist Maryanne Wolf has argued that the rapid-fire decision making required to pause, evaluate, and click on links impedes our ability to make the deep connections associated with reading traditional texts. And there is evidence that the distractions of surfing raise the barrier between short-term and long-term memory that must be bridged before we can achieve a rich understanding.
Carr is right to contrast the technological impact of the pocket calculator, which eased the load on working memory and so promoted the transfer of concepts to long-term memory, with that of hypertext, which taxes our working memory more.

Not all of Carr’s examples are as persuasive. Studies of road safety support his point that multitasking tends to degrade humans’ mental performance across the board, and it’s true that television viewers remember less when a news crawl and information graphics appear onscreen than when they see and hear only the announcer. But what does it mean if volunteers who watch a presentation enhanced with sound and video remember less and report less enjoyment of the experience than those who view the text alone? Results might be different with better media materials; think of the powerful impact of photography and video on the efficacy of the civil rights movement of the 1960s. And perhaps somebody who experiences an inspiring multimedia presentation will in the long run be more motivated to read deeply into a subject than someone who recalls more of a straight lecture or text—as in the old adage that education is what’s left after you’ve forgotten everything on the exam.

Carr sometimes implies that Web users have no choice but to click on every link they come across. That’s not my experience; in fact, I’ve found so many links to be trivial that I usually don’t bother following them. And for serious study, isn’t following a hyperlink less distracting than the old process of tracking down a footnote’s source in a book or bound journal? Carr cites a researcher who fears that London taxi drivers who use new satellite navigation technologies may weaken the area of their brains that was enlarged by memorizing geography before the introduction of GPS; doesn’t this suggest that the brain’s changes, at least in adults, are reversible, that neuroplasticity works both ways? (It’s true, though, as Wolf and others have urged, that we should be cautious about technology’s impact on young people’s developing brains.)

Clay Shirky shares Carr’s low opinion of television. But while Carr regards the Web as a failed attempt to rescue serious reading from the remote control, Shirky still takes the early cooperative idealism of the Web seriously. He reminds us that cultural critics such as Harvey Swados wondered whether the paperback revolution that began in the 1930s was going to increase access to classics or flood the market with trash. It did both. Information abundance multiplies the quantity of low-grade material and reduces the average quality of media, but it also enables the experimentation that is essential to keeping a culture alive and dynamic.

The Internet is a revolutionary medium in that it allows millions of people and organizations to share ideas collaboratively at low cost, as book readers, television viewers, and even telephone users cannot. Shirky rejects the notion, advanced by Carr on his own blog, that the work of YouTube and Facebook contributors is “digital sharecropping,” uncompensated and exploitive labor for the shareholders and executives of Web media companies. Shirky counters that social networking sites are sought for “sharing rather than production,” that contributors’ works are “labors of love,” and that users desert companies that abuse their trust. In Shirky’s view, the Web is enabling a new style of generous common culture as an alternative to the professionally created conventional media that prevailed in times of information scarcity.
He sees the social Web expanding from personal expression to group mutual help, and ultimately to public and civic projects that can transform society.

Shirky, like Carr, overstates valid points. For one thing, he exaggerates the conflict between amateurs and professionals. Both have long helped and complemented each other in scientific fields such as astronomy and ornithology. Many “generous” contributors to the Web are really aspiring pros who still dream of attracting conventional agents and publishers. The nonprofessional volunteers who work on Wikipedia articles frequently insert calls for better documentation—in practice, that usually means the work of career academics and journalists. And lay collaboration is better for assembling facts than synthesizing them. That’s one reason for the survival of the print edition of the _Encyclopaedia Britannica_ despite all predictions voiced in the 1990s that it would become obsolete.

Both authors invoke history, but their examples don’t always support their points. Consider Carr’s technological determinism. He cites the early medieval substitution of space between words for the unbroken _scriptura continua_ of ancient Latin as evidence that media technology reshapes our thinking. Yet the change reflected not the advent of a new pen or writing surface but the need of early medieval Irish monks to teach Latin texts efficiently to speakers of non-Romance languages. Mechanical clocks arose as a result of religious orders’ quest for punctual observance, not the other way around. Nor did print-era cultural authorities always welcome reading as a form of mental self-discipline. In _The Nature of the Book_ (1998), which Carr doesn’t mention, Adrian Johns cites the natural philosopher Robert Boyle, who was prescribed romances to cure his melancholy, but found that fiction “accustom’d his Thoughts to such a Habitude of Raving, that he has scarce ever been their quiet Master since.”

Carr also reaches surprising conclusions on more recent media history. He considers Google a product of the efficiency movement instigated in the early 20th century by Frederick Winslow Taylor, when it is really the opposite in spirit, even if both are dedicated to reducing effort. Taylor preached benevolent imposition of a single scientifically determined method and tool for each job, disdaining workers’ individual and collective knowledge. As a search engine, Google rejects prescriptive, hierarchical library classification systems; it’s an organized anarchy (to quote a classic definition of the market) aiming to give users not necessarily what they ought to have but what most people entering a search term are looking for. Taylor’s procedures had to be followed to the letter; Google’s options encourage personalization.

Shirky, too, sometimes misdirects his historical examples. Printed vernacular Bibles may have initially interrupted “the interpretive monopoly of the clergy,” but Protestant leaders were soon persecuting Unitarians, Anabaptists, and others for their heretical readings of Scripture. London’s scientific Royal Society may have exemplified the cooperative spirit, but it was no protodemocracy; the society was originally limited to gentlemen and denied recognition to the craftsmen who actually performed many of the experiments it published. Shirky also argues that the gin craze in early-18th-century London ended with the social and political integration of the city’s poor.
But his dates are fuzzy; the (male) working class did not get the vote until 1867, more than a century after the fad’s end. Rising alcohol prices had more to do with the change. Besides, there was another gin mania in London in the early 19th century.

It is in prognosis that Shirky has the edge over Carr. Carr holds out some hope of stemming the tide of distraction, but toward the end of _The Shallows_ he confesses to backsliding into following social networking sites, a captive of his own technological determinism. Shirky, rejecting inevitability arguments, ends with a more nuanced view of the possibilities and some memorably epigrammatic advice (e.g., “Intimacy doesn’t scale” and “Clarity is violence”). Those who would save deep reading and a place for print need not more elegists but a Shirky of their own.