Better Science Through Failure

Want a Nobel Prize for physics? Try this: Build a radio telescope to search the far reaches of space, curse at it because all it ever seems to pick up is static, attempt to fix the problem by coating it with aluminum tape and scrubbing it clean of pigeon droppings, and then—when you’re finally convinced the contraption is a complete failure—place a call to a fellow scientist who’s trying to figure out how to measure cosmic debris left from the Big Bang. Pause while it sinks in that cosmic debris is causing your telescope’s irritating static. Fourteen years later, book flight to Stockholm.

According to _Wired_ contributing editor Jonah Lehrer, what happened to Bell Labs astronomers Arno Penzias and Robert Wilson, winners of the 1978 Nobel Prize for physics, is often the way science works. He cites the work of Kevin Dunbar, a researcher who, in a study of scientific practice begun in the early 1990s at four Stanford University biochemistry labs, found that more than 50 percent of experiments produced unexpected results. “The details always changed,” Lehrer reports, “but the story remained the same: The scientists were looking for X, but they found Y.” Dunbar’s study showed that researchers almost always blamed their surprising findings on mistakes, even when the anomalies showed up repeatedly. That persistent denial of what they were seeing, Lehrer writes, is “rooted in the way the human brain works.”

In the past few decades, he says, psychologists have “dismantled the myth of objectivity.” Although scientists like to believe they are empiricists whose work demands obedience to the facts, Lehrer says that more often we are “actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail—it’s that most failures are ignored.”

Dunbar ran a separate experiment that pinpointed two brain centers that react to the unexpected. Students were shown film clips re-creating Galileo’s famous experiment of dropping cannonballs of different sizes from the Tower of Pisa. One clip showed a larger ball falling faster than a smaller one (a false picture of gravity’s action), while the other showed Galileo’s discovery: the two balls fall at the same rate. When college physics majors watched the manipulated clip, the region of their brains associated with the perception of errors and contradictions, the anterior cingulate cortex, was activated. That’s to be expected. But Dunbar also detected activity in the dorsolateral prefrontal cortex, an area that acts as a kind of “delete” key, suppressing unwanted information. The students, Lehrer writes, “didn’t watch the video and wonder whether Galileo might be wrong. Instead they put their trust in theory, tuning out whatever it couldn’t explain. Belief, in other words, is a kind of blindness.”
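The physics behind Galileo’s discovery reduces to one line of Newtonian kinematics. The sketch below (not part of Lehrer’s article, and assuming air resistance is negligible) shows why the ball’s mass cancels out of the fall time:

```latex
% Free fall from rest through height h under gravity g (air resistance neglected):
% the ball's weight is the only force acting, so Newton's second law gives
\[
  m a = m g \;\Longrightarrow\; a = g,
  \qquad
  h = \tfrac{1}{2} g t^{2} \;\Longrightarrow\; t = \sqrt{2h/g},
\]
% and the mass m has canceled: the fall time depends only on h and g.
```

From the tower’s roughly 56-meter top, that works out to about 3.4 seconds for either ball, whatever its weight.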

Scientists, of course, can sometimes overcome this tendency. One strategy is to admit that what appears unreal is, in fact, a possibility. Researchers on the margins of mainstream society can also have an advantage, which may explain why, as sociologist Thorstein Veblen suggested in a controversial 1919 essay, Jewish scientists such as Albert Einstein thrived in the anti-Semitic culture of Germany. And Dunbar’s research points to another fruitful avenue: diversity. The laboratories he studied all held regular group meetings where knotty problems were tackled en masse. Labs in which the scientists were all in the same field were much less efficient at solving such puzzlers than those that included researchers from unrelated fields, partly, Lehrer says, because the scientists were forced to explain their experiments in more abstract terms, which opened the way to more creative ideas.