The Limits of Knowledge
The 18th-century Scottish philosopher David Hume was famously skeptical of human perceptions of the relationship between cause and effect. Causes, in Hume’s estimation, were tales “we tell ourselves to make sense of events and observations,” not necessarily a complete picture of what really triggered an event, writes Jonah Lehrer, a science journalist and the author of the new book _Imagine: How Creativity Works._ The disconnect Hume intuited is becoming more apparent in modern science, especially in medicine, Lehrer writes.
Plenty of cause-and-effect discoveries, such as smoking’s impact on mortality, are perfectly valid. But most of the clear-cut relationships have already been uncovered. As medical researchers move into ever knottier territory, parsing the threads that make up biological systems is becoming more difficult. Scientists are prone to perceptual shortcuts, misapprehensions, and oversimplifications. Because we rely so heavily on vision to construct and interact with reality, for example, we’re particularly susceptible to believing that what we see is the whole picture, even when it’s not.
Take chronic back pain. The common treatment used to be to do nothing, a slow but effective palliative. Then magnetic resonance imaging (MRI) revealed that many sufferers had severely degenerated spinal discs, and patients underwent surgery to have them removed. Researchers later discovered that the seemingly obvious causal relationship did not hold up: Some people with degenerated discs never experienced back pain. Now doctors are advised to forgo MRIs on patients with the complaint; the additional information confuses more than it clarifies.
Cases such as these have multiplied across the medical world. Yes, there are checks in place to stop scientists from prematurely believing they’ve discovered a causal relationship. The principle of statistical significance is one such check; it specifies that an experiment’s results can’t be considered valid if the outcome could have been produced by chance more than five percent of the time. But such protocols are weak in the face of science’s deep conviction that “the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts,” Lehrer writes. He refers to a 2011 study of scholarly articles in which causal relationships had been reported between certain molecules and illness. Of the 400 articles that were scrutinized, all of them published in highly influential journals, 83 percent had subsequently been retracted or revised to tone down the finding.
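The five-percent threshold Lehrer describes can be sketched as a minimal simulation; the coin-flip setup, function name, and numbers below are illustrative assumptions, not drawn from the article or the study it cites.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def p_value_by_simulation(observed_heads, flips=100, trials=10_000):
    """Estimate how often pure chance (a fair coin) produces a result
    at least as lopsided as the one observed."""
    midpoint = flips / 2
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        if abs(heads - midpoint) >= abs(observed_heads - midpoint):
            extreme += 1
    return extreme / trials

# 62 heads in 100 flips: chance alone rarely yields a split this
# uneven, so the result clears the five-percent bar.
p = p_value_by_simulation(62)
print(f"estimated p-value: {p:.3f}, significant: {p < 0.05}")
```

The point of the check is visible in the comparison at the end: only when chance could account for the outcome less than five percent of the time is the result treated as significant, and, as Lehrer notes, even results that clear this bar can later fail to hold up.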
Searching for correlations is a poor way to go about understanding the complex systems that scientists now seek to demystify. “While correlations help us track the relationship between independent measurements, . . . they are much less effective at making sense of systems in which the variables cannot be isolated,” Lehrer says. Scientists need to be more mindful of how the system they’re evaluating interacts with other systems. A drug that lowers cholesterol, for instance, may also raise blood pressure, wrecking a patient’s overall cardiovascular health. In the end, Lehrer writes, “the details always change, but the story remains the same: We think we understand how something works, how all those shards of fact fit together. But we don’t.”
This article originally appeared in print