Over the last week we’ve had news that several gene variants are responsible for some small percentage of autism cases. That came within days of another study showing how limited genetic knowledge is in disease prediction. Meanwhile, we’re obtaining so much data from the human and other species’ genomes that our computing systems can’t keep up with the mass of raw information, a “data deluge.” A few months ago some physicists, drilling ever deeper into the basic forces of the universe, thought they had found a particle that moved faster than light (they hadn’t). And there are more failures of new experimental medications than there used to be, so pharmaceutical research and development costs per successful drug keep rising.
What lesson can be gleaned from these seemingly disparate story lines? One is this: science is asking more precise questions, but the answers are harder to get.
A core concept of the Enlightenment was that the more reasoning is based on experimentation, the more we can learn about the world. The manipulation of variables, recommended in the 17th century by Francis Bacon, proved to be a turning point in the history of science. By uncovering previously invisible truths and giving human beings novel and effective ways to manage their environment, the scientific method gave the idea of progress a whole new meaning. Until then it wasn’t at all clear that civilization wasn’t in some kind of steady state, or even that we weren’t in decline from some “golden age.” But it turned out that the golden age was still ahead of us, if we were smart enough to invest in it and wise enough not to misuse the knowledge being gained.
By the late 19th century, some philosophers theorized that, given enough time and money, the scientific community would be able to come up with a unified system of knowledge of the whole of reality. That end point may never actually come, of course, but it was the product of a thought experiment about the demonstrated power of actual experiments, a kind of ideal terminus of knowledge. Certainly the Darwinian revolution gave reason to believe this could be done in the biological world, and even today physicists are aiming for a unified theory of the cosmos.
But what we seem to be discovering is that, as we dig into the weeds of the nature of reality, reality is ever more stubborn about giving up its secrets. Nowhere is this recalcitrance more apparent than in genetics and neuroscience. The more we’ve learned about the human genome, the more important proteins have turned out to be; and the more we’ve learned about brain cells, the more we’ve realized how important the connections between brain cells are (something the philosopher and psychologist William James told us in 1890!).
In a way, we’ve already plucked much of the low-hanging fruit. Though it took tens of thousands of years to get on track, once we got there, we learned fast. How low-hanging the fruit of new knowledge is depends in part on how we approach it. We seem to be in a transitional period from a marvelously rich era of discovery in the last 30 years to an era in which new concepts and methods will be required to gain access to another range of powerful discoveries.
However, this transitional period is frustrating. Just to take the case of neurological disorders like autism, it’s quite clear that, like cancer, the conditions brought under that large rubric have diverse biological roots. A couple of years ago a well-known geneticist told me he is sure that Asperger’s Syndrome, for example, is genetically quite distinct from the spectrum of autism disorders. So, again, it’s often the case that the next step gets harder and more expensive. At the same time, the external variables that can cause multiple genetic switches to be turned on and off are so numerous, subtle, and hard to control that few disease risks can be confidently predicted. Even “designer babies” wouldn’t change that.
Now we come to the heart of the problem: the basic science is more promising than ever, but the opportunities to monetize the value of that science are, for the moment at least, fewer than they were even 20 years ago. The headwinds into which investors must now sail are not simply due to easy answers like “overregulation” but reflect the nature of the science waiting to be done. America is out of the new supercollider business. The pharmaceutical pipeline is running dry, and especially in neurological disease the big players are leaving the field.
All is not lost. Government agencies like the National Institutes of Health are developing new models for data collection and empirical studies. Regulatory reforms are in process. Industry is looking to smaller and more flexible corporate arrangements. Gradually, drug testing will be liberated from the limitations of animal models by more in vitro processes, analytical devices will improve, and more powerful computers will run superior algorithms.
But in an increasingly competitive international science marketplace, who will take the lead? For the United States’ future in medical research, this set of factors calls on a virtue that is not prominent in the American character: patience. Our famous “can do” attitude has tended to devolve into a “do it now” culture. Investors and corporate boards have come to expect annual, semi-annual, or even quarterly rewards. The political system—and perhaps crucially the national security science establishment—will have to play a key role in providing leadership and incentives to keep this country in the front ranks of discovery. For the nation that can most readily lay claim to being the child of the Enlightenment, this will be an especially provocative challenge.
And the answer is “both.” If we try.