The Problems of Science
An overview of the philosophy of science.
Posted March 4, 2019
It is sometimes said that 90 percent of scientists who ever lived are alive today, so why is science not advancing by leaps and bounds?
To call a thing ‘scientific’ or ‘scientifically proven’ is to lend that thing instant credibility. Especially in Northern Europe, more people believe in science than in religion, and attacking science can raise the same old atavistic defences.
In a bid to emulate, or at least evoke, the apparent success of physics, many areas of study have claimed the mantle of science: economic science, political science, social science, and so on. Whether or not these disciplines are true, bona fide sciences is a matter for debate, since there are in fact no clear or reliable criteria for distinguishing a science from a non-science.
What might be said is that all sciences (unlike, say, magic or myth) share certain assumptions which underpin the scientific method—in particular, that there is an objective reality governed by uniform laws, and that this reality can be discovered by systematic observation.
A scientific experiment is basically a repeatable procedure designed to help support or refute a particular hypothesis about the nature of reality. Typically, it seeks to isolate the element under investigation by eliminating or ‘controlling for’ other variables that may be confused or ‘confounded’ with the element under investigation. Important assumptions or expectations include: all potential confounding factors can be identified and controlled for; any measurements are appropriate and sensitive to the element under investigation; and the results are analysed and interpreted rationally and impartially.
Still, many things can go wrong with the experiment. For example, with drug trials, experiments that have not been adequately randomized (when subjects are randomly allocated to test and control groups) or adequately blinded (when information about the drug administered/received is withheld from the investigator/subject) significantly exaggerate the benefits of treatment. Investigators may consciously or subconsciously withhold or ignore data that does not meet their desires or expectations (‘cherry picking’) or stray beyond their original hypothesis to look for chance or uncontrolled correlations (‘data dredging’). A promising result, which might have been obtained by chance, is much more likely to be published than an unfavourable one (‘publication bias’), creating the false impression that most studies on the drug have been positive, and therefore that the drug is much more effective than it really is.
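Publication bias lends itself to a toy simulation. The sketch below is purely illustrative (the true effect, noise level, and publication threshold are assumptions, not figures from any real trial): if only clearly positive trials reach print, the published record overstates the drug's true benefit.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed true benefit of the hypothetical drug
N_SUBJECTS = 30     # subjects per trial
N_TRIALS = 1000     # independent trials run

def run_trial():
    """One trial: mean improvement across subjects (noise sd = 1)."""
    return statistics.mean(random.gauss(TRUE_EFFECT, 1) for _ in range(N_SUBJECTS))

results = [run_trial() for _ in range(N_TRIALS)]

# 'Publication bias': only trials with a clearly positive result get published.
published = [r for r in results if r > 0.3]

print(f"True effect:            {TRUE_EFFECT}")
print(f"Mean of all trials:     {statistics.mean(results):.3f}")
print(f"Mean of published only: {statistics.mean(published):.3f}")
```

Averaged over all trials, the estimate hovers near the true effect; averaged over published trials only, it is substantially inflated, even though every individual trial was run honestly.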
One damning systematic review found that, compared to independently funded drug trials, those funded by pharmaceutical companies are less likely to be published, while those that are published are four times more likely to feature positive results for the products of their sponsors!
So much for the easy, superficial problems. But there are deeper, more intractable philosophical problems as well.
For most of recorded history, ‘knowledge’ was based on authority, especially that of the Bible and white-beards such as Aristotle, Ptolemy, and Galen. But today, or so we like to think, knowledge is much more secure because grounded in observation.
Putting to one side that much of what counts as scientific knowledge cannot be directly observed, and that our species-specific senses are partial and limited, there is, in N.R. Hanson's phrase, ‘more to seeing than meets the eyeball’:
Seeing is an experience. A retinal reaction is only a physical state . . . People, not their eyes, see. Cameras and eyeballs are blind.
Observation involves both perception and cognition, with sensory information filtered, interpreted, and even distorted by factors such as beliefs, experience, expectations, desires, and emotions. The finished product of observation is then encoded into a statement of fact consisting of linguistic symbols and concepts, each one with its own particular history, connotations, and limitations. All this means that it is impossible to test a hypothesis in isolation from all the background theories, frameworks, and assumptions from which it arises.
This is important, because science principally proceeds by induction—that is, by generalizing from the observation of large and representative samples.
But even if observation could be objective, observations alone, no matter how accurate and exhaustive, cannot in themselves establish the validity of a hypothesis. How do we know that flamingos are pink? Well, we don’t know for sure. We merely suppose that they are because, so far, every flamingo that we have seen has been pink. But the existence of a non-pink flamingo is not beyond the bounds of possibility. A turkey that is fed every morning might infer by induction that it will carry on being fed every morning, until on Christmas Eve the farmer corners it and wrings its neck.
Inductive reasoning only ever yields probabilistic ‘truths’, and yet it is the basis of everything that we know or think that we know about the world we live in. Our only justification for induction is that it has worked in the past, which is, of course, an inductive proof, tantamount to saying that induction works because induction works.
It may be that science proceeds not by induction, but by abduction, or finding the most likely explanation for the observations—as, for example, when a physician is faced with a constellation of symptoms and formulates a ‘working diagnosis’ that more or less fits the clinical picture. But ultimately abduction is no more than a type of ‘backward reasoning’, formally equivalent to the logical fallacy of affirming the consequent:
If A, then B. ("If I have indigestion, then I have central chest pain.")
B. ("I have central chest pain.")
Therefore A. ("Therefore, I have indigestion.")
But, of course, I could also be having angina, a myocardial infarction, a pulmonary embolism… How am I to decide between these alternatives?
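The fallacy can be checked mechanically. A short Python truth-table search (an illustrative sketch, not part of the original argument) finds the case where both premises hold but the conclusion fails:

```python
from itertools import product

def implies(a, b):
    """Material implication: 'if a then b'."""
    return (not a) or b

# Affirming the consequent: from (A -> B) and B, conclude A.
# The inference is valid only if A holds in every case where both premises hold.
counterexamples = [
    (a, b) for a, b in product([True, False], repeat=2)
    if implies(a, b) and b and not a
]
print(counterexamples)   # [(False, True)]
```

The counterexample is exactly the physician's predicament: the chest pain (B) is real, the conditional is true, and yet the indigestion (A) may be absent.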
At medical school we were taught that ‘common things are common’. This is a formulation of Ockham’s razor, which involves choosing the simplest available explanation. Also called the law of parsimony, Ockham’s razor is often invoked as a principle of inductive reasoning, but of course the simplest explanation is not necessarily the best or correct one, and the universe is proving much more mysterious than we might have imagined, or even been able to imagine, just a generation ago.
What’s more, we may be unable to decide which is the simplest explanation, or even what ‘simple’ might mean in context. Some people think that God is the simplest explanation for creation, others that the idea of God is far-fetched.
Still, there is some wisdom in Ockham’s razor. While the simplest explanation may not be the correct one, neither should we keep labouring or ‘fixing’ a preferred hypothesis to save it from a simpler, neater alternative.
The psychological equivalent of Ockham’s razor is Hanlon’s razor: Never attribute to malice that which can be adequately explained by neglect, incompetence, or stupidity.
Simpler hypotheses are also preferable in that they are easier to disprove.
To rescue science from the problems raised by induction, Karl Popper argued that it proceeds not inductively but deductively, by making bold claims and then seeking to disprove, or falsify, those claims.
"All flamingos are pink." Oh, but look, here’s a flamingo that’s not pink. Therefore, it is not the case that all flamingos are pink.
On this account, theories such as those of Freud and Marx are not scientific insofar as they cannot be falsified.
But if Popper is right in holding that science proceeds by deductive falsification, then science could never tell us what is, but only ever what is not. Even if we did land on some truth, we could never know for sure that we had arrived.
Another issue with falsification is that when the hypothesis conflicts with the data, it could be the data rather than the hypothesis that is at fault—in which case it would be a mistake to reject the hypothesis.
Scientists need to be dogmatic enough to persevere with a preferred hypothesis in the face of apparent falsifications, but not so dogmatic as to cling to their preferred hypothesis in the face of robust and repeated falsifications. It’s a delicate balance to strike.
For the philosopher Thomas Kuhn (d. 1996), scientific hypotheses are shaped and restricted by the worldview, or paradigm, within which scientists operate.
Most scientists are as blind to the paradigm as fish to water, and quite unable to see it, let alone see beyond it. In fact, most of the clinical medical students I teach at Oxford, and who already have a science degree, don’t even know what the word ‘paradigm’ means. Plato thought that leaders ought to receive philosophical training, but these are the people who will lead us out of the next pandemic.
When data emerges that conflicts with the paradigm, it is usually discarded, dismissed, or disregarded.
But nothing lasts forever: After much resistance and burning at the stake (whether literal or metaphorical), the paradigm gradually weakens and is overturned. Examples of such ‘paradigm shifts’ include the transition from Aristotelian mechanics to classical mechanics, the transition from miasma theory to the germ theory of disease, and the transition from clinical judgement to evidence-based medicine.
In 1949, Egas Moniz received a Nobel Prize for his discovery of ‘the therapeutic value of leucotomy in certain psychoses’. Today, prefrontal leucotomy (also called lobotomy), which involves the surgical severance of most of the connections to and from the prefrontal cortex of the brain, is derided as a barbaric treatment from a much darker age.
Of course, a paradigm does not die overnight. Reason is, for the most part, a tool that we use to justify what we are already inclined or programmed to believe, and a human life cannot easily accommodate more than one paradigm.
In the words of Max Planck,
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
Or to put it more pithily, science advances one funeral at a time.
In The Structure of Scientific Revolutions (1962), Kuhn argued that rival paradigms offer competing and irreconcilable accounts of reality, implying that there are no independent standards by which they might be judged against one another.
The philosopher Imre Lakatos (d. 1974) sought to reconcile and in some sense rescue Popper and Kuhn, and spoke of programs rather than paradigms.
A program is based on a hard core of theoretical assumptions accompanied by more modest auxiliary hypotheses formulated to protect the hard core against any conflicting data. While the hard core cannot be abandoned without jeopardizing the program, auxiliary hypotheses can be adapted to protect the hard core against evolving threats, rendering the hard core unfalsifiable.
A progressive program is one in which changes to auxiliary hypotheses lead to greater predictive power, strengthening the whole, whereas a degenerative program is one in which these ad hoc elaborations become sterile and cumbersome.
A degenerative program, said Lakatos, is one which is ripe for replacement. For example, classical mechanics, with Newton’s three laws of motion at its core, although supremely successful in its day, was gradually superseded by the special theory of relativity.
For the philosopher Paul Feyerabend (d. 1994), Lakatos’s theory makes a mockery of any pretence at scientific rationality or objectivity. Feyerabend went so far as to call Lakatos a ‘fellow anarchist’, albeit one in disguise.
For Feyerabend, there is no such thing as ‘a’ or ‘the’ scientific method: anything goes, and, as a form of knowledge, science is no more privileged than magic, myth, or religion.
More than that, science has come to occupy the same place in the human psyche as religion once did. Although science began as a liberating movement, it grew dogmatic and repressive, more of an ideology than a rational method that leads to ineluctable progress.
To quote Feyerabend:
Knowledge is not a series of self-consistent theories that converges toward an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.
Feyerabend was never one for mincing his words. ‘My life’, he wrote, ‘has been the result of accidents, not of goals and principles. My intellectual work forms only an insignificant part of it. Love and personal understanding are much more important. Leading intellectuals with their zeal for objectivity kill these personal elements. They are criminals, not the leaders of mankind.’
As I argue in my new book, Hypersanity: Thinking Beyond Thinking, every paradigm that has come and gone is now deemed to have been false, inaccurate, or incomplete, and it would be ignorant or arrogant to assume that our current ones might amount to the truth, the whole truth, and nothing but the truth.
If our aim in doing science is merely to make predictions and promote successful outcomes, then this may not matter quite so much, and we can continue to use outdated or discredited theories such as Newton’s laws of motion for as long as we find them useful.
But it would help if we could be more realistic about science and, at the same time, more rigorous, imaginative, and open-minded in conducting it.
Lexchin J et al. (2003), Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 326:1167–1170.
NR Hanson, On Observation. In TJ McGrew et al. (eds.) (2009), The Philosophy of Science: An Historical Anthology, p. 432.
‘The glory of science and the scandal of philosophy’. Paraphrased from CD Broad (1926), The Philosophy of Francis Bacon: An Address Delivered at Cambridge on the Occasion of the Bacon Tercentenary, 5 October 1926, p. 67.
Max Planck (1949), Scientific Autobiography and Other Papers.
Paul Feyerabend (1975), Against Method.
Paul Feyerabend (1991), entry in Who’s Who in America.