When Science Is a Projection of Researcher Bias
New data shows how researchers use math and models to share personal opinions
Updated February 1, 2026 | Reviewed by Abigail Fagan
Key points
- A 2026 study of 71 teams found that researchers tend to produce results that match their own politics.
- Pathologizing political opponents in research can mask personal opinions behind clinical terminology.
- Scientific integrity requires distinguishing between what a researcher says "is" versus what "seems to me."
A landmark study recently published in Science Advances (Borjas & Breznau, 2026) confirms a disturbing reality for our data-driven era: identical datasets do not produce identical truths. When 71 research teams were given the same data to determine if immigration reduces support for social welfare, they generated 1,253 ways of analyzing the results. The findings did not emerge from the numbers; they were dictated by the researchers’ pre-existing policy preferences.
By inserting individual subjectivity about what policies are optimal into the heart of scientific research, we have succumbed to what the 19th-century critic John Ruskin called the Pathetic Fallacy.
The Pathetic Fallacy
Derived from pathos, Ruskin’s term refers to an emotional error rather than a pitiable one. It is the human tendency to project internal moods onto external reality. It resembles psychological projection, though Ruskin described it in the creation of art and poetry. To the poet, it manifests as cruel sea foam or dancing leaves.
Ruskin was scathing about those who rely on this fallacy. He viewed it as a sign of a mind so overpowered by emotion that the observer loses the ability to see the world as it actually exists. For Ruskin, the highest level of intellect belongs to those who perceive and represent the thing with absolute clarity, regardless of their own feelings. He argued that the creator who imbues nature with emotion is distorting reality to serve their own ego.
Intellectual honesty requires a simple distinction: Can we say something is, or only that it seems to me to be? The former is a statement of fact; the second is an honest way to explain one’s own response. To say it seems to me preserves the integrity of the object. It reveals a person who understands that while their emotions and the external world are connected, the world is not merely an extension of their own feelings.
The Art and Science of Statistics
The discipline to distinguish between the two often vanishes when researchers treat a mathematical model as an impartial judge of social policies. As the Science Advances experiment demonstrates, a mathematical model is easily massaged into shapes that reflect the researcher’s own preconceptions about the world.
If data were an unchangeable reality, 71 teams would converge on similar results. Instead, the study produced a chaotic map of many different models of the world: many versions of it seems to me to be rather than a description of what is. This variance wasn't caused by fake news or math errors, but by what the authors describe as Researcher Degrees of Freedom.
Researcher Degrees of Freedom
Before a single calculation is run, a researcher must make dozens of small, subjective decisions:
- Which variables deserve more weight?
- Which "noisy" data points should be excluded?
- Which statistical thresholds signify a significant result?
These decisions are influenced by the researcher's biases far more than many would like to admit. As the study reports, “no two teams arrived at the same set of numerical results or took the same major decisions during data analysis.” Like the poet Ruskin was criticizing, these researchers do not reveal the world as it is; they imbue a model with their own sensibilities, dictating how we are supposed to react emotionally to the result.
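A small sketch can make this concrete. The code below is purely illustrative (it is not the study's actual analysis, and all variable names are invented): two teams apply equally defensible pipelines to the same simulated dataset, differing only in one "reasonable" cleaning rule, and arrive at different numerical answers.

```python
# Illustrative sketch of researcher degrees of freedom: one dataset,
# two defensible analytic pipelines, two different results.
# All data and names here are hypothetical, simulated for illustration.
import random

random.seed(42)

# Simulate 200 records of (immigrant_share_pct, welfare_support_score)
# with a weak, noisy relationship.
data = []
for _ in range(200):
    x = random.uniform(0, 30)            # immigrant share, percent
    y = 50 - 0.1 * x + random.gauss(0, 10)  # welfare support score
    data.append((x, y))

def slope(points):
    """Ordinary least-squares slope of y on x."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in points)
    var = sum((p[0] - mx) ** 2 for p in points)
    return cov / var

# Team A: keep every observation.
slope_a = slope(data)

# Team B: a "reasonable" cleaning rule the data alone does not dictate --
# drop the 10% of observations furthest from the mean outcome as "noise".
my = sum(p[1] for p in data) / len(data)
trimmed = sorted(data, key=lambda p: abs(p[1] - my))[: int(len(data) * 0.9)]
slope_b = slope(trimmed)

print(f"Team A slope: {slope_a:+.4f}")
print(f"Team B slope: {slope_b:+.4f}")
```

Neither team has made an arithmetic error; they simply made a different judgment call before the arithmetic began, which is exactly the kind of fork the 71 teams multiplied into 1,253 analyses.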
The 1:1 Map of Reality
In Lewis Carroll’s Sylvie and Bruno Concluded, a character describes a map with a scale of a mile to the mile. A totally accurate, world-sized map of the world. The joke highlights the futility of a model that grows so complex it overshadows the reality it’s meant to measure:
"It has never been spread out, yet," the character explains. "The farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well."
Our modern problem is the reverse: we have fetishized data without demanding it actually represent the real world. We are building maps of other maps. Believing that anything numerical is automatically objective is a fundamental problem with the Big Data age. We mistake the description for the thing itself.
This gap is widening because real-world information is becoming harder to find. National agencies, like the UK’s Office for National Statistics (ONS), are in a crisis because people have stopped answering their doors and phones. When survey response rates collapse from 50% to below 15%, the data dries up.
To compensate, agencies now fill the gaps with "estimates of estimates" or "synthetic data," which is simply data manufactured by an algorithm. It is easier than ever to generate numbers, but harder than ever to collect data that actually represents people.
To understand the trap, you have to realize what a model actually is: a guess based on a theory. It is never reality. It is a set of human assumptions where you input what you think you know to generate a guess about what you don't know. If the theory is slightly off, or if the researchers pull the wrong levers, the result is wrong. Because a model must leave out the infinite complexity of the real world to function, it is always an estimate, never a fact.
Pathologizing Normal Opinions
The danger of synthetic data and poorly constructed models escalates when researchers begin pathologizing their ideological opponents. This happens in research where certain ideological positions are treated as psychological defects rather than normal systems of belief.
For decades, social science has focused on the "rigidity of the right." In one striking example, researchers built models linking right-wing conservatism to "pathological object relations," citing "primitive defenses" and "problems in reality testing." Conversely, recent work on left-wing authoritarianism (LWA) has begun to analyze progressive activism through the lens of narcissistic victimhood and dark triad traits.
The researchers in the Borjas & Breznau (2026) study found that pro-immigration research teams were more likely to frame their results as a sign of "increased social cohesion," while anti-immigration teams framed similar data as a "reduction in cohesion."
By framing a worldview or political position as a clinical pathology, the researcher masks a subjective interpretation of the world behind a statistical model. It is entirely reasonable to say, "It seems to me that this ideology is harmful" or to point out personal disagreements. It is disingenuous to use a mathematical model to suggest that science has proven that your neighbor is mentally ill for voting in a way contrary to your own sensibilities. This error diminishes public trust in the very idea of scientific expertise.
Happy Little Histograms
Here we can helpfully return to Ruskin’s distinction and the pathetic fallacy. There is a vast difference between advancing scientific understanding of a topic and simply dressing up a political opinion in a lab coat. When researchers use the word is, in the form of a statistical model, to hide what is actually just I think, they are no longer doing science; they are doing political PR.
To stay grounded, remember this: the next time you see a headline about a model "proving" a social or public policy fact, ask whether the researcher is accurately measuring the world as it is, or merely projecting their own feelings onto the data and expecting you to treat their perspective as fact.
References
Borjas, G. J., & Breznau, N. (2026). Ideological bias in the production of research findings. Science Advances, 12(1).
Breznau, N., et al. (2022). Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. Proceedings of the National Academy of Sciences, 119(44).
Carroll, L. (1893). Sylvie and Bruno concluded. Macmillan and Co.
Costello, T. H., et al. (2022). Clarifying the structure and nature of left-wing authoritarianism. Journal of Personality and Social Psychology, 122(1).
John, O. P., & Soto, C. J. (2007). The importance of being valid: Reliability and the process of construct validation. In R. W. Robins, R. C. Fraley, & R. F. Krueger (Eds.), Handbook of research methods in personality psychology. New York: Guilford Press.
Jost, J. T., et al. (2003). Political conservatism as motivated social cognition. Psychological Bulletin, 129(3).
Partington, R. (2024). ‘In the dark’: Collapse in ONS survey response rates alarms UK experts. The Guardian.
Royal Economic Society. (2025). Royal Economic Society warns UK faces crisis in economic statistics.
Ruskin, J. (1856). Modern Painters: Volume 3. Smith, Elder & Co.
Yalch, M. (2024). Association between dimensions of pathological object relations and right-wing conservatism. International Journal of Applied Psychoanalytic Studies, 22. doi:10.1002/aps.1899