Intellectual Imperialism, Part I

Why refute when we can dismiss and derogate?

Posted Dec 04, 2018

This is the first of a two-part series, slightly adapted from an essay I wrote in ... wait for it ... 2002, for Dialogues, which was then a hard copy newsletter of the Society for Personality and Social Psychology.

I am re-posting it here because so much of it still applies.

Agricultural Imperialism 
A few years ago, while casually skimming through some social science journals, I came across an article on "agricultural imperialism."  I almost lost it right there.  Talk about taking a reasonable idea (imperialism) to a bizarre, exaggerated extreme.  I had visions of vast fields of wheat, armed to the teeth, prepared to wage war on defenseless fields of barley, soy, and rice. 


Until I started reading the article.  The author's point was that agricultural production was becoming so standardized, and so excessively focused on a relatively small number of crops (such as corn, rice, soy, and wheat), that many local, unique, and indigenous products were being squeezed out of the marketplace and, functionally, out of production.  And the point was not that this was, by itself, intrinsically bad.  Instead, the point was that over-reliance on a fairly small number of crops puts much of the human race at excessive risk should an act of God (drought, disease, etc.) decimate one or two particular crops.  Although the author did not quite put it this way, just as it is important to diversify your stock portfolio, it is important for us, both as individuals and as a species, to diversify our food sources.  And the creeping Westernization of agriculture threatened to undermine that diversity.

What is Intellectual Imperialism? 
I use the term "intellectual imperialism" to refer to the unjustified and ultimately counterproductive tendency in intellectual/scholarly circles to denigrate, dismiss, and attempt to quash alternative theories, perspectives, or methodologies. Within American psychology, for example, behaviorism from the 1920s through the 1960s is one of the best examples of intellectual imperialism. 

[Image: B. F. Skinner. Source: Wikimedia Commons]

Behaviorists often characterized researchers taking other (non-behaviorist) approaches to psychology as "non-scientific" (see, for example, Skinner, 1990).  And although other forms of psychology did not die out, behaviorism dominated empirical, experimental American psychology for four decades.  Behaviorism undoubtedly made major contributions, but to the extent that the scientific study of intra-psychic phenomena (attitudes, self, decisions, beliefs, emotions, etc.) was dismissed, ridiculed, or suppressed, it also impeded the field's progress.

Unjustified Rejection of Failures to Replicate

(2018 note: This was utterly true in 2002; thank goodness that the Replication Crisis in Psychology has begun to change this. Science reform efforts are not complete or universally accepted.  Nonetheless, it is considerably easier to publish replications now than when this post was originally written).

Intellectual imperialism emerges in all sorts of ways.  One common manifestation is reviewers' tendency to reject articles because they fail to find what (the reviewer believes) someone else has found.  Such studies seem to me to have unusual potential to be particularly informative and intriguing.  They raise all sorts of possibilities: the original finding or phenomenon may not be as powerful or widespread as the initial studies suggested; the new pattern may be as common as, or more common than, the original finding; or there may be conditions under which one or the other is more likely to hold.  But a common knee-jerk reaction is "There must be something wrong with the study if pattern X failed to replicate."  Certainly, this is possible.  But it is also possible that there was something wrong (or limited, or left unarticulated) in the original study or studies demonstrating pattern X.

[Image: Queen Victoria. She is not amused by your failure to replicate. Source: Wikimedia Commons]

Just because researcher Smith published pattern X first, does that necessarily mean that a subsequent study by researcher Jones, who found pattern not-X, is fatally flawed?  I do not see why – there is no logical or philosophical reason to ascribe higher quality to a study simply because it was performed first.  Doing so constitutes intellectual imperialism – unjustifiably presuming one study's findings are superior to another's.

The Un(or at least rarely)Questioned Superiority of the Experiment

Correlation does not mean causality.  It is a knee-jerk reaction we have all been taught since our first statistics class, and maybe even our first psychology class.  But it is wrong.  Correlation does mean causality.  If we discover that A is correlated with B, then we now know that either: 1) A causes B; 2) B causes A; 3) C (or some set of C's) causes both A and B; or 4) some combination of 1, 2, and 3 is true.  This is not nothing – although we do not know the precise direction or set of directions in which causality flows, we know a lot more about causality than we did before we obtained the correlation.
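Possibility 3 is easy to demonstrate with a small simulation. The sketch below (my illustration, not from any study discussed here; all variable names are hypothetical) generates a C that causes both A and B, with no causal link between A and B themselves, and shows that A and B nonetheless correlate:

```python
import random

# Illustrative sketch of possibility 3: a third variable C causes both
# A and B, so A and B correlate even though neither causes the other.
random.seed(0)

n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 1) for c in C]  # C causes A
B = [c + random.gauss(0, 1) for c in C]  # C causes B; A and B never interact

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = corr(A, B)
print(round(r, 2))  # clearly positive (about 0.5 in theory), with no A→B or B→A link
```

Finding such a correlation is informative in exactly the sense described above: it narrows the causal possibilities to the four listed, even though it cannot, by itself, pick among them.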

[Image: I have discovered the Source of Power entirely without experimentation. Source: Lee Jussim]

As far as I can tell, it has been overwhelmingly, and perhaps exclusively, experimentalists who have touted the absolute superiority of the experiment.  Researchers who routinely engage in both experimental and nonexperimental work rarely make this claim.  The alleged superiority of the experiment has been greatly exaggerated.  Whole fields with considerably more scientific status and recognition than social psychology, such as astronomy, paleontology, and evolutionary biology, do not rely primarily on experiments for building theory and discovering new knowledge.

Of course, if we compare a perfect experiment (i.e., one whose procedures are fully articulated and executed flawlessly, which leaves open no alternative explanations, and which involves no measurement error) to a realistic naturalistic study, the experiment is superior.  But not if we compare a perfect experiment to a perfect naturalistic study.  Our hypothetical perfect naturalistic study is also executed perfectly, is longitudinal (thereby ruling out the possibility that B, measured at Time 2, causes A, measured at Time 1), includes measures of all possible alternative explanations (all possible "C's" in the "C causes A and B" sense), and all its measures are free of error.  In such a case, the experiment and the naturalistic study are equally capable of assessing causal relations between A and B.

What about a realistically good experiment and a realistically good naturalistic study (which, of course, is the bottom-line issue)?  Because this issue is too complex to deal with in a short essay, I will make only a few brief points here.  Although there may be some net advantage of experiments over naturalistic studies, that advantage is small and quantitative, rather than an absolute quantum leap.  Both rule out B causing A (at least if the naturalistic study is longitudinal).  This leaves one major ground for comparing the quality of causal inferences: the ability to rule out C's.  Experiments do not necessarily rule out all C's; they only rule out C's that are uncorrelated with the manipulation (and the set of possible C's correlated with the manipulation is just as unbounded as in naturalistic studies).  An obvious case is demand characteristics: some studies may produce differences between conditions not because the manipulation worked, but because participants figured out what responses the experimenter wanted them to provide.

Naturalistic studies do nonetheless have a harder time ruling out those pesky C's.  But if there is any prior empirical work in the area, any theory, or even any related theories, the researcher will often have a good idea of which variables are the most likely contenders for C's.  They can then be measured and controlled.  Not necessarily as good as an experiment, but not a sloppy second, either, at least not if those C's are reasonably well measured.  Indeed, because researchers using naturalistic designs may be more sensitive to C's than many experimentalists, they may often make more of an effort to include, measure, and control those C's in their designs.  If so, at least some naturalistic studies may do a better job of ruling out C's than some experiments.
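What "measured and controlled" means statistically can be shown in a few lines. This sketch (again my illustration, with hypothetical variable names) reuses the shared-cause setup from before, but this time C has been measured; residualizing A and B on C and correlating the residuals (a partial correlation) removes the spurious association:

```python
import random

# Sketch of controlling a measured confound C: compare the raw A-B
# correlation with the partial correlation holding C constant.
random.seed(1)

n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 1) for c in C]  # C causes A
B = [c + random.gauss(0, 1) for c in C]  # C causes B; no A-B causal link

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def residualize(y, x):
    """Residuals of y after removing its linear dependence on x (simple OLS)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x
    )
    return [b - my - beta * (a - mx) for a, b in zip(x, y)]

raw = corr(A, B)                                          # inflated by shared cause C
controlled = corr(residualize(A, C), residualize(B, C))   # C held constant: near zero
print(round(raw, 2), round(controlled, 2))
```

This only works, of course, to the extent that C was anticipated and well measured, which is exactly the qualification made above.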

[Image: The Thinker at the Gates of Hell, Rodin. Who said it was easy? Source: Wikimedia Commons]

Furthermore, even if the causal inferences derivable from a typical naturalistic study are not quite as convincing as those derived from a typical experiment, the naturalistic study will often provide more information about naturally-occurring relationships than will an experiment.  To the extent that we are trying to understand basic processes, therefore, I would give the edge to the experiment.  But to the extent that we are trying to understand the role of those processes in everyday life, I would give the edge to the naturalistic study.  Whether there is any greater net increase in scientific knowledge, even of causal relationships, resulting from experiments than from naturalistic studies is, therefore, primarily a matter of opinion, perspective, and context.

Of course, as a field, we do not really need to choose.  Both experiments and naturalistic studies are extremely important, precisely because they complement each other so well.  Put this way, it probably seems obvious.  If so, then you already agree with me that any tendency toward methodological imperialism (dismissing, derogating, or giving less credence to naturalistic studies relative to experiments) is not healthy for our field.


Stay tuned for Part II, coming soon to a Psych Today blog near you.