Nobody Becomes a Psych Major to Study Statistics: Part I

Everything psychologists believe is shaped by our paradigm

Posted Mar 31, 2010

Nobody becomes a Psych major to study statistics.  Trust me, I know.  I teach statistics to psychology students.  Nonetheless, Psychology majors typically spend more time studying statistics and research methods than almost any other undergraduate major.  Why?  Because it's part of our paradigm.

Ask any other student in the natural sciences – do they take two semesters of stats and research methods?  I doubt it.  They take lots of labs to learn the techniques and methods of their fields.  But so do we.  Whole courses in experimental design and methodology?  Probably not.  (Exceptions to this are those fields most similar to psychology in their research problems – like field ecologists.)

Graduate students in psychology can spend years studying statistics.  I certainly did.  Most summers I spend at least a few days at 'stats camp' to keep up my skills and to learn new methods.  Not because I love the math or enjoy spending long hours struggling with the computer, but because I care deeply about my research.  Statistics provides me with an invaluable tool for understanding the things I really am interested in - parent-child relationships and kids' relationships with their friends and romantic partners.  In Psychology, our paradigm leads us to understand those problems through statistics and by attacking research problems in particular ways.  Every year my research becomes more complicated.  And every year I learn new and more sophisticated methods to approach an understanding of it.

What is a paradigm? 

Thomas Kuhn's influential book, The Structure of Scientific Revolutions (1962), describes a paradigm as a worldview shared by scientists within a given field that includes theories, constructs, and shared methodologies that define what practitioners do and how they approach and evaluate knowledge.  Paradigms define what features of the scientific environment we attend to, how we define problems, and how we try to answer them.  For example, to understand individual differences in intelligence, I wouldn't investigate the association between thumb length and IQ.  Why?  Because it doesn't make sense.  Specifically, I can't think of a good causal explanation within my theory or paradigm that would make this a sensible topic of study.  Well, I might be able to if I really tried.  For example, I can think of some far-fetched causal notions that build on Stephen Jay Gould's book The Panda's Thumb having to do with the ability to manipulate objects, the developmental association of manipulation and brain development, and spatial ability.  But it would be, as I said earlier, far-fetched.  It seems too unlikely to be seriously considered.

If you told me that YOU had done such a study or developed such a theory, I'd laugh at you.  And I'd laugh for the very same reasons we laugh at the intelligent SCIENTISTS who used to study the association of head shape and personality.  It doesn't make sense within our current paradigm.  But in truth, those people were not stupid and they don't deserve to be laughed at.  They were just wrong.  We rejected their paradigm and moved on.  People study brain anatomy and blood flow now and look at how they may be associated with personality or morality or mental illness.  Do we laugh at them?

Paradigms and Measurement

Not only do paradigms shape the questions we ask, but also what we accept as evidence - our epistemology.  For example, if I were interested in doing a study on individual differences in intelligence, I would collect data using instruments or tasks that had good psychometric characteristics.  (Perhaps redundantly, 'good psychometric characteristics' means that an instrument has good qualities for measuring psychological characteristics.)  What are good psychometric characteristics?  They are things like a high intercorrelation between the different items used to assess intelligence, a normal distribution of scores, and decent variance.  I would need to demonstrate that the test showed similar properties to other measures of intelligence and seemed to be associated with the kinds of things that we think intelligence is associated with and not with things it shouldn't be.  If you took Research Methods in Psychology, I can imagine you nodding your heads and mumbling words like validity and reliability.  If I did all these things, you, and my fellow psychologists, might accept my conclusions.  My research - i.e., the activities I engage in to build new knowledge - was conducted in a way that psychologists have agreed knowledge can be gathered, and you'd probably take my findings seriously.
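For readers who like to see the arithmetic, that "high intercorrelation between items" is usually summarized with an index like Cronbach's alpha (one standard measure of internal consistency - not something the post names, just a common choice).  Here is a minimal sketch using invented data: five respondents answering three test items.

```python
# A hedged sketch of Cronbach's alpha, a common index of internal
# consistency ("high intercorrelation between items").  The scores
# below are invented purely for illustration.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, one list per test item,
    all the same length (one score per respondent)."""
    k = len(items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Three items whose scores rise and fall together across five
# respondents -> the items "hang together," so alpha is high.
items = [
    [2, 4, 4, 5, 3],
    [3, 4, 5, 5, 2],
    [2, 5, 4, 4, 3],
]
alpha = cronbach_alpha(items)
```

With these made-up numbers alpha comes out high (near 0.9); items that did not covary would drag it toward zero, which is one concrete way a measure fails to show "good psychometric characteristics."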

On the other hand, if, instead of studying intelligence this way, I asked a psychic to give me a reading on my participants, you wouldn’t believe my results. We're scientists.  We just don't do that.

In fact, you probably wouldn’t believe my results if I asked five people to talk to each participant and rate their intelligence on a 1-5 scale.  Why?  Because that’s not what we do in psychology. I say probably wouldn’t, because you might accept these ratings under two conditions. 

First, you might accept my findings if I could demonstrate that my ratings had a very high concordance with established measures of intelligence.  For example, if the mean ratings correlated at .95 with the Wechsler Intelligence Scale for Children, and I could give you a good explanation for why I chose to measure intelligence via ratings rather than administering the WISC, you might accept this method as an acceptable substitute for a measure that psychologists typically use.
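That ".95" is a Pearson correlation coefficient.  A minimal sketch of how it would be computed, with invented ratings and invented WISC-style scores (none of these numbers come from a real study):

```python
# Hedged sketch: Pearson's r, the statistic behind "correlated at .95
# with the WISC."  All data below are fabricated for illustration.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean 1-5 ratings for six children, and hypothetical
# WISC-style scores for the same children.
ratings = [2.1, 3.4, 3.0, 4.2, 4.8, 2.6]
wisc_scores = [88, 102, 99, 115, 124, 93]
r = pearson_r(ratings, wisc_scores)
```

Because the invented ratings track the invented test scores almost perfectly, r lands near 1; a correlation that high is what would make the ratings a plausible stand-in for the established measure.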

Alternatively, you almost certainly would accept my research if I didn't call the ratings a measure of intelligence, but instead called them a measure of assessed intelligence or perceived intelligence.  In other words, ratings would be accepted as a measure of how intelligent someone seemed or was perceived to be, but not as a measure of how intelligent they were or how much intelligence they had (both intrinsic qualities of the person).

Seem strange?  In some ways it is.  That's how a paradigm works.  Psychologists typically measure attitudes or perceptions by asking people their opinion.  If I label my ratings of how intelligent a person is as a measure of perception, that's a normal practice, and you'd believe that's what I had done.

That's how psychologists work.

Next time: Are Intro Psych Students Lab Rats?

© 2010 Nancy Darling. All Rights Reserved