

What Are the Standards for Citing Prior Literature?

Some questions about how to cite literature in your writing and research

This is a modified version of a blog post by a group headed by my close friend and colleague Dr. Patrick McKnight (who co-leads my own lab). The Measurement, Research Methodology, Evaluation and Statistics (MRES) group started a blog on issues undergirding the foundation of science: http://www.mres-blog.us/

We decided to start the blog with a topic that has puzzled us for decades: how the hell do you decide which articles, book chapters, and books to cite when writing? None of us was ever trained in this arena and, based on our cumulative experience as scientists, editors, authors, and consumers, neither was anyone else.

There are no guidelines for proper literature citation other than thou shalt not omit citations when and where warranted. That decree comes from age-old preferences to 1) acknowledge others' work, 2) provide evidence that others have ventured into this area of inquiry, and 3) guide readers to the original sources. I am not sure all of those apply every time. In fact, I have no idea when and where the practice of citing originated. What I do know is that citing the literature is a big deal these days, and nobody really knows what rules apply, either to the references or to themselves. Let me address these two points independently.

The importance of being cited

We scientists crave attention these days in the form of citations. Our promotion, tenure, funding, and reputation rely upon these acknowledgments, perhaps more so than ever before. We have metrics that gauge our impact based upon who lists our beloved names in association with some product (i.e., our papers). Regardless of our input on that paper, we get credit. So we need others to cite us. We cite you; you cite us. It works out well because there are few restrictions on the number of citations in an article, until there are. Thus, fame comes at such a low cost to your friends that it behooves us all to spread our good deeds and ask others to recognize our brilliance in the form of a few lines in their papers. Given the rewards bestowed upon us, you, the reader, ought to appreciate the importance of citations.

There is another reason why we cite research. We need to determine the impact of research. It often takes years to go from developing a research idea to running a study, submitting the work for publication, and surviving the weeks and months of peer review before it appears in a scientific journal. After all this hard work, every scientist hopes their work will be read. Talk to any scientist and, if they are honest, they will admit that simply having people read their work is not enough. Scientists seek to influence and persuade. One way to measure this influence is through citations: scientists building on the prior work of other scientists. It sounds so simple. Citations = impact. Yet there is nothing simple about citing research or about what it means when your own work is cited.

Rules? What rules?

We cite when we want to or when we feel the need. When does that want or need arise? I have observed that my graduate students, early in their careers, cite recklessly and far too frequently. If they had their way, they would cite the etymological origins of every word, often out of fear of a plagiarism accusation or out of a strong desire to be viewed as smart and well-read. As we mature through our careers, we learn which literature stands as the seminal work, yet we remain faithful to some works and ignore others. At times, our capricious citation habits get the better of us and some "randomly selected" reviewer calls us out. Our response? We cite the omitted piece and move on. Often, that omitted piece is the work of the reviewer [ahem].

From these practices, we can surmise several (non-)rules.

1. People often cite defensively. As you mature, you become less defensive and more strategic. That behavior hardly stands as a rule but rather as a standard of practice that shifts according to your own comfort level.
2. Editors and reviewers often affect citations. I can count at least 20 instances where a reviewer or editor required me (or us) to cite material we had omitted. Those omissions had little to no impact on the final product, but they did increase the citation count of those authors, at least by one. Again, that doesn't appear to be a rule either, but rather a practice that emerges from an uncontrolled process.
3. Nobody knows whom to cite. Yeah, sure. The experts cite one another and the novices cite at random. But seriously, do we actually know whom to cite? A recent (2014) paper by Google researchers found that we scientists cite older material more frequently now than we did before. Why? Perhaps because we can now read, or at least find, those older articles with ease. If so, we feel compelled to cite the original material, and now we can and do so with increasing frequency. Is there a rule here? Nope. Still no rules.
4. Citing an article rewards its authors regardless of the validity of the findings. We often cite works to identify contributions we remain skeptical about, and yet our skepticism does not get conveyed in the citation. Instead, merely citing the article provides those authors with the increment they need for their scientific stature. So whether you produce garbage or gold, a citation confers equal credit.

Ugh!

What remains from this discussion is a clear lack of guidelines. We practice what we practice, teach students arbitrary rules that we rarely abide by, and then reward one another with stature based upon a system that seems to lack any rational structure. Actually, I don't believe that last part completely, at least not without qualification. The rational structure is "every person for him or herself." That structure provides no guidance for our junior members and often leads students to write paragraphs like this:

Self-esteem (Rosenberg, 1965; Harter, 1993; Scheier, Carver, & Bridges, 1994; Anusic & Schimmack, 2016) remains an important topic in psychology (Kalat, 2016). From the advent of self-report measures (Likert, 1932; Johnson, 2016), researchers constantly strive to find individual differences for happiness (Costa & McCrae, 1980; Ryan & Deci, 2001; Ryff, 1989; Diener, Lucas, & Scollon, 2006; Hershfield & Mogilner, 2016).

That bit above is merely an illustration. The authors cited, alive or dead, had no part in its construction. I wrote only two lousy sentences and cited 12 articles. Not bad, eh? Twenty people got credit for those two sentences. Amazing, right? Those twenty people had no hand in my writing. They may not have written anything relevant to my sentences, and yet they get credit.

Now, the example above is a good one. There is only one problem: the literature is full of horrible writing, so there is no need for a made-up example to illustrate the point. To avoid bias, we picked arguably the most important anxiety researcher of the 20th century, David Barlow, ran a Google search of his articles since 2015, and picked one paragraph from the first page of a random paper. Here is the first paragraph:

Emotion regulation is an important set of processes by which an individual manages and responds to their emotions (Gross & Muñoz, 1995). Previous researchers have shown particular interest in the regulation of distressing negative emotional states, such as sadness or anxiety (Campbell-Sills, Barlow, Brown, & Hofmann, 2006; Gross, 1998). As a consequence, emotion regulation has been increasingly incorporated into conceptualizations of psychopathology development and maintenance (Aldao & Nolen-Hoeksema, 2010; Kring & Sloan, 2010) and has also become a focus of treatment (e.g., Barlow, Allen, & Choate, 2004; Hayes & Feldman, 2004; Mennin, 2004).

There are hundreds of articles published on emotion regulation every year. This example begs the Passover question of science: why these references and these scientists over any others to support his points? Two citations are from his lab; of the other six articles, three of the researchers are from Yale, two are from Stanford, and the final one is from Berkeley. Do we choose the best scientists from the best universities? Or do we do our due diligence to find the best work, regardless of author? And what about the timing of when the work was published? Do we focus on the seminal work? The most recent work? Or do we focus on something less deliberate and purposeful, such as whatever happens to have a PDF available in a Google Scholar search?

For now, we have no solution to this citation game. What we know for sure is that if you publish more, you will get more citations. The more you publish where other people publish, the more likely you are to get cited. Conversely, if you publish in an obscure area where few (if any) readers conduct research or publish themselves, you will likely find few citations beyond your own self-citations. Moreover, if you come from a prestigious university (i.e., work there, not merely attended it), you are likely to gain more attention. I don't know if that last statement is true, but it certainly feels true. Prestige begets prestige, I believe, but that is all I have: a belief, and no citation to back it up.

Measuring the problem to find a solution

Dr. Patrick McKnight (and the rest of the MRES group) are huge fans of measurement. Without good measurement, we are lost. Lord Kelvin (1872) once said:

[A]ccurate and minute measurement seems, to the non-scientific imagination, a less lofty and dignified work than looking for something new. But nearly all the grandest discoveries of science have been but the rewards of accurate measurement and patient long-continued labour in the minute sifting of numerical results.

With that in mind, permit me to present a few metrics I made up to show how silly citation counts can be, using my two sentences above. These metrics get calculated the way Bill James calculates Major League Baseball statistics: with raw counts and ratios. Take a look at a few I computed.

METRIC 1: Citations (12) per word (23) = 12/23 = 0.52. Not sure what this number conveys other than that almost half the words are muddled by a citation. Perhaps it conveys citation space versus word space: a 1-to-1 ratio would mean that half the space gets taken up by citations. Here, we only have a .52 ratio. Perhaps...

METRIC 1b: Citations (12) per words plus citations (23 + 12) = 12/35 = .34, or 34% of the space gets devoted to citations. Sheesh! I like this one. Let's try another.

METRIC 2: Winners (20) per word (23) = 20/23 = 0.87. This one makes a little sense. Each word carries with it almost one winner. NB: I declare a "winner" to be anyone lucky enough to be cited by me.

METRIC 3: Citations (12) to winners (20) = 12/20 = 0.6. A ratio of 1 would mean every citation is single-authored. Not sure this is even relevant any longer, since very few researchers publish sole-authored articles.

METRIC 4: Sentences (2) to citations (12) = 2/12 = 0.17. I like this measure. Each citation gets only 0.17 sentences, about a sixth of a complete thought, to justify it.

METRIC 5: Sentences (2) to Winners (20) = 2/20 = 0.10

Better yet...
METRIC 5b: Winners (20) to Sentences (2) = 20/2 = 10!!! We have 10 winners per sentence. That seems rather gratuitous.

And my favorite....
METRIC 6: Readers = 0. The text is unreadable.

Finally, Michael Waltrip added:
METRIC 7: Original content, thoughts, or ideas (0) to Total word count (23) = 0/23 = 0
Ouch!
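If you want to play with these ratios on your own paragraphs, here is a minimal sketch in Python. It assumes APA-style parenthetical citations separated by semicolons (as in the phony paragraph above); its crude word and author counts may differ by a word or two from my hand tallies, and the function name is my own invention.

```python
import re

def citation_metrics(paragraph):
    """Recompute the toy metrics above for any paragraph.

    Crude assumptions: every parenthetical group holds citations
    separated by semicolons; author names are separated by commas
    or ampersands; years are the only pure numbers.
    """
    groups = re.findall(r"\(([^)]+)\)", paragraph)
    citations = [c.strip() for g in groups for c in g.split(";")]
    # "Winners": authors named across all citations (years filtered out)
    winners = sum(
        1
        for c in citations
        for name in re.split(r"[,&]", c)
        if name.strip() and not name.strip().isdigit()
    )
    prose = re.sub(r"\s*\([^)]*\)", "", paragraph)  # drop the citations
    words = len(prose.split())
    sentences = prose.count(".") or 1  # rough sentence count
    return {
        "metric 1 (citations per word)": len(citations) / words,
        "metric 1b (share of space that is citations)": len(citations) / (words + len(citations)),
        "metric 2 (winners per word)": winners / words,
        "metric 3 (citations to winners)": len(citations) / winners,
        "metric 5b (winners per sentence)": winners / sentences,
    }
```

Feed it the two-sentence monstrosity above and it returns numbers in the neighborhood of the metrics I computed by hand.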

Make up your own metric if you desire. The real problem is that we have no guidance for ourselves or our students on how to properly acknowledge one another, yet we use these citations as a high-stakes outcome in our fields. What should we do? I ask you. What should we do to fix this problem? Nothing is what is currently being done, and our collective silence suggests that nothing is considered a suitable course of action. So tell me, what do you think we ought to do? When you cite us, be sure to cite us frequently, OK?

In summary, we can create more metrics rather than rely on the h-index. The real problem is that the best way to evaluate a person's impact is not through citation counts but by actually reading the person's work. Lee Sechrest argued vehemently that we ought to evaluate our peers by actually reading their work. He lost that battle but won the war with me. I'm convinced he was right. We ought to read each other's work before gauging that person as an eminent scholar.

Alternative and Additional Points (numbered for ease of commenting later)

David Disabato added: my suggestions would be...

1. Cite prior research when your point involves a scientifically testable statement (e.g., older adults tend to experience more symptoms of dementia; Lyketsos et al., 2002). If no evidence exists for that statement, make clear to the reader that it is only your opinion.

2. Cite prior research when the ideas (e.g., a theory) behind your sentences are strongly informed by the ideas of other research or researchers (e.g., we applied cognitive-behavioral principles to the problem of dementia; Pinquart & Sorensen, 2006). I don't have a concrete answer for what "strongly informed" constitutes, but it seems like a good rough start.

3. Cite prior research that has tested the same research questions you are testing (e.g., prior research tested whether behavior therapy can help people with dementia; Teri et al., 1997). A problem arises around how close other researchers' tested questions have to be to your own specific research question to warrant citation.

Dan Blalock weighed in...

4. I (Dan) often wondered why the APA doesn't adopt the AMA format of numbered in-text citations, followed by a reference section in order of citation. This seems to resolve Metric 6 (readability) and slightly increase the burden of citing. I would hypothesize that the increased burden leads to a small increase in the thoughtfulness of citation. That burden is not ideal, but it would be minimal, feasible, and foster more intentional citing; with digital publishing and online document searching, alphabetical order seems outdated anyway. (A rough sketch of such a conversion follows the example below.)

[Consider the phony example paragraph above with numbered references:
Self-esteem[1,2,3,4] remains an important topic in psychology[5]. From the advent of self-report measures[6,7], researchers constantly strive to find individual differences for happiness[8,9,10,11,12].

Much cleaner.]
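For the curious, here is a minimal sketch of such a conversion in Python. It assumes every parenthetical group contains only citations separated by semicolons; the function name is hypothetical, not part of any citation-management tool.

```python
import re

def to_numbered_citations(text):
    """Rewrite APA-style parentheticals as bracketed numbers, AMA-style.

    Assumes every parenthetical group contains only citations
    separated by semicolons; numbering follows order of first use.
    """
    references = []  # the reference list, in order of first citation

    def number_group(match):
        nums = []
        for cite in match.group(1).split(";"):
            cite = cite.strip()
            if cite not in references:
                references.append(cite)
            nums.append(str(references.index(cite) + 1))
        return "[" + ",".join(nums) + "]"

    numbered = re.sub(r"\s*\(([^)]+)\)", number_group, text)
    return numbered, references
```

Run on the phony self-esteem paragraph, it produces the bracketed text above, plus a reference list ordered by first appearance.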

5. I love the example sentence (by Patrick McKnight) because it captures something I've done before: "Let me find the oldest citation possible and couple it with the newest one." I'm not entirely sure of the motives behind this practice, but I've seen it plenty. Are we trying to prove that we've read ALL the literature on the topic from beginning to end? Are we trying to convince readers that the phenomena are legitimate, since studies across decades support the claim? Would there be any use in an additional metric referencing the oldest work cited in support? Or the average age of citations across a manuscript? Hell, we could form a distribution of citation years for each manuscript, as the sketch below shows! I think there would be valuable information in knowing the average age of evidence supporting a new manuscript.
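Here is a minimal sketch of that "age of evidence" idea, assuming four-digit years (1900s or 2000s) appear inside the parenthetical citations:

```python
import re
import statistics

def citation_year_profile(paragraph):
    """Summarize the distribution of cited years in a paragraph.

    Assumes four-digit years (1900s or 2000s) appear inside
    parenthetical citations.
    """
    groups = re.findall(r"\(([^)]+)\)", paragraph)
    years = [int(y) for g in groups
             for y in re.findall(r"\b(?:19|20)\d{2}\b", g)]
    return {
        "oldest": min(years),
        "newest": max(years),
        "span in years": max(years) - min(years),
        "mean year": round(statistics.mean(years), 1),
    }
```

Run on the phony paragraph above, it reports an oldest citation of 1932, a newest of 2016, and a span of 84 years: exactly the oldest-plus-newest coupling described in this point.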

6. I'm sure most people reading this post are also thinking, "So what courses of action can correct this problem?" Measuring the problem is a good first step. Some journals have maximum citation limits, but there must be more refined solutions: a citations-per-page limit, or citations weighted as a function of the impact factor of the journal you are submitting to (if your paper will be cited only once, why should the 100 people you cite get equal credit with those cited by a paper that itself was cited 100 times? There is the idea that citations even out, since a more influential paper citing your work is likely to disseminate that citation to many others. But this approach doesn't get at the fact that a more influential paper should be built upon a stronger theoretical background, i.e., more citations). A toy version of this weighting appears below.
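Here is that toy version, with hypothetical inputs; the logarithmic weight is an arbitrary illustrative choice, not a vetted proposal from this post.

```python
import math

def citation_credit(citing_paper_counts, weighted=True):
    """Toy weighting: a citation from a heavily cited paper counts
    for more than one from a paper nobody cites. The log weight is
    an arbitrary illustrative choice.

    citing_paper_counts: for each paper citing you, how many
    citations that paper itself has received (hypothetical data).
    """
    if not weighted:
        return len(citing_paper_counts)  # status quo: every citation = 1
    return sum(math.log10(10 + c) for c in citing_paper_counts)

# A citation from an uncited paper vs. one from a paper cited 1,000 times:
print(citation_credit([0]), citation_credit([1000]))        # 1.0 vs ~3.0
# Under flat counting, both are worth exactly the same:
print(citation_credit([0], weighted=False),
      citation_credit([1000], weighted=False))              # 1 vs 1
```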

Johanna Folk added...(along with a silent Jeff Stuewig)
7. When I've submitted to journals with citation limits, the limit makes me think about the citations I choose but does not guide me on how to choose them (e.g., the oldest, the newest, the biggest sample). Solutions like that help with the readability issue, but not with the core problem of having no standard practice.

8. As a bit of a counterpoint, I think Jeff's point is a relevant one: citations bolster our arguments and demonstrate that our statements are "factual," or at least not something we made up. If someone has empirically demonstrated what an author is stating, I may give that statement more weight than one without empirical support.

9. Sometimes I find that what is cited to support the author's argument is not entirely relevant, but that is another problem.

10. When reading in a new area (for me), I often find citations useful for knowing where to search for relevant and related work. This practice gives a bit more guidance on the topic area than searching keywords and weeding through thousands of Google Scholar hits. [Excellent point, and precisely why Patrick McKnight recommends students begin their research reading with Annual Reviews.]

Patrick gets the last point...

11. Are all citations positive? I suspect not. If I recall correctly, in the mid-1990s Martin Seligman published a paper on the "effectiveness of psychotherapy" using data from a Consumer Reports survey. That paper has been cited more than 2,000 times (a Google search says 2,226 as of January 6th, 2017). I suspect a good portion of those citations were negative. In other words, people were citing the paper NOT because it was good science but because it was poor science. It was and is poor science. Nobody worth their weight in spit would consider effectiveness a defensible conclusion from a non-randomized, single-group, post-test-only design with data collected via survey. [Editor's note: No, I didn't cite the paper explicitly because it does not deserve any more credit. Read the paper linked above so you can reach your own conclusions, but please do not give this article any more attention.] There are countless others that I do not wish to rehash. Suffice it to say that not all citations indicate that your work is a good contribution; sometimes, a citation may be an indictment of your work.

Summary

I doubt we have any major points to add to the already muddled mess that exists in the world of citations. We cite for many reasons, many of which we do not explain well or perhaps hide from others. I end here with another point that has not been addressed and ask you, the reader, to weigh in. Could it be that we cite material not for the benefit of others but for our own benefit? We cite the literature as a bat to beat down the ignorant. I recall that a graduate student in another department (perhaps Political Science, at another university) gave a talk and cited phony articles. The audience paid rapt attention and never once inquired whether the references were real or supported his (I believe it was a male) points. I tell the students in my graduate statistics courses that statistics are tools to quantify our uncertainty; they may be used to support your claim or to beat down the ignorant. Citations may serve a similar role. They serve a much greater role today than they did in prior decades, so it behooves us scientists to understand what we are measuring and why citations mean something to us. So, I ask you, what should we do to clean up the mess?

References

NONE - Links to the original material cited are on the original blog site. Read the originals; you'll benefit more by doing so than by perusing whom we cite.

Acknowledgements

The blog post above was written by Patrick E. McKnight with the assistance of the entire MRES group. We discussed this topic during several meetings. Those who contributed to the written work received proper credit above, but here is an alphabetical list of the key contributors (affiliated or once affiliated with George Mason University):

Dan Blalock - clinical psychology intern at Northwestern
David Disabato - clinical psychology graduate student
Simone Erchov - human factors graduate student
Johanna Folk - clinical psychology graduate student
Cyrus Foroughi - NREIP post-doc at Naval Research Labs
Todd Kashdan - professor, department of psychology
Patrick McKnight - associate professor, department of psychology
Sam Monfort - human factors graduate student
Jake Quartuccio - human factors graduate student
Jeff Stuewig - research professor, department of psychology
Michael Waltrip - human factors graduate student

***Since I will only be cross-posting blog posts I am heavily involved in, please bookmark and regularly check the MRES home: http://www.mres-blog.us/

We look forward to your comments and a productive dialogue.

Dr. Todd B. Kashdan is a public speaker, psychologist, professor of psychology, and senior scientist at the Center for the Advancement of Well-Being at George Mason University. His new book is The Upside of Your Dark Side: Why Being Your Whole Self—Not Just Your "Good" Self—Drives Success and Fulfillment. If you're interested in arranging a speaking engagement or workshop, visit toddkashdan.com
