Editing Matters

Should journals require authors to state conflicts of interest?

Posted Oct 06, 2018

Nine years ago, I was asked to start writing a blog for Psychology Today. I did it for about six months while I was a stay-at-home dad with my daughter. Then, I went back to work full time and blogging just didn’t seem important. I would occasionally write something, but by the one-year anniversary, I had given it up. A few weeks ago, I was asked to restart the blog and I’ve reluctantly agreed.

The reluctance has two sources: 1) I quit all social media, so I doubt anyone will read this – I will certainly not advertise it. And 2) when I initially started the blog, Psychology Today said that they would compensate me based on the number of readers. I had some readers, ran the numbers, and realized that I was owed about 70 cents. I’M STILL WAITING FOR MY CHECK.

I got over #1 because some of the things I want to say in this (new) blog are things that I just want to think about and write down, so it doesn’t really matter if anyone reads it, and I wrote #2 just so that I can reference the kid in Better Off Dead who wants his two dollars.

So, I’m back.

One of the things that happened in between then and now is that I became an editor. I was appointed an Associate Editor for Developmental Psychology, which is a position I’ve held for the last 8 years (I am officially finished at the end of this calendar year, but will stay on handling any outstanding papers that have received R&Rs into 2019). I was an Associate Editor for Frontiers in Developmental Psychology for 3 years, and was a member of the editorial board of American Psychologist for a few years. I also edited a book on cognitive development in children’s museums (with my wonderful co-editor, Jennifer Jipson) and am writing a new book now.

Here’s what I learned: editing is hard. You’re a gatekeeper. You spend a lot of time on editing responsibilities, often for little reward. I once calculated the stipend that I got from APA for my editing duties against an estimate of the number of hours I worked; it came out to around minimum wage. You make a lot of people mad, and people are rarely happy – even when papers are accepted, authors usually don’t like you because you make them revise their original submissions, sometimes more than once. And it never ends – the manuscripts keep on coming. No matter how many action letters I write, new manuscripts seem to come in at the same rate. You always feel a weight pressuring you to handle the next manuscript – you’re partly responsible for others’ careers.

But I’m not here to complain (well, I am, but not about this). What I really want to talk about is the replication crisis. We all know what this is – lots of published psychological studies don’t replicate. There are numerous projects that try to replicate studies. Some are more principled than others, and some make more sense than others. Some concern themselves only with what replicates (and thus should be considered “science”) while others concern themselves with why a particular study does or does not replicate (which frankly is much more scientific in nature). I have lots of thoughts about this, which I’ll get to in other posts, but today I want to talk about something that I feel contributes to the process but isn’t mentioned that frequently.

Psychology – any science really – is a small field. We all know each other. And we’re mostly friends. In developmental psychology, there are people I like and there are people I don’t know well, but there aren’t that many people whom I actively dislike (except you, Reviewer 2 of the first paper I ever submitted to a journal in 1997, who said that the paper was so bad that I shouldn’t be a scientist – I know who you are and I don’t like you). But reviewing is supposed to be neutral. When I send a manuscript out for review, I’m trying really hard to make sure that the reviews are unbiased – that there aren’t personal relationships between the authors and the reviewers and that there are no conflicts of interest that would prevent objectivity.

Funding agencies (at least in the US and several other countries) require you to think about whether you have a conflict of interest with the applicant. You shouldn’t be reviewing a grant proposal from someone with whom you have a conflict of interest. It’s biased. You shouldn’t review the grant of your former graduate student or your former mentor, or of people with whom you have co-authored in the recent past – you like these people, and might look upon their work favorably. You shouldn’t review the grant of your close friends, or lovers (or ex-lovers), or partners (or ex-partners), or collaborators. A program officer at NSF once described this to me as the “pajama test” – basically, if you have seen the person in their pajamas, you shouldn’t review their proposal. Beyond the obvious, another way of thinking about this goes like this: suppose you wind up in a strange city – if you can call up the person and arrange to stay at their house, you shouldn’t be reviewing their grant.

So, why does this matter? The APA style guide talks about this at length: “In general, one should not review a manuscript from a colleague or collaborator, a close personal friend, or a recent student. Typically, the action editor will not select individuals to be reviewers in which this obvious conflict of interest may exist.” (pp. 17-18) But I do not know of a journal that requires authors to reveal their conflicts of interest. And worse, many journals encourage you to suggest potential reviewers. What’s to stop someone from gaming the system here?

As an action editor, I have often seen authors ask for collaborators to review their manuscripts. I have seen authors (usually recent assistant professors) ask for their advisors to review their manuscripts. I have seen authors (usually tenured faculty) ask for their students to review their manuscripts. Usually this is easy to catch, and you simply deny the request. But sometimes it isn’t. Sometimes an action editor doesn’t know, and this slips through.

Here’s the worst case that I ran into: I was asked to edit a paper written by a graduate student and mentor. The mentor was a tenured faculty member with a great deal of experience publishing. The manuscript was close enough in my area that I could evaluate it, but far enough from my area that I did not know many of the researchers in that subfield. I asked 5-6 people. Some were from the authors’ request list (which was fairly long), some were generated by me. After a week of sending out requests, I got no one.

So, I went back, read the manuscript again, asked a few more people. Again, some were from the authors’ request list and some were generated by me. The person that I asked from the authors’ request list was a postdoc. I didn’t know the postdoc, but I had previously asked the postdoc's mentor (who had said no), so I figured it would be OK. The postdoc said yes (as did a few other people, so I now had lots of reviewers).

I waited and the reviews came in. Two outright rejections, one pessimistic revise-and-resubmit, and one acceptance. The acceptance was from the postdoc. My own view was pretty negative, so I started to write a rejection letter. But the glowing review really stuck with me – was I missing something? Did this person know something that others didn’t? Did this person publish more in this area? So, I looked at the postdoc’s CV.

OK, full disclosure time. I should have done this before I asked the postdoc. I should have read the CV carefully. This is my mistake.

It turns out that the postdoc was a recent graduate of the same graduate program. And after a little digging, I discovered that the postdoc and the graduate student author were close friends. The senior author on the paper was on the Ph.D. committee of the postdoc.

I wrote a rejection letter. The senior author then appealed. Appeals of rejections happen in publishing – this happened to me once every 18 months or so; someone who got rejected would ask for the opportunity to resubmit. I’m usually happy to say yes to these requests. This was the first time I turned an appeal down flat.

There are two obvious fixes here. The first is to not allow authors to suggest reviewers. The second is for all action editors to ignore these suggestions. The first might be too big of a change for people; the second seems duplicitous. So, here's my fix: if you’re an author of a manuscript and you submit that manuscript for publication, you should be required to state that you have no conflict of interest with any reviewer you recommend. Moreover, the definition of “conflict of interest” should be articulated by the journal. I personally like the NSF definitions, but I recognize that this is a matter of taste. Journals (really, publishing houses) can institute this fix with little cost.

By the way, please don’t think that this is a problem that can be solved simply by open access. Open-access journals have exactly the same problem. Editors often don’t know about reviewers’ conflicts of interest, and a reviewer would have to know who the authors are in order to identify the conflict. That said, as much as I disliked my experiences with open-access publishing (which I’ll write about in subsequent entries, I promise), the one thing that I did like about my experience as an editor for Frontiers was that reviewers and editors were required to sign their names. It’s a good check on the system for a variety of reasons.

Perhaps more journals should require this. That way, even if the action editor can’t detect a conflict of interest during the review stage, it could be detected after the manuscript is accepted, and a note made. In Scott Miller’s wonderful Developmental Research Methods book, he distinguishes between “unintentional” and “intentional” experimenter bias – cases in which experimenters might intentionally or unintentionally affect the results of their experiments (see p. 106 of the 2018 edition).

What I’m suggesting here would be a way of identifying unintentional publishing biases, which might contribute to the replication crisis. Authors recommend reviewers who like them, not ones who will referee their science objectively. While I have no evidence that anyone acted in an unethical manner in my story, I wonder how often authors suggest people who might not be as objective as possible.

I point all of this out not to say that science should be abandoned or that the replication crisis points to a larger problem with psychology as a science. Rather, I point it out to indicate that we can do better. But that’s what the scientific method is all about – figuring out the ways that we are wrong in order to get closer to what is right. If ethics is part of science, then having scientists think about ethics might help with their science.