Criticism can come from peers, mentors, review committees, editors, parents, the media, and anyone else who might be privy to your work. First we should begin with an operational definition of criticism. From Merriam-Webster we find: a critical (exercising or involving careful judgment or judicious evaluation) observation or remark. That sounds about right. We usually attach a negative connotation to criticism, but that's not necessarily the case. Criticism can come in many different forms, for example...
Praise: This is the easiest feedback to swallow. Unfortunately, it is also not very useful. Don't get me wrong, everyone wants to hear that they are doing things right, and to keep up the good work. Better than an "attaboy" is true critical praise, where insight about where you went right is revealed. Receiving praise, however, rarely motivates you to improve. That said, feel free to send me praise anytime. Please?
Constructive Criticism: Likely the easiest negative comments to swallow. The kindly advice of your peers points out ways in which you can improve. Motivated by a desire to see you achieve, constructive criticism is your best friend. It can still be tough to hear where you went wrong, but the goal of constructive criticism is to make suggestions for improvement. This is the criticism you want to take to heart, as it can only help you improve.
Self-promoting Criticism: When you have presented a research talk about disgust sensitivity and superstitious conditioning and you get the question, "Did you find any interaction with circadian rhythms?" you know you are the victim of this type of insidious criticism. The key is to recognize it for what it is: the asker wants to talk about her own research, or a clever idea she had. A common mistake in dealing with these questions is to argue against them ("Circadian rhythms have nothing to do with this.") or admit failure ("We didn't think about that at all."). A better solution is to throw the ball back into the asker's court and let her talk about her ideas ("That's an interesting point; how do you think circadian rhythms would affect the study as presented?"). This allows the asker to promote her ideas, and you appear neither petulant nor incompetent. And it has the potential to become constructive criticism if (unlike my example) the points she raises are actually related to what you are researching.
Nitpicking: These are the annoying nettles that reviewers send, usually about grammar (sometimes Latin grammar), formatting, or semantics. It is usually easiest to just correct any nitpicks. Unless you know you are right, go ahead and make whatever changes the nitpicker suggests. Since for the most part it doesn't really matter, these criticisms are just white noise.
Bad Advice: This is the worst: when what looks like constructive criticism is actually destructive. Imagine you were looking at social development in adolescent rats, and one of your measures was play-fighting (pairing the rat with a same-gender rat to see them play). And you received the note that you have to look at cross-gender social interactions in your play-fighting paradigm. This might seem like good advice at first glance; eliminating your imposed gender division of play-fighting partners sounds like something you'd want to do. Except that instead of play-fighting, you will observe mating, or at least attempted mating. So be on the lookout for bad advice. Your best bet is to solicit advice from several different sources; that way you can better separate the wheat from the chaff.
So now that we know what types of criticism are out there, how do you take this criticism well? There are books and books about it, but I think the best advice I ever got came from a Terry Gross interview with actor Seth Green. Seth was talking about not being cast in roles he really wanted:
"I never took it as a failure on my part. I kinda had an objectivity at a young age that I really held onto, where if you believe in a director or if you believe in a project, you have to respect their vision for what it is, and if you don't fit that vision then you just don't fit. And there's a lot of times that no matter how good an actor you are, you're just not right for it. And I became very aware and zen about that."
So by that reasoning, if you've chosen a good pool of peers (mentor, colleagues, etc.) from which to solicit criticism, then you can be reasonably certain that they are trying to do what is best for science when they give you feedback. Don't take it personally if your vision of a project, a paper, or a presentation doesn't mesh with their vision. Remember, integrating their suggestions into your work will help make you a better scientist. No matter what the criticism, it is always worthwhile to address it; at the very least you will dissuade others from making the same critical observations. In a future post I'll show some examples of feedback I (and my colleagues) have gotten on journal submissions, or other projects. But as a teaser, here is some feedback we received about this (now published) paper:
"Only the questionnaire data are analysed. However, the subjects had a keyboard to try to control words. Even if self-report may be an interesting behaviour to study, subject probably uses their keyboard with a high rate. What is the rate of responding in each condition? What is the rate of responding on each button of the keyboard? Is the rate of responding related to self reports? Is the rate of responding to try to make the word "bad" appearing comparable to the rate to try to make the word staying at the screen? All these questions are important because directly related to the central question of the research."
I'd call that constructive criticism. We did in fact collect response rate data, but there were no correlations. Not to mention that the question of the relationship between response rate and superstition has been well answered in previous research. To address the criticism, however, we added this line: "The tendency for these behaviors to be unique to the individual organism has made traditional operant measures, such as response rates, less useful in assessing non-contingent reinforcement."