When I was a student at the University of Bielefeld, Germany, most of our exams were oral. Only statistics and other methods courses were tested in writing. The protocol of the half-hour exam was that you presented a short review of a particular topic within the subject area and then answered the two examiners' questions covering the entire area. A credible rumor was that each examiner, typically one professor and one assistant, received 10 Deutschmarks per exam, which would buy four lunches in the university cafeteria.
I enjoyed these oral exams because I had a theory of how to improve my grade without working harder. What I had to do was avoid the most popular topics for the initial presentation. From conversations with friends, I knew what these topics were. The subject area of physiological psychology, for example, included the topic of "sleep and dreams." No one wanted to talk about neurotransmitters. I remember that I did not talk about sleep and dreams. In the area of personality and "differential psychology," most students wanted to talk about psychoanalysis. In this case, I broke my routine and went with the majority. When I announced my intention to Professor Streufert, he had a jaded look on his face. I then played my maverick card and told him that I would not talk about the "Three Essays on the Theory of Sexuality," but about what Freud called his "meta theory" of mind, you know, the id, the ego, and the super-ego. That worked. Professor Streufert paid attention.
Essentially, my strategy was to solve a discoordination game, assuming that I would reap a higher payoff if I did what most others did not do. Because I had good information about the intentions of others, and because I noticed that others were not on to the benefits of discoordination, the strategy worked pretty well. I was also concerned about another aspect of non-independence among the exams. Would it be best to go up after weak students or after strong students? The former seemed more promising. I had a hunch that the examiners' evaluations would show a nice contrast effect (akin to Streufert's perking up when not hearing about the three essays). The problem was that it was not easy to get a good fix on who went before me and how well they did. Besides, there was little choice in where to be placed in the sequence of examinees. To get at least an idea of whether I should worry about sequence effects that could bias how I was evaluated, I asked my examiners in social psychology, Professors Abele and Schultz-Gambard, whether they had noticed any contrast or assimilation effects in their own decisions. They said no.
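The discoordination logic can be put in miniature code: the payoff from a topic shrinks with the share of other examinees who choose it. The topic names and payoff numbers below are invented for illustration, not taken from the exam itself.

```python
# Toy sketch of the discoordination (anti-coordination) idea:
# a topic's payoff falls the more other examinees pick it.
# Topic names and numbers are hypothetical.

def expected_grade_boost(my_topic, others_topics, base_boost=1.0):
    """Boost shrinks in proportion to the share of others choosing the same topic."""
    if not others_topics:
        return base_boost
    share = others_topics.count(my_topic) / len(others_topics)
    return base_boost * (1.0 - share)

others = ["sleep and dreams"] * 8 + ["neurotransmitters"] * 2
print(expected_grade_boost("sleep and dreams", others))    # popular topic: small boost (~0.2)
print(expected_grade_boost("neurotransmitters", others))   # unpopular topic: large boost (~0.8)
```

The payoff structure is the mirror image of a coordination game: you win by being where the crowd is not, which is exactly why knowing the crowd's intentions mattered.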
The social psychological literature is of course full of demonstrations of sequence effects: primacy, recency, contrast, assimilation. The intellectual root of interest in these phenomena lies in the psychophysics of the late 19th century, which tends to make the subject matter a bit dry, very perceptual, very cognitive, and very mathematical. From time to time, though, a powerful real-life demonstration comes along. Here is one.
In a new article in the Proceedings of the National Academy of Sciences (PNAS), Danziger, Levav, and Avnaim-Pesso analyzed 1,112 bench rulings in a parole court. They then plotted the proportion of favorable rulings over the course of the day. The striking finding was that this proportion started out high, at about 65%, and then dropped off rapidly. By the time the next meal break came around, the proportion of favorable rulings was essentially zero. When the court was back in session, the pattern repeated itself, starting high and ending at nothing.
Danziger et al. note that according to legal formalism this should not be so. Every case ought to be reviewed on its merits, and extraneous factors, such as the judge's metabolic state, should play no role. However, legal realists have claimed for some time that more goes into judicial decisions than rational deliberation alone. Danziger et al. recall the quip that justice is what the judge had for breakfast. An additional finding, almost as interesting, was that neither the judges nor the panelists who advised them had any idea that this was going on. Perhaps one could appeal to the power of egocentric self-justification. But the attorneys had no idea either. They ought to be motivated to detect extraneous factors that affect their clients and their own reputation. Failing to see the drop-off in the proportion of favorable rulings is an instance of massive change blindness.
So why does this happen? Danziger et al.'s theory is that when judges are well-fed, they have the mental energy (glucose) to deliberate carefully. As the glucose burns off, they become more likely to pass judgments upholding the status quo. In the context of the parole decisions, a denial of the request maintains the status quo. In support of this idea, the authors find that negative decisions took less time than positive ones. Assuming that a proportion of 65% positive decisions was most accurate, and that the observed proportion declines roughly linearly from 65% to zero within each session, the favorable rate averages about 32.5%, so the expected error over the entire day would be about 32.5 percentage points. Another possibility is that judges started out with a leniency bias. Suppose 32.5% of the candidates truly deserved parole, and that in the warm glow of the recent meal, the judges grant parole to too many. The error then runs from +32.5 points at the start of a session to -32.5 points at its end, so the total expected error would be about 16.25 percentage points.
In the present study, it is hard to distinguish the status-quo maintenance hypothesis from the mood-deterioration hypothesis, although the data regarding length of deliberation point to the former. An interesting study would be one in which the two hypotheses are at odds with each other. If the status quo is a favorable decision (I'm thinking here of certain academic promotion settings where the default is not to terminate an appointment), only the mood hypothesis predicts a downturn in the proportion of favorable judgments, whereas the status-quo hypothesis might even predict an increase in favorable decisions (unless the proportion is at ceiling to begin with). Interestingly, this scenario would be a good place for a two-tailed statistical test.
Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. PNAS, 108, 6889-6892.