Can We Tell If a Witness Picks an Innocent Suspect?

Does the witness’s confidence tell us if they got it right? Here’s what we know.

Source: Wikimedia Commons (Public Domain)

Sometimes, when people witness a crime, they are asked at a later date to view a lineup—either a live lineup or, more commonly, an array of photos—to see if they can identify the person they saw commit the crime. Although the culprit may be in the lineup, it is also possible that the police have the wrong guy (or gal)—in other words, an innocent suspect is in the lineup.

The outcome of this identification test may be just one component of the body of evidence that the police have against the suspect. Quite commonly, however, the police case may depend entirely on whether the witness picks their suspect from the lineup. A massive array of research data shows that witnesses frequently make errors: They might pick one of the known-to-be-innocent “fillers” (i.e., the people placed in the lineup with the suspect), say the culprit is not there when in fact he or she was, or pick an innocent suspect.

The reality and the disastrous consequences of the last type of mistake have been highlighted by the cases of wrongful conviction unveiled by the Innocence Project. But all of these different types of identification errors have important consequences.

An issue that has consumed eyewitness memory researchers for several decades now has been whether it is possible to “diagnose” the accuracy of a witness’s identification decision.

Researchers have explored a number of potential indicators of accuracy. These include how rapidly eyewitnesses make their decision, their verbal descriptions of the thought processes they go through in arriving at the decision, and even their eye-movement patterns when scanning the lineup. But the indicator that has attracted by far the most attention—as well as being the most easily measured—is the witness’s confidence in the decision.

Confidence seems like a logical indicator. If you found a match to your memory in the lineup very easily and quickly and hence were relatively confident, it seems obvious that you are more likely to be right than wrong. Conversely, if no one in the lineup leapt out as an obvious match, you took ages to inspect the various possibilities, and eventually arrived at a not very confident decision, an incorrect decision would not be a surprise outcome.

So, is a witness’s confidence a reliable indicator? The research shows that police, lawyers, judges, and jurors certainly think so. Eyewitness memory researchers have often held contrary views. For many years, the consensus was that the relationship between confidence and accuracy was weak, at best.

Then, in the 2000s, considerable evidence emerged—largely via a new way of analyzing the relationship—showing that confidence and accuracy (at least for those witnesses who actually made a pick from the lineup) were quite well-calibrated: That is, high confidence provided a reasonable (though far from perfect) indication of a likely accurate identification, whereas low confidence would suggest grounds for suspicion about the accuracy of the witness’s decision.
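The calibration approach can be illustrated with a minimal sketch. The idea is simply to group the identifications of witnesses who made a pick by their stated confidence, then compute the proportion correct within each confidence band. The function name, the 20-point bands, and the sample data below are all hypothetical, chosen only to show the shape of the analysis, not any particular study's method.

```python
def calibration(decisions):
    """decisions: list of (confidence_percent, was_correct) pairs
    for witnesses who actually made a pick from the lineup."""
    bins = {}  # lower bound of confidence band -> [n correct, n total]
    for conf, correct in decisions:
        band = min(conf // 20 * 20, 80)  # bands 0-19, 20-39, ..., 80-100
        correct_n, total = bins.setdefault(band, [0, 0])
        bins[band] = [correct_n + int(correct), total + 1]
    # proportion correct within each confidence band
    return {f"{b}-{b + 19 if b < 80 else 100}%": c / t
            for b, (c, t) in sorted(bins.items())}

# Hypothetical mock-witness results in which confidence tracks accuracy:
sample = [(95, True), (90, True), (90, False), (60, True),
          (55, False), (30, False), (25, True), (20, False)]
print(calibration(sample))
```

With data like these, the highest confidence band shows the highest proportion of correct picks—good calibration—while accuracy in the lower bands drops toward or below chance, which is the pattern the 2000s studies reported.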

In the last few years, some researchers have been much more bullish about the relationship, suggesting that identifications made with extremely high (e.g., 90-100 percent) confidence were just about guaranteed to be accurate—provided the lineup contained only one suspect, the witness was warned that the culprit may not be in the lineup, the lineup was conducted by someone who did not know which member was the suspect, confidence was measured immediately after the identification decision (and before any communication could occur between the witness and the lineup administrator), and the suspect did not stand out in some way from other lineup members. These provisos have been labelled pristine lineup conditions.

This bullish turn again reflects a new approach to analyzing the data from confidence-accuracy research studies. The approach, and the resultant findings, are summarized in a major 2017 review by two high-profile researchers in the field, John Wixted and Gary Wells.

This recent appraisal of the state of play is rather persuasive and, in a short period of time, has gained considerable traction. Of course, one very likely consequence of such a definitive and optimistic view of the relationship is that police and the courts will be easily persuaded that a suspect who has been picked from a lineup with very high confidence is almost certainly guilty. This is obviously a good thing if Wixted and Wells’s conclusions represent the end of the story—but it is not such a good thing for a suspected individual if these strong, global conclusions turn out to be premature.

So, are there grounds for caution? Some of us in the field certainly think so, pointing to a number of critical unresolved issues. Some of these issues have been highlighted in a recent paper authored by Jim Sauer, Matt Palmer, and myself.

One issue is that few police jurisdictions have guidelines in place that acknowledge the importance of the pristine conditions described above. Of course, however, it would certainly be possible to ensure those conditions if the relevant policymakers had the will to implement them and to monitor their observance.

A second issue is that while the data assembled by Wixted and Wells are impressive, the studies conducted to date have only scratched the surface in terms of exploring the wide array of possible variables (and the interaction between them) that potentially could affect the confidence-accuracy relationship. Many of the studies that will be needed to examine these variables will be much more difficult to conduct than the studies conducted thus far, so a substantial future research effort will be required.

A third issue concerns the proviso that the suspect should not stand out in the lineup. Experienced eyewitness researchers can often spot a “standout” just by examining the lineup. And they also have a variety of tools available to help them ensure this does not happen—and of course, when they have finished collecting their data from a large sample of mock-witnesses, they can actually test whether they were successful. These tools are not available to police when they are constructing lineups and would be impractical to implement.

In our recent paper, we showed how even experienced eyewitness memory researchers who used carefully crafted procedures to ensure their lineups were not biased against an innocent suspect can fail dismally. This failure was not obvious from their visual inspection of the lineup but only became apparent after a large sample of mock-witnesses had been tested.
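The mock-witness test mentioned above can also be sketched briefly. People who never saw the culprit are given the witness's description and asked to pick from the lineup; in a fair six-person lineup, the suspect should attract roughly chance (one in six) of their choices. The code below is a simplified illustration with invented numbers, and the "more than twice chance" cutoff is my own rough rule of thumb for the example, not a standard from the literature.

```python
def mock_witness_test(choices, suspect_position, lineup_size):
    """choices: lineup positions picked by mock witnesses who
    never saw the culprit, only the witness's description."""
    n = len(choices)
    suspect_rate = choices.count(suspect_position) / n
    chance = 1 / lineup_size
    return {"suspect_rate": suspect_rate,
            "chance_rate": chance,
            # illustrative cutoff: flag the lineup if the suspect
            # draws more than twice the chance rate of picks
            "biased_against_suspect": suspect_rate > 2 * chance}

# Hypothetical data: 60 mock witnesses, suspect in position 3 of 6.
picks = [3] * 30 + [1] * 6 + [2] * 6 + [4] * 6 + [5] * 6 + [6] * 6
result = mock_witness_test(picks, suspect_position=3, lineup_size=6)
print(result)  # suspect drew 50% of picks vs. about 16.7% chance -> biased
```

The point of the sketch is the one made in the text: a standout is detected statistically, from a large sample of mock witnesses, not by eyeballing the photos—which is exactly why the check is available to researchers but impractical for police assembling a lineup.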

What seems a relatively simple issue—but one with enormous implications for people who become a suspect for a crime after being identified from a lineup—continues to generate research and argument amongst eyewitness memory researchers. There is little doubt that, contrary to what eyewitness researchers argued for many years, witnesses’ confidence should be regarded as a pointer to the likely accuracy of their identification and can justify the focus of an investigation on the current suspect (or suggest a different focus).

But spare a thought for the innocent suspect who gets picked from a lineup. Jurors tend to believe eyewitness identifications, especially very confident ones. If despite the plethora of unresolved research questions, they are now led to believe that a highly confident identification is just about guaranteed to be accurate, things will only go downhill for an innocent suspect.


Sauer, J. D., Palmer, M. A., & Brewer, N. (2019, online first). Pitfalls in using eyewitness confidence to diagnose the accuracy of an individual identification decision. Psychology, Public Policy, and Law. https://doi.org/10.1037/law0000203

Wixted, J. T., & Wells, G. L. (2017). The relationship between eyewitness confidence and identification accuracy: A new synthesis. Psychological Science in the Public Interest, 18(1), 10-65.

More from Neil Brewer Ph.D.