


The Truth About Lie Detection

There are no reliable behavioral signs of deceit that humans can detect.

Key points

  • Lying is complex, involving cognitive and physiological changes that vary depending on the situation.
  • No reliable and practical lie detection technology currently exists despite decades of research.
  • AI may be able to detect subtle patterns invisible to humans, but accuracy remains below what real-world use requires.
Source: Evgeniya Porechenskaya/Shutterstock

Lying is an important human capacity and a necessary, frequently deployed skill for greasing the rails of social commerce. The emergence of lying, a complex cognitive operation, represents a major milestone in normal child development.

Lying is a form of impression management. We lie mainly to escape social punishment or obtain social rewards. Lying is for verbal communication what dress and makeup (or Instagram posts) are for visual communication; it can confer advantages in the social contest and rests on evolutionary foundations. If I can make myself look bigger and more menacing than I am, I may deter a predator; if I can make myself seem harmless, I may snare prey; if I can make myself seem more alluring, I may attract a mate.

Alas, like all other human social capacities, lying has a dark, destructive side and can bring about misery, chaos, and conflict. Thus, the ability to reliably detect lying in practical social situations would be of obvious value. Think of how hiring practices and the legal, justice, and political systems would change if we could tell the liar reliably and in real time. No wonder all kinds of lie detection systems have been proposed over the years, claiming to have cracked the code.

A recent (2023) article by psychologist Tim Brennen of the University of Oslo and colleagues reviews several lie detection methods that have gained either popularity or empirical backing since the 1960s.

Source: GDJ / Pixabay

First, at least in the popularity department, there are approaches that attempt to read nonverbal behavior (body language and micro-expressions). These have been largely debunked by contemporary science, a fact that has not stopped law enforcement agencies and governments around the world from wasting money and time on dubious behavioral analysis programs claiming to be able to detect liars.

Lie-detecting (polygraph) machines, which seek to exploit our neurophysiology (the changes in internal functions that tend to accompany telling a lie), have not done well either, despite their popular and intuitive appeal. This approach relies on the notion that liars, but not truth-tellers, show increased physiological arousal when questioned. Alas, physiological excitation is a nonspecific marker, not one exclusively reserved for instances of lying. The National Academy of Sciences' 2003 report on polygraph testing concluded that its scientific basis was weak, backed by low-quality research and unfounded accuracy claims. A recent (2019) review of the literature concluded that these results still stand.

A more promising approach is “criterion-based content analysis” (CBCA), which is based on the assumption that “experience-based statements are of higher content quality than fabricated statements, meaning they are richer in detail and show more elaborate links to external events.” This approach, popular within academic circles, has been so far undermined by coding problems, publication bias, and low specificity and sensitivity.

“If one adjusts the decision criterion to minimize the incidence of calling a true statement a lie, the rate of detection of deceitful statements is reduced to 9 percent, with a similar collapse in the detection of true statements if one prioritizes catching deceitful statements.”

Mining the same terrain is reality monitoring (RM), which is based on the idea that “memories of experienced events have stronger external links than memories of things that have only been imagined, and that certain criteria may differentiate the two memory types.” In other words, real memories are richer in contextual, sensory, and semantic detail than lies.

A recent meta-analysis of both these methods concluded:

“There are… sound indications that CBCA and RM indeed do discriminate between experience-based and fabricated statements. Yet, although the present data indicated that both procedures are among the best approaches for credibility assessment, the analysis of the studies also showed that strong implications for practical application are only possible once further research questions have been answered.”

In other words, accuracy rates, while better than chance (around 70% on average for these methods), are not sufficiently strong for real-world, real-stakes uses.
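To make that trade-off concrete, here is a small, purely illustrative Python sketch. The scores and numbers are invented for the example, not drawn from any of the studies discussed: shifting the decision threshold that separates "lie" from "truth" raises one error rate as it lowers the other, which is exactly the collapse the quoted passage describes.

```python
# Illustrative sketch only (hypothetical scores, not any published method):
# moving a decision threshold trades off catching lies against falsely
# accusing truth-tellers. Higher score = more "truth-like" content quality.

truthful_scores = [6, 7, 7, 8, 8, 9, 9, 10, 5, 6]   # statements by truth-tellers
deceptive_scores = [4, 5, 5, 6, 6, 7, 7, 8, 3, 4]   # fabricated statements

def rates(threshold):
    """Classify any score below `threshold` as a lie; return
    (proportion of lies caught, proportion of truths misjudged as lies)."""
    caught = sum(s < threshold for s in deceptive_scores) / len(deceptive_scores)
    false_alarms = sum(s < threshold for s in truthful_scores) / len(truthful_scores)
    return caught, false_alarms

for t in (5, 7, 9):
    caught, fa = rates(t)
    print(f"threshold={t}: lies caught {caught:.0%}, truths misjudged {fa:.0%}")
```

A lenient threshold rarely accuses the innocent but lets most lies through; a strict one catches every lie while misjudging most truth-tellers. No threshold makes both error rates small when the two score distributions overlap heavily, as they do for real statements.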

The same appears to be true for newer methods looking to base lie detection on the well-documented cognitive differences between telling a lie and telling the truth. Lying, research shows, is more mentally demanding. Truth-tellers produce more meandering and detailed accounts, while liars tend to keep stories simple.

Thus, interview methods that impose cognitive load may allow observers to discern lies from truths effectively. This approach, while promising, is still being refined and is “not yet ready for transfer to the applied arena.”

The advance of magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) technologies has raised hopes for a neurological lie detection test. Yet the promise has not materialized and may never mature into practical usefulness.

For one, MRI studies generally compare group averages on multiple responses, whereas in real life, we often need one answer from one person. Second, lies told when instructed to lie may differ neurologically from those told in everyday life. In addition, the neurology of habitual liars may differ from that of occasional ones.

Moreover, not everybody will agree to “perform esoteric tasks repeatedly while lying inside a noisy, claustrophobic tube.” Finally, the MRI literature suggests that distinct concepts may share the same neural substrate, complicating detection further at that level.

Recently, a new method of lie detection has been gaining traction, a commonsense approach developed as a police-interview technique. This method asks the suspect first to produce a thorough account of the incident and then gradually confronts them with independent evidence that contradicts that account, asking the interviewee to explain the inconsistencies. A recent meta-analysis found that “guilty suspects provide more statement-evidence inconsistencies than innocent suspects.” However, the authors note: “There are indications of small study effects that warrant considerable caution when interpreting the size of some of the identified effects.”

Finally, the recent emergence of artificial intelligence (AI) and machine learning has opened a new front in the quest for reliable lie detection. AI is superior to humans at identifying patterns, which may mean that subtle nuances in verbal and nonverbal cues that have escaped detection in the past could be revealed by AI technology.

For example, in a recent (2021) study, Dutch psychologists Bennett Kleinberg and Bruno Verschuere showed that machine learning can achieve better lie detection accuracy than human judges on the same material. Yet the machine’s accuracy in their study hovered around 69 percent: less than stellar, particularly under controlled lab conditions, and perhaps less than useful in many real-world contexts.
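To show the flavor of such verbal-cue classification, here is a deliberately toy Python sketch. It is not the classifier Kleinberg and Verschuere built; the cue lexicon, scoring rule, and threshold are all invented for illustration. It operationalizes one cue the literature discusses, richness of concrete detail, as a single number and thresholds it.

```python
# Toy sketch only: scoring a statement by one hand-picked verbal cue
# (density of concrete-detail words). The lexicon and threshold below are
# hypothetical; real systems learn many cues and weights from labeled data.

DETAIL_WORDS = {"saw", "heard", "left", "right", "blue", "red", "corner",
                "window", "morning", "afternoon", "street"}  # invented cue lexicon

def detail_score(statement: str) -> float:
    """Fraction of the statement's words that are concrete-detail words."""
    words = statement.lower().split()
    return sum(w.strip(".,") in DETAIL_WORDS for w in words) / max(len(words), 1)

def classify(statement: str, threshold: float = 0.1) -> str:
    """Label a statement 'truth-like' if its detail density exceeds the threshold."""
    return "truth-like" if detail_score(statement) > threshold else "lie-like"
```

For instance, `classify("I saw a blue car turn left at the corner that morning")` comes out "truth-like", while a vague, detail-poor alibi comes out "lie-like". Real machine-learning systems extract thousands of such features and fit the decision boundary from data; that even they reached only about 69 percent accuracy underscores how weak the available cues are.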

In summary, the dream and intuition of a simple, straightforward, reliable, and practical lie detection technology have not yet materialized, despite strong scientific efforts over the past 60 years. The review authors conclude: “The science shows that there are no reliable behavioral signs of deceit that humans are able to detect.”

This is in part because, as the psychologist Daniel Kahneman put it, “psychological phenomena are intrinsically noisy.” Thus, signs of deceit may be lost in the cacophony of individual, cultural, and situational factors that attend the production and interpretation of any human interaction.

Still, science and technology are progressing. Like lying, hope, too, is a double-edged feature of our hardware. It may frustrate and disappoint, but it also keeps us moving toward discovering the truth. In this case, one hopes for the truth about lying.

More from Noam Shpancer Ph.D.