
How Memory Became Weaponized

Emerging technology deployed on Internet platforms exploits our own mechanisms of memory to work against us. Both sanity and democracy are losing.

Photograph and Art by Edward Levine

Read this list of words: table, sit, legs, seat, soft, desk, arm, sofa, wood, cushion, rest, stool. Now count to 30.

Did you spot the word chair? If so, I've just implanted a false memory—read the list again. In tests using lists such as this, people usually say they saw the lure word, a word that was danced around but never actually named.

It's a simple example of the flaws in human memory—specifically, how people can be induced to recall things that never happened or existed. Hearing family stories of your childhood enough times can lead you to treat the stories as memories, even if they've been embellished or invented. False information can alter eyewitnesses' accounts of crimes and accidents. Urban legends can resolve into medical consensus in our confounded recollections. And misleading news stories or videos can transform what we recall a politician actually saying or doing.

False memories have existed for as long as the brain has been encoding information. But until the advent of the internet there'd never existed such a swift and sweeping delivery system for simulacra. Once upon a time, headline news passed through layers of print or broadcast editors before landing in your home; now anyone with an agenda, or without, can upload invented articles or doctored images and watch them spread at the speed of kitten videos. Videos themselves do not invariably attest to reality, as new artificial intelligence tools are about to yield fabricated footage of anyone doing anything. Algorithms also allow the targeting of material to especially receptive audiences, ensuring that the seeds of false memories find fertile minds.

Even if the content verges on the ridiculous—the 2016 Pizzagate conspiracy theory, spread via Twitter and other social media, had Hillary Clinton running a child-sex ring from the basement of a D.C. pizza parlor—there are believers. Rapidly circulated, the rumor led an amateur sleuth to drive hundreds of miles and fire an AR-15 inside the restaurant.

The human mind, scientists contend, is built for belief. When you picture or hear of something, you assume it's true. Our ancestors evolved in an environment too dangerous to question themselves every time they thought they saw a lion or second-guess every story from a tribe member. Credence and impressionability, and the folding of hearsay and conviction into memory networks, are not flaws—they're efficiencies in building a cohesive cosmology. But they leave us vulnerable to predatory rumormongers. Here's the tricky part: Even if you don't believe any of the illusions, they can still warp your memories.

Whether a foreign operative, marketer, or troll, anyone today can manipulate many others into believing and latching onto what they want. The problem is that democracy as well as sanity requires a semi-shared picture of actual reality. False recollection of what we've seen and read and experienced hinders the ability to make informed decisions about policy and politicians. It drives social discord and character assassination. It also corrupts choices about our own health and well-being.

Our minds are currently under attack; human memory has been weaponized by technology, distancing us from fact and fomenting disagreement. If the devolution of discourse needs a timeline, look to the words of the year named by leading dictionaries: post-truth in 2016 (Oxford), fake news in 2017 (Collins English), misinformation in 2018 (Dictionary.com).

"The landscape has shifted in the last couple of years," says Hany Farid, a professor of computer science at Dartmouth College and a veteran of digital-image forensics. "You have the technology to create sophisticated and compelling fakes. You have a delivery mechanism with unprecedented speed and reach. You have a polarized public willing to believe the worst about adversaries. And then you have bad actors, whether state-sponsored agents disrupting a global election, people trying to incite violence, or those looking to monetize fake news for personal gain. This is the perfect storm of an information war." It is a war our memories have not evolved to win.

Conjuring Recollection

In the 1970s, psychologist Elizabeth Loftus, then at the University of Washington in Seattle, now at the University of California, Irvine, had an interest in crime and, wanting to put her memory research to good use, thought eyewitness accounts were ripe for exploration. Could certain kinds of questions—the kind some prosecutors might ask, with graphic details—influence how witnesses recalled an event?

She found that they could. In an early study, participants saw a film of a car crash and were asked to estimate the cars' speed "when they smashed into each other" or "when they hit each other." A week later, 32 percent of those who'd read "smashed" recalled seeing broken glass—there was none—versus 14 percent for the "hit" group. The mere suggestion of speed had caused them to populate their recollections with destruction.

Most people have never been called to testify in court about a crash, but everyone has been called on to form an opinion of a public figure or event, whether at a cocktail party or in a voting booth. And where do we receive the information to be synthesized? Increasingly, from online or cable news sources, outlets that claim neutrality but often adhere to strongly biased agendas. If they tweak a description of a protest or press conference, you'll remember it differently than if you'd watched without commentary. A political debate becomes a train wreck.

In 2004, a photo was widely circulated of presidential candidate John Kerry with Jane Fonda at an antiwar protest. Veterans were enraged, but the image was later outed as a cut-and-paste job. That incident inspired Loftus to see whether doctored images of public events could alter people's memories of them. In one instance she and collaborators took the famous photo of a man standing in front of a tank in Tiananmen Square that graced the cover of Time in 1989, and they added crowds along the roadsides. Study subjects who saw the fake versus real photo estimated that more protesters had been present at the event. The researchers also doctored an image of a more recent incident, a peaceful protest in Rome. Italians who saw an image with added riot police recalled the event as more violent.

Could people remember public events that hadn't happened at all? In 2010, informed by Loftus's work, Slate writer William Saletan conducted an experiment on his readers (then analyzed and published the results with Loftus's lab). Readers saw photos of three real events and an image of one of five fake events, depicted by altering a photo and adding an incorrect caption. One fake photo showed President Bush on vacation with a Houston Astros pitcher during Hurricane Katrina. Another showed President Obama shaking hands with Iranian President Mahmoud Ahmadinejad. Readers were asked if they recalled the event and to describe how they felt when first hearing about it.

Photograph and Art by Edward Levine

Half the time, people said they remembered the false event happening, and in most of those cases they said they actually remembered seeing it on the news. They recalled being "torn" upon seeing it, or having "mixed emotions," or "cring[ing]." Perhaps some people were lying about their recollections, but when told one of the events hadn't happened, readers guessed the wrong one 37 percent of the time. For them, the fake event was not only real but more real than some of the actual events.

False memories are examples of what psychologists call source-monitoring errors. When a scene or fact comes to mind, the brain tries to identify its source: Was it stored in memory or are you simply imagining it? We often use unconscious guidelines, or heuristics, to determine a source. If a scene is pictured in rich detail, you are inclined to assume you actually experienced it and to nestle it in your own personal timeline. More systematic conscious processing might also be used: If you know you were somewhere else when the event happened, you reason that you can't have lived it.

Familiarity Breeds Conviction

"Mike Pence: Gay Conversion Therapy Saved My Marriage." That headline appeared in 2016 on the Nevada County Scooper, a satirical news site. It's absurd, but if you saw it enough times, it might gain a ring of truth. And in a scientific study, it did. Psychologist Gordon Pennycook at Canada's University of Regina showed people six true and six false headlines collected from the internet. (False headlines included "Election Night: Hillary Was Drunk, Got Physical with Mook and Podesta" and "Trump to Ban All TV Shows that Promote Gay Activity Starting with Empire as President.") Participants answered some filler questions and then saw the headlines again plus a crop of new ones and rated their accuracy. For each new false headline, an average of 4 percent of people believed it. For the false headlines they'd seen once before, a few minutes earlier, that rate was near 8 percent. "It's not that many people, but it's double," Pennycook says. "To me that was sad but also super interesting."

Things seem more true upon repetition, what's known as the illusory truth effect. Psychologists offer two main explanations. First, if you hear something a lot, especially from many people, you reason that it's probably true. Nine out of 10 dentists agree. "With the internet, it no longer matters how bizarre your belief is," says Stephan Lewandowsky, a cognitive scientist at the University of Bristol in England. "If you jump online, you can find a community of like-minded people. Flat Earthers are a prime example of that."

But perceived consensus doesn't explain the effect in people who simply read a statement twice in an experiment. They likely rely on a more unconscious heuristic, based on processing fluency: The more easily we process a statement—whether because we've heard it before or because it's written in an easy-to-read font—the greater we deem its truth. We may associate fluency with veracity because genuine truths are repeated more than falsities, making them easier to process.

The illusory truth effect generally creates false beliefs, not necessarily false memories, but it relies on features of how memory operates; it's another principle propagandists can exploit to weaponize our own memory systems against us. And it can lead to false memories. In one experiment, participants read several false news stories for the first time and returned five weeks later to evaluate them. They were pretty confident they'd first seen the stories somewhere outside the lab—source misattribution. The first step toward remembering something is believing it.

When you don't know the truth of a claim, it might make sense to trust your gut and use fluency as a cue. But researchers find the illusory truth effect can operate even when people know the right answer. Seeing the statement twice increased truth ratings for "A sari is the name of the short, pleated skirt worn by Scots"—even for people who knew it was called a kilt. Previously, researchers had thought that we first check our knowledge base and then turn to fluency if that base is empty. But we actually do the opposite: We look to fluency first, with knowledge playing a backup role. Apparently, we're all shoot-from-the-hip fact checkers.

And warning people about the fluency effect doesn't help. In a recent study, it cut the effect in half but did not eliminate it. Well, then, can we at least correct people's memories once they've encoded false information?

You Can't Unsee It

Last year, video circulated showing Emma González, a survivor of the Parkland school shooting, ripping the U.S. Constitution in half. It was actually doctored footage of González ripping up a shooting-range target, filmed to accompany an op-ed she wrote about gun control. Even if people eventually saw the original video, the manipulated version likely endured vividly in their minds. One member of the National Rifle Association reacted by tweeting, "[González] demanding I surrender my rights is literally the same thing as shredding the U.S. Constitution." Memory can morph figurative and fantasized images into concrete-seeming pasts.

Janet Cooke, a writer for The Washington Post, won a 1981 Pulitzer Prize for an article about an 8-year-old heroin addict living in poverty in D.C. She returned the prize when it emerged that the boy didn't exist. Decades later, the story appeared in an experiment. Researchers presented it as either fiction or nonfiction. After reading the story, some of the participants who'd been told it was nonfiction were informed that important facts had been fabricated by the author. Then everyone rated various beliefs. Those who read the story as nonfiction without a correction believed more strongly than those who hadn't read it at all that "social programs with goals to assist young people always seem to fail." But people who'd read the story and been told about the fabrications held that belief just as strongly. The correction didn't matter. On some beliefs, neither did being told it was fiction.

"Fake images can affect us in the same way as fake narratives. Robert Nash, a psychologist at Aston University in England (and an academic great-grandchild of Loftus), showed British participants an image of Prince William and Kate Middleton's royal wedding. Some saw a genuine photo, some saw a well-doctored version with protesters and police, and some saw a poorly doctored version that no one could mistake for genuine. Later, they were told to estimate from memory the percentage of spectators who were protesters. Those who'd seen either doctored image gave estimates that were more than double those of people who'd seen the real photo. "Even when there's a not-very-convincing photograph," Nash says, "it can still plop an idea in our minds."

If a poorly Photoshopped image inserts the idea of protesters into our minds, it also inserts the idea's negation: Protesters were there—not! Unfortunately, the negation falls away, leaving the pure image of protesters to influence us—perhaps because we excel at recalling concrete events and negation is a mere abstraction, like a Post-it note loosely sticking to a photo, easily detached from it. There's a philosophical proposition, supported by modern psychological research, that to understand something we must first believe it. To grasp the concept of wedding protesters, the mind holds it and tries it out, constructing a world in which it's true. Seeing is believing. And so fake news and images, even as we profess not to accept them, are still swallowed. Once you see something, you can't unsee it.

While rough descriptions and sloppy image editing can implant a memory, emerging AI tools can do one better. The biggest trend in AI is deep learning, in which software inspired by the brain roughly simulates multilayered networks of neurons to recognize patterns and make decisions. That's given rise to so-called "deepfakes." Algorithms can combine unrelated video or audio to create footage in which people are saying or doing things they're not. In one video online, comedian Jordan Peele voices a deepfake of President Obama warning us about deepfakes: "For instance," fake-Obama says, "they could have me saying things like ... 'President Trump is a complete and total dipsh*t.'"
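
For readers curious what a "multilayered network of neurons" actually looks like, below is a minimal, purely illustrative sketch in Python with NumPy. It is not the code behind any deepfake tool; the layer sizes, random weights, and the "real vs. fake" labels are arbitrary stand-ins. Real deep-learning systems learn millions of such weights from data, but the basic idea of stacked layers feeding one into the next is the same.

import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    # One layer of "neurons": a weighted sum of inputs, then a nonlinearity (ReLU).
    return np.maximum(0.0, x @ w + b)

# Three stacked layers: 8 input features -> 16 -> 16 -> 2 output scores.
w1, b1 = rng.normal(size=(8, 16)) * 0.1, np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)) * 0.1, np.zeros(16)
w3, b3 = rng.normal(size=(16, 2)) * 0.1, np.zeros(2)

x = rng.normal(size=(1, 8))                          # a stand-in input (e.g., image features)
logits = dense(dense(x, w1, b1), w2, b2) @ w3 + b3   # final layer produces raw scores
probs = np.exp(logits) / np.exp(logits).sum()        # softmax turns scores into probabilities
print(probs)  # the network's confidence in two made-up classes, say "real" vs. "fake"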

One of the first applications of deepfake software was grafting the heads of famous actresses into pornography. As the fakes get more realistic, they'll snake deeper into our minds, so that by the time they're debunked or disbelieved, they've spawned nests of insinuation and association. Creating revenge porn of an ex-girlfriend can spoil her reputation indelibly. The more vivid, the harder to unsee.

Deepfakes might even convince you that you've committed a crime you haven't. Loftus and others have shown that autobiographical memories are easy to implant; about a third of participants in lab studies acquiesce. Being told falsely that as a child you got lost in a mall, rode in a hot air balloon, spilled wedding punch on a bride's parents, or even witnessed a demon possession, can lead you to recall the events, even to add fresh detail. Nash has used doctored videos to make people believe they'd cheated at gambling. It's not a big leap to imagine corrupt investigators somewhere deploying realistic deepfakes—combined with suggestive interrogation—to convince innocent people to confess to crimes they have not committed.

Nash says, "It's increasingly difficult for people to navigate what's true and what's false in their own memories." Photo doctoring is old, but seeing a fake video online embroidered with rich detail "just multiplies credibility a hundred times over."


Who's at Risk of Being Duped?

Maybe you think you're immune to memory meddling. You can spot fake stories and images just fine. And when information is corrected, you revise your accounts accordingly. It's true that some people are more susceptible than others to misinformation and doctored memories. But no one is invulnerable all the time.

One set of factors involves how tightly people try to hold on to reality. A tendency to become absorbed in one's reactions to experiences is associated with porous autobiographical memory. People who score high on an absorption scale ("When I listen to music, I can get so caught up in it that I don't notice anything else") have less accurate memories. Those who have dissociative experiences—forgetting why they entered a room or having daydreams that feel real—falsely remember words in exercises like the chair test.

Pennycook has found that receptivity to "pseudo-profound bullsh*t" predicts believing in fake news to begin with. People who more highly rate the depth of aphorisms made by blending Deepak Chopra tweets ("We are in the midst of a high-frequency blossoming of interconnectedness that will give us access to the quantum soup itself") had trouble discerning true from false news headlines. It seems that those tending toward absorption, dissociation, and mysticism are most likely to generate their own internal worlds, memories and all, unable to distinguish perception from imagination.

Cognitive capacity also plays a role. In studies, false recollection is correlated with lower intelligence, lower working memory (the ability to hold and manipulate information in consciousness), and poor performance on perceptual tasks such as tone discrimination and face recognition. People with poor working memory are more likely to retain information about an event that has been told to them and then negated. And people with low IQ are most likely to hold unfavorable opinions about someone even after a false accusation has been retracted.

Age is a factor, too. People older than 65 prove worse than younger people at updating their beliefs after myths are debunked. They are, however, sometimes better at resisting the familiarity effect—possibly because they rely on knowledge over gut feelings. Or they may be better at retrieving and applying knowledge after decades of experience.

Loftus's lab has also found an array of personal factors correlated with memory's susceptibility to misinformation: high self-directedness, persistence, and coping ability. Such factors may make people overconfident in their memories.

No one, however, is immune to muddied memory. Pennycook recently found that three factors—cognitive ability, an analytical thinking style, and the avoidance of uncertainty—not only failed to eliminate the familiarity effect; they had no impact on it at all.

Situational factors can affect when we're most susceptible to mind muddling, Loftus finds. Memories encoded when sleep deprived are most corruptible. All-nighters give false narratives inroads to changing people's recollections.

We're also more likely to say an event really happened when told of it by a trusted source—a parent, say. And we are prone to accept events that align with our ideologies. Using images of fabricated public events, Loftus found that liberals were more likely than conservatives to remember Bush's (fake) vacation during Katrina, and conservatives were more likely to recall Obama's (fake) handshake with Ahmadinejad. In Pennycook's study on the familiarity effect using false news headlines ("Hillary Was Drunk"), Trump and Clinton supporters were more resistant to accepting the truth of stories assailing their chosen candidate. (But even in these cases people remembered things they didn't want to believe.)

Disagreement is one thing; many factual beliefs are open to counterargument—just show me the evidence. Hoaxes of memory are another thing. They amplify the social fracturing from contested perspectives. Phony news stories and images will "create more and more opportunity for division between groups," Nash says. People will believe and remember things that fit their viewpoints, and if something doesn't fit, they'll say it was faked.

When you don't just know something but can replay it in your mind's eye, will you listen to someone telling you, basically, that you're not just wrong but crazy?

Matthew Hutson is a freelance science writer in New York City and the author of The 7 Laws of Magical Thinking.

Fending Off Fakery

How to contain the viral spread of misinformation and inoculate memory against it? It's possible to marshal a number of defenses, although they are only partially effective against an assault that is rapidly advancing in sophistication.

Information. Retractions don't always erase bad info from memory, but Stephan Lewandowsky has found that detailed retractions work better than simple ones, and repeating retractions enhances their effectiveness. Lewandowsky also recommends providing new facts to supplant wrong ones. Explaining where false facts came from heightens suspicions of bad sources. "We've evolved to believe things," he says. "We're not good at letting go of a belief unless it's replaced with an alternative that explains the world equally well."

Regulation. Banning misinformation is generally difficult because of First Amendment protections. However, scholars looking at "deepfakes" argue that some speech is not protected: speech that is fraudulent, defames private citizens, incites violence, or impersonates government officials. Individual creators and sharers of fake news are often hard to track down, and social media platforms can't easily be sued as publishers. Still, Facebook, Twitter, and Google are not immune to new regulation that assigns them more responsibility for their effects.

Filtering. The same AI technology used to mimic and manipulate humans online is also being used to filter out fake and bad-faith content. But fact-checking and the refinement of newsfeed algorithms require news analysis, moral judgment, and common sense, none of which can yet be automated. Hany Farid insists, "This is a very human problem; it's going to require human intervention."

Tilting toward disbelief. "The only real weapon is cynicism," Robert Nash proffers—while immediately recognizing its unworkability. Even if it were possible to pull off, questioning everything would come at the cost of everyday functioning. "Even as an expert in memory I don't go around distrusting my memory." But a little questioning goes a long way.

Submit your response to this story to letters@psychologytoday.com. If you would like us to consider your letter for publication, please include your name, city, and state. Letters may be edited for length and clarity.

