

Do We All Still Agree that “Seeing Is Believing”?

Deepfakes destabilize our collective notions of truth.

Source: Illustration by Vincent Tsui

“Seeing is believing” is one of the most uncontested idioms in the English language. Court cases are won and lost on the power of eyewitness testimony. When people hear something on the news, they ask, “Is there video?” When something astounding happens, we want to see it with our own eyes.

But what happens when our eyes deceive us? What happens to our rock-solid sense of certainty when visual media are so easily tampered with? And why do we place such indisputable trust in our eyes in the first place?

These aren’t hypothetical questions; they’re urgent matters for our media-hungry public if we want to maintain a collective understanding of truth, accuracy, and authenticity.

Nowhere does this become clearer than in the matter of deepfakes. Deepfakes use artificial intelligence to fabricate images, audio, or video of events that never happened, and as the technology improves, they’re becoming harder to spot.
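
For readers curious about the underlying mechanics, below is a minimal sketch, written in Python with the PyTorch library, of the shared-encoder, twin-decoder design behind classic face-swap deepfakes. Every layer size and variable name here is an illustrative assumption rather than any real system’s code: one encoder learns features common to both faces, each decoder learns to reconstruct one specific face, and a swap routes person A’s encoding through person B’s decoder.

    # Minimal face-swap autoencoder sketch (illustrative assumptions throughout;
    # not any production deepfake system).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps a 64x64 face image to a compact feature map shared by both identities."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs one specific person's face from the shared features."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
                nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(z)

    encoder = Encoder()
    decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

    face_a = torch.rand(1, 3, 64, 64)     # stand-in for one video frame of person A
    swapped = decoder_b(encoder(face_a))  # A's pose and expression, rendered as B's face
    print(swapped.shape)                  # torch.Size([1, 3, 64, 64])

In training, each decoder would be fit to reconstruct images of its own person; the swap only works because the two identities share a single encoder.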

Digging into Deepfakes

You might have seen humorous deepfakes of Bill Hader’s celebrity impersonations or those depicting Nicolas Cage as the leading man in popular films, but deepfake technologies have a more sordid origin.

In 2017, users on a Reddit forum called r/deepfakes began exchanging artificial videos of pornographic content with celebrities’ hijacked faces. Over time, people began paying for specific videos, including some featuring their personal acquaintances.

The overwhelming majority of deepfake videos involve mapping the faces of nonconsenting women onto the bodies of porn stars. That’s cause enough for concern, but now, deepfakes are sounding another major alarm because they’ve reached the political arena.

In 2019, a lewd video emerged allegedly featuring Malaysian Cabinet minister Azmin Ali, and the government quickly dismissed it as a deepfake. A “shallowfake” (a fake video not relying on deep learning technologies) of Nancy Pelosi got millions of views before it was debunked. And a bipartisan group of politicians has warned that foreign countries might try to use deepfakes to disrupt the 2020 election.

In the words of Jeremy Kahn, these videos raise the potential for “fake news on steroids.” Put simply, they could completely upend our collective sense of truth.

The Rise of Ocularcentrism

Evolutionarily speaking, eyesight has always been an important source of information. Yet, many scholars argue that eyesight was just one in an arsenal of verifying tools—and not even a privileged one—until the modern era.

Their basic narrative centers on something called the “Great Divide Theory,” which originated with the scholar Marshall McLuhan in the 1960s before Walter Ong elaborated on it in the 1980s. These theorists understood modernity as the culmination of a long historical process that led people to rely overwhelmingly on their eyesight.

The process began as cultures shifted away from oral traditions toward written texts, and it accelerated after the invention of the printing press. The real dividing point, though, was the eighteenth century, when literacy boomed, newspapers reached mass audiences, and letters circulated freely. Moving forward, modern life became increasingly “ocularcentric,” with the rise of new media like photography, film, television, and the internet.

This narrative has been adopted and adapted by theorists of modernity. Guy Debord argued that we live in a society of the spectacle; Michel Foucault explained how modern life is permeated with surveillance; the psychoanalyst Jacques Lacan introduced “the gaze” to explain the anxiety of realizing you’re visible to others.

Of course, there are many good reasons to be skeptical of the Great Divide Theory, but it’s hard to deny that many people today think of observation as the bearer of objective truth.

Still, we all know that eyesight isn’t reliable. As kids, we play with optical illusions. In art classes, we talk about how deceptive perspective can be, and scholars have repeatedly shown how much visual information we miss when we aren’t looking for it. Everyone knows how easily photos can be manipulated, and now deepfakes have hit the scene.

Visual deception is by no means new. But the danger comes when people instinctively (and arguably, increasingly) assume their eyes are honest. Meanwhile, sophisticated technologies are being designed specifically to trick our eyes. There’s an ever-widening gap between our perception of truth and the images we see.

AI, CGI, and all the technologies we’ve created to make our films more realistic and our entertainment more immersive—they’re a double-edged sword. They don’t just create vivid fantasies; they also have the potential to undermine an already crumbling faith in media.

The Deepfake Dilemma

We’re faced with a dilemma. If people believe deepfakes, that’s a serious problem, because democracy is threatened by the rapid spread of misinformation. But if people think everything could be a deepfake, that’s an equally serious problem because it destabilizes the public’s sense of truth. Even in the recent case of George Floyd, where most people unequivocally trusted the brutal images before their eyes, some politicians argued the footage was “staged.”

In 2018, law professors Robert Chesney and Danielle Keats Citron wrote, “The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly.”

Their warning is echoed in a recent report from the Brookings Institution, which states, “Faced with this epistemological anarchy, voters will be more likely than ever before to remain within their partisan bubbles, believing only those politicians and media figures who share their political orientation.”

Not only are beliefs becoming more entrenched, but there’s also less accountability for public figures. They can say or do virtually anything, and their followers are willing to chalk the evidence up to fraudulent media. Chesney and Citron call this escape hatch the “liar’s dividend.”

So, how do we begin to close the gap between implicit visual trust and easy visual manipulation? Ian Sample has argued that technologies like digital watermarks and detection systems might hold the answer. Scholars like Chesney and Citron have argued for legislative solutions. Educators stress the need to teach media literacy skills.
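
To make the watermarking idea concrete, here is a minimal sketch in Python of one provenance approach: attaching a keyed cryptographic fingerprint to footage at capture time so that any later copy can be checked for tampering. The key, the sample bytes, and the function names are hypothetical simplifications; real proposals embed signed metadata and certificates rather than a bare HMAC.

    # Provenance-check sketch: a keyed fingerprint flags any post-capture edit.
    # The key handling below is a hypothetical simplification.
    import hashlib
    import hmac

    SECRET_KEY = b"held-by-the-camera-maker"  # hypothetical signing key

    def fingerprint(video_bytes: bytes) -> str:
        """Return a keyed hash that changes if even one byte of the video changes."""
        return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

    def is_untampered(video_bytes: bytes, claimed: str) -> bool:
        """Compare fingerprints in constant time to verify a copy of the footage."""
        return hmac.compare_digest(fingerprint(video_bytes), claimed)

    original = b"...raw video bytes..."  # stand-in for real footage
    tag = fingerprint(original)          # computed once, at capture time

    print(is_untampered(original, tag))            # True
    print(is_untampered(original + b"edit", tag))  # False: tampering detected

The catch, of course, is that verification only helps if viewers bother to check and if the signing keys stay trustworthy, which is why such tools are usually proposed alongside legal and educational measures rather than instead of them.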

Some combination of these efforts might mitigate the potential disruptions caused by deepfakes, but on a deeper level, perhaps we need to evaluate our standards for proof. In what contexts does “seeing is believing” serve our best interests, and in what contexts does that certainty begin to break down?

References

Andu, Naomi, Clare Proctor, and Miguel Gutierrez, Jr. “Conspiracy Theorists and Racist Memes: How a Dozen Texas GOP County Chairs Caused Turmoil within the Party.” The Texas Tribune, June 5, 2020. https://www.texastribune.org/2020/06/05/texas-gop-chairs-racist-george-floyd/

Cole, Samantha. “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now.” Vice, January 24, 2018. https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley

Galston, William A. “Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics.” Brookings, January 8, 2020. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/

Golingai, Philip. “Is It Azmin or a Deepfake?” The Star, June 15, 2019. https://www.thestar.com.my/opinion/columnists/one-mans-meat/2019/06/15/is-it-azmin-or-a-deepfake

Kahn, Jeremy. Quoted in “It’s Getting Harder to Spot a Deep Fake Video.” Bloomberg QuickTake Originals, video by Henry Baker and Christian Capestany. https://www.youtube.com/watch?v=gLoI9hAX9dw

Vox. “The Most Urgent Threat of Deepfakes Isn’t Politics.” YouTube, June 8, 2020. https://www.youtube.com/watch?v=hHHCrf2-x6w

Parkin, Simon. “The Rise of the Deepfake and the Threat to Democracy.” The Guardian, June 22, 2019. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy

Patrini, Giorgio. “Mapping the Deepfake Landscape.” Deeptrace Labs, https://deeptracelabs.com/mapping-the-deepfake-landscape/

Sample, Ian. “What Are Deepfakes—and How Can You Spot Them?” The Guardian, January 13, 2020. https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them

Simons, Daniel, and Christopher Chabris. “Selective Attention Test.” YouTube, March 10, 2010. https://www.youtube.com/watch?v=vJG698U2Mvo

Wolfgang, Ben. “Putin Developing Fake Videos to Foment 2020 Election Chaos: ‘It’s Going to Destroy Lives.’” The Washington Times, December 2, 2018. https://www.washingtontimes.com/news/2018/dec/2/vladimir-putins-deep-fakes-threaten-us-elections/
