
Fake News, A.I. Deepfakes, and the Pageant of the Unreal

When we can no longer tell what’s real or not, we become easily manipulated.

Key points

  • From television to artificial intelligence, new technologies have made it easier to fabricate the truth.
  • The increasing use of AI risks the spread of misinformation and false belief on a massive scale.
  • The widespread use of AI propaganda to manipulate human belief and behavior has already begun.
Source: Walkerssk / Pixabay

I’ve been writing my blog Psych Unseen about the psychology of false beliefs for well over a decade now. Back in 2016, just before Donald Trump’s first presidency, I wrote about the concept of “truthiness” (defined by Webster’s dictionary as “truth coming from the gut, not books; the quality of preferring concepts or facts one wishes to be true, rather than concepts or facts known to be true”) in a post called “The Death of Facts: The Emperor's New Epistemology”:

…the history of “truthiness” goes back well beyond a decade ago. A recent article in The Atlantic by Megan Garber credits the historian Daniel Boorstin with the theory that “image” in America became preferred over reality in the century leading up to the 1960s. Garber writes that Boorstin conceived of image as a strict “replica of reality, be it a movie or a news report or a poster of Monet’s water lilies, that manages to be more interesting and dramatic and seductive than anything reality could hope to be” and as a “fundamentally democratic… illusion [that] we have repeatedly chosen for ourselves until we have ceased to see it as a choice at all.” Boorstin, Garber says, “worried that we don’t know what reality is anymore… and we don’t seem to care.” And while Boorstin implicated emerging media in creating the illusion of image, he made that claim in 1962, long before reality TV was a thing.

Skipping ahead to 2026, 35 years after the birth of the World Wide Web and poised at the cusp of a new technological revolution in the form of artificial intelligence (AI), I’m just as interested in—and concerned about—“truthiness” and the universal human propensity for passionately embracing false beliefs as ever.1 And the quotation about truth that I keep getting reminded of is this one by the late political theorist and authoritarianism scholar Hannah Arendt:

If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer.... And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.2

AI-Generated Misinformation Is Becoming Ubiquitous

These days, misinformation and deliberate disinformation are all around us. Assuming we’re among the vast majority of Americans who have a cellphone and use social media, we’re so inundated by it that it’s become difficult to know what’s real, or true, or not. And based on the impact of AI thus far, the problem is only going to get worse.

Granted, some of the false information or fake news we see is relatively harmless. In response to my social media clicks, I’m regularly exposed to incendiary but fake quotations by basketball players like LeBron James and his son Bronny, as well as bogus claims about impending NBA trades. I’ve mostly learned to ignore them. It’s harder to do that with videos—I can’t seem to help but click on fake videos of things like, for example, wild animals fighting. The other day, I was captivated by a video of a cheetah grabbing onto the tail of a fleeing hippopotamus until it was rudely repelled by the larger animal’s watery flatus. The video looked real enough, but with a little research, I confirmed that it—along with many other similar versions circulating out there—was generated by AI just as I suspected. Regardless of how real things might appear, even my 8-year-old son knows that he shouldn’t trust, or accept at face value without verifying, what he might read or see online.

While those are trivial examples, AI is now being used and misused more pervasively—and with more potential harm—than ever before. The use of AI is already well-entrenched in academia, with the unfortunate risk of disseminating false information under the guise of scholarship through so-called AI hallucinations. An analysis of peer-reviewed papers presented at the recent Conference on Neural Information Processing Systems—a prestigious annual meeting for AI researchers—found 100 hallucinated citations, spread across 51 papers, that were confirmed to be fabricated.3 Another recent survey of 1,600 academics across 111 countries found that more than half used AI for peer review, despite guidance against it and findings from other research that doing so risks introducing factual errors into the peer review process.4 And although universities discourage or prohibit students from using chatbots to do their homework, write essays, and complete exams, professors now regularly use AI to help them teach.5

Earlier today, I came across an ad offering mental health professionals like me instruction on how to "make your own custom GPT to write newsletters, blogs and podcast scripts in your voice." Why, I thought to myself, would I want a chatbot to write Psych Unseen (something that also happens to be prohibited by Psychology Today)? Have we really become so lazy, so lacking in creativity, and so willing to plagiarize (because that is, in essence, what chatbots using large language models are doing), while at the same time so blind to the fact that doing so hastens our own professional obsolescence? And if we can no longer expect academia to steer us toward what is real and true, where else can we expect to find it?

The Looming Threat of AI Propaganda

The potentially harmful impact of AI in academia is just one small area of concern when it comes to the broader risk of misinformation disseminated by AI going forward. There’s a much more serious danger looming in the form of AI-generated political propaganda that’s already being used to sway public opinion in the service of manipulating human behavior on a massive scale. I recently discussed this growing threat in an interview with psychiatrist and Duke Professor Emeritus Allen Frances:6

Vanderbilt University professors Brett Goldstein and Brett Benson have warned that “AI-driven propaganda is no longer a hypothetical future threat. It is operational, sophisticated and already reshaping how public opinion can be manipulated on a large scale.”7 Chatbots can be used to generate “deepfake” videos depicting convincingly realistic images of real-life people doing or saying things that they never actually did or said.

Russia has used chatbots to spread disinformation about the war in Ukraine, and China has used them to try to sway Taiwan’s 2024 elections. Robert F. Kennedy Jr.’s Make America Healthy Again (MAHA) commission report, which called into question the safety and efficacy of vaccines, contained fake citations almost certainly generated by chatbots.8 Over the past year, the Trump administration has circulated at least 14 AI-generated images, including a recent photograph of a woman altered with AI to make it look like she was crying during her arrest by US Immigration and Customs Enforcement. When asked for comment, the White House responded that “the memes will continue.”9

Such concerns were echoed in a recent paper by leading information and misinformation researchers titled “How Malicious AI Swarms Can Threaten Democracy,” which similarly warned that “advances in AI offer the prospect of manipulating beliefs on a population-wide level… generative tools can expand propaganda output without sacrificing credibility and inexpensively create falsehoods that are more human-like than those written by humans.”10

As Arendt cautioned, a populace that no longer knows what to believe, or how to tell true from false, risks losing the ability to think, judge, and act freely. With AI increasingly exploited by bad actors to generate a “pageant of the unreal” that diverts our attention, garners clicks, dictates what we take to be real or important in the world, manufactures our outrage, sways our votes, and buys our complacency, Arendt’s warning may very well become a prophecy fulfilled.

References

1. Pierre J. False: How mistrust, disinformation, and motivated reasoning make us believe things that aren’t true. New York: Oxford University Press, 2025.

2. Arendt H. Hannah Arendt: From an interview. The New York Review of Books. October 26, 1978.

3. Bort J. Irony alert: Hallucinated citations found in papers from NeurIPS, the prestigious AI conference. TechCrunch. January 21, 2026.

4. Naddaf M. AI commonly used in peer review—often against guidance. Nature 2026; 649:273-274.

5. Hill K. The professors are using ChatGPT and some students aren’t happy about it. The New York Times; May 14, 2025.

6. Frances A, Pierre JM. Chatbot generated propaganda threatens democracy. Psychiatric Times; January 27, 2026. Accessed February 2, 2026.

7. Goldstein BJ, Benson BV. The era of A.I. propaganda has arrived, and America must act. New York Times. August 5, 2025. Accessed January 26, 2026.

8. Gilbert C, Wright E, Nirappil F, et al. The MAHA Report’s AI fingerprints, annotated. The Washington Post. May 30, 2025.

9. Kornfield M. White House posts an altered photo of Minnesota protester’s arrest to make it look like she was crying. CBS News. January 24, 2026.

10. Schroeder DT, Cha M, Baronchelli A, et al. How malicious AI swarms can threaten democracy. Science 2026; 391(6783):354-357.
