
Verified by Psychology Today


Be on Alert: Generative AI Can Foster Science Denial

Vigilance in vetting online information is more critical than ever.

Key points

  • ChatGPT generates responses by predicting likely word combinations from an amalgam of online information.
  • ChatGPT can potentially promote science denial because it can be misleading and generate disinformation.
  • If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate.

Until very recently, if you wanted to know more about a controversial scientific topic—stem cell research, the safety of nuclear energy, climate change—you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.

Now you have another option: Pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.

ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by predicting likely word combinations from a massive amalgam of online information.
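The prediction idea can be illustrated with a deliberately simple toy: a bigram model that always picks the word that most often follows the current word in a tiny training text. This is not how ChatGPT actually works internally (it uses a large neural network trained on vastly more data), but it captures the core notion of generating text by predicting likely word combinations rather than looking anything up. The corpus and function names here are invented for illustration.

```python
from collections import Counter

# Toy "training data" -- a real model is trained on a massive amalgam
# of online text, not a single sentence.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Predict by choosing the most frequent follower seen in training.
    followers = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(followers, key=followers.get) if followers else None

print(next_word("the"))  # "cat" -- it follows "the" most often above
```

Notice that the model has no concept of truth: it emits whatever continuation was statistically common in its training data, which is one reason such systems can confidently produce plausible-sounding errors.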

Although it has the potential to enhance productivity, generative AI has been shown to have some major faults. It can produce misinformation. It can create “hallucinations”1: a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it failed to consider both width and height.2 Nevertheless, it is already being used to produce articles3 and website content4 you may have encountered, or as a tool in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.

As the authors of Science Denial: Why It Happens and What to Do About It, we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information. Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how to stay on your toes in this new information landscape.

How generative AI could promote science denial

  • Erosion of epistemic trust. All consumers of scientific information depend on the judgments of scientific and medical experts. Epistemic trust is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether seeking information about a health concern or trying to understand solutions to climate change, the seeker often has limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. With the increased use of generative AI and the potential for manipulation, we believe trust will likely erode further than it already has.
  • Misleading or just plain wrong. If there are errors or biases in the data on which AI platforms are trained, they can be reflected in the results. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.
  • Disinformation spread intentionally. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to write about vaccines in the style of disinformation, it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from using it for bad things.”5 The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.
  • Fabricated sources. ChatGPT provides responses with no sources at all, or if asked for sources, it may present ones it made up. We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors in similar-sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.
  • Dated knowledge. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous, outdated information. For instance, beware if you’re seeking recent research on a personal health issue.
  • Rapid advancement and poor transparency. AI systems continue to become more powerful and learn faster; they may learn more science misinformation along the way. Google recently announced 25 new embedded uses of AI in its services. At this point, insufficient guardrails are in place to ensure that generative AI will become a more accurate purveyor of scientific information over time.

What can you do?

If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.

  • Increase your vigilance. AI fact-checking apps may be available soon, but for now users must be their own fact-checkers. The first step we recommend: Be vigilant. People often reflexively share information found in searches or on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.
  • Improve your fact-checking. A second step is lateral reading, a process professional fact-checkers use. If provided, open a new window and search for information about the sources. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided, or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.
  • Evaluate the evidence. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.
  • If you begin with AI, don’t stop there. Exercise caution in using it as the sole authority on any scientific issue. You might see what ChatGPT says about genetically modified organisms or vaccine safety. However, follow up with a more diligent search using traditional search engines before you draw conclusions.
  • Assess plausibility. Judge whether the claim is plausible. Is it likely to be true? If AI makes an implausible (and inaccurate) statement like “vaccines caused 1 million deaths, not COVID-19,”6 consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.
  • Promote digital literacy in yourself and others. Everyone needs to up their game. Improve your digital literacy, and promote digital literacy in others if you are a parent, teacher, mentor, or community leader. The American Psychological Association provides guidance on fact-checking online information and recommends teens be trained in social media skills to minimize risks to health and well-being. The News Literacy Project provides helpful tools for improving and supporting digital literacy.

Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. Finding and evaluating reliable information about science online can take time and effort, but it is worth it.








More from Barbara Hofer, Ph.D. and Gale M. Sinatra, Ph.D.