Artificial Intelligence

How AI-Generated Content Can Undermine Your Thinking Skills

Navigating the boom in automated content and increasing misinformation.

Key points

  • AI-generated content is increasingly common; up to 90 percent of online content could be synthetically generated by 2026.
  • Much of this content is mis- or disinformation, prompting concerns over AI's societal impact.
  • Critical thinking remains essential to minimize the risk of manipulation.

A report from Europol earlier this year warned that “as much as 90 percent of online content may be synthetically generated by 2026,” referring to “media generated or manipulated using artificial intelligence.” The open question is to what extent this will affect people’s critical thinking skills, and whether it really matters.

While many applications of AI can support human development and well-being and deliver genuine efficiency gains, AI is also being used to generate swathes of misinformation and disinformation, both written and image- or video-based, in a bid to muddy narratives, sway public opinion, manipulate audiences, and shift focus away from pressing news. The consequences are already tangible, from the January 6 riots to people delaying healthcare because of health misinformation. According to a systematic review published in the Bulletin of the World Health Organization, for example, “among YouTube videos about emerging infectious diseases, 20 to 30 percent were found to contain inaccurate or misleading information.”

Image by Lerbank-bbk22, sourced via Canva

Artificial intelligence is most commonly designed to respond to human queries, and some of its creators are working to minimise the potential for nefarious use. But it is fallible, and several studies of platforms such as ChatGPT have shown how users can direct it to bypass its safeguards. For many, this raises significant concern, and some experts warn that critical thinking skills remain a top priority for safeguarding our mental capabilities.

Psychologists at the University of Cambridge recently developed the first validated “misinformation susceptibility test” (MIST), which measures how susceptible an individual is to fake news. Younger Americans (under 45) performed worse on the test than older Americans (over 45), scoring 12 out of 20 correctly compared with 15 out of 20 for older adults. Performance was partly correlated with the amount of time spent consuming content online, underscoring that how you spend your recreational time matters.

The Europol report continues with a stark warning: “On a daily basis, people trust their own perception to guide them and tell them what is real and what is not… Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?”

According to a Forbes Advisor survey, 76 percent of consumers are concerned about misinformation from artificial intelligence, and only 56 percent believe they can tell AI-generated content apart from human-created content. Meanwhile, a UK study by Public First examined attitudes towards AI and found that they depend on the application in question: the public tended to support uses such as early warning for medical intervention or detecting welfare fraud, but did not support AI deciding, or even advising, on questions of guilt in either a criminal or military context.

Many current critical thinking tools encourage individuals to employ lateral reading, in which we actively seek information from multiple sources, or techniques such as inversion, in which we actively seek information that contradicts our own views. What remains to be measured, however, is how effective these tools will be in a content landscape that is up to 90 percent AI-generated and that can be rolled out and reproduced across thousands of websites en masse.

What will be essential, therefore, is to equip ourselves with tools that don’t rely on cross-checking information, such as:

  • Ensuring we understand statistics. This means knowing as much about what they don’t say as what they do.
  • Identifying the evidence base. What evidence is the content based on, how was the research conducted, and is it credible?
  • Understanding the context. A key tool in manipulation is to apply information outside of its intended context. What did it say at its original source, what context was it given in, and how has it been changed?
  • Inferring from prior information. Does it fit the narrative you would expect? Deepfakes are an increasing problem, so if we see, for example, our favourite TV personality speaking out on world issues, ask whether it fits their usual profile.
  • Asking for clarity and precision. Probing for more depth can help uncover the expertise of the source or the origin of the content, either lending it credibility or highlighting where that credibility is lacking.
  • Remaining sceptical, but not too sceptical. We need to question what we are told, but ironically, as highlighted by the MIST study, expecting everything to be fake can make it more difficult to spot the actual fakes. Remaining open-minded will be key.

“Critical thinking is crucial to combat the pervasive effect of misinformation in the digital age, especially for the younger generations. Misinformation and coercive control hamper our everyday lives, enabling everything from political and religious radicalisation to domestic abuse, fake news to gaslighting. We can address these challenges head on by teaching people to think critically.” —Jim Atack, President of The Open Minds Foundation.

References

Europol report on AI content generation.

“Infodemics and health misinformation: a systematic review of reviews,” Bulletin of the World Health Organization.

Public First analysis of AI.

Ipsos report.

Forbes Advisor survey.

University of Cambridge MIST test and analysis.
