Artificial Intelligence
How Machines and Humans Create Misinformation Together
Gestalt psychology and internet cognitive isoforms underpin the AI mind trap.
Posted January 9, 2025 | Reviewed by Margaret Foley
Key points
- Gestalt psychology explains why humans trust AI's coherent but incorrect outputs.
- Internet cognitive isoforms shape rigid thinking, reinforced by AI's training data.
- AI amplifies human biases, creating polished but flawed narratives that feel true.
Artificial intelligence (AI) dazzles with its capabilities—from writing emails to diagnosing diseases—but it has a peculiar and troubling trait: it confidently delivers incorrect answers. Ask it a question, and sometimes it will generate responses that sound perfectly plausible but are entirely false. This phenomenon is often called "hallucination," but the term is misleading. In psychiatry, hallucination means perceiving something that isn't there. AI, however, doesn't perceive; it generates errors through the misanalysis of data, producing distorted patterns that we mistake for truth.
Humans Amplify AI's Errors
Humans play a significant role in perpetuating AI’s mistakes. AI doesn’t merely replicate our biases; it amplifies them, wrapping them in polished, convincing language. Humans are naturally inclined to trust information that feels emotionally resonant and coherent, making them susceptible to AI’s convincing yet flawed outputs. This creates a reinforcing cycle: We accept AI’s mistakes as truth and, in turn, feed these back into the systems that learn from our behaviors.
This interaction mirrors principles from Gestalt psychology. The human brain naturally fills in gaps to create coherent patterns. We see a few scattered lines and identify a triangle, or hear a few musical notes and complete a melody in our minds. When AI provides fragmented or flawed information, we instinctively "complete" it, smoothing inconsistencies into something that feels true. The famous optical illusion of Rubin's vase highlights this tendency: Do you see two faces or a vase? Our brains organize ambiguous input into a single coherent image.
Internet Cognitive Isoforms and Echo Chambers
AI’s training data often reflects the internet’s biases, where echo chambers abound. Online, like-minded individuals cluster together, reinforcing each other’s ideas through repetition and emotional resonance. These rigid patterns of thought are termed "internet cognitive isoforms," a concept rooted in my research on extreme overvalued beliefs. Internet cognitive isoforms describe how repetitive, emotionally charged ideas on the internet crystallize into rigid mental frameworks, influencing both individual and collective thinking. They’re not merely individual quirks but collective cognitive habits shaped by the digital age.
AI mirrors these cognitive isoforms. When it interacts with users seeking affirmation of existing beliefs, it reinforces those ideas. This feedback loop can solidify misconceptions into perceived truths. For instance, a user’s query about vaccine risks may yield AI-generated results that reflect the most frequent—and often flawed—perspectives found online, reinforcing skepticism rather than providing accurate information.
Why AI Feels “Right”: The Emotional Hook
Gestalt psychology teaches us that humans are drawn to emotionally resonant patterns. AI, trained on human language and emotions, mirrors this tendency. It generates emotionally charged responses that feel meaningful, even when incorrect. A heartwarming story or a triumphant narrative, regardless of factual accuracy, captivates us because it aligns with our innate craving for coherence and emotional depth. In fact, emotionally tagged material is more likely to be remembered by humans.
Breaking the Cycle
If AI errors reflect human cognitive biases, can we disrupt this feedback loop? Gestalt psychology offers insights into how we might outthink AI. Remember: Just because information feels coherent doesn't mean it's true. Question overly polished or emotionally satisfying answers. Humans prefer harmony, but truth is often messy. Actively seek out perspectives that challenge your views. And because AI tends to oversimplify complex issues for the sake of clarity, insist on more context and detail.
Why It Matters
AI is deeply integrated into our lives, from assisting doctors to shaping public opinion. If we don’t address how it reinforces human biases, we risk amplifying misinformation on a global scale. But this isn’t just about machines—it’s about understanding ourselves. AI mirrors the way we think and may activate mirror neurons. In the partnership between humans and AI, staying curious, critical, and open to change is essential. As we navigate this evolving relationship, the question isn’t just about AI’s capabilities but about how we choose to use our own minds in collaboration with these powerful tools.
References
Maleki, N., Padmanabhan, B., & Dutta, K. (2024, June). AI hallucinations: A misnomer worth clarifying. In 2024 IEEE Conference on Artificial Intelligence (CAI) (pp. 133–138). IEEE.

Rahman, T., & Abugel, J. (2024). Extreme overvalued beliefs: Clinical and forensic psychiatric dimensions. Oxford University Press.

Rock, I., & Palmer, S. (1990). The legacy of Gestalt psychology. Scientific American, 263(6), 84–91.