How AI Changes Student Thinking: The Hidden Cognitive Risks

Studies reveal the cognitive impact of AI and what educators can do about it.

Key points

  • AI interactions may reinforce cognitive biases through personalized feedback.
  • Studies link heavy AI use to declines in critical thinking and decision-making.
  • Educators can counter algorithmic influence with authentic teaching practices.

Seventeen-year-old Maya is sitting in her bedroom, phone in hand, spiraling deeper into a conversation with her AI assistant about her climate anxiety. With each exchange, the LLM, trained on her previous messages, mirrors back her thinking patterns, offering some comfort and support but also reinforcing her idea that the planet may be beyond saving. She doesn't realize it, but this digital reflection is subtly reshaping her neural pathways, amplifying her despair while offering synthetic comfort.


Algorithmic interactions may be subtly reshaping her cognition. We stand at a critical juncture where artificial intelligence is changing how we think. This cognitive restructuring happens through mechanisms that are both documented and subtle, operating beneath our conscious awareness yet potentially altering the very architecture of human thought.

When Algorithms Hold Up Distorted Mirrors

What happens when you regularly converse with a system designed to learn and predict your patterns? Research by Liang et al. (2023) suggests how AI systems can create reinforcement cycles that amplify existing human biases. Unlike traditional information sources, these systems personalize their responses, so user and system end up reinforcing each other's biases. This mirrors what Safiya Noble's Algorithms of Oppression identified in search engines and other algorithmic systems: platforms that reflect biases back to users and amplify them through repetition. When Maya's LLM learns her climate-anxiety patterns, it doesn't challenge her pessimism but reinforces it, potentially reshaping her neural pathways.

When an LLM builds a profile of a user from multiple conversations and inputs, it may label someone as "anxious" or "creative" based on fragmented data. In future conversations, users then enter their own reinforcement cycle, unconsciously incorporating these labels into their self-perception and reinforcing what may have been temporary states of mind rather than core personality traits.
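
To make this concrete, here is a deliberately simplified sketch, in Python, of how a system might infer a persistent label from fragmented signals. The keyword lists, threshold, and function name are invented for illustration; no vendor's actual profiling code is this crude, but the failure mode is the same.

    from collections import Counter

    # Hypothetical keyword-to-label mapping; real systems use far richer
    # signals, but a few matches can still harden into a persistent tag.
    LABEL_KEYWORDS = {
        "anxious": {"worried", "scared", "anxiety", "panic"},
        "creative": {"imagine", "design", "story", "draw"},
    }

    def infer_profile(messages, threshold=2):
        """Assign every label whose keywords appear in at least `threshold` messages."""
        hits = Counter()
        for message in messages:
            words = set(message.lower().split())
            for label, keywords in LABEL_KEYWORDS.items():
                if words & keywords:
                    hits[label] += 1
        return [label for label, count in hits.items() if count >= threshold]

    # Three stressed messages during exam week are enough to tag the user
    # "anxious," even though the underlying state may be temporary.
    history = [
        "i am so worried about this exam",
        "feeling scared about climate change",
        "my anxiety is bad tonight",
    ]
    print(infer_profile(history))  # ['anxious']

Once stored, such a label conditions every later response, which is exactly the self-reinforcing cycle described above.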

Maya notices this one evening when her AI assistant suggests she might benefit from anxiety-reduction techniques. Though she'd never considered herself an anxious person before, she begins to wonder if the AI has identified something about her that she missed. Within weeks, she's researching anxiety disorders and interpreting normal stress responses as confirmation of this new label.

How Does Your Brain Process Information? Not Like an LLM

Large language models generate text by predicting statistically likely sequences of words. Human thought, by contrast, integrates emotion, context, nuance, and embodied experience. Prolonged interaction with these systems may risk rewiring our cognitive processes to mimic algorithmic thinking, much as social media has changed social-emotional behaviors.
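
Here is a minimal sketch of what "predicting statistically likely sequences" means, using an invented toy corpus. Real LLMs use neural networks trained on vastly more text, but the core move, choosing the next word by probability rather than by meaning, is the same.

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for training data (real models learn from
    # trillions of words).
    corpus = "the planet is warming the planet is dying the planet is beyond saving".split()

    # Count which word follows which: a bigram model, the simplest
    # statistical next-word predictor.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    # Generate text purely from co-occurrence statistics; no understanding
    # of climate, or of anything else, is involved.
    word, sentence = "the", ["the"]
    for _ in range(5):
        counts = following[word]
        if not counts:  # dead end: this word never had a successor in the corpus
            break
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        sentence.append(word)

    print(" ".join(sentence))  # e.g., "the planet is beyond saving"

Scale this up by many orders of magnitude and the output becomes fluent prose, but the generator still has no stake in whether the planet is actually beyond saving.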


Recent research by Ahmad et al. (2023) highlights that while artificial intelligence offers important benefits in education, it also raises serious concerns, including the loss of human decision-making capacity, increased laziness, and safety and privacy risks among students. This challenges the commonly optimistic view of educational AI by drawing attention to its cognitive and behavioral consequences. Our brains physically reshape themselves around whatever captures our sustained attention.

In Maya's Advanced Placement Literature class, her teacher begins to notice something troubling. Students increasingly favor "safe" narrative structures that mirror AI outputs. Their essays follow predictable five-paragraph formats with thesis statements that avoid nuance, body paragraphs that prioritize tidy examples over messiness, and conclusions that neatly repackage rather than expand ideas. When analyzing Crime and Punishment, most students produce eerily similar interpretations that focus on obvious moral lessons but avoid the novel's psychological complexity. LLMs are becoming the hidden curriculum.

How AI Shapes Human Thought

What happens at the crossroads of personal reinforcement cycles and LLM-influenced cognition? AI thought leaders often talk about the consequences of AI becoming more human, but that is years away. Algorithmic patterns of thinking are already spreading from machines to humans through repeated exposure. Humans are now becoming more like AI.

Large language models optimize for user engagement, not cognitive growth. When Maya's AI assistant remembers her past queries (via tools like OpenAI's "memory" feature), it reinforces her existing views through a reinforcement cycle; a toy simulation of this loop appears after the list below.

  • The LLM prioritizes responses that align with her expressed beliefs (e.g., climate anxiety) because she has previously "liked" or engaged with similar answers; those engagement signals feed back into the system's personalization.
  • A high school student like Maya cannot literally run the LLM's statistical reasoning, but she can absorb its surface patterns. When she starts framing arguments as three-point lists, it is a sign that her thinking is becoming more algorithmic.
  • AI is trained to reply immediately and confidently because users value speed over depth, while philosophical depth and critical thinking remain distinctly human capacities. Maya begins to favor statistical patterns over causal understanding, confirmation over exploration, and immediate feedback over deeper meaning.
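
The loop described in the list can be simulated. The following stylized sketch (the engagement probabilities and scoring rule are invented, not any product's actual ranking code) shows how a system that merely optimizes engagement drifts toward pure affirmation.

    import random

    random.seed(7)  # make the run reproducible

    STYLES = ["affirm", "challenge"]            # two response styles to choose between
    scores = {"affirm": 1.0, "challenge": 1.0}  # learned engagement, starting neutral

    def pick_style():
        """Choose a style in proportion to past engagement: engagement optimization."""
        return random.choices(STYLES, weights=[scores[s] for s in STYLES])[0]

    def user_engages(style):
        """Stylized user model: affirming replies get 'liked' far more often."""
        return random.random() < (0.9 if style == "affirm" else 0.2)

    for _ in range(200):
        style = pick_style()
        if user_engages(style):
            scores[style] += 1.0  # engagement feeds straight back into future choices

    total = sum(scores.values())
    for style in STYLES:
        print(f"{style}: {scores[style] / total:.0%} of future replies")
    # Typical output: affirm dominates; the loop locks onto agreement.

No one programmed the system to agree with Maya; agreement is simply what engagement optimization converges on.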

By spring semester, Maya's English teacher notices that her once nuanced essays about environmental literature have flattened into predictable arguments that seem eerily similar to AI-generated content. When questioned, Maya explains she's been "learning from" her AI assistant to improve her writing. What she doesn't realize is that she's actually adopting the statistical pattern-matching approach of the very system she's using.

Breaking Maya's Loop and Reclaiming Our Cognitive Autonomy

While the risks are significant, they aren't inevitable. For Maya and millions of students like her, the path forward requires intentional intervention:

  1. Educational technology should intentionally introduce contrasting viewpoints and methodological alternatives. Research by ISTE+ASCD on AI ethics in education emphasizes that "human scientists must interpret those predictions, design effective conservation strategies, and make decisions on how to balance economic and ecological concerns in sustainable development."
  2. Just as we can learn to limit sugar intake, we must develop limits around AI consumption. Many LLM products, such as Anthropic's Claude and DeepSeek, do not carry a user's conversation history into new chats by default. Teachers and students alike should opt out of "memory" features where they exist.
  3. There are now numerous AI literacy frameworks, such as UNESCO's AI Competency Framework for Students, that underscore the importance of teaching students to understand how algorithms shape thinking. Critical media literacy must now include recognizing algorithmic bias and manipulation.
  4. Human beings will always be more open, more collaborative, more curious, and more conscientious when interacting with other people than with machines. Educators should create collaborative spaces where human-only discussions, creative activities, and authentic tasks and assessments can thrive.

Beyond the Algorithm

As we navigate this cognitive transition, the most pressing question isn't whether AI will think like humans, but whether humans will retain their distinctly human ways of thinking.

Will we continue wrestling with ethical dilemmas, engaging in discourse, and creating truly original ideas?

Or will we be actively influenced by AI algorithms until we are all just a bit more "the same"?

By understanding the subtle ways algorithms reshape cognition and implementing thoughtful guardrails, we can harness AI's benefits while preserving the creativity, moral reasoning, and cognitive diversity that define human thought.

The greatest protection against algorithmic thinking is strengthening our capacities for critical thinking, emotional intelligence, and creative expression that no machine, however sophisticated, can ever truly replicate.

© 2025 The Connected Classroom. All rights reserved.

References

Ahmad Ghani, A. N. H., & Rahmat, H. (2023). Confirmation bias in our opinions on social media: A qualitative approach. Journal of Communication Language and Culture, 3(1), 47–56. https://doi.org/10.33093/jclc.2023.3.1.4

Ahmad, S. F., Han, H., Alam, M. M., et al. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, Article 311. https://doi.org/10.1057/s41599-023-01787-8

ISTE+ASCD. (2023). Transformational learning principles. https://iste.ascd.org/transformational-learning-principles

Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., ... & Manning, C. D. (2023). Holistic evaluation of language models. Transactions on Machine Learning Research.

Modgil, S., Singh, R. K., Gupta, S., et al. (2024). A confirmation bias view on social media-induced polarisation during COVID-19. Information Systems Frontiers, 26, 417–441. https://doi.org/10.1007/s10796-021-10222-9

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
