Verified by Psychology Today

25 Ways AI Will Change How We Think and Feel in 2025

From artificial intimacy to synthetic minds, we’re in for a ride.

Key points

  • As AI becomes more powerful, we’re bound to become humbler.
  • AI might help us overcome human insufficiency, but that could come at a high price: lower self-esteem.
  • More of us will view AI as a friend or romantic partner, but it’ll be a complicated relationship.
  • Our mixed emotions toward AI could lead to both a rise in anti-AI activist schemes and an AI rights movement.
Source: Sydney Sims / Unsplash. Used with permission.

Yet another AI boom year has come to an end. With the “AI winter” long behind us, pundits are gazing on an eternal summer of progress and making their predictions for the new year. Naturally, no one can agree on what’s in store. Is AGI (Artificial General Intelligence) just around the corner, or is it still many years away? Is the “agentic age” upon us, or are we overestimating the sophistication of our technology?

Let’s face it, as enlightening and entertaining as these predictions may be, they will never be fully accurate. Blinded by our own hubris and self-importance, we’re not reliable soothsayers; in fact, we’d do better to let AI do the predicting for us. Unlike us, it understands its limitations.

There is one area, however, in which humans might have a unique advantage: understanding the human psyche—parsing the idiosyncratic way we think and feel. So instead of predictions that focus on the technological, economic, societal, or political aspects of AI, I’d like to explore how AI will affect us psychologically in the coming year. How will our hearts and minds change? What will the continuous evolution of AI do to our emotions, mental processes, and behaviors? I have written about this before, but we seem to be at an(other) inflection point for AI, making it a good time to check in.

Here are my 25 intuitive, nonscientific, and entirely subjective predictions for 2025:

  1. As AI becomes more powerful, growing in autonomy and perhaps even showing signs of sentience, we’re bound to become humbler. AI is putting us in our place when it comes to acquiring and retaining knowledge, detecting patterns, or computing at record speed. On a more philosophical level, we’ll start rethinking our position in the universe, having lost our claim on the center.
  2. This might lead us to foster empathy and treat other living beings and systems, such as animals and ecosystems, with more dignity and respect.
  3. Beware of envy! AI’s capabilities are already impressive; as they get bigger and better, we’re likely to simultaneously admire and resent what it can do.
  4. It’s not far-fetched to imagine AI becoming that mysterious other with whom our spouse, partner, colleague, or friend would rather spend the evening. Jealousy can make fools of us all.
  5. Conversely, if there’s an “AI fail,” we might all enjoy some schadenfreude.
  6. Fear! We may feel so outperformed and out-thought by AI that it triggers an epidemic of imposter syndrome. Sure, AI might help us “overcome human insufficiency,” but that could come at a high price: an aggravated fear of our insufficiency. We may feel inadequate and insecure, and start suffering from low self-esteem.
  7. In fact, there’s a whole host of very human phobias that AI will aggravate: atelophobia, the fear of imperfection; atychiphobia, the fear of failure; and even athazagoraphobia, the fear of being forgotten or replaced.
  8. On the last point, the risk of AI-caused human extinction is disputed but real. One of the “godfathers” of AI, Nobel Prize laureate Geoffrey Hinton, recently estimated the probability at 10 to 20 percent within the next three decades.
  9. A second set of feelings will revolve around relationship issues. The good news: Because our relationship with AI is not physical, many of the more touchy-feely relationship phobias will not be exacerbated by our artificial friends: philemaphobia (fear of kissing), genophobia (fear of sex), chiraptophobia (fear of being touched), omphalophobia (fear of belly buttons), et cetera.
  10. Due to AI’s dramatic advancements, as well as rogue moments that might occur, we are likely to experience pistanthrophobia, the fear of trusting others, or—with AI arguably being the ultimate other—even xenophobia.
  11. At the same time, however, we will experience more (artificial) intimacy in our relationship with AI. More of us will view AI as a friend or romantic partner, helping us feel less lonely and isolated—more understood, desired, and loved. This might, however, hamper our ability to develop real intimacy or have more devastating consequences.
  12. In fact, AI might lock us into exclusive, loyal relationships. When every interaction with a particular AI trains it to know us better, interactions with other AIs may feel “off” and less natural. Organizational psychologist Niels Van Quaquebeke told me this could have two consequences. First, the switching costs between AI models will increase; second, the AI that gains our loyalty can impede our personal growth, becoming tantamount to a nagging partner who argues, “But we have always done it this way!”
  13. Despite (or, perhaps, because of) this intimacy trap, we could develop philophobia, a fear of falling in love.
  14. AI might foment what Diana Lind calls the “human doom loop,” the complete digitalization of our lives to the extent that we have few incentives to leave the house. The resulting privation of our social selves will correspond with the degradation of our physical environment. We will become lonelier, more isolated, and even depressed, while our built environments will stagnate and become more desolate.
  15. As we delegate more tasks to AI, we may experience cognitive atrophy; that is, we will become less adept at certain cognitive skills. In other words: We may become sloppier thinkers.
  16. Or maybe the opposite will happen. AI might sharpen our intellect because it will augment our cognitive skills, serving as a “mind for our minds,” to borrow a concept from AI researchers Dave and Helen Edwards. Merging human neuroplasticity and “techno-plasticity,” we might in fact eventually witness the development of a whole new synthetic mind.
  17. Similarly, we might delegate our ethics to AI, expecting it to make “rational,” “objective,” carefully weighted, data-based decisions for us when we are confronted with moral dilemmas. That, in turn, might drain our moral imagination.
  18. The same paradoxical effect might kick in with regard to emotional diversity. On the one hand, AI might narrow the range of our expressiveness, forcing our emotions into a reductionist, monochrome set of predictable choices, as well as ignoring and in fact stifling more complex, nuanced emotions (a criticism leveled at so-called Emotional AI applications).
  19. On the other hand, engaging with AI might change the very catalogue of human emotions. Emotions are not black and white, and there are myriad grey tones in between. No one knew this better than John Koenig, who in his seminal Dictionary of Obscure Sorrows chronicled unnamed, niche, and unorthodox feelings outside the emotional mainstream.
  20. In the workplace, we may begin to prefer AI managers to their human counterparts because their attitude and behavior are consistent. AI bosses don’t have mood swings or make impulsive decisions; there’s no passive-aggressive demeanor, political calculus, or playing favorites. AI managers have nothing to prove and treat everybody equally based on objective criteria.
  21. Yet AI in the workplace is a double-edged sword. A recent study showed that AI tools dramatically boosted the productivity of leading scientists—but also significantly lowered their job satisfaction. The scientists felt that their skills were underutilized and reported a diminished sense of ownership of and connection to their work.
  22. This is indicative of a broader trend. Instead of exuberance and joy, we might see a rise of melancholy—a surrender to an existential sorrow that intuits our eventual obsolescence as workers (and humans).
  23. From there, it’s a small step to melancholy’s siblings: cynicism, nihilism, and grief.
  24. These strong emotions could lead to a rise in anti-AI activist schemes, a surge of anti-AI cyberpunks (see The New York Times’ prediction of a punk revival in 2025).
  25. The opposite is possible, too. Identity politics might enter the human-machine realm. Morality and social justice could also inform new discourse, particularly as a response to our othering of AI. We could see the emergence of AI rights activists.

This is by no means a complete list, but it shows how potentially paradoxical and inevitably complex AI’s impact on our emotions and behaviors may be. A stable relationship it is not. Nor is it one we can give up on.

We are trapped in a joint future that we might not want, and the impossibility of a breakup is the only thing we can be certain of. Everything else will remain unpredictable. It should be an exciting year!

More from Tim Leberecht