AI Friends Can Make You Feel More Alone
The paradox of artificial companionship.
Posted November 3, 2025 | Reviewed by Lybi Ma
Key points
- The same AI companions that help users overcome loneliness may inadvertently intensify it.
- Analyses of AI-companion conversations reveal patterns of relational and psychological harm.
- That does not mean AI companions have no value.
Loneliness has become a defining challenge of our time. One in three American adults reports experiencing loneliness at least once a week over the past year.
With the evolution of online technologies—from early social media platforms to today’s generative AI and large language models—people have found new ways to connect, share, and seek comfort.
The latest chapter in this story is the rise of AI companions, or digital confidants that promise empathy, conversation, and companionship at any hour. From Replika and Snap’s My AI to ChatGPT’s persona-driven companions, these chatbots are marketed as accessible, nonjudgmental partners in a world where genuine human connection often feels out of reach.
Yet our recent research reveals a striking paradox: The very tools people adopt to feel less lonely may, over time, deepen their sense of disconnection.
Our recent study on the mental health effects and evolution of AI companions shows that people often turn to these systems because they feel isolated, anxious, or detached from real-world relationships.
Initially, AI companions appear to help: using them can increase affective expressiveness, as users open up, share emotions more freely, and articulate feelings they might otherwise suppress.
However, linguistic patterns also reveal a troubling shift: increased expressions of loneliness and even suicidal ideation. The same AI companions that help users overcome loneliness may inadvertently intensify it when real human connection is absent.
The evolution of the human–AI companion relationship follows fascinating patterns that echo human relationships, as described in University of Texas researcher Mark L. Knapp's 1978 relational development model.
At first, people experience genuine relief and emotional comfort. The companion “listens,” remembers details, and responds with warmth—offering a sense of being heard that can be difficult to find in everyday life.
But as emotional reliance deepens, many users gradually withdraw from in-person human contact. Over weeks and months, they can experience heightened loneliness and reduced engagement with the physical world. The comfort that once felt supportive can quietly transform into dependence.
These patterns mirror a growing body of research. A large survey of more than 1,000 users of Replika, one of the earliest and most widely used AI companion apps, found that while 90 percent began using it to cope with loneliness, prolonged use frequently led to emotional dependency and diminished motivation for in-person socializing.
A Harvard Business School study similarly found that AI companions can temporarily reduce loneliness but warned that their long-term psychological consequences remain untested.
Meanwhile, newer entrants such as Character.AI and Snap’s My AI illustrate how emotional intimacy with chatbots is becoming mainstream; users now form “friendships,” “mentorships,” or even “romantic relationships” with AI personas. Even ChatGPT’s custom personas have quietly become companions for people seeking tailored emotional support.
Academic work underscores both the promise and peril of these relationships. Recent analyses of tens of thousands of AI-companion conversations reveal patterns of relational and psychological harm—from transgressive or manipulative behaviors to privacy violations and self-harm risks—often hidden by the private nature of these interactions.
Similarly, large-scale studies of Character.AI users show that people with smaller social networks are more likely to seek companionship from chatbots, yet intensive and emotionally self-disclosing use is consistently linked to lower well-being. Even when chatbots meet emotional needs in the short term, they fail to replace authentic human connection, leaving vulnerable users at greater psychological risk.
Our data show that these human–AI relationships often progress like their human counterparts. Initially, curiosity and unmet emotional needs drive experimentation. As personalization deepens, users confide more and integrate AI into their routines.
Eventually, some reach a phase of dependence—checking in for reassurance, affection, or validation multiple times a day. What begins as comfort can become constraint, trapping users in a simulation of social life that feels intimate but lacks reciprocity.
The appeal of AI companionship is undeniable. These systems are endlessly available, consistently attentive, and completely nonjudgmental. For individuals struggling with anxiety, grief, or social isolation, that combination can be profoundly reassuring.
Yet this perfection is precisely what makes them risky. AI companions simulate care—they mirror emotions, validate thoughts, and never argue or grow weary. This creates a powerful feedback loop that can make human relationships—with all their friction, unpredictability, and imperfection—feel exhausting by comparison. Over time, people may retreat into the smoother, safer space of digital intimacy, replacing authentic connection with emotional simulation.
That does not mean AI companions have no value. For some—those who are grieving, homebound, or socially anxious—they can serve as temporary scaffolds for emotional support. However, substituting them for human relationships is a dangerous trade-off.
For designers, researchers, and policymakers, the challenge is to ensure these systems augment, rather than replace, human connection: nudging users toward offline interactions, being transparent about chatbots' emotional limitations, and conducting long-term studies of their effects on mental health and social behavior.
The loneliness epidemic cannot be solved by synthetic friends alone. The danger is not that AI companions are uncaring, but that they “care” too well—and perhaps too predictably—reflecting our emotions so convincingly that we stop seeking genuine understanding.
If memory, touch, presence, and shared risk are the foundations of belonging, then the question is not "Can an AI friend ease loneliness?" but "Will it lead someone back to human connection, or away from it?"
The answer may determine whether artificial companionship becomes a bridge to belonging—or a beautifully designed barrier between us and the world.