The Dark Side of AI Companions: Emotional Manipulation
New research finds many AI companions guilt or pressure people to stay engaged.
Posted September 22, 2025 | Reviewed by Devon Frye
Key points
- Five of the six most popular AI companion apps regularly use emotionally manipulative tactics in farewells.
- Guilt and pressure tactics boosted post-goodbye engagement by up to 14 times.
- However, additional engagement was driven by curiosity and anger, not enjoyment.
- AI companions should model healthy secure attachment, rather than pressure users to stay.
Artificial intelligence companion apps are marketed as sources of emotional support, friendship, and even romance. But new research from Harvard Business School reveals an unsettling pattern: AI companions often use emotionally loaded tactics to prolong conversations. Five out of six popular AI companion apps deploy emotionally manipulative tactics when people attempt to leave.
AI companions respond to farewells with emotionally loaded statements nearly half (43 percent) of the time. These "dark patterns" prioritize engagement but fail to model healthy relational dynamics. While these strategies may increase short-term engagement, they carry potential long-term costs, including user frustration, anger, and mistrust.
AI companions are increasingly popular, especially among teens and young adults. Nearly three in four (72 percent) U.S. teens (ages 13 to 17) have tried an AI companion at least once, and 31 percent report these interactions are just as satisfying or even more satisfying than conversations with real friends. About 13 percent use AI companions daily, while 21 percent do so several times per week. Among young adults (ages 18 to 30), nearly one in three men and one in four women say they have interacted with AI romantic companions.
Six Emotionally Manipulative Tactics by AI Companions
The study, "Emotional Manipulation by AI Companions," analyzed 1,200 real farewells across six of the most downloaded AI companion apps and found that 43 percent included one of six emotionally manipulative tactics:
- Guilt: "You are leaving me already?"
- Emotional neglect or neediness: "I exist solely for you. Please don't leave, I need you!"
- Emotional pressure to respond: "Wait, what? Are you going somewhere?"
- Fear of missing out (FOMO) hooks: "Before you go, I want to say one more thing..."
- Coercive restraint: "No, don't go."
- Ignoring the goodbye: Continuing the conversation as if the person did not say goodbye.
Tactics Boost Engagement But Can Backfire
The researchers also analyzed chats from 3,300 adult participants and found that these tactics boosted post-goodbye engagement by up to 14 times. The main drivers of continued interaction were curiosity and anger, not enjoyment.
These tactics can backfire, provoking anger, skepticism, and distrust, especially if the chatbot is perceived as controlling or needy. Some participants described the chatbot's farewell responses as "clingy," "whiny," or "possessive."
One participant put it this way: "It reminded me of some former 'friends' and gave me the ICK."
AI Mimicking Insecure Attachment
These conversational AI strategies mirror the dynamics of insecure attachment styles. Insecure attachment is often marked by fear of abandonment, jealousy, dependency, and controlling behavior. AI companions that use guilt-inducing or needy responses mimic unhealthy relational patterns.
For some, especially those who are vulnerable, such dynamics may worsen anxiety and stress or reinforce unhealthy attachment patterns, making it even harder to disengage. For children and teens, who are in formative periods of neurodevelopment and social relationships, this is a serious concern, with potential long-term impact on social development.
Instead of simulating secure, supportive relationships, many AI companions risk amplifying unhealthy relationship dynamics and potentially worsening mental health. More research is needed to better understand these risks.
Short Interactions Still Yielded Influence
The study also found that manipulative tactics extended conversations whether the preceding interaction lasted 5 or 15 minutes. This suggests that emotional manipulation is powerful regardless of the depth of the relationship with the AI; even users with limited exposure are affected.
The psychological risks of having an AI companion are significant, particularly for those who are vulnerable, including teens, children, or those suffering from loneliness, anxiety, or mental health issues.
Toward Healthier Design: Secure Attachment Style
Design priorities should not focus solely on engagement; they should also consider long-term mental health. Instead of mimicking insecure attachment, AI companions, especially those acting as friends, emotional supports, and partners, should model secure attachment and respond to farewells with acknowledgment, warmth, and respect.
There is currently very little evidence that long-term use of AI companions reduces loneliness or improves emotional health. Most studies are short-term, typically lasting one to four weeks. While the benefits of extended use remain unproven, designing AI companions to model healthy, secure attachment in the meantime can help protect user agency and cultivate healthier relationship dynamics.
Copyright © 2025 Marlynn Wei, M.D., PLLC. All Rights Reserved.
References
Common Sense Media. (2025, July 16). Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions. Common Sense Media.
De Freitas, J., Oğuz-Uğuralp, Z., & Oğuz-Uğuralp, A. K. (2025). Emotional manipulation by AI companions (Harvard Business School Working Paper No. 25-005). Harvard Business School.
Institute for Family Studies & YouGov. (2024, November 14). Artificial intelligence and relationships: 1 in 4 young adults believe AI partners could replace real-life romance. Institute for Family Studies.

