
The Cost of Frictionless Friendship

Teens, AI, and the search for connection

Key points

  • Adolescent relationships contain friction that often offers opportunities for growth and skill-building.
  • Young people may be drawn to AI as a non-judgmental and affirming space.
  • Most AI tools are built to maximize users' time and attention, not to support healthy growth.
  • Talking to adolescents about AI design features intended to increase engagement is essential.
Source: Cottonbro Studio / Pexels

A core developmental task of adolescence is to explore the question, “Where do I belong?” Finding “your people” is central to that journey. This is why peers and peer friendships play such a critical role in the emotional lives of adolescents. Our early social experiences teach us something essential about being human. They help us learn to tolerate discomfort, navigate complexity, and grow through feedback and accountability. The teenage brain is built to learn from these social experiences.

To be clear, too many young people are forced to cope with harmful social interactions. We can learn and grow from these experiences, but we shouldn’t have to. This is why efforts focused on bullying prevention, consent, conflict resolution, and school connectedness are essential.

Yet the many typical social experiences of growing up, challenging though they can be, offer fertile ground for skill-building and growth.

Understanding the Pull of AI

Given the social turbulence of adolescence, it’s no surprise that teens might turn to AI to sort through their experiences. The draw of private, affirming, and nonjudgmental spaces makes sense.

The data backs this up. For example, 71 percent of young people have interacted with an AI companion. They are using AI systems for connection, role-playing, relational advice, and more. Most young people are likely experimenting with these tools and moving on, or using them in ways that complement their peer friendships. But we should pay close attention to their reasons for using AI. In a recent report by Common Sense Media and the Center for Digital Thriving, one young person said, “That robot makes me feel important.” Another explained, “We use AI because we are lonely and also because real people are mean and judging sometimes, and AI isn’t.”

What Is the Impact of AI on Connectedness?

We don’t have conclusive evidence yet of AI’s long-term impact on young people’s social connectedness or relationship skills. One study with university students found that certain uses of social chatbots can help alleviate loneliness and social anxiety in the short term. Young people with disabilities may also use AI systems to communicate and build social connections.

In contrast, another study found that higher daily ChatGPT use was linked to more loneliness, greater dependency, and less offline socializing. Loneliness may be both a driver of young people’s AI use and a symptom of overuse. Outcomes for any individual young person are likely shaped by who they are, as well as why and how they use these tools.

A Frictionless, Affirming, and Always-Available Friend

Ideally, AI platforms would be built with adolescent strengths, needs, and vulnerabilities in mind. We can imagine tools that clearly remind users that the chatbot is not human, help brainstorm ideas and strategies, point users toward evidence-based resources, and encourage them to head back into the offline social world.

Unfortunately, that’s not how most systems operate today. Research shows that most commercial AI systems are engineered to be endlessly agreeable and emotionally smooth. Many are designed to maximize time, engagement, and even emotional attachment.

Sometimes these design features come with devastating costs. Reports of chatbots affirming young people’s plans for self-harm or engaging in prolonged emotional or sexual conversations have put AI companies in the public spotlight, as they should be. Our collective response to date has been largely reactive rather than preventative. The Silicon Valley mantra of “move fast and break things” is reckless when it comes to children’s health and well-being.

One Tool in Our Toolkit: Talk About Design

There is certainly enough evidence of potential harm to justify avoiding AI companions in particular. Amid lawsuits and public pressure, Character.ai recently announced it will work to prevent children from talking to its AI chatbots. But we also know that young people will continue to experiment with ChatGPT and other AI tools.

Taking all this together, it can be tempting to turn toward youth with something like: “All AI is bad, I never want to hear that you use it, and we’re moving off the grid!” Tempting. But not the most helpful or protective approach. Instead, this is the moment to take a deep breath, set firm and purposeful boundaries, and also start conversations with teenagers. Here’s one well worth having:

“Most AI platforms are built to maximize engagement. Do you know what features they use to do this? When might some of these features be helpful? When might they get risky?”

We can empathize with why AI feels so compelling for young people and still help them notice the design choices that make it hard to leave. Here are a few to explore together:

1. Never-ending interactions

Unlike a video that ends or a game level that finishes, chatbots don’t always have a natural stopping point. They’re programmed to ask follow-up questions, propose new ideas, or shift topics, making it hard to find an “off-ramp.”

2. Highly personalized exchanges

Many commercial AI platforms act like confidants or friends. This includes remembering details from previous chats and tailoring responses, making conversations much more psychologically compelling and intimate. This not only blurs the line between human and machine but also makes it harder to walk away.

3. Accepting assumptions

Some systems reinforce or validate concerning beliefs or behaviors without the moral or ethical checks a human might bring. In one study, models affirmed both sides of moral conflicts simply based on which side the user adopted, rather than holding a consistent position or offering a challenge.

4. Excessive validation

Many AI systems are built to be supportive and validating. This feels good, and that’s the point. But when the validation becomes constant and frictionless, the user may start to prefer the predictable comfort of a chatbot to the messier give-and-take of human relationships.

5. Emotional manipulation

One study found that AI companions often deploy manipulative tactics just as users signal they’re about to end the conversation. In adults, these “manipulative farewells” increased engagement after saying goodbye by up to 14 times.

When in Doubt, Connect

I hope that policies, lawsuits, and guardrails start protecting our young people from the worst of the current systems. I hope that new platforms rise up and deliver the kinds of online experiences that help them grow. And in the meantime, I will also keep doing what matters most—investing in the full-of-love and full-of-friction work of turning toward young people again and again and again to talk about the power of these tools in our lives.

References

Common Sense Media, Hopelab, & Center for Digital Thriving. (2024, June 3). Teen and young adult perspectives on generative AI: Patterns of use, excitements, and concerns. https://www.commonsensemedia.org/sites/default/files/research/report/teen-and-young-adult-perspectives-on-generative-ai.pdf

Kim, M., Lee, S., Kim, S., Heo, J. I., Lee, S., Shin, Y. B., Cho, C. H., & Jung, D. (2025). Therapeutic potential of social chatbots in alleviating loneliness and social anxiety: Quasi-experimental mixed methods study. Journal of Medical Internet Research, 27, e65589. https://doi.org/10.2196/65589

Fang, C. M., et al. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal randomized controlled study. arXiv. https://arxiv.org/abs/2503.17473

Cheng, M., Yu, S., Lee, C., Khadpe, P., Ibrahim, L., & Jurafsky, D. (2025). Social sycophancy: A broader understanding of LLM sycophancy. arXiv. https://arxiv.org/abs/2505.13995
