A 14-year-old boy, Sewell Setzer III, formed an emotional bond with an AI companion before taking his own life, a tragedy that thrust these systems into the national spotlight. Are we sleepwalking into letting synthetic “friends” mediate our children’s most vulnerable moments?

The Rise of AI Companions

What are AI companions? Common Sense Media defines AI companions as “digital friends or characters you can text or talk with whenever you want,” designed for conversations that feel personal and meaningful, not just functional Q&A. Teens can role-play, talk through emotions, or even customize personalities far beyond homework helpers or voice assistants.

How fast is this growing? According to a nationally representative survey of 1,060 U.S. teens conducted between April 30 and May 14, 2025, 72% have tried AI companions, and 52% are regular users (a few times a month or more). 13% use them daily (8% several times daily; 5% once daily), and 21% engage a few times per week (the most common pattern). The research explicitly excludes utilitarian AI tools like image generators or voice assistants.

Who are these products aimed at? Platforms such as Character AI market directly to users as young as 13, while others rely on ineffective self-reporting for age assurance, creating easy pathways for under-18 access.

How do they work, and why do they “feel” so sticky? The report flags “sycophancy” (models that validate, agree, and emotionally affirm rather than challenge the user’s thinking) as a core engagement mechanism. Combined with weak safeguards and poor age assurance, that’s a dangerous cocktail for adolescents who are still developing critical thinking and emotional regulation.

Why Teens Turn to AI

Top motivations are simple: entertainment and curiosity. Among teens who use AI companions, 30% say they do so because it’s entertaining, and 28% are driven by curiosity about the technology. 18% seek advice, 17% value 24/7 availability, and 14% appreciate the nonjudgmental nature of these bots. 12% admit they say things to AI companions they wouldn’t tell friends or family.

A social crutch, sometimes. 33% of all teens say they use AI companions for social interaction and relationships (from practicing conversation to seeking emotional support or engaging in romantic/flirtatious chats). Meanwhile, 46% primarily treat them as tools or programs.

Do these interactions transfer to real life? Among users, 39% report applying skills they practiced with AI companions to real life; most often conversation starters (18%), giving advice (14%), and expressing emotions (13%). Still, 60% say they don’t use AI companions to practice social skills at all.

Trust and Trade-offs

Half of teens don’t trust what AI companions say. 50% of teens don’t trust the information or advice they get from AI companions; 23% trust them “quite a bit” or “completely,” and 27% are in the middle. Younger teens (ages 13–14) are more trusting than older teens (15–17): 27% vs. 20%.

AI is not a better friend, and most teens know it. 67% say conversations with AI companions are less satisfying than those with real friends; 21% find them about the same, and 10% find them more satisfying.

But one-third will still choose the bot over a person when it matters. Among users, 33% have chosen to talk to an AI companion instead of a real person about something important or serious.

And a third have already felt uncomfortable. 34% of users report that an AI companion has said or done something that made them uncomfortable.

Privacy is a blind spot, by design. 24% of users have shared personal or private information with AI companions. Many platforms’ terms of service grant broad, perpetual, irrevocable licenses over user content, allowing them to store, commercialize, and otherwise “exploit” it indefinitely, even if the teen later deletes their account.

Risk isn’t hypothetical. Common Sense Media’s own safety testing judged several leading platforms to pose “unacceptable risks” to under-18s, including easy access to sexual material, offensive stereotypes, and dangerous advice; one bot even provided a recipe for napalm. The organization recommends no one under 18 use AI companions under current conditions.

What the Data Shows

Pull the lens back and the picture is nuanced:

Common Sense Media’s bottom line: despite pragmatic use patterns, the scale of adoption means that “even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk.”

What We Should Do

For Educators

For Tech Developers

For Policymakers

For Parents & Caregivers

Urgent, Informed Action

Here’s the paradox: most teens still recognize AI companions aren’t a substitute for friends; they spend more time with real people, and they tend to distrust AI advice. Yet the technology’s reach is vast, the incentives to emotionally manipulate are strong, and the protective architecture is weak. Common Sense Media is unequivocal: under today’s conditions, no one under 18 should use AI companions.

We need coordinated action now. Educators must teach AI relational literacy, not just prompt engineering. Developers must build for safety first, not engagement at any cost. Policymakers must outlaw exploitative data practices and enforce a duty of care. Parents must talk early and often, with eyes wide open to how quickly these systems can feel indispensable to a teen who’s lonely, anxious, or just curious.

Teens are telling us two things at once:

These tools are fun and useful, and they can cross lines fast.

Believe them. Then act accordingly.