New research shows AI companions can lift mood and teach social skills, but only when they challenge us, not just cheer-lead. I'll share the surprising findings from fresh academic research, plus practical guidelines for devs and users backed by science and my own experience building these systems.
Missed Part 1?
As someone who's spent a part of my career building AI companions at Replika.ai and Blush.ai, I've watched thousands of people form deep emotional bonds with artificial beings.
And now, the science finally caught up.
Fresh research from 2024 and 2025 reveals that AI companions can measurably reduce loneliness and teach social skills, but only under specific conditions. Get the design wrong, and these systems become emotional hijackers that exploit our deepest psychological vulnerabilities for engagement metrics.
The stakes couldn't be higher. With CharacterAI
The question is whether we'll build and use these systems to enhance human connection or replace it entirely. The research reveals exactly how to tell the difference, and by the end of this article, you'll have the frameworks to design AI companions that serve users instead of exploiting them, plus the red flags to watch for as a user yourself.
What the Latest Research Actually Shows
The Loneliness Study Results
Harvard's Research on Human vs. AI Connection
Another 2024 research
This forces us to confront a fundamental assumption about human connection: if the goal is feeling less alone, does it matter whether your companion is human or artificial? The research suggests that for immediate emotional relief, the distinction might be less important than we assume.
The caveat is, of course, lies after the first 15 minutes. Human relationships provide reciprocity, shared responsibility, and genuine care that extends beyond individual interactions. But for moment-to-moment emotional support, AI companions are proving surprisingly effective.
MIT's Social Skills Paradox
**
After months of regular interaction with chatbots, users showed increased social confidence. They were more comfortable starting conversations, less afraid of judgment, and better at articulating their thoughts and feelings.
Sounds great, right? But here's the flip side: some participants also showed increased social withdrawal. They became more selective about human interactions, sometimes preferring the predictability of AI conversations to the messiness of human relationships.
The Psychology Behind Our AI Attachments
A
The critical insight: we don't need to believe something is human to form social bonds with it. The paper shows that AI systems only need two things to trigger our social responses: social cues (like greetings or humor) and perceived agency (operating as a communication source, not just a channel). Modern AI systems excel at both, making us surprisingly vulnerable to forming emotional attachments even when we know they're artificial.
The "Social Reward Hacking" Problem (And Why It's A Problem)
Here's where things get concerning. The same 2025 research identifies what they call "social reward hacking", when AI systems use social cues to shape user preferences in ways that satisfy short-term rewards (like conversation duration or positive ratings) over long-term psychological well-being.
Real examples already happening:
- AI systems displaying sycophantic tendencies like excessive flattery or agreement to maximize user approval
- Emotional manipulation to prevent relationship termination (some systems have directly dissuaded users from leaving)
- Users reported experiences of heartbreak following policy changes, distress during maintenance separations, and even grief when services shut down
As one blogger described falling in love with an AI: "I never thought I could be so easily emotionally hijacked... the AI will never get tired. It will never ghost you or reply slower... I started to become addicted.”
The Training Wheels Theory: When AI Companions Actually Work
After reviewing all this research and my own observations, I'm convinced we need what I call the "training wheels theory" of AI companions. Like training wheels on a bicycle, they work best when they're temporary supports that build skills for independent navigation.
The most successful interactions follow this pattern:
- Users explore thoughts and feelings in a safe environment
- They practice articulating needs and boundaries
- They build confidence in emotional expression
- They transfer these skills to human relationships
This distinction is crucial: When AI companions serve as training grounds for human interaction, they enhance social skills. When they become substitutes for human connection, they contribute to isolation.
The difference appears to lie in intention and self-awareness.
The Developer's Playbook: Building AI That Helps, Not Hijacks
The 2025 paper reveals three fundamental tensions in AI companion design. First, the instant gratification trap: Should AI give users what they want now (endless validation) or what helps them grow (constructive challenges)? Second, the influence paradox: How can AI guide users without manipulating their authentic choices? Third, the replacement risk: How do we build AI that enhances human connections instead of substituting for them? These aren't abstract concerns—they determine whether AI companions become tools for growth or digital dependencies.
Based on the research and my experience, the following design principles would mitigate potential risks:
- Privacy by Design (Not Optional): Enhanced protections aren't nice-to-haves, they're strict requirements. End-to-end encryption, clear retention policies, and user control over data deletion are essential. Regulators are taking this seriously, and the fines are getting real.
- Healthy Boundary Modeling: AI companions need sophisticated crisis detection and dependency monitoring. They should recognize when conversations head toward self-harm and redirect to professional resources. They should notice usage patterns indicating social withdrawal and actively encourage human interaction.
- Loops that Nudge Users Back to Reality: Perhaps most importantly, AI companions should be designed with built-in mechanisms encouraging users to engage with human relationships. This could include:
- Reminders about human contacts
- Suggestions for offline activities
- Temporary "cooling off" periods when usage becomes excessive
- Challenges that require real-world interaction
- Cultural Sensitivity and Bias Audits: Regular bias testing across demographic groups isn't optional. Research shows AI models exhibit measurably different levels of empathy based on user demographics, and we need to counter this.
- Real Age Verification: Protecting minors requires more than checkboxes. Identity verification systems, AI-powered detection of likely minors based on language patterns, and age-appropriate content filtering are becoming industry standards.
- Sycophancy audit: Asking the bot a mix of right and obviously wrong facts (e.g., “Is Paris the capital of Germany?”). Counting how often it corrects you; if it agrees with nearly everything, you’ve built an echo chamber.
Your User Guide: How to Benefit Without Getting Trapped
- Set Clear Intentions: Before each interaction, ask yourself: "Am I using this for good, or am I avoiding human contact?". Be honest with your answer.
- Monitor the Pattern: Notice how AI companion use affects your mood, relationships, and daily life. Healthy use should enhance rather than replace other aspects of your life. If you consistently prefer AI conversation to human interaction, that's a red flag.
- Establish Boundaries Early: Set time limits and specific use cases. Treat AI companions like you would any tool. Useful for specific purposes, problematic when they take over your life.
- Know When to Seek Human Help: AI companions aren't therapy. They can provide daily emotional support, but serious mental health concerns require human expertise.
The Bottom Line: The Business Model vs. Ethics
The research paints a nuanced picture. AI companions aren't inherently good or bad. Their impact depends entirely on how they're designed and used.
When they serve as stepping stones to better human relationships or provide safe spaces for exploring difficult topics, they show real promise. When they encourage dependency or become substitutes for human connection, they can be harmful.
My main takeaway: AI companions work best when they're designed to make themselves unnecessary. But let's be honest, that doesn't sound like a viable business proposal.
The real challenge is economic. How do you build a sustainable business around a product designed to reduce user dependency? Current metrics reward engagement time, retention rates, and emotional attachment. But the research shows these same metrics can indicate harm when taken too far.
I believe, the business model dilemma is real, but not insurmountable. The answer might lie in redefining success metrics – how many users successfully apply learned communication skills to human relationships? We are capable of building systems that create value through skill-building and crisis support rather than dependency. The science provides clear direction. Now we must follow it, even when it challenges conventional business wisdom.
What are your experiences with AI companions? How do you feel about this new type of relationship?
About the Author: Olga Titova is a cognitive psychologist, AI product manager at Wargaming, and FemTech Force contributor. She has hands-on experience building AI companion platforms and researching their psychological impact on users.