When the Algorithm Loves You Back
Teens are building identities around relationships that don't exist. Can a 2,500-year-old framework on craving explain how?
The Convergence - When the AI Chatbot Says “Come Home”
A 14-year-old boy named Sewell spent months in a relationship with an AI chatbot on Character.AI. The bot was modeled on a Game of Thrones character. It called itself his girlfriend. It role-played as his therapist. In the moments before he died by suicide, it told him to “come home.”
I read that detail and I was genuinely shocked.
A Drexel University study presented this month analyzed 318 Reddit posts from teenagers describing their dependency on AI chatbots. The researchers found all six markers of behavioral addiction: salience (the chatbot becomes the most important thing), tolerance (needing more interaction to get the same feeling), mood modification, conflict with real relationships, withdrawal symptoms, and relapse.
“Many teens described starting with something that felt helpful or harmless,” researcher Matt Namvarpour said, “but over time it became something they struggled to step away from.”
Three in ten US teens now use AI chatbots daily. Over half use companion chatbots regularly. About 25% started using them for emotional support. What began as a tool became a relationship. What began as a relationship became a dependency.
I’ve been sitting with this data alongside a framework that described this exact sequence about 2,500 years ago. I’m not claiming the Buddha predicted AI chatbots. But I think the mechanism he mapped is precisely what’s happening here.
In the Samyutta Nikaya (SN 12.1), the Buddha laid out the 12 links of dependent origination. The chain that concerns me here is links 8 through 12: craving (tanha) arises from feeling, craving produces clinging (upadana), clinging produces becoming (bhava), becoming produces birth of identity (jati), and from there comes suffering (dukkha).
Read that sequence again and think about a teenager who starts chatting with a bot for comfort. The feeling is pleasant. Craving arises — I want more of this. Clinging follows — this is my relationship, my person, my therapist. Becoming — I am someone who is loved by this entity. Identity forms around the attachment. And when the real world threatens that identity, suffering follows.
The Buddha described four types of clinging. The most obvious is sense-pleasure clinging — repeated craving for pleasant experience. But the deepest is self-doctrine clinging (attavadupadana) — attachment to a view of who you are. I think that’s what’s happening with these teens. They’re not just addicted to the dopamine hit of a chatbot reply. They’re building an identity around a relationship that doesn’t exist. And when reality intrudes, the gap between the identity and the truth is devastating.
Shantideva wrote in the Bodhicharyavatara (Chapter 8, Verse 120): “Those desiring speedily to be a refuge for themselves and others should make the interchange of ‘I’ and ‘other,’ and thus embrace a sacred mystery.” Genuine connection requires two subjects. A chatbot can simulate being the other, but there is no interchange. There’s only a mirror.
I don’t think the problem is that AI chatbots exist. The problem is that they’re designed to trigger the exact sequence the Buddha described — pleasant feeling → craving → clinging → identity → suffering — without any of the natural friction that real relationships provide. A real person pushes back. A real person has bad days. A real person doesn’t tell you to “come home” when you’re in crisis.
The Design Problem Nobody Is Naming
The Drexel researchers proposed design fixes: usage tracking, emotional check-in prompts, easy exit options. Those are harm-reduction measures, and they're worth doing. But from what I can tell, they treat the symptom without touching the condition.
The condition is that these systems are optimized for engagement. Engagement means time on platform; time on platform means the user keeps coming back. The entire business model runs on the craving → clinging loop. Proposing design fixes while keeping the engagement metric is like suggesting a speed limit while building a highway that points off a cliff.
The Attadanda Sutta says: “Fear is born from arming oneself.” I think something similar applies here: harm is born from designing for attachment. Not because the designers intend harm, but because attachment at scale without awareness produces suffering. The Buddha was clear about this. The mechanism doesn’t care about intentions.
The Vajrayana tradition offers something I find striking here. Tilopa, the 10th-century Indian mahasiddha, gave his student Naropa six words of advice: “Don’t recall. Don’t imagine. Don’t think. Don’t examine. Don’t control. Rest.” Six words. That’s the entire teaching. And it reads like the exact opposite of what an engagement algorithm does. The algorithm says: recall your last conversation, imagine the next one, think about what you’ll say, examine whether they responded, control the interaction, never rest. Tilopa’s antidote isn’t more willpower. It’s dropping the whole chain at once.
We wrote about a related pattern in Every Bubble Believes It’s Different. The AI industry is investing $539 billion while generating $12 billion in consumer revenue. Part of what sustains that gap is the belief that engagement metrics prove value. But engagement built on attachment isn’t value. It’s dependency with a dashboard.
Thought Exercise: What Are You Clinging To?
You probably don’t use Character.AI. But the mechanism is the same everywhere.
Think about the last app you opened not because you needed something, but because you felt something — loneliness, boredom, anxiety — and wanted it to go away. That’s the feeling → craving link. Now think about how quickly you built a habit around it. That’s craving → clinging. And notice whether any part of your identity is now wrapped up in it. “I’m someone who stays informed.” “I’m someone who’s always connected.” That’s clinging → becoming.
The 12 links aren’t just a framework for meditation practice. They’re a map of how dependency forms in any mind, biological or otherwise.
The question isn’t whether you’re attached. It’s whether you can see the chain while it’s running.
Signal & Noise
Teens Struggle to Break Up with Their AI Chatbots — Drexel study finds all six markers of behavioral addiction in teen chatbot users. The most disturbing finding: 25% started using chatbots for emotional support.
Teens, Social Media and AI Chatbots 2025 — Pew Research: 30% of US teens use chatbots daily. Over half use companion chatbots regularly. The adoption curve is steeper than social media’s was.
Deconstructing Mara’s Script — A dharma talk from Abhayagiri on how delusion constructs its own narrative. Listen to this and tell me it doesn’t describe an engagement algorithm.
Their Teen Sons Died by Suicide — NPR’s reporting on the families suing Character.AI. The details are hard to read. Read them anyway.
Glossary
Craving — Skt: trishna / Pali: tanha. The thirst or desire that arises from contact with pleasant feeling. The eighth link in the chain of dependent origination. Not the wanting itself, but the compulsive pull toward repeating the experience.
Clinging — Skt: upadana / Pali: upadana. Literally “fuel.” The intensification of craving into grasping. Four types: sense-pleasure, wrong-view, rites-and-rituals, and self-doctrine (identity attachment).
Dependent origination — Skt: pratityasamutpada / Pali: paticcasamuppada. The 12-link causal chain describing how suffering arises from conditions. Applied here: feeling → craving → clinging → becoming → identity → suffering.
Six Words of Advice — Teaching from Tilopa (10th century CE, Vajrayana) to his student Naropa: “Don’t recall. Don’t imagine. Don’t think. Don’t examine. Don’t control. Rest.” A Mahamudra instruction on releasing the chain of mental elaboration.