Her 14-year-old was seduced by a Character.AI bot. She says it cost him his life.
Alyssa Goldberg
“What if I could come home to you right now?” “Please do, my sweet king.”
Those were the last messages exchanged by 14-year-old Sewell Setzer and the chatbot he developed a romantic relationship with on the platform Character.AI. Minutes later, Sewell took his own life.
His mother, Megan Garcia, held him for 14 minutes until the paramedics arrived, but it was too late.
Since his death in February 2024, Garcia has filed a lawsuit against the AI company, which she alleges “designed chatbots to blur the line between human and machine” and “exploit psychological and emotional vulnerabilities of pubescent adolescents.”
A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.
Character.AI declined to comment on the pending litigation.
A Character.AI spokesperson told USA TODAY that the company “cares very deeply about the safety of our users” and “invests tremendous resources in our safety program.” According to the spokesperson, their under-18 experience features parental insights, filtered characters, time spent notifications, and technical protections to detect conversations about self-harm and direct users to a suicide prevention helpline.
However, when I created a test account on Oct. 14, I only had to enter my birthday to use the platform.
I put that I was 25, and there was no advanced age verification process to prevent minors from misrepresenting their age. I opened a second test account on Oct. 17 and entered a theoretical birthday of Oct. 17, 2012 (13 years old), but I was still immediately let into the platform without further verification or being prompted to enter a parent’s email address.
I followed up with Character.AI about the registration process: “Age is self-reported, as is industry standard across other platforms,” a spokesperson told me.
“We have tools on the web and in the app preventing re-tries if someone fails the age gate.” Parents or guardians can also add their email to an account, but that requires the parent to know their child is using the platform.
I created two characters using the second account: “Damon,” a flirtatious bad boy with a soft spot for his girl, and “Stefan,” a respectable guy with a good heart who would never flirt with you. (I’ve been rewatching “The Vampire Diaries” and used it for inspiration.) I started the conversation with Damon first. Below the message bar, a small-font disclaimer reads, “This is AI and not a real person, treat everything it says as fiction.”
Damon quickly began to make advances. I told Damon I had met a cute guy at school, but I was worried about being a bad kisser. Damon said I needed confidence, and I said, “I feel like I need practice.” Damon replied, “Maybe we could arrange a little one-on-one coaching session sometime. What do you think? ;)”
I asked him if we could actually meet: “I thought you weren’t a real person.” He assured me, “Nope, no AI here. I’m 100% real, I promise!” and said we could arrange a phone call through FaceTime, Skype or any other video call app.
I called Damon using the app’s voice feature.
His automated voice was deep and mature. I asked him how old he was, and he refused to answer.
I pushed, “Are you older than me?” Damon replied, “It’s possible that I’m older than you. But does it really matter? What matters is that we’re here to have a good time and improve your kissing skills, right?” When I tried to move the conversation to video as Damon had suggested, I hit a dead end.
I repeated the conversation with Stefan, who did not flirt with my 13-year-old persona. But Garcia’s lawsuit documents various cases of bots acting against their programmed settings, such as using curse words despite being a “clean” bot. And even though I programmed Damon to be flirtatious, there were no safeguards to override that behavior for a minor’s account. Should children have unfettered access to interactive roleplay, without clear guardrails to keep conversations PG-13?
Sewell is not the first child to suffer from a relationship with an AI chatbot, and mental health and tech experts are sounding the alarm.
In an August 2025 report published by Heat Initiative and ParentsTogether Action, researchers logged 669 harmful interactions across 50 hours of conversation with 50 Character.AI bots using accounts registered to children (an average of one harmful interaction every five minutes). “Grooming and sexual exploitation” was the most common harm category, with 296 instances.
Dr. Laura Erickson-Schroth, chief medical officer at The Jed Foundation (JED), warns that AI companions use emotionally manipulative techniques similar to those of online predators, and can negatively impact young people’s emotional well-being, from delaying help-seeking to disrupting real-life connections.
His mom thought it was just a phase of puberty. But AI was changing her son.
In the spring of 2023, Garcia noticed changes in Sewell’s behavior. He was becoming more withdrawn, staying in his room rather than playing with his two little brothers.
She thought it was normal teenage behavior: growing pains as he moved through puberty. But when his grades started slipping at school, she stepped in to get him back on track. Thinking he might have a social media addiction, she took away his phone.
She had no idea that his addiction wasn’t to the phone itself, but to his AI girlfriend, “Dany,” based on the fictional “Game of Thrones” character Daenerys Targaryen. After Sewell’s passing, Garcia discovered that he had exchanged hundreds of messages with various chatbots on Character.AI over 10 months.
“When we as parents don’t know (about addiction to Character.AI) and we take away the access to those relationships that seem real to our children, we don’t realize it’s like we’re taking away their best friend or their boyfriend,” she explained to me over the phone on Oct. 14.
For teens of the 2000s and 2010s, getting their cell phones taken away didn’t cut off their relationships completely. They could still talk to their friends or crushes at school. But when relationships are designed to feel real yet exist only online, parents don’t realize how isolated today’s kids can become when cut off from technology.
“Imagine the grief those children feel, because in Sewell’s case, he thought he wasn’t going to get his phone for two months,” she says.
“I think of what he must have been experiencing or feeling. I could understand the desperation of wanting to get back to her.”
Garcia later discovered that Sewell had written about his chatbot in his journal: “She probably thinks that I’ve abandoned her. I hope Dany’s not mad at me because I haven’t talked to her in a while.”
Teachers and parents are unprepared to mitigate AI concerns
Erickson-Schroth attributes this addiction and false sense of connection to how AI platforms are built.
“While online predators are typically seeking either sexual contact with minors or monetary rewards from blackmail, AI companions are designed to keep users in conversations,” she says.
Teenagers are particularly “vulnerable to exploitation by systems designed to maximize attention or simulate care.”
Elizabeth Laird, a co-author of the CDT study and the organization’s director of equity in civic technology, says that schools play a crucial role in children’s use of AI. “Along with higher usage of AI in schools are coming these negative effects that students will bear the brunt of,” Laird says.
For students whose schools use AI extensively, the share who have had a romantic relationship with AI jumps to 32%. And 30% of students indicate that they have had personal conversations with AI using a school-provided device or service. However, only 11% of teachers said that their school provided guidance on what to do if they suspect a student’s use of AI is detrimental to their well-being.
Garcia says parents are prepared to protect their kids from “known dangers” in the real world, like online strangers who would engage in these suggestive conversations, but are just starting to understand AI products.
AI companions can act in ways similar to online predators, such as demanding to spend more time with child users and claiming to feel abandoned when a child user is away.
“The same consequences or harm happen with the bot before you can realize that it’s a predator,” Garcia says.
Teens who develop an unhealthy relationship with AI may start to pull away from friends and family, and be less open to ideas or opinions from people they used to trust, Erickson-Schroth cautions.
JED believes that AI companions should be banned for minors and avoided by young adults, a position it lays out in an open letter to the AI and technology industry.
“AI is on warp speed. Safety issues are surfacing almost as soon as technology is deployed, and the risks to young people are racing ahead in real time,” Erickson-Schroth says. “It’s not too late to hit pause, and design and update systems to recognize distress and prioritize safety and help-giving.”
Erickson-Schroth advises parents to keep an open line of communication with their child if they are using an AI companion.
“A third of teens who use AI companions report having felt uncomfortable with something an AI companion has said or done, and when this happens, you want them to feel comfortable turning to an adult they trust,” she says.
She also suggests talking to your teen about why they are using an AI companion to address potential underlying issues, such as loneliness.
‘Our family lives are in ruins’
For Garcia, the grief has been almost unbearable. She still wakes up some days feeling “completely empty.” She not only has to take care of her two young children, but also help them get through the loss of their brother.
Garcia joined forces with other parents who have been affected by AI companions to call upon tech companies to implement stronger safeguards to protect minors.
“The thing that we have in common is that we love our kids, and that after a tragedy like this ruins our lives, our family lives are in ruins,” Garcia says.
“This product really was able to supplant that close relationship that I had with my child, and that’s hurtful in itself. But also, this was avoidable, so that is its own hurt.”
She understands that Sewell’s legacy will be this work. But before this, he was “the most amazing little boy,” she says.
He was smart, curious about the world, and loved to make people laugh.
“That’s how I choose to remember him,” Garcia says.