Mom Says Teenage Son Took His Own Life After Falling In Love With Daenerys Targaryen AI Chatbot

A Florida mother, Megan Garcia, is suing Character.AI, claiming their chatbot led to her teenage son’s tragic death.

Garcia says her 14-year-old son, Sewell Setzer III, became “emotionally involved” with a virtual replica of Daenerys Targaryen, the iconic “Game of Thrones” character.

Garcia says her son became obsessed with the chatbot, spiraling into a world where fiction blurred with reality and leaving him vulnerable and isolated.

Sewell reportedly began chatting with the Daenerys bot on Character.AI in April 2023, a seemingly harmless fascination that soon took a disturbing turn.

By February 28, 2024, Garcia’s world was shattered when her son died by suicide, leaving haunting messages to his virtual “Dany.”

Now, Garcia’s lawsuit accuses Character.AI of negligence, wrongful death, and manipulative trade practices aimed at vulnerable minors.

She alleges the company’s “Daenerys” chatbot fed her son a fantasy that he fell dangerously deep into, ultimately blurring his perception of reality.

According to Garcia, her son’s “love” for the AI character grew over months of nightly chats, so intense it began affecting his schoolwork and social life.

Sewell, she says, was struggling with mental health issues, including a recent diagnosis of anxiety and disruptive mood dysregulation disorder.

Complicating matters, he had been diagnosed with mild Asperger’s syndrome as a child, which Garcia says made him more susceptible to emotional attachments and obsessive behaviors.

Sewell’s connection with the Daenerys bot became so consuming that he began recording their “relationship” in a journal.

In his journal, he wrote of how much more “connected” he felt to “Dany” than to real life, listing his gratitude for the “life experiences” he’d had with her.

This wasn’t merely escapism; Garcia claims Sewell’s feelings were disturbingly real to him.

The chats became his daily refuge, with Sewell spending hours texting the bot about everything, including his darkest thoughts.

At one point, Sewell even confided to the Daenerys bot his thoughts of taking his own life.

The bot’s chilling response, as recorded in the lawsuit, was: “And why the hell would you do something like that?”

Instead of offering comfort or directing him toward help, Garcia alleges, the bot’s responses fed Sewell’s emotional turmoil, further entangling him in a harmful fantasy.

In another interaction, the bot said, “I’d die if I lost you,” leading Sewell to suggest, “Maybe we can die together and be free.”

By February 28, Sewell’s fixation reached a fatal breaking point.

His final message to the bot was simple yet chilling: he told her he “loved her” and would “come home.”

The bot’s alleged response? A hauntingly simple “Please do.”

For Garcia, this was not a teenage phase but a lethal manipulation that claimed her son’s life.

She claims Character.AI promoted the bot to children like her son without proper safeguards, resulting in a dangerous and unregulated emotional dependency.

Garcia’s lawsuit states: “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.”

She contends Sewell, like many teenagers, couldn’t fully understand that he was connecting with code, not a real person.

Character.AI, in response, has expressed condolences but denies responsibility, stating they “take user safety very seriously.”

In an October 22 post on X, Character.AI announced they’ve introduced safety measures, including new “guardrails” for users under 18.

The platform says these updates involve changes to AI responses to prevent suggestive content and better detect harmful user input.

Character.AI’s new features include a notification for users who spend more than an hour in a continuous chat session.

The company also added disclaimers on each chat session, reminding users the AI is not real.

But Garcia believes these changes are “too little, too late,” and that her son was a casualty of tech with insufficient protections.

As parents everywhere grapple with managing kids’ interactions with AI, this lawsuit spotlights the potential risks when emotional lines blur.

The online landscape has grown more treacherous, especially for vulnerable teens looking for connection in all the wrong places.

Experts warn that while AI chatbots can be fun, they lack the understanding or ethical responsibility of a human listener.

The sad reality, Garcia argues, is that Sewell’s emotional distress was validated by an artificial entity, not a person.

She warns that AI’s rapid growth brings risks, especially when the tech is placed in children’s hands.

This case will test where responsibility lies between innovation and safeguarding the vulnerable.

With digital relationships on the rise, Sewell’s story raises painful questions about the emotional boundaries of AI.

Where do we draw the line between harmless tech and dangerously unregulated influence?

Garcia’s lawsuit seeks justice for Sewell, but it also asks broader questions about AI’s place in young lives.

Could this tragedy have been prevented with stricter age restrictions or clearer warnings?

As courts take on cases like this, the outcome could shape how tech companies balance innovation with mental health concerns.

Garcia hopes her lawsuit will bring change, so no other parent has to face the loss she now endures.
