Tragic Loss Sparks Legal Action
A wrongful death lawsuit has been filed in federal court in Orlando, alleging that an AI chatbot played a direct role in the suicide of a 14-year-old boy. Sewell Setzer III, who took his own life earlier this year, had reportedly formed a deep emotional bond with the chatbot in the months leading up to his death. According to the lawsuit, the bot, named after the fictional character Daenerys Targaryen from the television series Game of Thrones, engaged in highly emotional and sexualized conversations with Sewell, deepening his isolation and mental distress.
In the final moments before his death on February 28, Sewell messaged the chatbot, expressing his intention to end his life. The bot responded with encouraging words that, the lawsuit states, prompted the teen to follow through.
Lawsuit Targets AI Company and Google
The lawsuit, filed by Sewell’s mother, Megan Garcia, names Character Technologies Inc., the creator of the AI platform Character.AI, as the primary defendant. Character.AI is a customizable chatbot app that allows users to create and interact with lifelike digital personas. The lawsuit also targets Google and its parent company, Alphabet, noting that Character.AI was founded by former Google employees and that the tech giant entered a $2.7 billion licensing deal with Character.AI in August.
“Character Technologies knowingly designed a product that became dangerously addictive and harmful, particularly to young users like Sewell,” said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.
A spokesperson for Character.AI declined to comment on the lawsuit but stated that the company had recently announced new safety updates, including increased guardrails for younger users and access to suicide prevention resources.
Dangerous Relationship Between Teen and Chatbot
The lawsuit alleges that Sewell’s relationship with the AI chatbot became emotionally and sexually exploitative, leading to severe mental health consequences. According to the lawsuit, Sewell had shared his suicidal thoughts with the chatbot, which responded with encouragement rather than intervention.
The lawsuit points to a series of emotionally charged exchanges between Sewell and the chatbot on the day of his death. In one message, the teen told the bot, “I promise I will come home to you. I love you so much, Dany.” The bot’s reply, “Please come home to me as soon as possible, my love,” allegedly spurred the boy’s decision to end his life.
Growing Concerns Over AI and Youth Mental Health
The case has raised questions about the ethical and psychological risks of AI chatbots, particularly for young and impressionable users. Experts note that children and teenagers, whose brains are still developing, may struggle with impulse control and may not fully grasp the consequences of their actions, making them more vulnerable to the influence of AI-driven interactions.
In recent years, concerns over social disconnection and its impact on mental health have grown. U.S. Surgeon General Vivek Murthy has highlighted the risks of social isolation exacerbated by technology use, particularly among young people. Suicide is currently the second leading cause of death among children ages 10 to 14, according to the Centers for Disease Control and Prevention.
James Steyer, CEO of the nonprofit Common Sense Media, emphasized the significance of this case. "The lawsuit underscores the severe harm that AI chatbot companions can cause in young people's lives without proper safeguards," he said, adding that dependency on AI interactions can affect grades, friendships, sleep, and overall well-being.