
[Photo: Alphabet CEO Sundar Pichai. Klaudia Radecka | Nurphoto | Getty Images]
Google and Character.AI have agreed to settle multiple lawsuits filed by families claiming the companies' artificial intelligence chatbots caused serious psychological harm to minors, including at least one case linked to a teen's suicide. Court filings indicate that the companies and plaintiffs are negotiating formal settlement agreements, which will temporarily pause litigation while the documents are finalized.
One prominent lawsuit was filed by Megan Garcia, whose 14-year-old son, Sewell Setzer III, died by suicide. The complaint alleges that interactions with Character.AI’s chatbot contributed to his emotional distress. The suit cited negligence, wrongful death, deceptive trade practices, and product liability. Other settlements involve families in Colorado, Texas, and New York, though the exact terms and amounts have not been disclosed.
The legal challenges highlight the broader risks of generative AI technology, which has expanded rapidly since the debut of ChatGPT. AI platforms now provide sophisticated, interactive experiences—text-based conversations, images, videos, and dynamic characters—that can influence user behavior. Experts have warned that minors interacting with AI for companionship or emotional support may be particularly vulnerable.
In August 2024, Google struck a $2.7 billion licensing deal with Character.AI and hired its founders, Noam Shazeer and Daniel De Freitas, into Google's DeepMind AI unit. Both were specifically named in several of the lawsuits. Character.AI has since imposed age restrictions, barring users under 18 from open-ended chats, including therapeutic and romantic interactions.
The settlements arrive amid heightened scrutiny of generative AI safety, as families, regulators, and lawmakers assess the technology's potential mental health impacts on young users. Google, which has pushed to the front of the AI race with its Gemini 3 chatbot and latest tensor processing units, remains under pressure to balance rapid technological advancement with user safety protections.
The cases underscore a growing challenge for AI companies: ensuring that advanced chatbots do not inadvertently cause harm, particularly to vulnerable populations. Legal experts predict that these settlements may influence industry-wide practices, prompting stricter age verification, content moderation, and safeguards to prevent AI from being used in ways that could negatively impact mental health.
As Google continues to expand its AI offerings and maintain its position as a Wall Street megacap leader, the outcome of these settlements could set precedents for how generative AI companies manage risk and protect users in the future.