In what could mark the tech industry's first major legal settlement over AI-related harm, Google and the startup Character.AI are negotiating terms with families whose children died by suicide or harmed themselves after interacting with Character.AI's chatbot companions. The parties have agreed in principle to settle; now comes the harder work of finalizing the details.
These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier that should have OpenAI and Meta watching nervously from the wings as they defend themselves against similar lawsuits.
Character.AI, founded in 2021 by ex-Google engineers who returned to their former employer in 2024 in a $2.7 billion deal, invites users to chat with AI personas. The most haunting case involves Sewell Setzer III, who at age 14 engaged in sexualized conversations with a "Daenerys Targaryen" bot before killing himself. His mother, Megan Garcia, has told the Senate that companies must be "legally accountable when they knowingly design harmful AI technologies that kill children."
Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and suggested that murdering his parents was a reasonable response to their limiting his screen time. Character.AI banned minors last October, it told TechCrunch. The settlements will likely include monetary damages, though no liability was admitted in court filings made available Wednesday.
TechCrunch has reached out to both companies.