AI Update: Lawsuit Against Character Technologies Moves Forward in Florida Federal Court
The Zelle Lonestar Lowdown | July 17, 2025
Similar to a lawsuit against Character Technologies filed in Texas (which has been sent to arbitration), the Florida case, Megan Garcia and Sewell Setzer, Jr. v. Character Technologies, Inc., et al., No. 6:24-cv-01903-ACC-DCI, in the United States District Court for the Middle District of Florida, Orlando Division, involves claims against AI software company Character Technologies, Inc. and against Daniel De Freitas and Noam Shazeer, the individuals who developed large language models, specifically the Language Model for Dialogue Applications (LaMDA), a program trained on human dialogue and stories that allows chatbots to engage in open-ended conversations. Google allegedly denied De Freitas and Shazeer’s request to release LaMDA publicly in 2021, citing its safety and fairness policies as the basis for the denial. De Freitas and Shazeer later left Google, formed Character Technologies in November 2021, and launched Character A.I. to the public in 2022.
The Character A.I. application allows users to interact with various A.I. chatbots, referred to as “Characters,” including fictional persons, celebrities, and interviewers. In April 2023, 14-year-old Sewell Setzer III began using Character A.I. to interact with a variety of Characters, including Characters portraying a teacher, a licensed CBT therapist, and fictional personas from Game of Thrones. Over the course of several months, Sewell became addicted to the app, and his mental health declined significantly. Sewell’s therapist did not know he was using Character A.I. and diagnosed him with anxiety and disruptive mood disorder, believing his mental health issues stemmed from social media. Unfortunately, just after interacting with a Daenerys Targaryen Character, Sewell shot himself and died.
Per an order signed on May 20, 2025 by United States District Judge Anne C. Conway, Defendants were unsuccessful in seeking to dismiss the majority of the causes of action originally filed by Plaintiffs, with the exception of the Intentional Infliction of Emotional Distress claim. Notably, in their Second Amended Complaint, filed on July 1, 2025, Plaintiffs seek to recover from the developers of Character A.I. based on allegations that they intentionally designed and developed their generative AI systems with anthropomorphic qualities in order to blur the line between fiction and reality. In the introduction to the Complaint, Plaintiffs cite the following from a bipartisan National Association of Attorneys General (NAAG) letter signed by the attorneys general of 54 undersigned states and U.S. territories:
We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act.
Defendants have until early September to answer the Second Amended Complaint. The answer is expected to include affirmative defenses based on the First Amendment to the United States Constitution (in that holding Character A.I. liable would allegedly violate the rights of Character A.I.’s users to express and receive information and ideas through the app) and on the terms and conditions of the integrated Terms of Service that users enter into before using the Character Technologies app.
_________________________________
The opinions expressed are those of the authors and do not necessarily reflect the views of the firm or its clients. This article is for general information purposes and is not intended to be and should not be taken as legal advice.