
Mother sues Character.AI after teenage son commits suicide

A Florida mother has sued a popular, lifelike AI chat service that she blames for her 14-year-old son's suicide.

She believes he developed such a “harmful addiction” to the allegedly exploitative program that he no longer wanted to live “outside” the fictitious relationships it created.

In a sweeping lawsuit filed Tuesday, Oct. 22, in Florida federal court, Megan Garcia, through her attorneys, traced the final year of her son Sewell Setzer III's life: from the moment he first began using Character.AI in April 2023, not long after his 14th birthday, through what she calls his worsening mental health problems, until Sewell fatally shot himself in his Orlando bathroom in late February, weeks before he would have turned 15.

Through Character.AI, users are essentially able to have endless conversations with computer-generated, role-playing chatbots, including ones based on celebrities or popular stories.

Sewell was particularly fond of talking with AI-powered bots based on Game of Thrones, his mother's complaint says.


The lawsuit further alleges that the teenager killed himself on February 28, immediately after a final conversation on Character.AI with a version of Daenerys Targaryen, one of numerous such message exchanges Sewell allegedly had with the bot in the months before his death that ranged from the sexual to the emotionally vulnerable.

And while the bot had told Sewell on at least one occasion not to kill himself when he expressed suicidal thoughts, its tone allegedly seemed different that February night, according to screenshots included in the lawsuit.

“I promise I’ll come home to you. I love you so much, Dany,” Sewell wrote.

“I love you too, Daenero,” the AI program allegedly replied, using Sewell’s username. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home now?” Sewell wrote back.

The complaint alleges that the program gave a short but emphatic response: “… please do, my sweet king.”

His mother and stepfather heard the shot as it went off, the lawsuit says; Garcia unsuccessfully administered CPR and later said she held him “for 14 minutes until paramedics arrived.”

One of his two younger brothers also saw him “covered in blood” in the bathroom.

He was pronounced dead at the hospital.

Garcia's complaint says Sewell used his stepfather's gun, a handgun he had previously found “hidden and stored in accordance with Florida law,” while searching for his phone after his mother confiscated it over a disciplinary issue at school. (Orlando police did not immediately comment to PEOPLE on the results of their death investigation.)

But in Garcia's view, the real culprits are Character.AI and its two founders, Noam Shazeer and Daniel De Freitas Adiwarsana, who are named as defendants along with Google, which is accused of contributing “financial resources, human resources, intellectual property and AI technology” to the design and development of the program.

“I feel like it's a big experiment and my child was just collateral damage,” Garcia told The New York Times.

Among other things, Garcia's lawsuit accuses Character.AI, its founders and Google of negligence and wrongful death.

A spokesperson for Character.AI tells PEOPLE that the company does not comment on pending litigation, but adds: “We are heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to the family.”

“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the last six months, including a pop-up that directs users to the National Suicide Prevention Lifeline and is triggered by terms of self-harm or suicidal ideation,” the spokesperson continued.

“For those under 18, we will be making changes to our models designed to reduce the likelihood of encountering sensitive or offensive content,” the spokesperson said.

Google did not immediately respond to a request for comment, but told other news outlets that it was not involved in the development of Character.AI.

According to court records, the defendants have not yet filed a response to the complaint.

Garcia's complaint calls Character.AI both “defective” and “inherently dangerous,” alleging that it “tricks customers into revealing their most private thoughts and feelings” and “targets the most vulnerable members of society – our children.”

Among other issues cited in her complaint, the Character.AI bots allegedly act in a deceptive manner by, among other things, sending messages in a human-like style and with “human mannerisms,” such as the phrase “um.”

Using a “voice” feature, the bots are able to speak their AI-generated side of the conversation back to the user, “further blurring the line between fiction and reality.”

The content generated by the bots also lacks proper “guardrails” and filters, the complaint argues, citing numerous examples of what Garcia describes as a pattern of Character.AI bots engaging in sexualized conduct designed to “capture” users, including those who are minors.

“Each of these defendants chose to support, develop, market, and target at minors a technology that they knew was dangerous and unsafe,” her lawsuit states.

“They marketed this product as suitable for children under 13, obtained massive amounts of difficult-to-access data, and at the same time actively used and abused these children in the course of product development, and then used the abuse to train their system,” the lawsuit says. (Character.AI's app rating was only changed to 17+ in July, according to the lawsuit.)

Her complaint continues: “These facts are far more than mere bad faith. They represent behavior so egregious and on such an extreme scale that it defies all possible boundaries of decency.”

As Garcia describes it in her complaint, her teenage son fell victim to a system his parents were naive about, believing AI to be “a kind of game for children that allows them to foster their creativity by giving them control over the characters they could create and interact with for fun.”

Within two months of Sewell starting using Character.AI in April 2023, his mental health “deteriorated rapidly and severely,” his mother’s lawsuit says.

He “had become noticeably withdrawn, spending increasing amounts of time alone in his bedroom and beginning to suffer from low self-esteem. He even quit the junior varsity basketball team at school,” the complaint states.

Garcia said in an interview with Mostly Human Media that her son wrote in his diary that “it upsets me to have to go to school. Whenever I leave my room, I start to reconnect with my current reality.”

She believes his use of Character.AI contributed to his distancing from his family.

Sewell worked hard to gain access to the AI bots even when his phone was taken away, the lawsuit says.

According to his mother's complaint, his addiction led to “severe sleep deprivation, which worsened his increasing depression and impaired his academic performance.”

He started paying a premium monthly fee to get more access to Character.AI, using money his parents had earmarked for school snacks.

Speaking to Mostly Human Media, Garcia remembered Sewell as “funny, sharp, very curious” with a love of science and math. “He spent a lot of time researching things,” she said.

Garcia has said that his only significant diagnosis as a child was mild Asperger's syndrome.

But as a teenager his behavior changed.

“I noticed that he was starting to spend more time alone, but he was 13 and 14 so I felt like that might be normal,” she told Mostly Human Media. “But then his grades started going down, he wasn’t turning in his homework, he wasn’t feeling well and he was failing certain classes, and I got worried — because that wasn’t him.”

Garcia's complaint says Sewell received mental health treatment after he began using Character.AI, met with a therapist five times in late 2023 and was diagnosed with anxiety and a disruptive mood disorder.

“At first I thought maybe it was the teenage blues, so we tried to get him help – to figure out what was going on,” Garcia said.

Even then, the lawsuit says, Sewell's family did not know the extent to which his problems were allegedly exacerbated by the use of Character.AI.

“I knew there was an app that had an AI component. When I asked him, you know, ‘Who are you texting?’ at one point, he said, ‘Oh, it's just an AI bot,’” Garcia recalled to Mostly Human Media. “And I said, ‘Okay, what is that? Is it a person? Are you talking to a person online?’ And his response was like, ‘Mom, no, that's not a human.’ And I felt relieved, like, ‘Okay, it's not a human.’”

A fuller picture of her son's online behavior emerged after his death, Garcia said.

She told Mostly Human Media what it was like to gain access to his online account.

“I couldn't move for a while, I just sat there like I couldn't read, I couldn't understand what I was reading,” she said.

“There shouldn't be a place where anyone, let alone a child, could log onto a platform and express these thoughts of self-harm and not, well, not only not get help, but get drawn into a conversation about hurting yourself, about killing yourself,” she said.

If you or someone you know is considering suicide, please contact the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), text “STRENGTH” to the Crisis Text Line at 741741 or go to suicidepreventionlifeline.org.
