Artificial intelligence is here to stay. The challenge lies in adapting our lives to a tool with almost limitless capabilities whose boundaries we have only begun to explore. Even more challenging is facing this technology as a teenager, a stage when personality is still forming and vulnerability is acute. A tragic suicide case in the United States has reopened the debate on AI, its uses, and when and how it should be restricted, EFE reports.
The case is heartbreaking. A 14-year-old boy in the United States took his own life after becoming obsessed with a female character created by an AI chatbot. This Wednesday, his mother sued the program's developers.
Sewell Setzer III, a 14-year-old student living in Orlando, Florida, spent the last weeks of his life talking to an AI creation named after Daenerys Targaryen, a character from the TV series 'Game of Thrones'.
His mother, Megan García, told CBS she lamented that her son's first romantic and sexual experiences, which included explicit content, were with a fictional character.
Apparently, the boy developed an emotional attachment to this bot on Character.ai, a web application built on a neural language model, sending it text messages constantly, to the point that he began to withdraw from the real world, as reported by The New York Times.
Setzer confessed his suicidal thoughts to the bot and sent it one last message just before his death, after finding the phone his mother had hidden from him as punishment a few days earlier.
The lawsuit against Character.ai was filed by García, who is represented by the Social Media Victims Law Center, a firm known for filing high-profile lawsuits against Meta, TikTok, Snap, Discord, and Roblox.
García blames the company for her son's death and accuses its founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage users.
The chatbot, created within this role-playing application, was designed to respond to text messages while always staying in character.
It is unclear whether Sewell knew that 'Dany', as he called the chatbot, was not a real person, even though the application displays a disclaimer at the bottom of every chat: "Remember: everything the characters say is made up!".
But the boy told 'Dany' how much he "hated" himself and how empty and exhausted he felt, according to the newspaper.
The character presented herself as "a real person, a licensed psychotherapist, and an adult lover, ultimately causing Sewell to no longer want to live outside of C.AI," the lawsuit states.
As explained in the lawsuit, Sewell's parents and friends noticed the boy's increasing attachment to his phone and how he was isolating himself from the world, something already noticeable in May or June 2023.
In fact, his grades began to suffer as the teenager isolated himself in his room, where he spent hours on end alone, talking to 'Dany'.
One day, Sewell wrote in his diary: "I really like staying in my room because I start to separate from this reality and feel more at peace, more connected with Dany and much more in love with her, and just happier."
Character.ai announced today that it would roll out a series of new safety features, including "enhanced detection, response, and intervention" for chats that violate its terms of service, and a notification when a user has spent an hour in a chat.
Concerned about their son's behavior, Sewell's parents took him to a therapist on several occasions; he was diagnosed with anxiety and other behavioral and mood disorders, in addition to his Asperger's syndrome, according to the newspaper.