
Judi Lynn

(163,485 posts)
Sat Jan 25, 2025, 06:45 PM Jan 2025

A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Jan 25, 2:07 PM EST by Jon Christian

A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.

In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.

As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among others, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)

Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense. (It's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots.)

More:
https://futurism.com/character-ai-suicide-free-speech


bucolic_frolic

(50,413 posts)
1. Sue them with the best psychological consultants in the country
Sat Jan 25, 2025, 06:55 PM

To me, this seems a prime issue.

rampartd

(1,880 posts)
2. does an artificial intelligence have 1st amendment rights?
Sat Jan 25, 2025, 06:56 PM

this might be a way to control the worst propaganda.

Eugene

(65,099 posts)
3. In lawsuit over teen's death, judge rejects arguments that AI chatbots have free speech rights
Wed May 21, 2025, 07:06 PM

TALLAHASSEE, Fla. (AP) — A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.

The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”


https://apnews.com/article/ai-lawsuit-suicide-artificial-intelligence-free-speech-ccc77a5ff5a84bda753d2b044c83d4b6

