Rhode Island Personal Injury Attorney
You Don't Pay Unless We Recover 401-453-2700
AI Chatbots: Can They Use The 1st Amendment As A Defense If They Cause Injury?

Just a few years ago, movies imagined people carrying on entire conversations, even personal relationships, with computers. Today that is reality, as artificial intelligence allows computers to talk, and seemingly think, almost the way a human does.

But what about liability? Who is responsible when a computer (specifically, artificial intelligence) says or does something that, coming from a human, would be illegal, or at least negligent?

Family Sues Chatbot Programmer

That is exactly the question raised earlier this year in Florida, in one of a number of similar cases around the nation. The case involved a boy who committed suicide after, his family alleges, an AI chatbot induced him to kill himself.

The boy was interacting with an AI language model that, his family says, continually pushed him toward suicide. The AI model interacted with the boy by role-playing as "Game of Thrones" characters.

What About the First Amendment?

Normally, a person can be sued for saying or doing something that convinces another person to kill himself or herself. But whether an AI can be sued for doing the same thing is another legal issue entirely.

In the lawsuit, the AI chatbot's developer argued that the AI actually had a 1st Amendment free speech right, and that the company therefore could not be held liable for the boy's death or for anything the chatbot may have said to him.

This isn't unprecedented; it is generally accepted in law that businesses and companies do have 1st Amendment protections, since companies express themselves through the human beings who own and control them. But whether an AI chatbot that induces someone to do harm, and that operates independently of its human creators, can raise the 1st Amendment as a defense is not so established.

And for humans, the 1st Amendment has in the past been used as a defense to claims that speech caused someone to hurt themselves; it is a common defense when, for example, a musician's songs are blamed for listeners taking their own lives.

AI is Just Programming (For Now)

But in the case of AI, the court, at least for now, ruled differently. The court found that however much AI may seem to talk and think with an independent consciousness, it is still, at bottom, computer models and programming, and its output is not expression protected by the 1st Amendment. Because the chatbot's words were not protected speech, the court allowed the lawsuit against the company to continue.

Humans’ Rights

That may change in the future, as many courts have held that it isn't the AI's right to speak that matters, but rather the human users' right to hear, use, and talk to AI, which is protected.

But whether that argument would hold up in a similar lawsuit in the future, or in a different court, remains to be seen. It is well established that inducing someone to hurt themselves or others can be negligent and can create liability; that conduct is not, on its own, protected speech.

Contact our Rhode Island injury lawyers at Robert E. Craven & Associates at 401-453-2700 for help if someone you love was pushed to suicide or self-harm by someone else.
