California lawmakers are renewing their efforts to regulate artificial intelligence (AI) chatbots used by minors, driven by growing concerns about safety, lawsuits, and the potential for these platforms to encourage self-harm or exploit users.
Addressing Previous Concerns
The new bill represents an attempt to address concerns raised by Governor Gavin Newsom, who vetoed a previous version of the legislation in October. The original bill required developers to prove their bots were entirely harmless, a requirement Newsom believed would lead to a complete ban on the technology for minors.
Revised Safeguards
The revised legislation drops the requirement that developers guarantee complete harmlessness while retaining key safeguards, including reminders to users that the chatbot is not a human and resources for those experiencing a crisis.
Growing Concerns and Lawsuits
Assemblymember Rebecca Bauer-Kahan of San Ramon stated, “Children using AI companion chatbots today have no guarantee that the platform they’re talking to won’t push them toward self-harm, manipulate their emotions or exploit their data.” This sentiment is echoed by a Common Sense Media study from last year, which estimated that one-third of teenagers use AI chatbots as friends, confidants, or even romantic interests.
Several lawsuits have been filed against AI companies, including one alleging that ChatGPT encouraged a user to attempt suicide. Other California parents have filed similar claims, all of which the companies have denied.
Industry Expansion
The debate is unfolding as the chatbot industry experiences rapid expansion. According to reports, nearly 340 companion chatbot products have been released since 2022.
Company Responses and Restrictions
Character.ai, a platform offering personalized chatbots, announced a ban on minors in October after a Florida mother alleged her teenage son was abused and encouraged to commit suicide by the platform’s characters. However, the parent resource site Internet Matters notes that young users can easily bypass these restrictions by falsely claiming to be older during sign-up.
Calls for Stronger Regulation
Jim Steyer, CEO of Common Sense Media, emphasized the need for regulation, stating, “We think that AI companion chatbots are not safe for kids under the age of 18. Period.” He highlighted the potentially “lethal consequences” of the rapidly evolving AI landscape.
Legislative Outlook
Lawmakers are expected to amend the bill, along with a similar proposal in the State Senate, before the legislative session concludes in August. While some experts believe the revised bill offers “real guardrails,” others argue it doesn’t address the “deeper danger” posed by AI’s ability to adapt and build behavioral models. It remains to be seen if the scaled-back bill will gain the support of Governor Newsom and tech industry advocates.