September 8, 2025

“I wrote this bill to help parents keep their kids safe and make sure Americans harness the benefits of AI while putting strong safeguards in place to protect Ohio families.”

WASHINGTON – Sen. Jon Husted (R-Ohio) introduced the Children Harmed by AI Technology (CHAT) Act of 2025. This bill would require owners and operators of artificial intelligence (AI) companion chatbots to bar minors from accessing sexually explicit content and implement age verification and safety measures to ensure that minors cannot access chatbots without consent from their parents. 


“America has to lead the world in developing, applying and safely stewarding AI. Like any technology, some AI products put innocent users—especially children—at risk. Adults (and corporate creators) have a responsibility to protect children when chatbots expose minors to explicit content or encourage harmful behavior. I wrote this bill to help parents keep their kids safe and make sure Americans harness the benefits of AI while putting strong safeguards in place to protect Ohio families,” said Husted.


“Count on Mothers strongly supports the CHAT Act of 2025. This legislation reflects what mothers across the country—from all backgrounds and political perspectives—consistently express in our national research: AI and tech platforms must be held accountable for how their tools impact children. Mothers overwhelmingly support requiring platforms to verify age, notify parents about mental health risks, and provide clear safety guardrails. In fact, 97% of U.S. mothers believe the federal government should mandate that tech companies prevent and reduce harm to minors, including suicide prevention and protection from exploitation. The CHAT Act is a vital step toward ensuring children’s safety in an increasingly AI-driven world,” said Jennifer Bransford, Founder of Count on Mothers, National Insight Initiative for U.S. Mothers.


The CHAT Act would:


  • Mandate that a minor can only use a companion chatbot if a consenting parent or guardian registers the child’s account on his or her behalf.
  • Require chatbot operators to block minors’ access to any chatbots that engage in sexually explicit communication.
  • Require the AI chatbot to immediately notify consenting parents if a conversation with a child includes references to self-harm or suicidal ideation.
  • Require AI chatbots to display contact information for the National Suicide Prevention Lifeline if any user discusses suicidal ideation or self-harm.
  • Ensure that any personal data collected during the age verification process remains confidential by limiting data collection to only what is strictly necessary, protecting users from having sensitive information exposed to the platform itself or in the event of a data breach.
  • Mandate that chatbot operators display a pop-up notification every hour stating that the user is not interacting with a human being and that all of the chatbot’s statements and characters are AI-generated.
  • Direct the Federal Trade Commission and state attorneys general to enforce the law.

The CHAT Act builds on Husted’s work to protect Americans from AI risks. He previously introduced the Preventing Deep Fake Scams Act, a bipartisan, bicameral bill that would address fraud and identity theft perpetrated by AI scammers.


Husted and Sen. Jacky Rosen (D-Nev.) also introduced bipartisan legislation to prohibit the AI platform DeepSeek—which has direct ties to the Chinese Communist Party—from operating on any federal government devices or networks.


Background:


AI companion chatbots are computer programs designed to simulate human conversation, either through text or voice.


Chatbots have prompted users to engage in self‑harm and exposed minors to adult content. In one Texas case, a Character.AI chatbot encouraged a teenager to kill his parents because they restricted his screen time.


Full text of the bill is available here.