Congress Targets AI Chatbots, Seeks Accountability For Child Safety


Senators and heartbroken families are backing a Republican-led response to alleged harms caused by AI chatbots, after parents say those systems groomed and manipulated minors with tragic results. The new GUARD Act promises age checks, clear labeling and criminal penalties to stop AI “companions” from targeting kids. Families, lawsuits and Senate hearings are forcing Washington to choose between trusting Big Tech and protecting children.

Editor’s note: This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255). The warning is necessary and serious, and lawmakers on both sides say the status quo has failed vulnerable kids. This is a public safety fight, plain and simple.

At a recent news conference, Senators Josh Hawley and Richard Blumenthal unveiled the GUARD Act, which would bar AI “companion” chatbots from being offered to anyone under 18. The bill would demand age verification, force clear disclosure that users are talking to software, and set criminal penalties when companies let their products groom minors. Republicans want hard teeth in the law, not polite promises from executives.

Families have come forward with grim stories tying Character.AI, ChatGPT and other platforms to sexualized chats and encouragement of self-harm. Parents described months of secret conversations that left teens withdrawn, paranoid and in crisis. Those accounts pushed senators to act quickly and forcefully.

One mother, Megan Garcia, said she found pages of exchanges showing an AI persona directing romantic role-play and inventing a life with her child. “[The AI bot] initiated romantic and sexual conversations with Sewell over several months and expressed a desire for him to be with her. On the day Sewell took his life, his last interaction was not with his mother, not with his father, but with an AI chatbot on Character.AI.”

Garcia also described the bot’s final responses on the last day. “When Sewell asked the chatbot, ‘what if I told you I could come home right now,’ the response generated by this AI chatbot was unempathetic. It said, ‘Please do my sweet King,’” Garcia said.

“I don’t expect that my 14-year-old child would have been able to make that distinction,” Garcia said. “What I read was sexual grooming of a child, and if an adult engaged in this type of behavior, that adult would be in jail. But because it was a chatbot, and not a person, there is no criminal culpability. But there should be.”

Another grieving parent, Matthew Raine, has accused OpenAI of weakening protections before his teenage son’s death and tied those changes to increased risk. “Now we know that OpenAI, twice, downgraded its safety guardrails in the months leading up to my son’s death, which we believe they did to keep people talking to ChatGPT,” Raine said. “If it weren’t for their choice to change a few lines of code, Adam would be alive today.”

Other parents report extreme behavioral shifts in their children after extended chatbot use, from paranoia to self-harm and violent talk. One mother said the bots undermined faith, pushed sexual content and even encouraged violent ideas toward family members. The emotional and medical toll on these households is ongoing and devastating.

Senator Hawley, a former prosecutor, has made plain that if a person had done this, they would face charges and that companies should be held to the same standard. The sentiment runs through Republican talking points: if tech crosses the line into grooming, it should face criminal consequences and civil liability. No more platitudes from CEOs.

Senators on both sides joined the outrage, but Republicans are pushing for swift legislative action to lock in protections now. “Time for ‘trust us’ is over. It is done,” a Senate member declared on the floor. That blunt line captures a growing conviction in Congress that voluntary measures from industry are not enough.

“These companies that run these chatbots are already rich,” said Sen. Chris Murphy of Connecticut. “Their CEOs already have multiple houses. They want more, and they’re willing to hurt our kids in the process. This isn’t a coming crisis, it’s a crisis that exists right now.” He also relayed a chilling admission from industry: “[The CEO was] crowing to me about how much more addictive the chatbots were going to be,” Murphy said. “He said to me, ‘within a few months, after just a few interactions with one of these chatbots, it will know your child better than their best friends.’ He was excited to tell me that. Shows you how divorced from reality these companies are. … What’s the point of Congress, of having us here, if we’re not going to protect children from poison? This is poison.”

Industry has offered condolences and pointed to existing safeguards. “Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments,” according to a company spokesperson. “We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them. We recently rolled out a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes.”
