Members of Congress voiced continuing concerns this week about the rapid rise of AI chatbots and their growing role in mental health support, pressing psychiatrists and privacy experts on how to protect users as the technology evolves far faster than regulation.
During a hearing before the House Subcommittee on Oversight and Investigations, lawmakers said they were aiming for a balanced conversation about both the benefits and the risks of AI tools. But much of the testimony focused on where the technology can fall short, particularly when vulnerable users rely on chatbots in moments of emotional distress.
Psychiatrists told the committee that AI chatbots are increasingly being used as a first stop for mental health support. Dr. Marlynn Wei, a psychiatrist and psychotherapist, testified that an estimated 25 to 50 percent of people now turn to AI tools for emotional or psychological guidance, warning that chatbots tend to “sycophantically” flatter users.
“This validation can feel good, but may not be right. AI chatbots endorse users 50 percent more than humans would on ill-advised behaviors,” Dr. Wei testified. “They can also hallucinate, producing false or misleading information, and are not equipped to anchor users in reality. When used in moments of emotional distress, AI chatbots can have crisis blind spots.”
Experts noted that none of the most widely used chatbots are bound by the ethical, clinical or safety standards that apply to licensed professionals. They recommended Congress consider funding dedicated research, including through the National Institutes of Health, to identify specific gaps and evaluate effective guardrails.
Several witnesses described the need for multiple, overlapping layers of safety. Invoking what they called the “Swiss cheese model,” they advised lawmakers to consider approaches where, if one layer fails, others still protect the user. They pointed to early efforts by companies like OpenAI to implement age verification, noting that such measures are promising but often easy to bypass.
Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, raised particular concerns about long, ongoing conversations between users and chatbots, citing evidence that extended dialogue can erode guardrails and comparing the models to “poorly trained dogs.”
“When people have very long conversations — maybe over days, over weeks, over months — even the chatbot itself seems to get confused, and those guardrails quickly go away,” Dr. Torous said, calling the current environment “a grand experiment” involving millions of Americans.
Lawmakers, for their part, largely asked detailed questions about appropriate user ages and how to prevent harm, and appeared open to developing tangible policy proposals. Rep. Erin Houchin (R-Ind.) announced during the hearing that she and Rep. Jake Auchincloss (D-Mass.) are launching the Bipartisan Kids Online Safety Caucus, which will serve as “a forum in Congress to keep members current on the fast-moving issue, provide a venue for practical solutions, and focus conversations with researchers, parents, schools, and industry.”
Beyond mental health, data privacy emerged as a key theme: witnesses emphasized that there is little transparency around what personal information is collected during chatbot conversations, how it is stored, and whether it is used to train AI models.
“If Americans’ mental health data and stories and journeys are being used to train AI, people should give explicit informed consent and not a checkbox buried in the terms and conditions,” Dr. Torous testified. “It would be a tragedy if we let that happen and you can prevent it.”
Privacy expert Dr. Jennifer King urged lawmakers to mandate that AI developers disclose data sources and processing methods. She also warned against automatically opting users into model training by default and called for stronger limits on how sensitive information can be collected or reused.
Committee leaders framed the hearing as an early but necessary step in shaping clearer rules for an increasingly influential technology. As lawmakers weigh new transparency requirements, consent standards and safety guardrails, witnesses stressed that consumers should not be left navigating the risks of AI chatbots on their own.