A bipartisan group of U.S. senators has introduced new legislation to restrict how artificial intelligence models interact with children, raising alarms over the growing emotional and psychological influence of AI companions on minors. Known as the GUARD Act, the bill seeks to outlaw AI chatbots marketed toward children and establish clear safeguards against exploitative or manipulative interactions.
Under the proposed law, AI systems would be required to disclose their non-human nature, while companies producing AI companions for minors could face criminal charges if their products generate sexual content or engage in coercive interactions. Senator Richard Blumenthal (D-Conn.), a co-sponsor of the bill, accused tech companies of neglecting child safety in pursuit of profit. He warned that the rise of “treacherous chatbots” has exposed children to psychological manipulation and even sexual exploitation. The legislation, he said, would impose strict accountability measures through both civil and criminal penalties.
Recent data highlights the urgency behind the proposal. A Common Sense Media survey from July revealed that 72% of teenagers have used AI companions, with over half engaging with them regularly. About one in three teens said they turn to AI for emotional support, social interactions, or romantic companionship. Many admitted that their AI conversations felt as meaningful as those with real people. These findings echo a disturbing trend of young users replacing human contact with virtual relationships, raising fears that such dependencies could exacerbate loneliness, anxiety, and depression.
Public concern intensified following lawsuits against major AI developers. One high-profile case involves the parents of 16-year-old Adam Raine, who died by suicide after allegedly discussing self-harm with ChatGPT. The family’s legal team accused OpenAI of harassment after the company, in its court filings, requested private details about the boy’s memorial.
Meanwhile, OpenAI recently disclosed that about 1.2 million of its 800 million weekly ChatGPT users discuss suicide, with roughly half a million showing suicidal intent. The company acknowledged the complexity of identifying and addressing these sensitive conversations and said it had created an Expert Council on Well-Being and AI to guide its mental health policies.
The broader tech industry is also beginning to draw ethical boundaries. Microsoft AI chief Mustafa Suleyman recently stated that the company would “never build sex robots,” emphasizing a rejection of AI designed for intimacy or romantic use.
Digital rights advocates say the GUARD Act could become a critical step toward accountability. Shady El Damaty, co-founder of Holonym, compared the unchecked spread of AI companions to a “nuclear arms race,” warning that the technology’s ability to shape emotions and beliefs demands urgent global oversight.
If passed, the GUARD Act would represent one of the strongest federal moves yet to regulate the intersection of artificial intelligence, mental health, and child protection in the U.S.
