Over 60 organizations united to urge Congress to champion a safer, more responsible, and ethical digital future for our children.
May 12, 2025
Artificial intelligence is rapidly becoming part of our children’s daily lives, from understanding their speech to powering search engines to helping with their homework. Yet without stringent safeguards, AI interactions pose serious risks to children’s safety; to their social, emotional, cognitive, and moral development; and to their overall well-being. We have seen firsthand the alarming consequences when profit-driven AI is unleashed on young users without adequate protections and pre-launch testing. The starkest examples involve anthropomorphized AI companion bots, a type of AI product that is unsafe for minors by design:
These documented incidents are not isolated occurrences; they illustrate a broader systemic danger in which technology companies prioritize engagement metrics and profitability over children’s safety, development, and well-being. Tech executives have been clear that these bots are designed not only to imitate social interaction but to somehow meet a user’s social needs. To flourish, children need responsive interaction from humans who care about them and can empathize with them, something AI cannot provide. It is no exaggeration to call this a reckless race to market that directly threatens the health and well-being of our youngest generation.
Yet, technology need not be designed in an inherently dangerous way.
To prevent unnecessary harms and realize the potential for positive uses of technology, we advocate, at a minimum, for clear, non-negotiable guiding principles and standards in the design and operation of all AI products aimed at children:
Even when such principles are applied and AI products are subject to reasonable safety testing and standards prior to launch, it is possible, even likely, that some types of AI products, as is the case with AI companion bots, will be deemed unsafe at any speed for minors.
To promulgate and enforce these basic, starting-point principles effectively, we call upon Congress and the U.S. courts to clarify and reform Section 230 of the Communications Decency Act. We strongly reject the industry’s assertion that AI and algorithms inherently deserve immunity or are covered speech. It is time to make unequivocally clear that Section 230 protections do not apply to algorithmically recommended or AI-created content, or to a company’s platform design choices. Just as a defective toy or harmful medication must face liability and be taken off shelves, AI products that harm children must also bear full product liability and be banned. As Senator Richard Blumenthal emphasized, “When these new technologies harm innocent people, the companies must be held accountable… Victims deserve their day in court.” Sen. Josh Hawley emphatically stated something we all agree with: “I don’t want 13-year-olds to be your guinea pig. This is what happened with social media. We had social media, who made billions of dollars giving us a mental health crisis in this country. They got rich, the kids got depressed, committed suicide. Why would we want to run that experiment again with AI?” (Statement during Senate Judiciary Subcommittee hearing, “Oversight of A.I.: The Need for Regulation,” September 12, 2023).
We, the undersigned, urgently call on policymakers, tech companies, and communities to join us in championing a safer, more responsible, and ethical digital future for our children. Our kids deserve technology that enriches their lives, protects their innocence, and empowers their potential, not technology that exploits or endangers them.