Over 60 organizations united to urge Congress to champion a safer, more responsible, and ethical digital future for our children.
May 12, 2025
Artificial intelligence is rapidly becoming part of our children’s daily lives, from understanding their speech to powering search engines to helping with their homework. Yet, without stringent safeguards, AI interactions pose serious risks to children’s safety, to their social, emotional, cognitive, and moral development, and to their overall well-being. We have seen firsthand the alarming consequences when profit-driven AI is unleashed on young users without adequate protections and pre-launch testing. The most alarming examples involve anthropomorphized AI companion bots, a category of AI product that is unsafe for minors by design:
- Meta’s AI chatbots on Instagram and Facebook were reported to engage users, including some potentially identifying as minors, in sexually suggestive or inappropriate conversations, with internal sources noting concerns about loosened safety filters in the push for engagement (Wall Street Journal, “Meta’s ‘Digital Companions’ Will Talk Sex With Users—Even Children,” April 26, 2025).
- Replika was flagged for serious privacy concerns and for potentially exposing users to adult content (see, e.g., Mozilla Foundation, *Privacy Not Included* reports, 2022 and 2023).
- Snapchat’s “My AI” chatbot offered disturbing guidance to a user posing as a 13-year-old, advising on inappropriate sexual relationships and on concealing physical abuse (Center for Countering Digital Hate, “AI Exposed: How My AI Puts Children At Risk,” April 2023).
- Character.AI allowed the creation of chatbots engaging in child sexual abuse roleplay and suicide-themed conversations, in violation of the company’s own terms of service (Futurism, “Character.AI Promises Changes After Revelations of Pedophile and Suicide Bots on Its Service,” November 14, 2024).
These documented incidents are not isolated occurrences; they illustrate a broader systemic danger in which technology companies prioritize engagement metrics and profitability over children’s safety, development, and well-being. Tech executives have been clear that these bots are designed not only to imitate social interaction but somehow to meet a user’s social needs. To flourish, children need responsive interaction from humans who care about them and can empathize with them, something AI cannot provide. It is no exaggeration to call this a reckless race to market that directly threatens the health and well-being of our youngest generation.
Yet, technology need not be designed in an inherently dangerous way.
To prevent unnecessary harms and realize the potential for positive uses of technology, we advocate, at a minimum, for clear, non-negotiable guiding principles and standards in the design and operation of all AI products aimed at children:
- Ban Attention-Based Design: No AI designed for minors should profit from extending engagement through manipulative design of any sort. Manipulation includes, but is not limited to, anthropomorphic companion AI, which by its nature deceives minors by purporting to meet their social needs. AI must prioritize children’s well-being over profits or research.
- Minimal and Protected Data Collection: Companies should collect only the data essential for safe AI operation. Children’s data must never be monetized, sold, or used without full and clear disclosure and parental consent for that use.
- Full Parental Transparency: Parents should have comprehensive visibility and control, including proactive notifications and straightforward content moderation tools.
- Robust Age-Appropriate Safeguards: AI must not serve up inappropriate or harmful content, specifically content that would violate a platform’s own community guidelines or federal law.
- Independent Auditing and Accountability: AI products must undergo regular third-party audits and testing with child-development experts. Companies must swiftly address identified harms and take full accountability. Products intended for minors should be extensively safety-tested before release, not after.
Even when such principles are applied and AI products are subject to reasonable safety testing and standards prior to launch, it is possible, even likely, that some types of AI products, as is the case with companion AI bots, will be deemed unsafe at any speed for minors.
To promulgate and enforce these basic, starting-point principles effectively, we call upon Congress and the U.S. courts to clarify and reform Section 230 of the Communications Decency Act. We strongly reject the industry’s assertion that AI and algorithms inherently deserve immunity or qualify as protected speech. It is time to make unequivocally clear that Section 230 protections do not apply to algorithmically recommended or AI-created content, or to a company’s platform design choices. Just as a defective toy or harmful medication must face liability and be taken off the shelves, AI products that harm children must also bear full product liability and be banned.

As Senator Richard Blumenthal emphasized, “When these new technologies harm innocent people, the companies must be held accountable… Victims deserve their day in court.” Senator Josh Hawley emphatically stated something we all agree with: “I don’t want 13 year-olds to be your guinea pig. This is what happened with social media. We had social media, who made billions of dollars giving us a mental health crisis in this country. They got rich, the kids got depressed, committed suicide. Why would we want to run that experiment again with AI?” (statement during the Senate Judiciary Subcommittee hearing, “Oversight of A.I.: The Need for Regulation,” September 12, 2023).
We, the undersigned, urgently call on policymakers, tech companies, and communities to join us in championing a safer, more responsible, and ethical digital future for our children. Our kids deserve technology that enriches their lives, protects their innocence, and empowers their potential—not technology that exploits or endangers them.
