Responsible AI and Children: Towards a Rights-Based Approach to AI Governance

Recent breakthroughs in Large Language Models (LLMs), generative Artificial Intelligence (GenAI), intelligent agents, and other AI-driven technologies are driving the rapid expansion of AI in and across our everyday lives. As argued in a recent policy brief I co-authored for CIFAR’s AI Insights series, entitled Responsible AI and Children: Insights, Implications, and Best Practices, this is just as true for kids as it is for adults. In fact, any time kids use or encounter digital technologies, AI is likely already involved at some level. Or soon will be. Some of these interactions are intentional—as when a child uses DALL-E 3 to generate realistic “photos” of an imaginary creature they made up. Others will happen behind the scenes—as when an AI-driven age assurance system analyzes a child’s face to determine if they’re old enough to access a website.

Conversations about kids and AI seem to oscillate between wild optimism and existential dread. The one constant is that there are always more questions than answers when it comes to children’s deepening relationships with AI. Questions about AI’s impact and implications for children are urgent and should be pondered and researched extensively in the months and years ahead. But when it comes to developing policies to ensure that AI is developed and deployed in ways that are responsible to and for children, the time is now. Kids are already using AI, being tracked and assessed by AI, and having their data and images fed to AI. Guardrails must be put in place to ensure that children’s rights and safety are protected as these technologies evolve, so that harmful systemic problems like age-based algorithmic bias or commercial exploitation are avoided from the get-go.

In the systematic review of the literature on children and AI we conducted for our policy brief, we found substantial empirical evidence of data-centric technologies (the platforms and devices that collected so much of the data used to train AI) infringing on children’s privacy and other rights. We found compelling academic theories about the ambiguous roles AI technologies play in children’s creative and emotional lives, as well as unresolved legal debates about the limits of parental consent and age-based restrictions for ensuring that children’s best interests are upheld in the digital realm. We concluded that while more research is certainly needed, there is also a lot that AI policymakers can learn from the existing literature. Broader knowledge of the opportunities and challenges of “digital childhoods”—which are now the norm for Gen Alpha—allows for a more nuanced, balanced, culturally and historically grounded appreciation of the issues involved.

In the existing debates about kids and AI, there’s a tendency to focus on children’s privacy. Given the central role that data (personal, behavioral, created) plays in how AI is built and learns, privacy is an obvious area of concern. But it’s also an area of tech policy with a long and spotty history. For decades, children’s privacy laws failed to prevent the collection and manipulation of huge swaths of children’s data, including images and data about them posted by parents. Existing regulation also relies heavily on parental consent, which puts the onus on parents to track, monitor, and manage children’s ever-expanding digital footprint.

As argued in our policy brief, there are several important lessons here that policymakers should keep in mind. For one, policies aimed at protecting children’s privacy are often caught between a desire to promote innovation and business needs, on the one hand, and concerns about the harms that may occur if children’s data is misused or abused, on the other. Current attempts to protect children from the potential negative impacts of AI evoke a similar “push and pull” between corporate agendas and child safety, while children’s many other rights and interests are most often sidelined.

Meanwhile, research on children’s experiences of privacy in the digital environment shows that kids of all ages are very concerned about how companies are collecting and using their data. These concerns are not adequately addressed in current debates about child safety, which instead emphasize external social threats like bullies and predators. Many teens are also worried about being able to control what data different people in their own lives (such as parents, teachers, and friend groups) can see. They also desire a level of online anonymity so they can seek information or express themselves without fear of reprisal or worse. This is particularly important for LGBTQ2S+ children, for whom online communities can be a lifeline. These diverse needs are overlooked in most age assurance and parental consent frameworks, which instead apply a “one-size-fits-all” approach to children based on numeric age alone. Vast differences found in children’s home lives, bodies, identities, and experiences are ignored in the process.

For the kids I’ve talked to in my own research, privacy is one very important right among the many that children care about and prioritize. This finding aligns powerfully with a children’s rights framework, which is one of the key recommendations we make in our policy brief. Specifically, we take the position that policies aimed at regulating AI must: 1) consider the presence of children from the outset, while addressing their rights and best interests; 2) ground any decisions, guidelines, or recommendations in emerging and existing evidence about children’s uses of, interactions with, and experiences of data-centric technologies; and 3) include children and adolescents in the research and development of AI technologies. Our third recommendation reflects mounting evidence of the value and importance of involving children in tech design processes (e.g., the JGCC’s Designing with Kids initiative), as well as children’s right to be involved in decisions that impact them (as asserted in the UN Convention on the Rights of the Child (UNCRC)).

Organizations including UNICEF, the 5Rights Foundation (UK), and the Girl Scouts (USA) similarly argue that a child rights approach would best protect children and children’s interests as they encounter and engage with AI, first and foremost because the UNCRC addresses multiple facets and implications of AI, including children’s right to privacy but also their right to access information, their right to play, their right to express themselves, and many others (spread across the Convention’s 54 articles). The UN Committee on the Rights of the Child has even adopted a General Comment (No. 25) specifying how these rights apply in the digital environment, which by the Committee’s definition includes AI.

Ultimately, we recommend that policymakers heed the growing global call to action for AI guidance informed by children’s rights. This call was amplified in September 2024 with the release of the final report from the UN Secretary-General’s High-level Advisory Body on AI, Governing AI for Humanity. Throughout the report, the authors emphasize that AI governance must focus on children: “Children generate one third of the data and will grow up to an AI-infused economy and world accustomed to the use of AI.”

It’s a crucial argument to make. Children’s needs, vulnerabilities, and best interests should always be on the agenda when discussing AI governance. Yet, policymakers often seem to forget about children when drafting new AI bills and guidelines. Canada’s proposed Artificial Intelligence and Data Act, for example, doesn’t even mention them—apart from a brief reference to children as an example of a “more vulnerable group” buried in a 32-page companion document.

Luckily, this isn’t the case everywhere. The European Union recently passed the groundbreaking AI Act which, in addition to being the world’s first comprehensive AI law, explicitly recognizes children’s rights and sets out a framework for child safety and risk assessment. Hopefully, other policymakers—in government, non-governmental organizations, and the tech industries—will soon follow suit.

 

Sara M. Grimes, PhD, is the Wolfe Chair of Scientific and Technological Literacy at McGill University in Montreal, Canada and the author of the award-winning book, Digital Playgrounds: The Hidden Politics of Children’s Online Play Spaces, Virtual Worlds, and Connected Games.

Announcing Our 2025 Well-Being by Design Fellows

We are thrilled to kick off the new year by announcing the 10 fellows who will join our second cohort of the Well-Being by Design Fellowship, supported by Pinterest, foundry10, and Google.org. This year’s exceptional kids’ media designers and researchers were selected from more than 90 applicants. Our fellows represent a diverse range of organizations, including the University of Oregon’s Reality Lab, the Natural History Museum of Utah, Killer Snails, Amazon Kids, Tiny Docs, and Fred Rogers Productions.

These fellows are committed to prioritizing children’s well-being at the early stages of product development and are currently working on projects that will be workshopped throughout the fellowship period. The program will feature interactive virtual sessions over five months, beginning with an in-person gathering at Sesame Workshop’s headquarters in New York City, and culminating in a virtual webinar to share our learnings.

The Cooney Center will support the fellows through workshops, consultations with industry experts and young people, and collaborative learning opportunities as they make progress on their respective designs.

At the conclusion of the fellowship, we will publish the fellows’ work in case studies that explain their approach and highlight new features designed to support well-being in digital products for kids. Please sign up for our newsletter if you’d like to stay up to date.

 

The 2025 Well-Being by Design cohort includes:

Jen Chiou
CodeSpeak Labs
Jen Chiou 趙燕妮 (she/her) is the founder of CodeSpeak Labs, a computer science education social enterprise that empowers K-12 students to use technology to build a better world. Over 20,000 students in California and New York have taken its classes. This year, Jen and her 10-year-old son co-founded Quest Craft, an online role-playing game platform that helps kids develop social-emotional skills through culturally diverse, youth-driven storytelling. Inspired by Dungeons & Dragons, Quest Craft uses research-backed methods to foster creativity, friendships, and collaboration. Prior to founding CodeSpeak Labs, she was the founding Executive Director of Crisis Text Line, the first nationwide SMS-based crisis hotline for teens; an early team member at the global NGO Teach For All; and a nonprofit consultant at the Bridgespan Group. She graduated Phi Beta Kappa from Stanford.


Mariana Diaz-Wionczek
Marshmallow Project
Dr. Diaz-Wionczek is a children’s media advisor, producer, and educational consultant dedicated to creating meaningful, inclusive, and educational content for young audiences. Mariana combines her expertise in cognitive development with her passion for storytelling to craft engaging and impactful media experiences. As the principal of MDW Consulting, Mariana collaborates with a wide range of partners to develop innovative media and technology projects that foster children’s cognitive, social-emotional, and language development. Her work emphasizes cultural authenticity and diversity, crafting narratives that resonate with audiences from all backgrounds. She is Executive Producer for PBS’s Rosie’s Rules and was Co-Executive Producer for the first season of Dora, the reboot of Dora the Explorer, a franchise on which she previously served as Head of Education and Research and Producer for over a decade. Mariana’s recent work with generative AI through The Marshmallow Project underscores her commitment to leveraging innovative tools to enhance children’s learning experiences.


Phoebe Jiang
PBS SoCal
A kids’ media expert with a global perspective, Phoebe Jiang has spent a decade designing interactive digital experiences for kids and their grown-up guides. She’s an avid storyteller in all mediums, especially weaving curriculum into content to spark deeper learning. Her experience spans interactive games as well as short- and long-form video content (live-action, puppets, animation, etc.). After completing her master’s degree at Teachers College, Columbia University, Phoebe’s been fortunate to create content for kids and caregivers with Sesame Workshop, Little Airplane Productions, ABCmouse, and Tencent. Currently, she’s an Early Learning Manager at PBS SoCal, where she develops early math programming and advocates for content that fosters well-being and honors the range of kids’ lived experiences. Phoebe’s specialty is maximizing educational impact for mini media moments.


Madlyn Larson
Natural History Museum of Utah
Madlyn Larson is an educator, program designer, and community builder dedicated to creating learning experiences that ignite curiosity and nurture the whole child. As the project leader behind Research Quest, a digital platform reaching hundreds of educators and their thousands of students nationwide, Madlyn has guided a talented team to develop investigations that empower elementary and middle schoolers to think and act like real scientists. By combining museum collections, real-world research, and learner-driven exploration, her work fosters curiosity, critical thinking, and skill-building. She collaborates with educators, museums, and community organizations to design programs that engage learners, cultivate their critical thinking, and inspire them to take ownership of their learning with curiosity and open-mindedness. Driven by a love for collaboration and innovation, Madlyn excels at bringing people together to tackle big ideas. She is passionate about crafting meaningful educational experiences that prepare young minds to navigate the opportunities and challenges of the future with confidence and creativity.


Kyrsten Novak
Amazon Kids
Kyrsten Novak is a design researcher with a background in human factors and developmental psychology. She began her career as a researcher at the University of Pittsburgh’s Office of Child Development, where she focused on after-school and community programs. Her passion for education led her to pursue K-5 teaching and, later, the virtual education space for students in grades 6-12, teachers, and administrators. After earning her MS in Human Systems Engineering, Kyrsten advanced to leading global research initiatives that shaped health technologies and multi-generational products at Apple, including the Apple Watch for Kids, Family Setup, Screen Time, Health Sharing, and Schooltime. Currently, she is a Senior User Researcher at Amazon Kids, focusing on devices and services for children ages 3-12 and their families. Her work sits at the intersection of child development and innovation, delivering impactful, user-centered solutions that resonate with customers of all ages.


Danny Pimentel
University of Oregon
Dr. Danny Pimentel is an Assistant Professor of Immersive Media Psychology, a Fellow at the Yale Program on Climate Change Communication, and co-Director of the Oregon Reality Lab at the University of Oregon’s School of Journalism and Communication. As a developer and researcher, Danny creates augmented and virtual reality (AR/VR) storytelling experiences, using mixed-methods approaches to quantify their impact on prosocial and pro-environmental outcomes. The majority of his work explores the psychological and behavioral implications of embodying human and non-human characters in AR/VR, with emphasis placed on understanding how such experiences can foster human-nature connectedness among youth. His AR/VR projects have been supported by Meta, Snap AR, Unity 3D, Google ARCore, and National Geographic, among others. Danny is a first-generation Latino raised in South Florida and received his Ph.D. in Communication from the University of Florida.


Noelle Posadas Shang
Killer Snails
Noelle is an award-winning interaction designer with over a decade of experience crafting engaging media for children. A Pratt Institute graduate with a degree in communication design and a focus on illustration, Noelle initially dreamed of becoming a children’s book illustrator. Instead, she entered the emerging world of mobile app development in 2011, joining Dreamkind to create groundbreaking apps for giants like Sesame Street, Disney, Brain Quest, and the NFL. These early projects sparked her passion for interaction and game design. In 2016, Noelle joined Killer Snails, a team of scientists, educators, and developers funded by an NSF SBIR grant. Noelle and Killer Snails have since developed multiple award-winning digital games and immersive experiences, all dedicated to inspiring students to see themselves as scientists and pursue STEM careers. Through her work, Noelle continues to bridge creativity and education, shaping the future of interactive learning.


Maried Rivera Nieves
Bilingual Generation
Maried Rivera Nieves (she/her) is the Senior Director of Operations and User Experience at Bilingual Generation. A multi-disciplinarian at heart and by training, Maried has spent her career supporting organizations to design, implement, and sustain programs and systems that center equity and anti-racism. Her proficiency lies in designing user-friendly processes and visuals, as well as project management. Throughout a project’s lifecycle, Maried carefully attends to data collection, analysis, and visualization, making sure insights are effectively communicated. Through her work on the Bili app, Bilingual Generation’s first foray into digital learning tools for children, she has stretched to support the team with product design, UX/UI, story editing, voice acting, graphic design, and illustration. Grounding all of her work is a conviction that everyone deserves loving learning spaces where their play, creativity, questions, and dreams are nurtured and championed. Maried was born in Puerto Rico, where much of her family remains.


Rubin Soodak
Fred Rogers Productions
Rubin Soodak is a creator of digital things for learning with radical impact. As an Interactive Producer at Fred Rogers Productions, he leads interactive production for the preschool series Alma’s Way and other brands across digital platforms. Rubin’s games span a wide range of learning domains, including flexible thinking, persistence, self-awareness, social awareness, and compassion. He believes that good stuff for learning must listen to and trust learners, meeting them where they are in order to take them where they want to go. Rubin has dedicated his career to the empowerment of children through media and digital literacy. Before joining Fred Rogers Productions, he worked as a producer for Nick Jr./Noggin, Encantos, and FableVision, and also taught students at the high-school and undergraduate levels. Rubin holds an Ed.M. in Technology, Innovation, and Education from the Harvard Graduate School of Education, and a B.A. in Physics from The University of Chicago.


Sunny Williams
Tiny Docs
Sunny Williams wants to live in a world where humans pursue a passion-driven life, Shark Week is at least twice a year, and he can grow a full, award-winning beard. In the past, he has been an English teacher in Spain, a producer of a couple of short films, and an attorney at a big law firm. Sunny discovered his true passion, however, when he founded Tiny Docs, an interactive web app that creates cartoons designed to educate kids about health in fun, easy-to-understand language. Tiny Docs’ mission is to improve kids’ health and make a billion people smile. When he’s not working on Tiny Docs, you can find him taking improv classes, training for the next marathon, or hanging on the sofa with his wife, Gina, and two puppies, Maya and Charley.