Responsible AI and Children: Towards a Rights-Based Approach to AI Governance

Recent breakthroughs in Large Language Models (LLMs), generative Artificial Intelligence (GenAI), intelligent agents, and other AI-driven technologies are driving the rapid expansion of AI in and across our everyday lives. As argued in a recent policy brief I co-authored for CIFAR’s AI Insights series, entitled Responsible AI and Children: Insights, Implications, and Best Practices, this is just as true for kids as it is for adults. In fact, any time kids use or encounter digital technologies, AI is likely already involved at some level, or soon will be. Some of these interactions are intentional—as when a child uses DALL-E 3 to generate realistic “photos” of an imaginary creature. Others happen behind the scenes—as when an AI-driven age assurance system analyzes a child’s face to determine if they’re old enough to access a website.

Conversations about kids and AI seem to oscillate between wild optimism and existential dread. The one constant is that there are always more questions than answers when it comes to children’s deepening relationships with AI. Questions about AI’s impact and implications for children are urgent and should be pondered and researched extensively in the months and years ahead. But when it comes to developing policies to ensure that AI is developed and deployed in ways that are responsible to and for children, the time is now. Kids are already using AI, being tracked and assessed by AI, and having their data and images fed to AI. Guardrails must be put in place to ensure that children’s rights and safety are protected as these technologies evolve, so that harmful systemic problems like age-based algorithmic bias or commercial exploitation are avoided from the get-go.

In the systematic review of the literature on children and AI we conducted for our policy brief, we found substantial empirical evidence of data-centric technologies (the platforms and devices that collect so much of the data used to train AI) infringing on children’s privacy and other rights. We found compelling academic theories about the ambiguous roles AI technologies play in children’s creative and emotional lives, as well as unresolved legal debates about the limits of parental consent and age-based restrictions for ensuring that children’s best interests are upheld in the digital realm. We concluded that while more research is certainly needed, there is also a lot that AI policymakers can learn from the existing literature. Broader knowledge of the opportunities and challenges of “digital childhoods”—which are now the norm for Gen Alpha—allows for a more nuanced, balanced, culturally and historically grounded appreciation of the issues involved.

In the existing debates about kids and AI, there’s a tendency to focus on children’s privacy. Given the central role that data (personal, behavioral, created) plays in how AI is built and learns, privacy is an obvious area of concern. But it’s also an area of tech policy with a long and spotty history. For decades, children’s privacy laws failed to prevent the collection and manipulation of huge swaths of children’s data, including images and data about them posted by parents. Existing regulation also relies heavily on parental consent, which puts the onus on parents to track, monitor, and manage children’s ever-expanding digital footprint.

As argued in our policy brief, there are several important lessons here that policymakers should keep in mind. For one, policies aimed at protecting children’s privacy are often caught between a desire to promote innovation and business needs, on the one hand, and concerns about the harms that may occur if children’s data is misused or abused, on the other. Current attempts to protect children from the potential negative impacts of AI evoke a similar “push and pull” between corporate agendas and child safety, while children’s many other rights and interests are most often sidelined.

Meanwhile, research on children’s experiences of privacy in the digital environment shows that kids of all ages are very concerned about how companies are collecting and using their data. These concerns are not adequately addressed in current debates about child safety, which instead emphasize external social threats like bullies and predators. Many teens are also worried about being able to control what data different people in their own lives (such as parents, teachers, and friend groups) can see. They also desire a level of online anonymity so they can seek information or express themselves without fear of reprisal or worse. This is particularly important for LGBTQ2S+ children, for whom online communities can be a lifeline. These diverse needs are overlooked in most age assurance and parental consent frameworks, which instead apply a “one size fits all” approach to children based on numeric age alone. Vast differences in children’s home lives, bodies, identities, and experiences are ignored in the process.

For the kids I’ve talked to in my own research, privacy is one important right among the many that children care about and prioritize. This finding aligns powerfully with a children’s rights framework, which is one of the key recommendations we make in our policy brief. Specifically, we take the position that policies aimed at regulating AI must: 1) consider the presence of children from the outset, while addressing their rights and best interests; 2) ground any decisions, guidelines, or recommendations in emerging and existing evidence about children’s uses, interactions, and experiences of data-centric technologies; and 3) include children and adolescents in the research and development of AI technologies. Our third recommendation reflects mounting evidence of the value and importance of involving children in tech design processes (e.g., the JGCC’s Designing with Kids initiative), as well as children’s right to be involved in decisions that impact them (as asserted in the UN Convention on the Rights of the Child (UNCRC)).

Organizations including UNICEF, the 5Rights Foundation (UK), and the Girl Scouts (USA) similarly argue that a child rights approach would best protect children and children’s interests as they encounter and engage with AI. First and foremost, this is because the UNCRC addresses multiple facets and implications of AI, including children’s right to privacy but also their right to access information, their right to play, their right to express themselves, and many others (the Convention comprises 54 articles in total). The UN Committee on the Rights of the Child has even adopted a General Comment (No. 25) specifying how these rights apply in the digital environment, which by its definition includes AI.

Ultimately, we recommend that policymakers heed the growing global call to action for AI guidance informed by children’s rights. This call was amplified in September 2024 with the release of the final report from the UN Secretary-General’s High-level Advisory Body on AI, Governing AI for Humanity. Throughout the report, the authors emphasize that AI governance must focus on children: “Children generate one third of the data and will grow up to an AI-infused economy and world accustomed to the use of AI.”

It’s a crucial argument to make. Children’s needs, vulnerabilities, and best interests should always be on the agenda when discussing AI governance. Yet, policymakers often seem to forget about children when drafting new AI bills and guidelines. Canada’s proposed Artificial Intelligence and Data Act, for example, doesn’t even mention them—apart from a brief reference to children as an example of a “more vulnerable group” buried in a 32-page companion document.

Luckily, this isn’t the case everywhere. The European Union recently passed the groundbreaking AI Act, which, in addition to being the world’s first comprehensive AI law, explicitly recognizes children’s rights and sets out a framework for child safety and risk assessment. Hopefully, other policymakers—in government, non-governmental organizations, and the tech industry—will soon follow suit.


Sara M. Grimes, PhD, is the Wolfe Chair of Scientific and Technological Literacy at McGill University in Montreal, Canada and the author of the award-winning book, Digital Playgrounds: The Hidden Politics of Children’s Online Play Spaces, Virtual Worlds, and Connected Games.
