October Highlights from Michael Preston
Dear Friends,
Happy October! The best month, in my opinion—although are we tired of pumpkin spice yet?
First things first: we were thrilled to see so many applications to our Well-Being by Design Fellowship. Thanks to all who applied. We’ll be announcing our next cohort in December!
Now, some recent highlights:
We’re six months into the debate over Jonathan Haidt’s book The Anxious Generation. Agree or disagree with the book’s choice of data, conclusions, and recommendations, its publication kicked off an expansive dialogue drawing in professors, policymakers, parents, and pundits. Because we all experience the effects of digital life firsthand, it’s easy to leap from personal experience and observation to societal-level conclusions. On October 8, our colleague Candice Odgers, Professor of Psychological Science and Informatics at UC Irvine, shared the stage with Haidt for “Making Sense of the Research on Social Media and Youth Mental Health,” a lively debate hosted by UVA’s Thriving Youth in a Digital Environment initiative. We particularly appreciated the questions posed by the young people in the room.
Because we are deeply invested in how digital products are made for kids, we monitor new approaches to supporting well-being and safety for kids (and all people). This month saw several announcements about design-based opportunities to improve user experience and spur additional thinking and research:
- Meta’s release of Instagram for Teens brings long-awaited protections and controls for kids aged 13-17. The rollout will soon reach millions of young people around the world, and we hope it sparks productive conversations in households and leads to better online experiences for all.
- Pinterest’s engineering team, as part of their company-wide commitment to the Inspired Internet Pledge, recently published a Field Guide to Non-Engagement Signals that helps social platforms focus on user well-being by prioritizing quality content and long-term user retention.
And now that we are a month or two into the school year, we’re hearing more and more about phones in schools. While education systems around the world are considering cell phone restrictions and outright bans, Digital Futures for Children, the joint LSE and 5Rights research center, is advancing the debate by reviewing the evidence on the efficacy of such policies in the UK, Singapore, and Colombia. See their newly published report and companion blog post. As with any digital product or service used by children, the key is to balance opportunity and risk.
Now, please read on for some news from the Joan Ganz Cooney Center!
Michael Preston
Executive Director
Different but complementary: Navigating AI’s role in children’s learning and development
As a researcher focusing on AI and child development (and as a parent of two), I have seen many instances of kids talking to conversational AI agents like Siri, Alexa, or ChatGPT. Kids seem to turn to AI agents to satisfy their curiosity, asking things like what six plus six equals, how far away black holes are, or how to make an invisible potion. And sometimes they engage in what feels like social chitchat: they share their favorite colors or princesses, or even ask if the AI has favorites of its own. Children are often amazed, and a little baffled, that AI can understand them and respond in ways that seem quite smart.
These observations have made me wonder whether the future might entail AI serving as a genuine conversational partner for children—something (someone?) kids can talk to, have fun with, and potentially learn from. We all understand how crucial conversation is for children learning about the world around them; if AI can offer similar conversations, children’s opportunities for learning could be significantly amplified, since an engaging companion isn’t always available to answer questions like “why” and “how”. Yet I also share the common concern about the uncertain outlook for what could be called the “AI generation.” Is AI’s ability to provide quick answers stunting children’s learning? Is using the “Hey” command to wake up AI making kids forget about politeness? And perhaps most troubling, what if kids become more attached to their AI than to the humans around them?
Both hopes and concerns about AI are valid, and it’s important to recognize that we are at a critical juncture in AI development, where its future trajectory is still being shaped. So, how should we approach AI? I believe we should view well-designed, child-centered AI as an additional source of support and learning for children, one that is different from but complementary to the interactions they have with their family, teachers, and peers.
In fact, my research, and that of many others, has already shown that children can effectively learn from AI, provided the AI is developed in alignment with learning principles. Over the past few years, I have designed and tested AI companions that engage children during book reading, television watching, and storytelling, by asking questions and responding to children based on their answers.
To give you a more concrete idea of what this type of AI-assisted conversation looks like—imagine a young child and their caregiver reading a picture book together. In my research studies, a smart speaker simulates the role of the caregiver, reading the story aloud and pausing intermittently to ask children questions—generally questions about the problems the characters are encountering, how the characters feel, or what the children think will happen next. The smart speaker listens to children’s responses and offers little hints, just as a caregiver would, if the child needs a bit of help answering.
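For the technically curious, here is a minimal sketch, in Python, of what this kind of dialogue loop could look like. It is an illustration under simplifying assumptions, not the system used in the studies: the StoryPage structure, the speak and listen stand-ins, and the keyword check are all hypothetical placeholders for real text-to-speech, speech recognition, and far richer response analysis.

```python
# A minimal, hypothetical sketch of the read-aloud dialogue loop described
# above. All names here are illustrative placeholders, not the actual
# research system.

from dataclasses import dataclass

@dataclass
class StoryPage:
    text: str            # passage the agent reads aloud
    question: str        # dialogic question to pose after reading
    keywords: list[str]  # words that suggest an on-target answer
    hint: str            # small scaffold to offer if the child is stuck

def speak(utterance: str) -> None:
    print(f"AGENT: {utterance}")   # stand-in for text-to-speech

def listen() -> str:
    return input("CHILD: ")        # stand-in for speech recognition

def read_with_dialogue(pages: list[StoryPage], max_hints: int = 1) -> None:
    for page in pages:
        speak(page.text)
        speak(page.question)
        hints_given = 0
        while True:
            answer = listen().lower()
            if any(word in answer for word in page.keywords):
                speak("That's right, good thinking!")  # affirm, then move on
                break
            if hints_given < max_hints:
                speak(page.hint)   # offer a small hint, as a caregiver would
                hints_given += 1
            else:
                speak("Let's keep reading and find out together.")
                break

if __name__ == "__main__":
    pages = [
        StoryPage(
            text="The little fox looked up at the tall apple tree.",
            question="What do you think the fox wants?",
            keywords=["apple", "fruit", "eat"],
            hint="Look at what is growing on the tree!",
        )
    ]
    read_with_dialogue(pages)
```

Even in this toy form, the loop captures the pedagogical pattern: read, ask, listen, and scaffold only as much as the child needs before moving on.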
We have repeatedly found that young children who engaged in this type of dialogue with AI comprehended the story better and picked up more vocabulary than those who merely listened to the story without it. Even more intriguing, in some contexts we found that an AI companion can lead to learning gains comparable to those from engaging in similar dialogue with a human.
If you’re expecting me to suggest that AI can replace human interactions, that’s not my argument. Even when studies show learning benefits, it does not mean that AI can replicate the unique benefits of authentic conversations children have with others. This is because conversation is not just about exchanging information; it is also about building relationships. Children thrive when they engage with someone they can relate to, and someone who can relate to them. So, the question comes down to whether children and AI can achieve this level of connection.
This is a very challenging question to answer, but we can gain some insight into children’s relationships with AI by examining how children talk to AI agents and comparing that to how they talk to other humans. While children were quite talkative with the AI agents in my studies, they were even more talkative with human conversation partners. Moreover, when talking with a human, children were more likely to steer the dialogue, adding their own thoughts or following up with question after question when something puzzled them. These “child-driven” aspects of conversation are the active ingredient that fuels children’s cognitive and social development, and AI still falls short in encouraging this type of engagement.
Why do children engage with AI differently? It boils down to how children perceive, or feel about, AI. My studies with children have made it clear that even children as young as four recognize that AI simply doesn’t look, talk, or act like a human. They also sense that AI doesn’t have the same experiences that they have and can’t genuinely empathize with them. These factors, consciously or unconsciously, affect children’s engagement, making their interactions with AI fundamentally different from their interactions with people.
These differences are not necessarily negative. In fact, we can take advantage of these inherently different experiences to maximize the benefits for children while minimizing undue influence. I’d like to offer two suggestions that might help achieve this:
- First, we should help children maintain healthy boundaries with AI by being transparent about its nature. Ensuring they understand that they are interacting with a program, not a person, prevents confusion and strengthens their ability to differentiate between AI and humans. This is where the growing number of AI literacy initiatives—designed to teach children critical knowledge about AI—plays a crucial role, empowering them to approach AI interactions with awareness and confidence.
- Second, we should design AI to encourage parental involvement. In a recent study, our team developed an AI that allowed a beloved Sesame Street character to engage with children during shared book reading. The AI provided discussion prompts to actively involve parents, and we found that this approach not only supported children’s language development but also fostered meaningful and enriching family interactions.
If we adopt these approaches thoughtfully, I am hopeful that we can embrace the new learning possibilities AI brings while preserving the irreplaceable human connections that nurture our children’s growth.
Check out our full papers here:
Xu, Y., Aubele, J., Vigil, V., Bustamante, A. S., Kim, Y. S., & Warschauer, M. (2022). Dialogue with a conversational agent promotes children’s story comprehension via enhancing engagement. Child Development, 93(2), e149-e167. https://doi.org/10.1111/cdev.13708
Xu, Y., He, K., Levine, J., Ritchie, D., Pan, Z., Bustamante, A., & Warschauer, M. (2024). Artificial intelligence enhances children’s science learning from television shows. Journal of Educational Psychology. Advance online publication. https://doi.org/10.1037/edu0000889
Ying Xu is an Assistant Professor of AI in Learning and Education at Harvard University. Her research focuses on designing AI technologies that promote language and literacy development, STEM learning, and well-being for children and families.