Can AI Help Kids Feel Creative?
We talk about kids and AI, and we talk about creativity and AI. But while we know that creativity shapes children’s development, identity formation, and learning, children’s creative experiences with AI are often left out of the conversation. Conversations around AI and kids tend to focus on AI literacy – teaching children the skills they need to understand and use AI in their everyday lives. These skills certainly matter as AI becomes more deeply embedded in daily life, but my colleagues and I also wondered: can generative AI be used to help kids feel more creative?
AI Tools Kids Used During Our Study
ChatGPT: An AI language model that generates human-like text.
DALL-E 2: An AI image-generation model that generates different types of visuals, from stylized paintings to realistic images.
Magenta.js: A set of browser-based apps used to generate music.
Our co-design group, KidsTeam UW, wanted to better understand children’s perceptions of AI for creative work and the ways that using AI tools could help them think of themselves as creative too. To explore this, we asked 12 children (aged 7-13) to engage in a set of six activities within intergenerational co-design teams. In the first three sessions, we used ChatGPT for writing and exploring both “good” uses of AI tools, such as using AI to help stop global warming, and “bad” uses of AI, such as scamming other kids with a lemonade stand. Session four focused on creating images with DALL-E 2, an AI image-generation tool. Session five combined ChatGPT and DALL-E 2 to make storybooks. Finally, we introduced music composition with Google’s Magenta.js tools. During each session, we asked the children to tell us what they liked and disliked about the tools, as well as any ideas they might have to make the tools more helpful to them while creating.
We found that kids didn’t instantly grasp the creative possibilities of AI tools on their own, but with guidance from peers and adults, they were able to create meaningful experiences. We identified four key contexts that can support these experiences:
- Understanding what the system can do. Kids tend to understand AI systems based on their past experiences with other technologies, including other AI systems they may have used before. Because AI tools differ from one another, children often need help grasping each tool’s capabilities. For example, one child became frustrated that DALL-E couldn’t remember what her character “Lisa” looked like, so she had to describe her with each new prompt. She didn’t realize until an adult explained it that, unlike ChatGPT, DALL-E doesn’t remember previous prompts.
- Understanding the creative process. Creativity requires flexibility and the willingness to try again. When kids used AI to create, they had to adjust their ideas and find ways to make the tool work for them—like adding extra instructions, rephrasing prompts, or even starting fresh with a new idea. For instance, when they asked ChatGPT to “Write a fictional sports story starting with ‘STEPHEN CURRY SCORES A TOP-RIGHT-CORNER IN THE BOTTOM OF THE NINTH INNING TO WIN THE SUPER BOWL FOR RUSSIA,'” ChatGPT pointed out factual errors. When they rephrased the request as “fanfic,” ChatGPT went ahead and wrote the story.
- Understanding the domain. Generative AI tools often use language that’s too advanced or geared towards adults rather than what kids know and care about. This can make it hard for kids to get their ideas across, especially in areas they’re interested in. For example, when one of the children asked ChatGPT about “dinosaur relationships,” it offered information about herding behavior rather than dinosaur friendships. Plus, children often struggle with knowing domain conventions, making it more difficult to evaluate the AI-generated work.
- Understanding one’s intention and creative environment. Kids, just like adults, have their own opinions about using AI in creative work. For instance, when asked how they would feel if a friend used ChatGPT to write a birthday card, their responses were mixed. Some felt sad and disappointed, thinking AI made the message less genuine and thoughtful. But another child said it was fine as long as ChatGPT was just used as a starting point. Children understand that the intention and context of creative acts matter.
Our Tips for Using Generative AI to Support Children’s Creative Self-Efficacy
For us, the most exciting part of our work was seeing children build confidence as they created. Generative AI, unlike other tools, lets children try out ideas in rapid succession, often seeing that their creative ideas are valuable. This allows them to build creative self-efficacy, or the belief that they too can be creative. Self-efficacy can be built in a number of ways, such as seeing their ideas realized, seeing successful creative uses of AI by peers, and feeling encouraged during the process. Scaffolding any of the contexts listed above can help create more meaningful creative interactions between children and AI. Below, we have listed a series of questions based on each context. When posed to children, these questions can help them reflect on the aspects of each context that build self-efficacy; in turn, they can help parents and teachers identify where scaffolding may support children’s experiences. For example, a child who says that the AI tool did not give them what they expected to see might be supported by encouragement to try new prompts, or by help aligning their creative choices with what the AI tool can do.
Questions that can help kids build Creative Self-Belief when using AI
- I want to support the child’s relationship with an AI tool.
  - Did the AI give me what I expected to see?
  - How easy do I think the AI tool is for me to use based on what I already know?
  - How did I get feedback from the AI about why it gave me the response it did?
  - How do I feel after using the AI tool?
  - How likely is it that I can imagine a positive and successful experience using this tool?
- I want to support the process of working with an AI tool.
  - How much did the answer help me with what I was trying to create?
  - How can I see other people doing things well that are like what I am doing?
  - How much can I change what I am doing based on the feedback I get?
  - How much do I care about the process of making my work with the tool?
  - How can I imagine this experience with the tool being part of my overall creative process?
- I want to support the child’s domain understanding with an AI tool.
  - How did what I know help me create what I wanted?
  - What are some examples of people using these tools in my areas of interest?
  - How much does what I made follow the usual rules for this kind of work (i.e., a story, song, or drawing)?
  - How do I feel about working with this area or topic?
  - How can I imagine using this tool in my specific area of work or interest?
- I want to support the child’s creative expression with an AI tool.
  - How well did the AI help me finish what I wanted to make?
  - Can I find examples of people using the tool well for what I want to do?
  - How did the feedback help me with what I wanted to create?
  - How can I keep myself motivated while working on my creation?
  - How can I picture this tool helping me achieve my creative goal?
While these questions can offer both feedback and suggestions on how to scaffold certain contexts, here are a few extra tips for helping the kids in your life build their creative self-efficacy using Generative AI:
- Help children see themselves as creators. Encourage children to see themselves not only as students but as creators, capable of making art, stories, and music. Children have things they want to say and share, just like adults. Empowering children encourages them to see value in their own ideas and the process of trying to create, rather than only the product of creating.
- Guide creative adjustments. Offer guidance on how children can refine their ideas when using AI tools. Help them decide when they want to try something new, or stick to their original creative idea. This might look like suggesting a new approach when using a tool or giving them more ideas to work with. Note that supporting any context will help the others. For example, helping children learn more about a creative domain will in turn help them understand the potential of a system, new possible adjustments to their process, and help them define their intentions.
- Foster ethical conversations around AI in creative work. Don’t shy away from helping children explore the role of AI in issues of authorship, originality, responsible use, and authenticity. We suggest that offering children the chance to express themselves with AI can provide meaningful opportunities to discuss broader questions of ethics and the role of technology in their lives and identities.
- Frame AI as a creative tool. Encourage children to see AI as something that can support their creativity while also building their own skills of self-expression. Remind them that AI is only one tool for creative expression; there are lots of other tools they can use, from other computer programs to pen and paper! AI is not a replacement for creating, but simply another way to create. They must still learn their own preferences, learn domain conventions, and consider how they most want to share their unique experiences with the world.
Check out our paper about AI and Creative Self Efficacy here.
Newman, M., Sun, K., Gasperina, I.D., Pedraja, M., Kanchi, R., Song, M.B., Li, R., Lee, J.H., & Yip, J.C. (2024). “I want it to talk like Darth Vader”: Helping children construct creative self-efficacy with generative AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-18).
Michele Newman is a doctoral student in the Information School at the University of Washington advised by Dr. Jin Ha Lee. She is also a member of the UW Gamer Group and the Digital Youth Lab working with KidsTeam UW under Dr. Jason Yip. Her research broadly explores how the design and use of software can support creativity and aid in the creation, transmission, and preservation of cultural heritage.
“A Whole Lot Like Love”: Play Make Learn 2024
Ever since I joined the Joan Ganz Cooney Center’s Well-being by Design Fellowship, I have found myself subconsciously auditing my entire virtual world for its well-being design considerations. Does the team budget spreadsheet system promote feelings of competence for my colleagues? How does my wedding website support guests’ sense of identity? Did this airline-app-that-shall-not-be-named consider users’ feelings of autonomy in times of a global airline outage at all? I was delighted to attend the Play Make Learn annual conference at UW Madison, which offered a much more relevant place to explore this framework in action, as well as to share lessons learned from the fellowship with Olivia Korchagin (Global Tinker), Keeana Saxon (Kidogo Productions), and Medha Tare during our panel. “Play Make Learn” offered endless examples of well-being, both in the programs and people it highlighted and in the overall conference learning experience, which was designed to center identities, creativity, and relationships. Here’s a quick rundown of my highlights:
Artist and educator Lynda Barry opened her classroom (lovingly named The Comics Room) to pre-conference attendees, creating a four-hour space for play, collaboration, and wonder. We delighted in each other’s collages. We drew with our eyes closed. We brought doodles of ourselves as astronauts and vegetables and animals to life. We learned about our own inherent special styles, and helped others recognize the lines of their own. She invited us to observe the feelings we had in our bodies after this space of connection and creativity. Making things together and being delighted by others’ work “feels a whole lot like love,” Professor Summer Sausage (Lynda Barry’s current preferred educator pseudonym) says.
Rebecca Millerjohn from Madison Public Library shared Observation Deck, a digital formative evaluation tool that helps library facilitators collect and analyze data from learning experiences they host. By capturing pictures and observations and tagging them to learning frameworks, facilitators can gather and analyze qualitative data and outcomes in library programs that might otherwise be cumbersome or challenging to measure, like “developing relationships, developing confidence, engaging in self-directed tasks, or exercising creativity.” The ability to measure and track success on the measures that matter most to librarians allows them to continue to grow their impact on learners.
Aaron Trammell invited attendees to consider the history of tabletop game development in conversation with the history of the United States through his book talk on The Privilege of Play. He posed the question of “who has had access to leisure?” in thinking about the development and popularization of games, to help us understand where we’ve been and to understand the barriers that need to be dismantled to build a more inclusive future.
Rilla Khaled offered speculative play as a powerful way of engaging with social and cultural questions through games that represent unfamiliar possible near futures with familiar game functions. At the center of speculative design, interaction design, and playful design, she posits that speculative play experiences offer a hands-on way of engaging with the realities of big challenges.
Mouna Algahaithi from PBS Wisconsin and Megan Schädlich from The Healing Library facilitated a live tour through a virtual world of free resources that support children and families specifically impacted by trauma. They offered their own perspective on a wide range of ways that facilitators of family experiences can consider and design experiences that meet families where they are.
Jake Sanford and Karah Peña from KID Museum in Bethesda, Maryland shared their design considerations in building an environment of choice and agency for their kids, finding unusual in the usual to inspire wonder and excitement.
Ronni Haydon from CU Boulder shared a reflection practice through a Zine (“What Does Equity Mean to Me?”) they developed for staff from informal learning organizations “to surface what equity means to you and use it as a starting point for conversations.” The booklet asks questions of designers around values-driven design, including: “Beyond your own values, whose values are showing up in your space?”
Medha Tare, Olivia Korchagin, Keeana Saxon, and I spoke to values-driven design practices we had explored within the Well-being by Design Fellowship in our panel, “Applying Principles of Well-being to Digital Design for Kids”. We shared the power of auditing our tools against the RITEC framework’s components, and learning from the strengths of each other’s work to grow our own. We talked about the importance of including youth voices in our design practices. And we underscored the importance of holding ourselves accountable not only to the outcomes of products, but the makeup of our process, to ensure that children’s well-being is considered from the start.
Just as I had experienced learning about others’ products during the fellowship, I saw evidence of well-being components across the programs we encountered at the conference. No one program, product, or experience will be solely responsible for growing a child’s digital well-being; a landscape full of people making things together, delighting in and reflecting on one another’s work, and centering children’s well-being, however, may get us closer to creating digital spaces that play a positive role in childhood. The more experiences we can provide for kids that “feel a whole lot like love,” the better. For now, I’m grateful to have been able to be a part of the Play, Make, Learn conference world; it was a gift to be surrounded by designers of all kinds from many places who believe that more playful, affirming, loving learning spaces for kids are surely possible.
Melissa Gedney is the Senior Manager of the Learn Together Project at PBS KIDS, supporting co-design of content and experiences to foster intergenerational moments between kids and their grown-ups. She will focus on workshopping and identifying ways to integrate curated family- and educator-facing resources alongside kids’ products that add value and play to their media experience. She was a Well-Being by Design Fellow in 2024.
AI Knocking: What Can Parents of Young Kids Expect?
Although we are still in the midst of discovering the implications and opportunities that various technologies bring to our children’s (and our own) lives, we’re seeing a new technological innovation getting a lot of attention: artificial intelligence (or AI) – and specifically Generative AI.
AI is something you’re likely already using in your daily life. If you’ve ever unlocked your phone with facial recognition, spoken to a voice assistant like Siri or Alexa, or relied on a navigation app like Apple or Google Maps to choose the fastest route to your destination based on current traffic conditions – you’ve used AI. In principle, AI refers to many technologies (many of which we use on a daily basis) that imitate human-like thinking, learning, and problem-solving1. It’s an incredibly useful tool that makes our lives easier – you might be surprised to learn, and can test your knowledge of, how embedded it is in the world around us.
Much of the buzz that we are hearing right now specifically refers to Generative AI (GenAI) – a type of AI that uses machine learning (explained wonderfully here) to uncover patterns in data from speech, writing, and art and generate new content that looks and sounds like it was made by humans. This means we can now generate full articles and essays, videos of specific characters and scenes, and even music and poetry, all by simply entering a prompt into a GenAI tool2.
It is important to note that GenAI tools are trained on massive amounts of already existing data (text, images, video, etc.), in which they search for patterns, learn from them, and then create novel outputs. A GenAI tool can only use the information that was available to it when it was trained.
How should kids and families be using GenAI? Maybe your family already uses a voice assistant, like Siri, Google Home, or Alexa to help with your daily tasks, or even tell you a joke. Perhaps your kids have used some of these tools at school to generate ideas for stories or practice their reading, or maybe their teacher has used it to create practice tests or brainstorm questions.
In these and other constantly-evolving ways, GenAI is integrating itself into the daily fabric of our lives – and our children’s – so should we be worried? We wondered what parents of young children (ages 2-8) currently think about GenAI and the changes it brings to their lives.
How do parents feel about AI?
By surveying 53 parents across the U.S. of varying ethnicities (Black n=14, Asian n=2, Hispanic/Latinx n=8, White n=28, Native American or Alaskan Native n=1) and household incomes (M=$25K–$75K) about their concerns, aspirations, and thoughts on AI coming into their homes, we identified three themes that seemed top of mind on this topic:
- Most parents expressed a curiosity to learn more about GenAI and its capabilities while expressing a cautious optimism about the benefits AI might bring to their children and families.
“I think that artificial intelligence will make businesses in the whole world a lot better and it won’t burn out people.” (child age 8)
“We don’t really experience AI in our daily lives just yet. I am not sure how my kids would interact with it and if it would be beneficial for them. I do feel that I need to learn more and understand the technology as it is going to be incorporated into our daily lives more.” (children ages 6 & 7).
- Others expressed concerns about GenAI’s impact on their children’s privacy, playtime, and creativity, as well as on their children’s learning abilities, with one parent saying:
“I am concerned about my child using AI in school. Will their school have policies against it? Could they become too reliant on it that they don’t learn? I would love it if AI made certain menial tasks in my life easier. I am cautiously optimistic about AI and still curious to see where it goes in the future” (child age 8)
“We worry about AI in our home [because] it tracks and broadcasts so much about our child. We are very careful to be sure that the baby monitor is not online, but is air-gapped so that no one can see and talk to our child. We are very careful about AI enabled technologies to protect our child from outsiders.” (children ages 2 & 4)
- Alongside these, we couldn’t help but notice a sense of inevitability from parents. One parent mentioned that AI entering the home is:
“….just the nature of the beast in today’s society; AI is taking over. We do monitor usage and act accordingly but we know that we live in a tech world and they will be exposed.” (children ages 2 & 8).
“We don’t experience AI, but I see it becoming a major aspect in the future. I don’t really worry about AI in the home because I know that’s what we are moving to. Everything is technology driven. I’m not very familiar with AI, so I know it will be introduced soon, and I will learn about it as we go.” (children ages 3 & 6).
Experts weigh in on potential opportunities and challenges
These parents’ mixed reactions are representative of how even experts who work with children and GenAI are expressing both potential benefits and downsides of this new technology coming into families’ lives. We surveyed experts, technologists, and scholars about GenAI and its potential opportunities and challenges for children. Their concerns coalesced around three themes:
- Experts expressed concerns about the potential consequences of children using GenAI applications without fully understanding their capabilities, data sources, and limitations. This may be because young children can easily develop parasocial relationships with humanized objects, including the personas of smart speakers, the way they often do with characters they see on screens. But while parasocial relationships can create a slippery slope from reality to fantasy, those connections have been shown to enhance children’s learning3. Describing a study where she explored how children can receive support from GenAI in the learning process, Professor Ying Xu explained:
We integrated AI into children’s television shows so that the kids (ages 4-8) could actually answer questions asked by the main character and receive feedback from the character as they watch TV, trying to simulate how a parent might support their kids when they all watch television together. Interestingly, we found that the kids actually enjoy the learning process more if the questions were coming from the characters rather than an adult, probably in part because talking with the character creates a more immersive and playful experience.
- A second concern we heard from experts is the challenge of ensuring that a GenAI tool will generate a consistent output for a child. GenAI has been shown to provide incorrect, low-quality, and biased information, which can be problematic for a child who may not be able to distinguish between good information and bad.
- Additionally, some experts expressed concern about a potentially negative effect of GenAI tools on children’s creativity and critical thinking skills, including when used in schools. This is an ongoing debate in the education field, as schools debate whether to integrate AI educational tools in their classrooms.
Alternatively, we’d like to echo the cautious optimism of the parents surveyed above: GenAI may hold many potential opportunities and benefits for children. Take, for example, GenAI’s capability to personalize whatever learning or entertainment content children are consuming. Recent studies have shown that interventions that adjust texts to varying reading levels or incorporate personalized real-world connections can foster intrinsic reading motivation in children4. One expert offered a vision for personalized storytelling:
“Imagine a world where anything can truly be a ‘choose your own adventure’ with just a little prompting. Kids could share their favorite characters, colors and sports to make a personalized bedtime story, or describe an image they have in their mind to bring their imaginary friends to life.” (Emily Schlemmer, Google, User Experience Researcher)
That’s a whole lot to be (cautiously) optimistic about!
AI is not magic! Some key recommendations
Given this very brief overview of the emerging and evolving insights we have on the impact GenAI may have on children’s lives as these tools become more prevalent, we’ve developed a few tips to keep in mind as we continue to follow GenAI’s journey. In essence, we encourage parents of children of all ages to keep in mind that this technology, like others before it, is what we make of it, and is not, as Bill Shribman, a senior producer at GBH, emphasizes, magic:
“We try to remind kids that AI is a tool and that it is a copilot. It’s made by people…that’s why we need to rethink our kids’ media literacy.” (Bill wrote a great medium post about this subject.)
With that in mind, we encourage parents of young children to keep on reading and learning about GenAI and AI applications and their capabilities. Here are a few key recommendations:
- If it’s not helpful, you don’t have to use AI. Before adding AI applications to your home, such as voice assistants like Amazon Alexa or Google Home, ask yourself: “What are the potential benefits of this AI tool? Are they strong enough to offset the risks? Is it necessary?” Remember, some AI tools are not crucial to complete a task, and in some cases the benefits of AI may not be enough to justify its use.
- If you do decide to use AI:
- Check for child settings on the devices or media platforms you bring into your home. These settings can be hidden and therefore tricky to uncover at first glance, on both the device and application level.
- Use precise language when referring to AI, e.g., when talking about Alexa, remember “she” didn’t do anything – “it” did!
- When you or your child choose to disclose personal information to an AI tool, think about what could happen if that information were shared outside your home (and who holds that information in the interim).
- Remember that it’s up to you and your family to decide what’s uniquely yours – for example, your language and culture – that AI may not have the nuance to represent.
- Remember that a positive use of AI is to support thinking. For example, AI can help reveal the process behind difficult math problems, or support a creative activity like storytelling – and not necessarily be a magic wand doing all the work for you!
Although we will undoubtedly see more and more GenAI applications in our daily lives, we retain the power to decide when we bring it into our homes, the conditions under which we feel comfortable doing so, and what precautions we should take to balance the demands of the companies that make AI applications.
1 CRAFT Stanford curriculum
2 For reference: this blog was written entirely by a human 🙂
3 Xu, Y. (2023). Talking with machines: Can conversational technologies serve as children’s social partners?. Child Development Perspectives, 17(1), 53-58.
4 van der Sande, L., van Steensel, R., Fikrat-Wevers, S., & Arends, L. (2023). Effectiveness of interventions that foster reading motivation: A meta-analysis. Educational Psychology Review, 35(1), 21.
Rotem Landesman is a PhD student in the Information School at the University of Washington, co-advised by Dr. Katie Davis and Dr. Amy J Ko. Her research looks at developing metrics for well-being design in digital spaces, alongside fostering youth’s technological wisdom. She is currently a graduate intern at the Joan Ganz Cooney Center.
Call for Applications: Well-Being by Design Fellowship 2025
The Call for Applications is now open through October 11, 2024.
If you follow our work, you may have seen the incredible results produced by our inaugural cohort of Well-Being by Design Fellows.
Building from the success of the first year of this program, the Joan Ganz Cooney Center is thrilled to announce a second year of the Well-being by Design Fellowship, supported by Pinterest.
We invite applications from mid-career designers of kids’ technology and media who want to prioritize designing for well-being at early stages in the product development process and are currently working on a project that can be workshopped throughout the fellowship period. This free program will involve interactive virtual sessions over five months, kicking off with an in-person gathering at Sesame Workshop’s headquarters in New York City and culminating in a virtual webinar to share our learnings.
Why continue this work?
The first cohort of fellows told us that they found it extremely valuable to receive guidance on how to design for children’s well-being, to receive feedback on their products from our youth design team, and to learn together with and feel supported by a group of their peers. All of the fellows were able to make changes to their products and processes based on the learnings from the Fellowship and continue to apply what they have learned.
A recent study from Sesame Workshop and The Harris Poll highlights that children’s well-being continues to be top of mind for families.
According to the study, half of Americans describe the average American child as anxious. However, several ways to improve well-being for children surfaced from the survey. For example, 88% of respondents felt that children’s well-being would improve if they had the tools to be more resilient. And 84% of parents noted the importance of addressing emotional well-being from a young age and wished that they had been taught how to understand and manage emotions as a child.
The Cooney Center has chosen to focus on educating and empowering the designers of digital spaces to help to promote children’s resilience and emotional regulation as well as other factors that research has shown to contribute to children’s subjective well-being.
With this fellowship, we aim to make children’s well-being a priority in the design of the digital world by (1) putting the voices of children at the center and (2) providing industry with the tools to prioritize children’s well-being in design.
We designed the fellowship experience with information gleaned from informational interviews with a wide range of children’s technology and media industry professionals including design directors, UX researchers, and product leads. We used this feedback to design a program that will address key themes and questions such as better understanding different well-being frameworks (example), how to measure well-being, and how to advocate for children’s well-being in product design. We hope to build on the ideas generated by previous fellows with new insights to improve the well-being of children and young people in digital spaces.
The deadline to apply for the 2025 Fellowship program is October 11, 2024.
Learn more about the program, and please share with your networks!