
Four-year-old Arturo wiggled in his chair, eyes wide with excitement, as we handed him the tablet and asked him to tap a colorful character on the screen to start the activity. He touched the screen repeatedly before it registered his tiny finger, but soon he was off, quickly moving through each task with surprising ease. Before we could say a word, Arturo spotted the tiny house icon in the corner, exited the app, and began searching the tablet’s home screen for other activities. In less than a minute, Arturo had challenged our assumptions about how young children would interact with the digital tool and highlighted opportunities to adjust.
Our team at MDRC, a non-profit social policy research organization, has had the privilege of learning from countless children like Arturo about how to build digital tools for young learners in pre-K classrooms across the United States. Through the Measures for Early Success Initiative (or “Measures Initiative”), we are supporting developer teams in creating brief digital assessments that accurately capture what children know and can do, are engaging and practical for pre-K classrooms, and provide educators with real-time information to inform their instruction.
To accomplish this ambitious vision, we have partnered with early childhood programs across the country to give developers multiple opportunities to learn from assessment users, particularly children, throughout the development process. User insights have shaped all aspects of tool design – from small details like colors and visual graphics to core interface features. Listening to users early and often means potential issues are addressed before they are baked into products, often with innovative solutions suggested by the educators and families who know young children best. This approach has positioned the tools well for real-world use: new assessments from the initiative are being used across 100 pre-K classrooms as part of a six-state pilot in the 2025-26 school year. It has also surfaced important lessons about how to effectively gather insights from our youngest learners.
Our Approach to Getting Candid Insights from Young Learners
The term “user testing” typically evokes the image of putting a product in front of users and asking for their opinions. But in the Measures Initiative, our primary users are three- to five-year-olds, and if you ask pretty much any child this age what they think of a tablet-based activity, you are going to get a resoundingly positive response. In initial testing of assessment prototypes, children often exclaimed that they loved an activity or wanted to play again, even when they visibly struggled with tasks that needed further adjustment. With young learners, actions often speak louder than words. Watching children’s hands, bodies, and faces closely offers clues into what’s working (and what’s not) and how children are feeling about it. We quickly saw patterns in which items made children’s faces light up with excitement, and which sent chins to hands in deep concentration.
Insights from one-on-one user testing with young learners have been enriched by other sources, such as observations of children using the tools in their classrooms, surveys asking educators about children’s experiences, and analysis of app metadata on usage patterns. These additional methods add depth to the point-in-time impressions from user testing, highlighting how engagement patterns might shift over time and differ across children and contexts.
Making Digital Products Intuitive and Engaging for Young Learners
These invaluable sources have helped us understand the product features that make digital tools easier for pre-K-aged children to use.
- The kinds of spoken and written instructions that adults rely on are often too complex for young children, who are still developing foundational language and literacy skills. Clear, short directions that break tasks into digestible steps help children know what to do. For example, on digital assessments, children are often more successful when they first hear the assessment question (“Touch the largest animal you see.”) and then receive a separate instruction for how to continue (“Now click the green arrow.”), rather than hearing both at once (“Touch the largest animal you see and then click the green arrow.”).
- Children benefit from upfront instruction about how to use digital tools before diving into using them. Assessment developers in the Measures Initiative have created short practice activities where children can try out motions like tapping, dragging, and even shaking the tablets. Features that prevent children from interacting with the tool in unintended ways – such as switching accounts or exiting the app – are also important.
- In addition, user testing has uncovered features that spark joy for young learners. Children appreciate being able to make choices, hearing encouraging feedback, and engaging with storylines, sounds, and imagery that capture their attention. In one Measures Initiative app, children have the chance to design their own character, who then accompanies them as they complete assessment activities. We watched as children eagerly flipped through options for hairstyles, clothing, and even accessibility devices, and lit up when they saw their character appear later.
- Characters and environments that mirror children’s own experiences support sustained engagement. Whenever possible, Measures Initiative developers include human characters who have a variety of traits and reflect a range of roles in the community. They have also embedded assessment prompts in contexts familiar to the children using the tools, such as a birthday party at a grandmother’s home or waiting in line for ice cream.
It’s About So Much More Than Engagement
Designing digital tools that are engaging for young learners does much more than capture their attention – it supports meaningful learning. In our work on digital assessments, child-friendly design means that tools are more likely to capture what children actually know rather than how well they can use technology. It also frees up teachers’ time to focus on instruction and learning activities rather than on collecting data.
It’s easy to make assumptions about what we think a child will understand or be interested in, but it’s another thing to put a product to the test. Allowing all users, particularly the youngest among us, to engage with and weigh in on digital tools is an essential part of any product development process. At MDRC, we are excited to continue learning from the youngest learners about what works best for them.
For more information on the Measures for Early Success Initiative and what we’ve learned so far through this work, we encourage you to check out our website.
Acknowledgement: This piece is based on research funded by the Gates Foundation. The findings and conclusions contained within are those of the authors and do not necessarily reflect the positions or policies of the Gates Foundation.
Emily Hanno is a Senior Research Associate at MDRC where she directs large-scale early childhood education projects, including the Measures for Early Success Initiative. A former Head Start teacher and instructional coach, her research focuses on understanding how early childhood education programs can best support young learners, families, and communities.
Mallory Undestad is a Research Associate at MDRC where she oversees activities with project participants, including children, families, and educators. On the Measures for Early Success Initiative, she has led a range of user testing activities, as well as managed partnerships with assessment developer teams.