The Cooney Center Sandbox: Embedding Literacy Expertise in Edtech Innovation

Angelica DaSilva, a Cooney Center fellow for literacy and technology, offers research-backed recommendations for edtech developers.
Earlier this year, the edtech startup LitLab began testing a new feature in their classroom product that creates AI-generated storybooks for early readers. The feature is a “record me” option for kids reading aloud, backed by technology that can analyze the recordings to assess student progress and facilitate personalized feedback from both the app and human teachers. As a technology, the feature worked well in the pilot classrooms. The problem was that too many students seemed reluctant to use it.
“Forty percent of our recordings don’t have any students saying anything,” said LitLab’s head of learning, Drew McCann. A big part of the reason, she suspects, is that many of the students using the product, including early readers, struggling readers, and multilingual students, shy away from reading aloud because they’re not confident in their reading abilities.
LitLab is a partner in the Joan Ganz Cooney Center’s Sandbox for Literacy Innovations. The company’s founders and staff are former educators, including McCann, who was a special education teacher and did reading interventions with early elementary students. They understand the importance of bolstering confidence and self-efficacy in young readers, and it was one of the issues they raised when they met with the Sandbox literacy specialists, Diane Gifford, a senior literacy fellow at the Cooney Center and adjunct professor at SMU’s Simmons School of Education and Human Development, and Angelica DaSilva, a Cooney Center fellow for literacy and technology.
Every edtech developer partnering with the Sandbox initiative has the benefit of a literacy consultation before moving on to focus on things like universal design for learning and co-design sessions with kids. Through an initial workshop, followed by ongoing conversations and suggestions, Gifford and DaSilva help the edtech developers match the learning goals they’ve set for their product with best practices in literacy pedagogy, generating recommendations from decades of accumulated peer-reviewed research known as the science of reading.
“We get a perspective on their goals and the underlying rationale behind their product. And then we have a conversation,” Gifford explained. “We feel a real responsibility to support them with the evidence of what we know works.”
Ultimately, the goal of the literacy consultations and the rest of the Sandbox initiative is not simply to improve the partner developers’ products, but to serve as a model for the larger literacy edtech marketplace, which is filled with products developed with little or no guidance from the science of reading.
A 2024 meta-analysis by Stanford researchers found that independent research into the efficacy of literacy edtech products was overwhelmingly focused on phonics, with far fewer studies investigating edtech benefits for language and reading comprehension or writing proficiency. What’s more, despite edtech’s promise of personalized learning, the studies tended to ignore distinctions between learners, such as reluctant readers or English Language Learners. According to the study’s lead author and Stanford professor of education, Rebecca Silverman, who served as an advisor to the Sandbox, “If the Sandbox could encourage developers to pay attention to individual differences among learners, and how to support those in literacy, I think that could be huge.”
Studies suggest that about 60 percent of children need systematic early literacy instruction to achieve proficiency, and they’re not getting nearly enough of it, based on the dispiriting results of the most recent (2024) National Assessment of Educational Progress, which indicated that only 31 percent of the nation’s fourth graders were proficient readers. Higher quality edtech informed by independent research could make a big difference. “Six out of ten children have issues with reading that need to be overcome, and early intervention is key,” said Gifford. “There’s a tremendous opportunity, because we have so many children who need help.”
What Works: The Science of Reading
There are decades of studies about how best to teach, support, and assess students as they progress to proficient reading, collectively known as the “science of reading.” But according to DaSilva and Gifford, many people misunderstand the science of reading as something that’s just about phonics, or they think it’s a specific reading curriculum or teaching methodology.
“It’s a robust body of research about how people learn to read,” said Gifford, which covers everything from sounding out and combining word sounds to understanding sentence structures to gaining enough vocabulary and background knowledge to comprehend a text. To summarize everything this research covers, literacy experts often cite Scarborough’s Reading Rope:

Adapted from Scarborough, H. S. (2001). Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (Vol. 1, pp. 97–110). The Guilford Press.
There are many studies about each strand of that rope and the most effective ways to weave them together. The caveat, however, is that most of this well-vetted knowledge is based in brick-and-mortar classrooms with humans teaching other humans.
“There’s not a large body of research specific to digital reading instruction,” noted DaSilva. “So we try to take the best practices we know from the literature and find ways to mimic them in the digital environment.”
For example, there’s ample evidence that when adults read to or with children, they can boost engagement and comprehension by pausing occasionally to ask questions and draw the child’s attention to characters and plot points. What’s less understood is how that strategy translates to a digital reading buddy like the friendly, AI-powered octopus created by another Sandbox partner, E-Line Media, to help young readers learn with their new interactive storybooks.
“The idea of a reading buddy is really cool. But you have to be careful,” said Gifford. “There’s a fine line between engagement and distraction, and striking that balance can be tricky for interactions governed by algorithms.”
In another example, the Sandbox advisors observed that LitLab’s storybooks included some less-common words in stories generated by AI for kids to practice specific decoding skills. For instance, one story focused on word combinations with the prefix “un,” and included the word “unfazed,” whose meaning would likely be unfamiliar to many of the youngest readers the product targets. So, Gifford and DaSilva suggested that LitLab consider adding short vocabulary lessons before reading begins to cover the rarer words appearing in their AI-generated stories. Another suggestion was underlining or highlighting certain vocabulary words within the story to reinforce learning.
While some of the guidance on literacy learning was quite specific, Gifford and DaSilva also helped partners brainstorm new product features. Michelle Newman-Kaplan, director of curriculum and learning design at Sesame Workshop, noted that she appreciated the “specific recommendations around which letter-sounds are easier to hear and blend with others,” as well as the help “ideating additional literacy learning opportunities within our mini games to help elevate the educational value of the full game experience.”
Every idea to emerge from the literacy consultation is a recommendation, which may or may not be adopted, and will always require further refinement to implement, noted Gifford. Vocabulary builders, for instance, will need to judge what words are actually important for young readers to know to understand a passage, she noted, “because you want vocabulary learning that will support the reading experience but won’t clutter it up.”
McCann of LitLab said she and her team reach out regularly to Gifford and DaSilva “with questions big and small, to make sure we’re staying true to the research as we build in real time.” For example, she noted DaSilva’s help sketching out ideas for supporting multilingual learners, which they then further developed with students in co-design sessions. “Having their guidance has helped us move forward with both confidence and clarity,” she said.
The literacy consultation also led to a “big shift” in how LitLab thought about the purpose of the read-aloud feature in their product. They had originally designed the recordings to be part of a formative assessment, “giving teachers insights into students’ decoding skills.” That meant read-aloud time lacked much of the scaffolding and support that helped students during normal storybook reading practice.
“With Angelica and Diane’s help, we realized that removing those supports in every context probably undermines student progress,” said McCann. “Now we’re working toward more flexible designs that preserve data integrity while better supporting learners.”
It All Starts With Questions
While the Sandbox literacy experts prepare a bank of questions specific to each developer and their product, they love it when the edtech team comes to the consultation with questions of their own, as LitLab did. For example, McCann and her colleagues wanted to know about evidence-backed measures for assessing and tracking reading fluency, and what strategies could work in a digital format to reinforce reading comprehension. DaSilva and Gifford prepared a thorough report of recommendations, including several focused on different aspects of readers’ self-confidence: providing strength-based feedback, normalizing mistakes, and offering more choices for reading aloud, such as reading with a partner, in a small group, or to a teacher.
One misconception Gifford and DaSilva are keen to dispel is that their focus on rigorous evidence and sound pedagogy will ultimately make a product feel more like school and less like fun, an outcome edtech developers are eager to avoid. In fact, they say, quite the opposite is true. The science of reading, and hence the literacy consultation based on it, focuses on maximizing young people’s engagement with reading. For instance, an online storybook might be filled with crazy characters, humorous storylines, great illustrations, sound effects, and other interactive features, but if it’s written at a level that’s too challenging for its intended audience, it will be a frustrating experience. Likewise, even short texts will be more engaging if they reflect students’ identities and interests, and if students are given more control over what they read. The suggestions that emerge from these sessions are meant to work in tandem with insights from the subsequent co-design workshops with kids to maximize learning and engagement.
In addition to the insights and suggestions, partners said the literacy consultation also offered some reassurance about the soundness of their overall approach. According to McCann, the conversations with Gifford and DaSilva “helped affirm that what we’re building and putting in front of kids is truly grounded in best practices, and it’s also given us the confidence to take bold ideas and run with them, knowing we’re supported by research.”
We invite you to learn more about the Cooney Center Sandbox and follow us on LinkedIn. Please sign up for the Cooney Center newsletter for more updates.