Yes, this is another piece about the SpongeBob study. I wanted to share my thoughts on it both from a scientific research perspective and as someone who has to help make production decisions even when there is not enough time or resources for a thorough scientific study. Often we have to hypothesize about why particular content supports or detracts from children’s learning.
For the 2% of you (completely unscientific poll) who might be reading this but have not read the study conducted by University of Virginia researchers and published in Pediatrics, here’s a brief description. In the study, the researchers assigned 4-year-olds to one of three conditions for 9 minutes: 1) watching SpongeBob (fast-paced animation), 2) watching Caillou (slower-paced animation), or 3) drawing. The kids were then tested on a variety of tasks that measure executive functioning (e.g., paying attention, delay of gratification, task persistence). The researchers found that children who watched the 9 minutes of SpongeBob had lower scores on the executive functioning tasks. The larger takeaway in the media has been that fast-paced animation like that of SpongeBob may interfere with executive functioning skills. Many people from both the research world and the television world have commented on the limitations of the study (all studies have some!) as well as on the more global statements about the causal explanations of the findings. David Kleeman of the American Center for Children and Media provides a summary in the Huffington Post of the concerns about the study.
I would like to focus on the point David Kleeman raises about the fact that the shows differ on so many variables beyond the pace of the animation. To add to that, I think relying on a single comparison (SpongeBob versus Caillou) is also risky when making assumptions about the cause of any difference. Perhaps I am envious that this particular study got so much attention, for when I attempted to publish my master’s thesis, I was told of several limitations that prevented it from being published. My study was of Schoolhouse Rock. I examined whether children learned audio and visual information better in song or prose form, keeping the visuals of the video the same. I used “Interplanet Janet” and created a professionally produced spoken-word version in which the lyrics were mostly the same, except that the sentences were changed a bit so that there was no rhyming. When I attempted to publish it, I was told by reviewers at several different social science journals that I had only one comparison and would need to replicate the findings with perhaps two other songs (and prose versions). Furthermore, it was suggested that I should experimentally isolate which component of the song (the rhyme? the rhythm? the music?) accounted for any differences in recall. Given that there are so many differences between SpongeBob and Caillou, it’s difficult to state with certainty (however logical it may be) that the pace was the cause of the effects. For all we know, it could be that high exposure to the color YELLOW is associated with lower executive functioning!
At Sesame Workshop, we often have to make judgment calls about why certain clips, games, and apps work better than others. To the extent that we can manipulate only one variable while keeping the others constant, we do. For example, we once tested children’s appeal ratings and comprehension for an Ernie and Bert clip in puppet form and then for the very same clip (with almost identical dialogue) in claymation. Another time, we measured children’s attention to a full Sesame Street hour in school settings and then in home settings to see whether our in-school methods of testing reveal findings similar to home viewing. And once there was a question as to whether children would be more responsive to an actual button on screen or to an “imaginary” button: we created two videos, changing only the presence or absence of the button and the language referring to it. In each of these cases, we tried to manipulate only one variable, and even then we wanted to use more examples before making conclusive statements about why we found what we did.
And sometimes, in the absence of good experimental designs, we do have to make judgments about why certain designs or content work and others do not. But we constantly test our hypotheses and build on the learning from one test to the next. If we notice that something isn’t working in one particular case, our recommendations are to change that specific instance. We then put that piece of evidence alongside other instances where we may have found the same thing before attributing a WHY to the finding. Since Sesame Workshop has been collecting this kind of information for over 40 years, we have many examples to draw from when making predictions, recommendations, and conclusions.
While the SpongeBob case may have ruffled many feathers and caused a media frenzy, studies like it are important for the dialogue they create among scientists, as well as for the interest they generate among mass audiences. But we have to urge journal editors, those who submit publications, and the press to temper the language they use around global explanations until there is enough scientific evidence to support a claim. Until then, I’m staying away from the color yellow before doing anything that requires my full attention.
Jennifer Kotler is the Vice President of Domestic Research at Sesame Workshop. She holds a PhD in Child Development from the University of Texas at Austin.
Image of SpongeBob from spongebob.nick.com.