Before the "Future of Childhood: Immersive Media and Child Development" salon took place in November 2018, we invited experts to share their visions of how VR and AR might impact childhood 10 years from now. Jeremy Bailenson, PhD, is the founding director of Stanford's Virtual Human Interaction Lab and the author of Experience on Demand. Here he offers insights into some of the ways these technologies might affect basic human behaviors.
I am going to treat VR and AR separately, as I think the psychological processes and effects are very distinct for the two technologies.
The greatest impact of AR on childhood will surround multitasking. By definition, AR "registers" digital objects in the physical world and allows users to hear and see them, and in 10 years very likely to smell them and somewhat likely to touch them. The game Pokémon Go was not a fad; last month, tens of millions of people played. Preliminary research at the Virtual Human Interaction Lab (we have just begun two separate NSF-funded projects to test how AR changes basic social behavior) indicates that AR changes performance and nonverbal behavior. People change where they look, where they sit, and how they walk in a physical room when AR objects are rendered onto the goggles they are wearing.

At scale, imagine a classroom where each child sees different digital objects and digital colleagues in addition to the same set of physical ones. Common ground, to quote Herb Clark, will be shattered, in that people will experience different versions of AR reality while physically co-present. One initial finding from our studies shows that social behavior is affected. On a positive note, we have replicated "social facilitation" effects: college students perform an easy task better when an AR-embodied agent watches them than when they are alone. On a negative note, an AR event outlasts the experience; people will avoid sitting in chairs where they previously saw an AR event occur. The benefits of "beaming in" other people will be transformative in terms of uniting people who live far away, removing travel that is considered prohibitive, and ultimately changing the structure of commuting to work and school. But these technologies will change basic patterns of attention and performance in a way that is unprecedented.
Consider Fragments, one of the most popular video games for the Microsoft HoloLens. The game uses a simultaneous localization and mapping (SLAM) algorithm to scan one's physical room, then adjusts the layout of the game's narrative events so that they "fit" into the room when projected onto the goggles. A murder occurs in one's physical living room, with both characters standing squarely on the floor, not intersecting a wall. Similarly, a digital window is rendered on a wall to look like an actual window in your room. Fast forward 10 years, and imagine watching a scary movie in your bedroom. The antagonists will literally be climbing on your bed.
The biggest concern around VR 10 years from now will be reality blurring. The phenomenon has been studied, though only in a handful of studies. Jakki Bailey, who is at the conference, can discuss her pioneering work. In addition, a small-sample study by Kathryn Segovia has shown that young children can confuse VR events with actual ones a week later. Ten years from now, the video and audio fidelity of VR and AR will be close enough to fool the perceptual system. I also suspect scent will be close to perfect, as rendering scent is already fairly easy (clearing the scene is the challenge, since there is no "refresh" for molecules). For better or worse, we will be able to produce digital experiences 10 years from now that will be, from a perceptual standpoint, perfectly real. Childhood will thus be defined by a paradox: any child can have the most fantastical experience programmers can imagine, yet the perceptual system will treat it as a real one. This is a singular moment in human evolution.
For both AR and VR, a theme to discuss will be addiction. We have very little empirical data on addiction to VR and AR. There is, of course, plenty of work on gaming, but most of it concerns reward/punishment schedules, not perceptual realism, integration into one's body via tracking, or multisensory feedback. To my knowledge, no study has yet randomly assigned people to heavy VR/AR use, but someone should run one (attendees, please take note). However, most research on "presence" in VR shows that immersive scenes are more engaging and persuasive than non-immersive ones.
Jeremy Bailenson is founding director of Stanford University's Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford's Center on Longevity. He earned a B.A. cum laude from the University of Michigan in 1994 and a Ph.D. in cognitive psychology from Northwestern University in 1999. He spent four years at the University of California, Santa Barbara as a Postdoctoral Fellow and then an Assistant Research Professor. He is the author of Experience on Demand.