Into the Digital Future: Changing Gaming to Fight Hate with Daniel Kelley

This transcript of the Into the Digital Future podcast has been edited for clarity. Please listen to the full episode here and learn more about the podcast here.

Daniel Kelley is the Director of Strategy and Operations for the Anti-Defamation League's (ADL) Center for Technology and Society (CTS). CTS works through research and advocacy to fight for justice and fair treatment for all in digital social spaces. Daniel leads the center's work to fight hate and harassment in games. In 2019 and 2020, Daniel was the lead author of the first nationally representative survey of hate, harassment, and positive social experiences in online games. He also manages the center's game research and conducts advocacy with the game industry, civil society, government, and the broader public to push for an all-of-society response to ensure that online games become respectful and inclusive spaces for all people. He is a member of the advisory board of Raising Good Gamers and an advisor to the Fair Play Alliance.


Jordan Shapiro: This is a fascinating interview we're going to play today, with Daniel Kelley, the associate director for the Center for Technology and Society at the Anti-Defamation League. One of the things that I love about this conversation is that I think we all know about the bullying and the trolling that happens online in general, which also happens in real life, but what we get from Daniel is a really fascinating question of how you start to evaluate, analyze, and recognize the patterns involved in problematic behavior in an online space – in gaming and social media.

We talk about hate speech, misogyny, racism, anti-Semitism, Islamophobia, etc. – how we can start to think about them, what's trending, what people's experiences have been. I think people are going to find this conversation really fascinating. Especially as we think about questions of how people get radicalized online or how people fall down the rabbit hole online, this conversation really sheds some light on what that experience is like and what the ecosystem of online social injustice looks like.

…

Daniel Kelley: I'm Daniel Kelley. I work at the Anti-Defamation League, which is a 100-plus-year-old civil rights organization with the mission to stop the defamation of Jewish people and secure justice and fair treatment to all. In the Center for Technology and Society where I work, we focus on that same mission, but in digital spaces.

We've been working on that since the 1980s, when we were looking at bulletin boards. In the 90s, we were talking to AOL. And in the 2000s, we were talking to social media—Facebook and Twitter and the like. Since 2017, I've been focused on online games as digital social platforms, and on how we can bring ADL's history and expertise in fighting hate to them.

Jordan: What first brought you to my attention was the recent Free to Play report.

Daniel: One of the central pillars of our work is an annual nationally representative survey in the US of adults who play online multiplayer games. I think there's a tendency, especially in the US, to blame video games for everything. If there's a moral failing or a shooting or any kind of problem in our society, we will blame it on video games. So it was important in putting together the survey to look at both positive social experiences in online games and hate and harassment, to push us toward a really fact-based, quantitative grounding for talking about what it is like in online games as social spaces.

In terms of the results, we found 95% of American adults who play online multiplayer games have positive social experiences there, whether that's making friends, feeling they belong to a community, or discovering interests—discovering things about themselves or others. I try to make that the grounding for the work that we do fighting hate and harassment in online games.

Some say games are the bad space where hate is happening and where young people are being radicalized. Bad things do happen in games, as they happen in other digital social spaces. The reason why we do this work is because we want these positive social experiences to be universal across all folks, [including] people who are being pushed out of these spaces and out of these meaningful, positive social experiences because of things like hate and harassment.

Laura Higgins: We know these spaces have never been more important in terms of socializing and spending time with your community and all of those things. And if it’s a place where you don’t feel safe, that’s a real issue. I think all of us would really love to see those spaces all working together, and with organizations like yours to improve their spaces and make sure that everybody feels welcome and safe.

Daniel: That's the grounding that I take to this work: we want to fight hate. Hate is real in online games. Harassment is real, but there are real, meaningful, positive social experiences that should be available to all folks. We do find really high levels, exceedingly high levels, of hate and harassment in online games. 81 percent of American adults in 2020 reported some form of harassment in online games. 68 percent reported severe harassment: being discriminated against on the basis of your identity, being physically threatened, sustained harassment, stalking, these kinds of things.

In the last year, there have been three major studies focused on the coping mechanisms that players have in online game spaces. It's so much the norm for there to be hate, for there to be harassment, that what researchers are turning to is, 'how are people coping right now?' Or, given that this is the norm, 'how do people, especially gamers and streamers of color, navigate racism?' That is a given in these spaces. What we're trying to push the companies and the public and sort of all of society to do is try and change these norms, because they are harmful. It's not acceptable that this is the norm.

Jordan: You hear a number like 68 percent of adults experience some kind of discrimination. What does that look like?

Daniel: So we're going to talk about discrimination based on identity. There have been a number of great qualitative studies: one of our fellows, Gabriela Richard, has done great work on this, and Kishonna Gray and others have also looked at it from a qualitative perspective, talking to people who are targets.

One of the things that comes up again and again is this talking-on-the-mic phenomenon. In online games, voice is generally one of the main modes of communication. The two modes that folks work in are either chat, a text box where you type in text and it appears in a flowing scroll, or voice, where people speak using a microphone. When people, especially people from vulnerable and marginalized communities, get on the mic and start speaking in these spaces, suddenly their fellow players will treat them differently because of the way their voice presents.

There was a study where a Hispanic person was describing his experience of being targeted with anti-Black racism because of his voice and the way he spoke. When you speak in a space, people make assumptions about your identity. And it may not even be that you're targeted because of your specific identity, but rather because of whatever biases or ideologies the person you're playing with has been exposed to. You may be subjected to those perspectives and discriminated against in that way.

Laura: The Fair Play Alliance is doing [work on] some of these same themes around healthy and positive communities. One project, which I believe they partnered on with the Anti-Defamation League, is the Disruption and Harms in Online Gaming Framework. Could you tell us a bit about that, please?

Daniel: Part of our work with social media, and translating that to game companies, is having discussions with companies about how they approach addressing these problems in their games. Sometimes if you're looking at a large game company, like a major triple-A studio, there can be different ways that they talk about hate and harassment, even between titles.

Assuming you have a report feature, when someone reports hate and harassment in the game, how do you determine what that thing is, what that behavior is that's being reported? And how do you determine what the consequences are for that action? I think right now game companies are in kind of an early social media place, where they have one rule for, like, everything bad, or just a handful.

I think, Laura, Roblox is unique in this, in that you actually do have a fairly robust content policy and actually explicate certain types of content, which I imagine also plays into your enforcement. But right now, [for] most game companies, being able to understand what is going on on your platform and how to approach addressing it—that is still a huge problem.

So the idea was to create a framework which would allow companies to define different categories of harms in their game. In the framework, we call anything that disrupts the intended experience of a game 'disruptive behavior,' and then 'harmful conduct' is more in the realm of what I've been talking about around hate and harassment and those kinds of things. Part of the work for us this year, in concert with the Fair Play Alliance, is to bring this framework to companies and encourage them to adopt it, so that we can really get to a common language around hate and harassment in online games.
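[Editor's note: to make the idea of a shared taxonomy concrete, here is a minimal sketch of what category-specific report handling could look like in code. Everything in it (the category names, the ReportedIncident type, the enforcement actions) is an illustrative assumption, not the actual schema of the Disruption and Harms in Online Gaming Framework.]

```python
from dataclasses import dataclass
from enum import Enum, auto


class HarmCategory(Enum):
    """Hypothetical harm categories, loosely following the framework's
    split between 'disruptive behavior' and 'harmful conduct'."""
    # Disruptive behavior: disrupts the intended experience of the game.
    CHEATING = auto()
    GRIEFING = auto()
    # Harmful conduct: hate and harassment directed at people.
    IDENTITY_BASED_HATE = auto()
    SUSTAINED_HARASSMENT = auto()
    THREAT_OF_VIOLENCE = auto()


# Illustrative mapping from category to a consequence, so that
# "one rule for everything bad" becomes category-specific enforcement.
ENFORCEMENT_POLICY = {
    HarmCategory.CHEATING: "suspend_account",
    HarmCategory.GRIEFING: "warn_player",
    HarmCategory.IDENTITY_BASED_HATE: "mute_and_queue_for_review",
    HarmCategory.SUSTAINED_HARASSMENT: "suspend_account",
    HarmCategory.THREAT_OF_VIOLENCE: "escalate_to_trust_and_safety",
}


@dataclass
class ReportedIncident:
    """A player report after it has been classified into a category."""
    reporter_id: str
    target_id: str
    category: HarmCategory
    evidence: str  # e.g. a chat excerpt or a reference to a voice clip


def route_report(incident: ReportedIncident) -> str:
    """Look up the consequence for a classified report."""
    return ENFORCEMENT_POLICY[incident.category]


if __name__ == "__main__":
    report = ReportedIncident(
        reporter_id="player123",
        target_id="player456",
        category=HarmCategory.GRIEFING,
        evidence="repeatedly blocked teammates' path",
    )
    print(route_report(report))  # -> warn_player
```

A shared vocabulary like this is also what makes cross-company measurement possible: if two games classify reports into the same categories, their numbers can be compared.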

Ultimately, one of the goals to push toward is measurement, which is so important. That's part of the reason why we're doing these surveys. Everyone is like, oh, things are bad, right? But like, how bad? And is anything working? I would love for the survey to become irrelevant because we have so many metrics from game companies around hate and harassment and around whether what they're doing is working.

Laura: I think it's going to be a really interesting process when this does start rolling out as the mainstream among games companies. The first thing I can't believe: it's free. So that's a really amazing way to get it in front of companies, because quite often we work with games companies all over the world who are really small. They might have 10 staff, and so whilst they might want to prioritize safety, they might not think it's a priority in terms of finances, or even know where to start.

The other key bit is getting people to really recognize what those disruptive behaviors are and actually calling them out. Saying, 'you've put up with it until now, but there's 10 percent or 15 percent or 20 percent of the people on your platform who've had that, and they've left and they're never coming back.' How about we don't do that to anybody, and everybody just has a better experience on the platform?

Daniel: We actually did quantify that number. In the survey, we found that 22 percent of players quit playing certain games because of their experience with harassment. That's a number I know folks in the industry use when they're doing their own internal advocacy at their companies. In addition to the moral argument, that hate and harassment are bad, there's also a business argument: this is also bad for business. People are leaving. Almost a quarter of your adult players are quitting your game because of their experiences there. So it needs much more investment from the industry.

Jordan: We’re living at a time in history that is difficult in terms of xenophobia, in terms of radicalization, in terms of nationalism. Everywhere around the world, we’re seeing these things spring up. Do you think that there’s a relationship between that and digital media?

Daniel: In the survey, we do ask about people's exposure to white supremacist ideology in online games. We found that 9 percent of players were exposed to white supremacist ideology in online games. The high level of hate and harassment alongside the comparatively low level of exposure to white supremacist ideology speaks to something else.

The New Zealand government put out a really wide-ranging and expansive report on the Christchurch mosque shooting last year that goes really deep into the shooter's journey of radicalization. I think one of the things that echoes our data is that the shooter was radicalized on mainstream social media — Facebook and YouTube. But the report also speaks to the beginning of his journey as a gamer, an online gamer. So I think what we see is not that games are a space for explicit radicalization, but that they're a space where hate is normalized, which can then allow individuals to be radicalized in other spaces.

From what I've seen, it's not that explicit radicalization or recruitment happens in online games. It's that they're one pillar of a space where hate and hateful ideologies are normalized, or not challenged, and that allows someone to go along a path where they start to accept those ideas as normal and then be more formally radicalized on more traditional social media.

Jordan: Many of the same things that lead to that kind of radicalization we've also seen lead to very good things, like people being able to find support groups. How do we balance something that's great in so many ways, that shows us what's powerful and strong, but that also provides the possibility for this kind of radicalization?

Daniel: I think one of the interesting things that we found was looking at Discord. It's a social platform, but it's rooted in gaming. The same sort of affordances that make it a space of private servers where you can choose who comes in make it potentially valuable to extremists or people who want to organize around a hateful ideology. [But those are] the same affordances that make it valuable for people who are targeted in other spaces.

We found that when we talked to a group of folks who are trans streamers, they were like, 'Discord is great because we can choose who comes into our space and we can set our own rules.' It's a really hard problem, because you want to be able to create spaces where people can control who they interact with and how. But how do you then avoid those same spaces being used for hateful and heinous ends?

I think that's where tech companies really have a role, in terms of having values and understanding what their values are. We believe in a very values-driven approach to social media content moderation, and I'm thinking here about what's going on around de-platforming.

Recently, we saw the de-platforming of Parler. What was interesting there was the call from Apple and Google for the platform to implement more stringent content moderation policies and practices. Couldn't the various game stores — couldn't an Xbox and a PlayStation, the Epic Store, others, as part of their vetting process to allow games on those stores, create best practices for content moderation that would be required of all games in order to be in those spaces? That would be a values-based way to use the leverage these companies have to ensure better play spaces for all people.

Jordan: When you look at the whole digital landscape and everything you know about it, everything you’ve learned from your research, are you more optimistic or more pessimistic?

Daniel: My optimism is in the fact that games are, I think, right now where social media was 10 years ago. We have very visible precedents to look back on for what happens if the industry doesn't take really significant steps to address this. In 2006, when Facebook was saying, 'we're going to go beyond college campuses for the first time,' were we imagining that a big input into the 2016 and 2020 elections would be this platform? Were we thinking that this platform would cause or be implicated in genocide in Myanmar? Game companies can look at the example of social media and really be aggressive about not repeating the mistakes that we see there. I'm really optimistic that we have a lot of information about how to do it poorly.

In the past, the game industry typically hasn't spoken out on national tragedies. It was notable, in the case of the murder of George Floyd by law enforcement, that most major game companies at the very least made a statement. Some of them made donations. Some of them made commitments in terms of their products. A [similar] example: I was talking to somebody who was a target of harassment in games, and they said, "You know what would be great? It'd be really great if the game companies would come out and say publicly that they stand against death threats." The fact that companies are coming out and speaking on these issues shows forward momentum.

Laura: What do you think are going to be the big developments over the next couple of years? What do you think are going to be the big changes we can look forward to?

Daniel: I think one of the things that we're going to see is the expansion of audio as an area of content moderation. I think we're going to see a massive push towards better tooling and better processes. And the thing that I really hope for [from the industry] is transparency. These platforms are huge and have huge impacts on our lives. Their decisions are huge. They need to be transparent in how they are making decisions, in the degree to which there are problems, and in the degree to which they're getting better based on their efforts.

If I could wish upon a star… right now we don't know, for example, how much anti-Semitism there is on Facebook in New Jersey. We don't know. I want us to be able to get to that level of granularity with games, to be able to say: this is how much hate there is, these are the communities being targeted, and this is what we're doing about it. Here's what's working, and here's what's not working. If we can get to that place in a year, then we might have a fighting chance.

Jordan: Well, I have to say thank you. This is actually an inspiring conversation. You know, it's sort of surprising: you sit down to talk about hate speech and end up inspired about possibilities. And that's great.

