A Middlebury College Expert Explains the Ways Social Media Fuels Hate — and How to Stop It | Tech | Seven Days | Vermont's Independent Voice



Published October 19, 2022 at 10:00 a.m.


ANDREW MULHEARN

The web has become an incubator for extremism. What can tech companies do about it?

To answer that, some have sought advice from Middlebury College's Center on Terrorism, Extremism and Counterterrorism. The outfit, which operates from the college's satellite campus in Monterey, Calif., blends data analysis with social science expertise to research political violence.

CTEC researchers have increasingly focused on online extremism and tech platforms, according to the center's deputy director, Alex Newhouse. The center produces public reports on extremist trends and consults with social media companies that are trying to combat them.

Newhouse, 27, studies how internet users become radicalized and looks at the evolving ways that extremist networks have organized. He's particularly interested in some of the most virulent corners of the internet, where white supremacists fantasize about and plot violent acts that they hope will stoke revolution. This strain of extremist thought, known as "accelerationism," has proliferated online in plain sight.

Newhouse brings to his academic work some experience in the private sector. He previously worked on data privacy compliance for PlayStation and at Uber, where he analyzed and contributed to the company's responses to natural disasters and terrorist attacks. But his interest in how online communities are "supercharging" extremism began when he was an undergrad at Middlebury, he says. Interning as a journalist for a video game magazine, he saw female colleagues become targets of misogynist "Gamergate" harassment campaigns, and he became intrigued by the movement's connection to the far right.

"I just really quickly realized that my passion is trying to figure out what's going on," he says. "How is the far right using the internet? How are they wielding it as a weapon? And then how can we actually disrupt it?"

Newhouse spoke to Seven Days from his home in Colorado. The interview has been condensed and edited for clarity.

SEVEN DAYS: Why have you focused on the internet's role in the proliferation of extremism?

ALEX NEWHOUSE: When you read academic research on extremism and terrorism from the 1970s and '80s — when the Knights of the Ku Klux Klan and the Covenant, the Sword and the Arm of the Lord were the big, scary far-right groups in the U.S. — what you discover is the focus was on the use of church groups and afterschool activities and other very geographically proximate spaces to facilitate radicalization and mobilization.

The internet has basically completely upended that entire way of understanding how radicalization works. You can be radicalized by just an amorphous consumption of content and interaction with pseudonymous people online. And, as a result of that, our ability to disrupt those processes has taken a pretty significant hit; it is much harder to disrupt a terrorist attack if that person doesn't actually interact with any standardized organizational hierarchy.

The flip side of this is that the internet also provides us with a stunning amount of data about how extremists interact. So we're able to track the evolution of extremist language and identification in a much more granular and extensive way than we were in the past.

SD: One manifestation of this kind of radicalization has been the mass shooting, which is often seen as a sort of American phenomenon. Is online extremism also distinctly American in some way?

AN: It's much bigger than that. The United States still operates as a center of gravity, in large part because our very strong protections for freedom of expression cover a lot of the types of speech that would actually trigger investigations in other countries. But transnational connections have become a very important part of extremism, and far-right extremism in particular. The type of extremism I specialize in most, which is accelerationist violence — it's a very, very violent subset of far-right extremism — we've identified in 35 to 40 countries. The social media channels of these types of organizations are very multilingual, even if English is still the lingua franca for a lot of it.

SD: So this world of online extremism is very decentralized, and there's also a spectrum, as you just alluded to. Can you help chart us a path through this complicated landscape?

AN: One of the ways that we talk about this is that there are recruitment processes and mobilization processes — going from being a keyboard warrior to taking some sort of physical action, whether that's vandalism or violence.

When people are first being recruited into an extremist frame of mind, conspiratorial thinking is a very common entry point. They'll start posting or consuming content that talks about why we should doubt all journalists, why we should reject the standard narratives given to us by the [Centers for Disease Control and Prevention] or whatever it might be. From that conspiratorial framework, you then enter into, "OK, we believe that there's this grand conspiracy to basically mislead the populace: Who is doing that?" The answer to that question, if you're far enough down, is usually Jewish people or Black people, or whoever the "enemy" is. That is extremism.

Once you're there, it's a relatively small step to say, "OK, there's this existential threat; our livelihoods are at stake. We are literally facing extinction. The next step is to take violent action to defend ourselves."

Alex Newhouse - COURTESY

SD: You also look at the people who consume this sort of content, such as the 18-year-old man who killed 10 people and wounded three in a shooting at a Buffalo, N.Y., supermarket this year. He was immersed in extremist channels and specifically targeted Black people.

AN: Social science research has found pretty compelling evidence that there's a difference between those who are consuming the content and then carrying out action and the people who are producing the content and posting it. Oftentimes, the producers of the content are the least violent people in these networks. This separation of labor is a really fascinating development and a really worrying one, especially in the United States legal context. It has allowed the contemporary far right to become particularly persistent and resilient to law enforcement pressure.

SD: Looming over these issues of violent extremism is the political movement that led to the January 6 insurrection, which seems to have been mobilized in part by these online communities. Can we understand January 6 through the frameworks that you study?

AN: There have been a couple of GOP politicians — I'm thinking of Rep. Paul Gosar [R-Ariz.] and Rep. Marjorie Taylor Greene [R-Ga.] — who have actually reposted memes or used songs or other types of visual indicators in social media posts that are highly, highly indicative of being at least obliquely familiar with communities like Terrorgram [a collection of neofascist channels on the social media platform Telegram]. Large sections of QAnon have pretty much fully embraced promoting civil war, undertaking a second American revolution, using political violence for their own ends. That desensitization to political violence is exactly the same type of framework. Its percolation down to the bigger movements is what drove January 6 and what might drive additional political violence in the near future.

SD: Is this an inevitable consequence of living in a world with social media? Is it bound to be fuel for extremist organizing?

AN: There are some parts of the internet that are always destined to be organizing places. Encrypted messaging apps are useful for any number of reasons; it's just part of their intrinsic nature that they're also useful for terrorist organizing.

On the radicalization side, the entire concept of raising advertising revenue based on engagement with content is intrinsically vulnerable to facilitating increasing amounts of consumption of radical, conspiratorial, emotionally intense content.

I do not personally think we just have to throw in the towel and accept it. My hope is that, for instance, the expansion of privacy rules and anti-algorithmic advertising laws over in Europe will hopefully expand to the U.S. and help crack down on the algorithmic, systemic radicalization pipelines. I also have hope that social media companies will continue to get better at content moderation, at network construction and at preventing extremist exploitation.

SD: Tech companies have obviously been slow to address these issues. At this point, does the industry understand the magnitude of the problem?

AN: There has been a massive sea change in the awareness around the risks of especially far-right extremism and terrorism on social media. One of the problems we're still facing is that even though that awareness has rapidly changed in a lot of cases, social media companies still unnecessarily handcuff themselves to some sort of strange legal interpretation of the First Amendment, even though they don't have to follow it in the same way the government does.

The motives there are understandable and probably admirable, but ultimately it makes social media platforms extremely inflexible in responding to developing trends in extremism and terrorism. There's still this black-and-white thinking that is just a product of a long history of doing things a certain way, which we are fighting against and slowly, slowly, slowly, slowly changing.

SD: I would imagine the center's research into the origins and mutations of online extremism is important in helping these platforms to know what to look out for.

AN: Most tech companies can hire some subject matter experts, but ultimately we at CTEC are the ones who spend our days staring at this stuff and understanding the trends and how they evolve. We have done a lot of work to try to explain to companies and governments that the QAnon conspiracy theory about elite families cannibalizing children is a repackaging of a 700-year-old antisemitic conspiracy theory. A content moderator at Google or Facebook, more likely than not, is not going to understand that entire historical context. It's a lot of training and education work that we end up doing. It's why we exist.

SD: Has your research found anything to indicate whether banning users from a social media platform, a measure known as deplatforming, is effective?

AN: The question is still fairly open. What we can say for certain is that the influencers need their audience. They need the access to those massive platforms, and deplatforming does take the wind out of their sails, absolutely.

SD: Are there other steps these companies can take, beyond content moderation, that can reduce the risks here?

AN: I think so. Facebook, YouTube and Twitter all have been instituting various automated flags on different types of content, for instance on posts about election integrity. There is some initial evidence to suggest that that type of action does help.

But outside of the platforms' policies, civil society resilience programs — like increasing education around media coverage, misinformation, disinformation and conspiracy theories — are really important, as well. And the answer to countering radicalization in all of its forms is to counter loneliness and alienation at the social level. It's a much, much bigger undertaking, but the single best predictor of radicalization that we have is perceptions of loneliness and alienation. So investing in all the types of social systems that we know improve social relationships is incredibly important for countering radicalization.


The original print version of this article was headlined "Hell Monitor"
