
Outside the Beltway | Pace University


"Outside the Beltway" featured Pace University's Communication Studies professor Adam G. Klein in "Americans Want to Ban Hate Speech But Can't Define It"

03/21/2019

Two-thirds want social media platforms to ban harassment and racist, sexist, and other offensive speech. 

Adam G. Klein, a Communication Studies professor at Pace University and the author of Fanaticism, Racism, and Rage Online: Corrupting the Digital Sphere, published an interesting essay two days before the Christchurch massacre titled “Fear, More Than Hate, Feeds Online Bigotry and Real-World Violence.”

His setup:

When a U.S. senator asked Facebook CEO Mark Zuckerberg, “Can you define hate speech?” it was arguably the most important question that social networks face: how to identify extremism inside their communities.

Hate crimes in the 21st century follow a familiar pattern in which an online tirade escalates into violent actions. Before opening fire in the Tree of Life synagogue in Pittsburgh, the accused gunman had vented over far-right social network Gab about Honduran migrants traveling toward the U.S. border, and the alleged Jewish conspiracy behind it all. Then he declared, “I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” The pattern of extremists unloading their intolerance online has been a disturbing feature of some recent hate crimes. But most online hate isn’t that flagrant, or as easy to spot.

As I found in my 2017 study on extremism in social networks and political blogs, rather than overt bigotry, most online hate looks a lot like fear. It’s not expressed in racial slurs or calls for confrontation, but rather in unfounded allegations of Hispanic invaders pouring into the country, black-on-white crime or Sharia law infiltrating American cities. Hysterical narratives such as these have become the preferred vehicle for today’s extremists – and may be more effective at provoking real-world violence than stereotypical hate speech.

So far, so good. Klein observes,

[S]ocial networks have the unique capacity to turn down the volume on intolerance if they determine that a user has in fact breached their terms of service. For instance, in April 2018, Facebook removed two pages associated with white nationalist Richard Spencer. A few months later, Twitter suspended several accounts associated with the far-right group The Proud Boys for violating its policy “prohibiting violent extremist groups.”

Still, some critics argue that the networks are not moving fast enough. There is mounting pressure for these websites to police the extremism that has flourished in their spaces, or else become policed themselves. A recent Huffpost/YouGov survey revealed that two-thirds of Americans wanted social networks to prevent users from posting “hate speech or racist content.”

As I’ve noted many times over the years, most recently in my post-Christchurch essay, “Media, Free Speech, and Violent Extremists,” I both support de-platforming those who foment violence and fear how government and/or the decision-makers at social media sites might go about doing so.

The public definitely wants social media companies to block more content:

But what any of this entails is debatable, I’d argue.

What’s harassment, for example? Obviously, spewing racist insults against members of minority groups, sexist attacks against women, and the like qualify. But what about the swarming attacks that occur when high-follower-count people quote-tweet people with whom they disagree, launching hundreds if not thousands of @’s against someone? Does it matter if that someone is a public figure or otherwise powerful? Is “dead-naming” a transgender individual harassment? Twitter says it is and I tend to agree. But let’s not pretend that this doesn’t stifle debate on a hotly contested issue that we’re still very much struggling to come to terms with as a society.

It’s hard to make a case for the virtue of “spreading conspiracy theories or false information.” But who gets to say what’s a conspiracy or what’s false? Is it verboten to talk about UFOs and Area 51? How about the alleged Trump pee tape? Allegations that the Trump campaign and/or administration are tools of the Russian government? Apparently, we’ve decided that anti-vax lunacy must be banned. What about global warming denialism?

“Hate speech or racist content” combines the problems of both of the previous categories. There are some groups it’s okay to hate, right? Nazis and other fascists are the most obvious example. Surely, we’re not going to ban people who say they hate Nazis? Similarly, it would be ironic, indeed, to ban denunciation of the Ku Klux Klan or other hate groups as hate speech.

Indeed, the public that is overwhelmingly in favor of banning such content doesn’t actually agree on what it wants to ban:

I’m something of a free speech absolutist and don’t think any words are inherently “hateful” or “offensive.” Context always matters.

For as long as I can remember, black Americans have been trying to reclaim “nigger” and its variants. While he eventually came to regret it, Richard Pryor did it for much of his career and other black comics—most of them, actually—have followed suit. More recently, the gay community has done the same, reclaiming slur words that were frequently hurled against them, most notably “queer.” Surely, it’s neither hateful nor offensive in those contexts.

Statements like “transgender people have a mental disorder” or “homosexuality is a sin” are in a different category. Expressed earnestly, they’re almost inherently hurtful and offensive. Yet both were mainstream beliefs well into my adult lifetime. And, it seems to me, debating these topics openly has moved the needle quite a bit in the right direction.

The notion that “undocumented immigrants should be deported” is offensive, let alone hateful, is bizarre. If it were phrased as “the United States should enforce its laws,” would it be? Now, more inflammatory statements, such as “most Mexican immigrants are rapists” or “we’re being invaded by people from shithole countries,” could certainly be construed as hateful or offensive. But I’m not sure that exposing those ideas to rational discourse isn’t still the best strategy.

Some of the other examples, such as “all white people are racist” or “America is an evil country” or “police are racist,” strike me as almost dangerous to include on the list. Those are matters of political opinion, all of which are ripe topics for healthy debate. The notion that they should be banned from any platform is highly problematic.

Read the article.