Tackling Online Hate Speech Against Religious Minorities—An Enlightening Hearing By USCIRF

Keyboard with hate on it
Photo by 80s Child/Shutterstock.com

1982 was the year I first laid hands on a personal computer. It was the beginning of what would essentially become a love affair between me and technology.

Fast-forward a little to 1995 and you find me joining the executive team at EarthLink, one of the early internet service providers bringing the internet into U.S. households for the first time.

In the words of EarthLink’s founder, Sky Dayton, we were “removing time and space from communication.” For the first time in human history, I could send a message from Los Angeles to my uncle in Sweden and he would receive it in mere seconds.

Those were heady, pioneering times from which we all continue to benefit.

If you were to ask me then what I thought those innovations would lead to in the future, my answers would have brimmed with such optimism you would have expected me to hop on my unicorn afterward and ride off into a rainbow-crested sunset.

To put it another way, “runaway hate” would not have been on the list.

IBM computer
A 1983 IBM, one of the original home computers (Image by Twin Design/Shutterstock.com)

And yet that seems to be a cross we all have to bear now; it is undeniably part of the package, at least for the moment. Companies like Facebook and Twitter have whole teams scrambling to identify hateful content and remove it from their platforms.

I’m a visual guy, and one of my favorite tools for taking the pulse of a trend is the Google Books Ngram Viewer, a Google search application for literary works that graphs how often a given word or phrase shows up in the more than 40 million books scanned into Google Books, reaching back to the 1800s.
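For the technically curious: Google publishes no official API for the Ngram Viewer, but the web page itself fetches its plot data from a JSON endpoint. Here is a minimal Python sketch assuming that unofficial endpoint and its current query parameters, both of which could change without notice:

```python
# A minimal sketch, not an official API: the Google Books Ngram Viewer has no
# documented API, so this calls the same unofficial JSON endpoint the web page
# uses to draw its graphs. Endpoint and parameters may change without notice.
import requests

NGRAM_URL = "https://books.google.com/ngrams/json"  # unofficial endpoint

def ngram_series(phrase: str, year_start: int = 1800, year_end: int = 2019):
    """Return (year, relative frequency) pairs for `phrase` in Google Books."""
    resp = requests.get(
        NGRAM_URL,
        params={
            "content": phrase,
            "year_start": year_start,
            "year_end": year_end,
            "corpus": "en-2019",  # English corpus ID, mirroring the viewer's URL
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json()
    if not results:  # phrase not found in the corpus
        return []
    # One frequency value per year, year_start..year_end inclusive.
    return list(zip(range(year_start, year_end + 1), results[0]["timeseries"]))

if __name__ == "__main__":
    # Print every 20th year of the "hate" trend, as in the graph below.
    for year, freq in ngram_series("hate")[::20]:
        print(year, f"{freq:.8f}")
```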

For 200 years, the word “hate” itself was trending, it would seem, toward practically disappearing from our literature one fine day.

Graph: frequency of “hate” and “internet” in Google Books over time

Ironically, around the same time I would be introduced to my first computer, “hate” would stage a comeback, at least in our literature. It can be seen here riding on the back of our new friend the “internet.”

Of course, correlation does not equal causation, and I’m still unicorn-optimistic about the good that will yet come from all the internet-based innovations we continue to witness.

But we must face up to all that this “removal of time and space” means and brings. Somehow we have to get a handle on this hate issue, while still preserving our natural rights to freedom of speech.

So when I learned that the U.S. Commission on International Religious Freedom (USCIRF) had held a virtual hearing this winter on the impact social media is having on hate speech against religious minorities, and that I could view it on YouTube after the fact, I was more than interested.

We must face up to all that this “removal of time and space” means and brings. 

Entitled “Combatting Online Hate Speech and Disinformation Targeting Religious Communities,” the hearing was chaired by USCIRF Chair Gayle Manchin and attended by USCIRF commissioners and experts. USCIRF is a bipartisan U.S. government commission created in 1998 to promote freedom of religion around the globe.

Speakers included USCIRF Vice Chairs Tony Perkins and Anurima Bhargava and USCIRF Commissioners Gary Bauer and Frederick Davie as well as David Kaye, Clinical Professor of Law at the University of California, Irvine; Susan Benesch, founder of the Dangerous Speech Project; Dr. Shakuntala Banaji, London School of Economics Professor of Media, Culture and Social Change; and Dr. Waris Husain, Senior Staff Attorney for the American Bar Association’s Center for Human Rights.

At the hearing’s start, Ms. Manchin summarized the core challenges we face:

  • Social media platform algorithms incentivize users to post provocative content.
  • Rumors, disinformation and hate speech that would have remained localized before the advent of today’s social platforms can now spread around the world before they can be debunked.
  • Hate speech “loads the gun” and disinformation “pulls the trigger.” Sometimes that “trigger” results in real-world, physical violence.
  • No settled definitions of the terms “hate speech” and “disinformation” currently exist under international law.
  • Social platforms must therefore set their own rules for moderating such speech and disinformation.
  • Facebook alone is currently removing about 3 million instances of hate speech per month—or 4,000 per hour.

Candles
Photo by Artem Z/Shutterstock.com

Ms. Manchin described the purpose of this particular hearing: to “explore the complex role that social media has played in fomenting conflict... hate, violence and discrimination toward religious communities” and to “consider how the United States government and social media companies can better contribute to combating the digital spread of this information and hate speech.”

David Kaye was the first to weigh in with the following four points:

  • It’s important to understand the relevant international law already in effect, particularly that embodied in the U.N. International Covenant on Civil and Political Rights, ratified by the U.S. in 1992.
  • Companies like Twitter, Facebook and others are currently opaque in terms of how they are adopting and implementing censorship rules. As a result, the individual companies possess all the information, and bodies like USCIRF only have “shadows” of that information.
  • Governments themselves sometimes incite violence and promote religious discrimination—a frightening reality that needs more review and attention.
  • Rights like freedom of expression, religion and assembly are interdependent—in other words, they rely on each other to exist at all. The U.S. has a strong role to play in promoting human rights because of our history of defending them.

Susan Benesch was next to offer her point of view, starting with a description of some of the patterns her Dangerous Speech Project has been able to identify when it comes to hate speech and disinformation.

I personally found this fascinating.

Ms. Benesch stated that, across countries, cultures, languages and religious communities, such content “is defined at least as much by fear as by hatred, since what is most powerful in it is that it is designed to generate violent fear of other people—since violent fear, in turn, makes violent reactions seem defensive and often morally justified.”

Around the world, Ms. Benesch has seen that this often begins with rhetoric suggesting there is something inherently wrong with a given religion and therefore with its followers.

Facebook alone is currently removing about 3 million instances of hate speech per month—or 4,000 per hour. 

Criticizing a religion while dehumanizing its individual followers serves as a “dog whistle” for attacks on the targeted religion’s community, she explained. The release of such dehumanizing content can surge at times, leaving social platforms scrambling to remove it.

Ms. Benesch ended with a call to “protect freedom of expression vigorously even while finding the most effective ways to counter hateful content and disinformation,” and by offering her suggestions for doing so. The two she deemed essential were:

  • Exercising oversight of which content social media companies choose to remove or regulate (as it stands, we know virtually nothing about the decisions being made).
  • Conducting a “robust study” of the effects of the various interventions which are undertaken so that the correct approaches can be selected on the basis of data, rather than “groping around in the dark.”

You can find her full list of suggestions here at 35:00 in the talk.

Woman
Photo by Claus Mikosch/Shutterstock.com

Dr. Shakuntala Banaji focused mainly on problems in India, citing hundreds of thousands of malicious, orchestrated misinformation, disinformation and hate speech campaigns against Christians in the country, as well as against Dalits (“untouchables”) who have converted from Hinduism to Islam.

Here, the proliferation of mobile phones preloaded with certain apps, combined with political disinformation piped through those apps (e.g., “Muslims are intentionally infecting Hindus with COVID-19”), sometimes originating from India’s own government, has opened the door for Indian citizens to be harassed, intimidated, threatened and even killed.

Dr. Banaji gave a stirring call for less talk and more action, particularly when it comes to hate speech produced by mainstream media in the country, which is then delivered all too rapidly via state-sponsored mobile phones distributed to citizens at an unprecedented rate.

I invite you to hear Dr. Banaji’s whole testimony at this point in the hearing.

Dr. Waris Husain (@53:00) focused his comments on regional developments in South Asia. He noted that, of the 4 billion people in Asia, 2 billion now possess cell phones. This has opened the door for a 16-year-old’s “TikTok video in some remote Pakistani village to go more ‘viral’ than a hard-hitting story by the BBC,” he said. Per Dr. Husain, this presents opportunities for good- and bad-faith actors alike, and it allows religious bigots to expand the reach of their destructive messages.

Governments themselves sometimes incite violence and promote religious discrimination.

Dr. Husain also noted that a lack of digital education for newer mobile phone users is making it difficult for them to differentiate between actual information and disinformation.

While some governments are trying to apply existing laws to the problem, he pointed out that these can be ineffective or even harmful because they were not written with today’s online realities in mind. Further, he said, religious bigots have become increasingly adept at exploiting the speed of social technology, while governments, activists and social media companies lag far behind in implementing effective solutions.

At the end of the hearing there was a productive Q&A period. Here is a brief index and link to each topic should you wish to jump to that point in the video:

  • How can we best intervene when a government is the one spreading disinformation? (@1:03:14)
  • How do we balance limiting hate speech with the broader question of protecting freedom of speech? (@1:10:14)
  • What are possible social and economic incentives for social media companies to turn this tide? (@1:18:28)
  • How can we be less reactive and more anticipatory in enforcement and oversight? Does technology exist that will filter out violators prior to the appearance of the actual hate speech on a given platform? (@1:26:17)

In summary, the hearing was an enlightening discussion of online religious hate speech as it exists around the globe today. Clearly, there are significant problems here, with more than just the potential for existential harm to our fellows. These problems persist largely because the veritable tsunami of easy-to-access social media content is outpacing our ability to address the often violent flashpoints it fuels around the globe. It is a very real peril, particularly for religious minorities whose own governments may themselves be promoting hate and disinformation, complicating any resolution of the problem.

From a policy perspective, we still need to reach the most fundamental milestone: establishing universally agreed-upon definitions of “hate speech” and “disinformation.”

From there, policies must also include a layer of data transparency within both governments and private companies so that, as a people, we can all rest assured we are taking the right steps toward both freer speech and less hate.

We have our work cut out for us. But if we see it through, perhaps, one day, we’ll get “hate” back on its original trend.

AUTHOR
Hans Eisenman
Husband, father of five, software development exec, Scientologist for 35+ years. Consummate geek and foodie with insatiable willingness to laugh.