Artificial Intelligence + Hate Speech = Murky Waters

I manage a software development team and have worked in and with technology for the last three decades. I also belong to a religion that gets bashed regularly with online bigotry, so the fact that Google just created an artificial intelligence (AI) tool called “Perspective” that will flag online hate speech is of great interest to me. The tool is intended for use by platforms like Facebook to make it easier for the owners to find and address speech deemed injurious to others. Twitter also just unleashed a similar capability.

While the effort is much needed and welcome, I’d offer a few points to help navigate the murky waters of what exactly constitutes hate speech.

First and foremost, reducing unprovoked vitriol and its harm to our fellow human beings is a laudable goal that we should acknowledge at the outset of any such discussion. We’ve all been on YouTube or other “social” forums and witnessed the astonishingly primitive way our brothers and sisters mistreat each other.

This kind of behavior has to be curbed since it feeds on itself, and the tit-for-tat mentality it inspires just leads to an escalation of tempers. Before you know it, all roads lead to hate.

In his essay “Honest People Have Rights, Too,” L. Ron Hubbard wrote:

Freedom for Man does not mean freedom to injure Man. Freedom of speech does not mean freedom to harm by lies.

I wholeheartedly agree. We all have to do what we can to take hate speech down as many notches as possible until we succeed in removing it altogether from our daily lives. It has no place in a civilized society.

But that doesn’t mean we all have to agree with each other. Indeed, challenging and debating fixed ideas—on an intellectual plane—is the keystone of progress. The scientific method, for example, is all about floating hypotheses, then testing them to prove them wrong in the hopes that something right will come out of the process.

So the point isn’t to simply agree with every idea. That would be madness.

It is to be respectful in the way we disagree with one another.

There are challenges with Google’s “Perspective” AI function. First of all, underneath all that “intelligence” (artificial or otherwise), someone has to start by defining what “hate” is. Understand that it’s easier to spot hate when it’s happening to you than it is to pinpoint it mathematically, in such a way that a series of electrical switches (a.k.a. your computer) can accurately pluck the hate, and only the hate, out of millions of comments on a platform like Facebook.
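To make the difficulty concrete, here is a deliberately naive sketch of a keyword-based filter. Everything in it is hypothetical; real systems like Perspective use machine-learned models rather than word lists. Notice how a crude definition of “hate” flags a comment that merely discusses an insult:

```python
# A deliberately naive "hate speech" filter: flag any comment that
# contains a blocklisted word. (Hypothetical illustration only; this is
# nothing like Google's actual model.)

BLOCKLIST = {"idiot", "stupid"}  # someone has to choose these words


def flag_comment(comment: str) -> bool:
    """Return True if the comment would be flagged as 'hate'."""
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)


comments = [
    "You're an idiot and so is everyone who believes that.",  # genuine vitriol
    "Calling someone an idiot is never persuasive.",          # discussion, not hate
    "Your argument is flawed, and here's why.",               # respectful disagreement
]

for c in comments:
    print(flag_comment(c), "->", c)
```

The second comment is a false positive: it mentions the word without using it against anyone, yet the filter can’t tell the difference. That gap, between matching words and understanding intent, is exactly what the definition-makers have to wrestle with.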

Today the definition of “hate” might be the use of certain swear words.

Tomorrow it might be disagreeing with certain ideas—perhaps religious ideas.

Which raises the question: who gets to make these calls, and how do we ensure this doesn’t simply translate into censorship, restricting our freedom of speech?

It’s an interesting problem, and I think it’s important to remember that these platforms are all, in essence, private property. If someone came to your house for dinner and started hating on your family, you’d have the right and the power to “censor” them by asking them to leave, wouldn’t you?

I don’t see that as censorship so much as protecting those you love. That’s your prerogative.

Freedom of speech is one of our most cherished rights, historically meant to act as a buffer between a potentially tyrannical government and its citizens. So that’s not necessarily what we have here.

Just the same, I hope the private market takes its time testing this AI tool adequately to ensure we don’t throw the debate-and-productive-discourse baby out with the bathwater just to satisfy some ideological agenda, compromising the positive evolution of our society in the process.

Today’s social platforms are once-never-imagined opportunities to express and share our deepest beliefs across the globe at the speed of light. It is up to each of us to protect this valuable exchange in such a way that the beliefs live or die on their own merits.

So, in case it wasn’t already clear, I’d offer a word or two of caution here.

Film director Milos Forman had something relevant to say about censorship, in a private or public context.

His point was about the removal of extremities in speech or writing to which someone might object. Let’s say, for example, it’s certain swear words as I mentioned earlier. They are viewed first as objectionable and, by some, as “extreme.”

Some form of censorship is then used to chop off that extremity.

Except now the part that’s left has a new outermost layer of ideas that someone might come to view as an extremity. Eventually that becomes the next objection, and it too must be chopped.

And so on and so forth, until finally we have… what? Some gray, horrifically dull world in which no one is allowed to disagree with so-called conventional wisdom or authority?

We can’t let that happen. Yes, let’s get the hate out of our daily discourse, but let’s do it in such a way that we protect reason and the interchange of beliefs and ideas, especially if it includes disagreeing, respectfully!

AUTHOR
Hans Eisenman
Husband, father of five, software development exec, Scientologist for 35+ years. Consummate geek and foodie with insatiable willingness to laugh.