What Can We Do About Hate Speech In The Wake Of The Christchurch Attack?


Over the last few days we’ve read and heard numerous statements from the tech sector, politicians and analysts on the terrorist attack carried out in Christchurch last week. While much of the focus has been on reforming gun laws in New Zealand and investigating how the gunman came to commit this crime, there has also been considerable attention on how technology companies failed to stop the livestream of the attack from being broadcast and shared. There are also questions about the nature of the news coverage.

The Facebook response

Yesterday, Facebook published an official update on what it is doing about the terrorist attack in New Zealand. Facebook noted the following:

The video was viewed fewer than 200 times during the live broadcast. The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended. No users reported the video during the live broadcast.

It’s also worth noting that author Ginger Gorman, in her highly acclaimed book Troll Hunting, says this:

When Facebook Live came onto the market, Periscope and Meerkat were already in operation and therefore… it should have been clear that user safety would be a major issue. “Why did it take almost a dozen live-streamed rapes, murders and suicides for them [Facebook] to say ‘OK, well, we’re going to hire 3000 moderators?’”.

The first question, then, is why the livestream was allowed to be broadcast at all. And why did it take people nearly half an hour to report that they were watching a mass shooting?

It’s important to note that Facebook is cooperating with authorities in many jurisdictions in the investigation of this terrorist act.

Political responses

The Prime Minister of New Zealand, Jacinda Ardern, has been the model of strong, compassionate, articulate and caring leadership. She is steering her government on a course to rapidly change gun laws so semi-automatic and assault weapons are made illegal – just as John Howard did in Australia following the rampage of Martin Bryant at Port Arthur in 1996.

Australia’s PM, Scott Morrison, wants the role of social media companies to be on the agenda at the next G20 summit, saying that “If they [social media companies] can write an algorithm to make sure that the ads they want you to see can appear on your mobile phone, then I’m quite confident they can write an algorithm to screen out hate content on social media platforms”.

In stark contrast, Turkish President Tayyip Erdogan has called on New Zealand to reinstate the death penalty, saying that if New Zealand didn’t make the attacker pay then Turkey would. Erdogan also screened parts of the massacre footage in his speech.

Does Scott Morrison have it right?

I think Morrison’s understanding of algorithms is flawed. Anyone who has seen ad delivery in action (pretty much everyone) knows that the targeting is haphazard at best. Sure, Facebook, Google and others can deliver ads, but they really aren’t all that good at targeting them.

And while the original stream was broadcast and re-shared, Facebook says it removed 1.5 million copies of the video in the first 24 hours, with more than 1.2 million of those blocked at the upload stage.
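Blocking copies “at the upload stage” points to fingerprint matching: once a video has been identified as violating, a fingerprint of it goes onto a blocklist, and every new upload is checked against that list before it is published. Here is a minimal sketch of that idea in Python, using an exact SHA-256 digest. The hash set and function names are illustrative only; real systems such as Facebook’s rely on perceptual (similarity-based) video and audio matching rather than exact digests, so that re-encoded or edited copies still match.

```python
import hashlib

# Illustrative blocklist of fingerprints of videos already identified as
# violating content. (Placeholder value, for illustration only.)
BLOCKED_FINGERPRINTS = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def fingerprint(video_bytes: bytes) -> str:
    """Exact fingerprint: a SHA-256 digest of the raw file bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block_upload(video_bytes: bytes) -> bool:
    """Reject the upload if it matches a known violating video."""
    return fingerprint(video_bytes) in BLOCKED_FINGERPRINTS
```

The weakness of exact hashing is also why so many variants slipped through: change a single byte, by re-encoding, cropping or adding a watermark, and the digest no longer matches, so the blocklist has to be backed by fuzzier matching and human review.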

Writing an algorithm to detect “hate content” isn’t that easy. Many platforms still get it wrong when it comes to blocking porn. There are many challenges, including defining what “hate content” actually is and distinguishing genuine hate content from reasoned analysis or discussion of it, as the sketch below illustrates.
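To make that concrete, here is a deliberately naive keyword filter (the word list and example posts are invented). It catches a hateful post, but it also flags a news report about that post, and it misses coded language entirely:

```python
# A deliberately naive keyword filter. The terms and examples are invented,
# purely to show why "screening out hate content" is harder than it sounds.
HATE_TERMS = {"exterminate", "subhuman"}

def is_hate_content(text: str) -> bool:
    """Flag any text containing a blocklisted word."""
    words = set(text.lower().split())
    return bool(words & HATE_TERMS)

print(is_hate_content("they are subhuman"))                       # True: caught
print(is_hate_content("the post called the victims subhuman"))    # True: false positive on reporting
print(is_hate_content("you know what has to be done about them")) # False: coded hate slips through
```

The false positive is exactly the “reasoned analysis or discussion” problem, and the false negative shows why context, not keywords, is what actually distinguishes hate speech.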

What might be deemed hate content in one context might be OK in another. As a simple example, in war, what one side might call an act of terrorism could be, in the other side’s eyes, a justifiable offensive tactic.

Another example – was Erdogan’s speech “hate content”? To his constituents it probably wasn’t. But to many Australians it might be.

Do social media broadcasters need to do a better job? Yes.

But I’m not sure anyone really knows how that might work.

What can we do?

The biggest challenge I’m seeing is that, as readers and observers of the news, we need to do a better job of discerning whether a news report is legitimate. Numerous conspiracy theories have arisen since last week’s attack.

We should also avoid the temptation to “rubberneck” when a terrible incident is occurring, and we should report content that we think might be illegal.

That it took half an hour for someone to report the livestream of a mass murder is unfathomable to me.

Most of us would call the police if we saw someone break into a car on our local street. Why wouldn’t people report the same (or in this case a much worse) crime when they watch it being broadcast online?

The terrorist act carried out in Christchurch showed the worst of human nature. But we, collectively, can do something by calling out hate speech when we see it, and reporting it so an independent arbiter can review it and make a decision on whether it should be removed.
