How To Identify And Report Hate Speech On Social Media

Social media is great for connecting people, sharing news and information, and giving your grandmother the opportunity to air her political opinions unfettered. Unfortunately, its anonymity and widespread reach make it an easy way for people to spread hate, and since Donald Trump’s election in 2016, social media users have noticed a worrisome increase in online hate speech.

On Twitter, Islamophobic, anti-Semitic, racist and misogynist trolls run rampant. On Facebook, users report violent memes, groups, and comments targeting people of colour, women, and members of the LGBTQ community.

“You’re dealing with the dark side of technology,” says Lisa Laplace, Senior Staff Attorney at the New York Civil Liberties Union. “[Bias acts] can happen in a lot of contexts — cyberbullying, cyber stalking — and when in that situation one often feels powerless.”

Platforms like Twitter and Facebook have tools in place that allow users to report harassment and hate speech, though the sites’ limitations mean too many reports go unaddressed. Still, it’s worth making the effort if you see threatening messages online. There are also outside organisations you can turn to if you stumble upon disturbing bias acts on social media. But first:

What is ‘hate speech’?

Hate speech is sort of an amorphous concept. As Susan Benesch, founder and director of the Dangerous Speech Project, put it in an email, “There is no consensus definition for hate speech. For that reason—and also because any definition of hate speech is highly context-dependent—there’s no consistent or reliable way of identifying it online.”

Still, hate speech “often looks like attacks on people for their perceived race, colour, religion, ethnicity, gender, and sexual orientation,” says Evan Feeney, the Media, Democracy and Economic Justice Campaign Director for civil rights advocacy group Colour of Change. “Anything that attacks someone for these immutable characteristics — calls for violence against them, efforts to intimidate or harass, or sharing memes that perpetuate harmful tropes.”

There are varying degrees of hate speech, and rhetoric can escalate from prejudiced language to threats of violence. As Zahra Billoo, the executive director of the San Francisco Bay Area branch of the Council on American-Islamic Relations (CAIR), put it: “‘I hate all Muslims’ is one kind of hate speech. ‘I hate all Muslims and want to kill them’ is an escalation.”

When there’s a specific threat against a specific person, Billoo says, the stakes amplify. “‘I hate Zahra and am going to her house and will wait for her outside and beat her up with an axe’ is beyond hate speech. It’s a threat,” she said. Note that while you might be able to get Twitter or Facebook to block a user who says they “want to kill” a group of people without naming a specific target, law enforcement typically won’t address that kind of general threat.

Other examples of hate speech, especially on social media, include ethnic slurs, “coded” language (some anti-Semitic groups, for instance, use an (((echo))) symbol around Jewish users’ or groups’ last names) and violent imagery.

“Our members report people sharing memes and photos from the Jim Crow era of black folks being lynched,” Feeney said. “They’re often used to intimidate people, even though there is no verbal statement. It’s still a clear threat to someone, to dig up historical terror.”

Is hate speech illegal?

It is not. Because of free speech protections, non-threatening hate speech is nearly impossible to police. “In the United States, hate speech is not illegal—in fact it is constitutionally-protected speech, under the First Amendment,” Benesch wrote.

Law enforcement can address specific threats of violence that go beyond hate speech — “I’m going to kill you,” “I have your address and am coming to your home,” etc. — but non-violent racist or otherwise biased statements, objectionable as they may be, fall outside the law’s purview.

Since platforms like Facebook and Twitter are run by private companies, however, they can make certain forms of hate speech a violation of their terms of service. Still, there are limitations. Twitter’s enforcement policy is infamously loose, though they do have one:

“Twitter is reflective of real conversations happening in the world and that sometimes includes perspectives that may be offensive, controversial, and/or bigoted to others. While we welcome everyone to express themselves on our service, we will not tolerate behaviour that harasses, threatens, or uses fear to silence the voices of others,” spokesperson Katie Rosborough said.

Facebook (and Instagram, which shares Facebook’s policies) defines hate speech “as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation,” according to the platform’s Community Standards.

But, as a Facebook spokesperson told us, it’s hard to figure out exactly what constitutes hate speech out of context. As per the Community Standards, “Our systems may recognise specific words that are commonly used as hate speech, but not the intentions of the people who use them.”

So Facebook relies on a combination of technology that detects hate speech, user reports, and teams that review those reports to determine whether something violates its terms of service.

In fact, Katherine Lo, a researcher at the University of California, Irvine who specialises in online moderation, says most platforms rely on user reports to moderate hate speech. “There’s not a single platform I can think of that can police all of their content,” she said. “A lot of these systems rely on reporting.”

Which brings us to:

How do you report hate speech directly to platforms?

It’s fairly easy to report hate speech on major social media platforms. It’s less easy to get the platforms to take action against purported hate speech. But we’ll get to that in a second.

On Twitter, you can report a tweet by clicking the small arrow in the upper right-hand corner of the tweet itself, then clicking “Report tweet”. You’ll then give Twitter some information about the tweet’s offensive content, such as whether it’s harmful or abusive. Twitter may also ask you to report additional tweets from the abusive user’s timeline.

On Facebook, you can report a post by clicking the three dots in the upper right-hand corner of the post itself, then clicking “Give feedback on this post”. From there, you can specifically report the post as hate speech.

If Twitter or Facebook reviews the reported post and decides the content does in fact violate the platform’s terms of use, the company may suspend or permanently ban the person who posted it. Sometimes, this happens quickly. Mordechai Lightstone, the Social Media editor of Chabad.org, says he often reports anti-Semitic tweets from trolls.

“I’ve found that for the hard-Nazi stuff, the Hitler pics and gas chambers, it’s pretty easy and fast,” he wrote in an email. “The challenge is more of the grey area — people accusing me of being a fake Jew trying to take control of the entirety of Brooklyn — where it often begins to get complicated.”

The process can also be frustratingly slow. “I just received a report now that Twitter had removed some hateful tweets that I reported over a month ago,” says Lightstone.

In the meantime, you can block or mute the user you find offensive. Sometimes, this is the only thing you can do. “If, when you first report, you get a message [from the platform] that says, ‘We didn’t remove them,’ muting or blocking the user is basically your only course of action moving forward,” Lo said.

Note that when it comes to reporting, there’s often power in numbers. “If the platform doesn’t take action, you can make multiple reports,” Billoo said. “If a single user is reported on multiple times, or if multiple users report a single user, public awareness can move a platform to take action if one report wouldn’t. These are companies that respond to customers.”

For instance, last year Twitter finally banned Alex Jones and his InfoWars account, after massive outcry over the right-wing conspiracy theories hawked by his accounts, including posts that were deemed violent and harassing. Both Facebook and YouTube had previously restricted Jones’s accounts, but Twitter moved late, reportedly prompted by “numerous reports from users regarding a video Jones posted of himself harassing a reporter outside of hearings with social-media executives on Capitol Hill,” according to Bloomberg.

Should you get law enforcement involved?

When it comes to hate speech online, it’s tricky to know when to seek outside help. Sometimes, it’s enough to block a user or get them booted from a platform, but if you feel genuinely threatened, you may need to weigh your options.

“I’m always hesitant to tell targeted communities to call law enforcement. But a threat is the kind of thing where someone might make the decision to involve the local police department,” Billoo said. “While most online trolling is indeed online, sometimes it spills over into real life.”

Notably, Pittsburgh synagogue shooter Robert Bowers had posted a number of threatening racist and anti-Semitic comments on Gab — a now-shuttered alt-right social media platform — before allegedly murdering 11 people at the Tree of Life Synagogue last year.

And Cesar Sayoc, accused of sending pipe bombs in the mail to vocal Trump critics last fall, repeatedly (and sometimes violently) threatened Democrats on social media.

Billoo notes that threats need to be specific if you plan to involve law enforcement. “The thing that gets tricky with threats of violence is general threats of violence versus specific ones,” she said. “‘I want to kill all Muslims’ isn’t actionable. But where there is a specific targeted threat, I know this isn’t just an egghead on Twitter. Those are the things I would definitely treat as an escalation and report.”

If, for instance, someone specifically threatens to kill or hurt you or people you love, or publicly posts your address, it’s worth filing a police report. Do note that it still may be difficult to take action, even with law enforcement involved. “If someone is being directly targeted with a threat, you should absolutely contact law enforcement. But there’s very little legal recourse for harassing content online,” Feeney said.

If you go to the police, they will likely have you fill out a report, then assign detectives to determine whether the harassing post constitutes a crime. Police departments vary, but some have units that specifically target bias acts — the NYPD, for example, has a Hate Crime Task Force, which investigates alleged hate crimes.

Again, police are only likely to pursue the case if there is an actual, specific threat of violence; if you are merely dealing with harassment, there’s not much they can do.

Are there places to report hate speech outside of the platforms themselves?

If Facebook or Twitter won’t address your report, or even if they will, there are outside organisations you can contact as well. Civil rights advocacy groups like the Southern Poverty Law Center, in addition to the aforementioned CAIR and Colour of Change, can provide you with resources and support.

Local groups like the NYCLU have set up tools and portals people can use to report bias acts. The NYCLU’s Equality Watch website, for instance, encourages New Yorkers to report discrimination and harassment they’ve experienced or witnessed, including online. “Once Trump was elected, even before he was sworn into office, we were getting a lot of calls reporting bias-based bullying,” Laplace said.

“If you feel that internet services are not responding to slurs, if you’re being blocked on a law enforcement Facebook page or public officials’ Facebook page, and you think it has to do with some kind of bias against you… going to the Equality Watch website will get your information to the appropriate organisation.”

What happens when you file a report with these organisations varies — a number of them (including the NYCLU) will put you in touch with an attorney, should you want to pursue civil or criminal action.

And if you feel your story deserves media attention, ProPublica’s Documenting Hate project collects information on bias-based incidents that journalists can mine for content. “We decided to start the project after the 2016 election so we could have a better sense of where hate is happening, and who is responding to it, and how they’re responding to it,” Rachel Glickhouse, the partner manager for Documenting Hate, said. “Our tip collection includes crimes, like vandalism or assault, and also open-ended incidents, something where a person might not know if it’s a crime or not, like hate speech or harassment.”

Note that Documenting Hate can’t take direct action on your behalf. “With our project, if someone is seeking help in how to deal with this, a journalist is potentially going to respond to tell your story, but it’s not like going to a place where someone will march into Facebook’s office to demand change to their reporting policies,” Glickhouse said.

Still, more media attention on online abuse may help push platforms to clarify their policies or move faster to address incidents. It’s often difficult to get platforms to move quickly when you are acting alone.

“Personally, I’ve seen on Twitter that if I report people who are using very clear anti-Semitic language or memes, it often takes weeks before I see a response,” Glickhouse said. “Purely anecdotally, when I have tried to report that kind of stuff on Twitter, sometimes it works, sometimes it doesn’t.”

Remember to report responsibly

It’s true that social media platforms seem to have murky policies when it comes to policing harassment and hate speech. But excess or irresponsible reporting can make it more difficult for the platforms to separate real threats from spurious ones. “It’s a really big trust and safety issue with hate speech,” Lo said. “Reports are not a good indicator, since a lot of people abuse report systems.”

For instance, last year freelance journalist Danielle Corcione found themselves kicked off Twitter over a joke they tweeted about trans-exclusionary radical feminists (TERFs). Corcione tweeted “My pronouns are yeehaw,” then responded to the tweet with, “If any TERFs like or retweet this, I’m shoving my foot up your arse.” Corcione later got an email from Twitter notifying them that their account had been suspended. They suspect the tweet ended up in a TERF Reddit forum, making it easy for people to mass-report it.

“I didn’t get much backlash for [the tweet] at all. It didn’t even get much engagement, it got like 70 likes or something,” Corcione said. “It was a non-direct non-threat, and a reference to That ‘70s Show.”

In fact, as Feeney pointed out, it’s not uncommon for the wrong people to get penalised. “So many people have shared [with Colour of Change] screenshots and stories of the full range of hate speech — from direct personal threats and doxxings to more generalised hateful statements about people’s race and religion and sexual orientation — where Facebook did not remove that content, even though it clearly violated Facebook’s standards,” Feeney said. “In fact, it’s often folks that are more marginalised, black people and LGBTQ people, who end up getting their content removed.”

So, make sure what you’re reporting really looks like hate speech (“Shut up, snowflake” and “You’re a libtard arsehole” don’t count, though it’s fair to block or mute accounts that tweet that stuff) and, to protect yourself, think twice before posting something that might be taken the wrong way. Even self-referential or ironic language can lead to trouble.

As Billoo cautioned, “People who are using social media should be thoughtful about what they post.” After all, she said, “Once it leaves your lips or fingertips, it’s forever.”

