Social media is great for connecting people, sharing news and information, and giving your grandmother the opportunity to air her political opinions unfettered. Unfortunately, its anonymity and widespread reach make it an easy way for people to spread hate, and since Donald Trump’s election in 2016, social media users have noticed a worrisome increase in online hate speech.
On Twitter, Islamophobic, anti-Semitic, racist and misogynist trolls run rampant. On Facebook, users report violent memes, groups, and comments targeting people of colour, women, and members of the LGBTQ community.
“You’re dealing with the dark side of technology,” says Lisa Laplace, Senior Staff Attorney at the New York Civil Liberties Union. “[Bias acts] can happen in a lot of contexts — cyberbullying, cyber stalking — and when in that situation one often feels powerless.”
Platforms like Twitter and Facebook have tools in place that allow users to report harassment and hate speech, though the sites’ limitations mean too many reports tend to go unaddressed. Still, it’s worth making the effort if you see threatening messages online. There are also outside sources you can turn to if you stumble upon disturbing bias acts on social media. But first:
What is ‘hate speech’?
Hate speech is sort of an amorphous concept. As Susan Benesch, founder and director of the Dangerous Speech Project, put it in an email, “There is no consensus definition for hate speech. For that reason—and also because any definition of hate speech is highly context-dependent—there’s no consistent or reliable way of identifying it online.”
Still, hate speech “often looks like attacks on people for their perceived race, colour, religion, ethnicity, gender, and sexual orientation,” says Evan Feeney, the Media, Democracy and Economic Justice Campaign Director for civil rights advocacy group Colour of Change. “Anything that attacks someone for these immutable characteristics — calls for violence against them, efforts to intimidate or harass, or sharing names that perpetuate harmful tropes.”
There are varying degrees of hate speech, and rhetoric can escalate from prejudiced language to threats of violence. As Zahra Billoo, the executive director of the San Francisco Bay Area branch of the Council on American-Islamic Relations (CAIR), put it: “‘I hate all Muslims’ is one kind of hate speech. ‘I hate all Muslims and want to kill them’ is an escalation.”
When there’s a specific threat against a specific person, Billoo says, the stakes rise. “‘I hate Zahra and am going to her house and will wait for her outside and beat her up with an axe’ is beyond hate speech. It’s a threat,” she said. Note that though you might be able to get Twitter or Facebook to block a user who says they “want to kill” a group of people without naming any specifics, law enforcement typically will not address that kind of general threat.
Other examples of hate speech, especially on social media, include ethnic slurs, “coded” language (some anti-Semitic groups, for instance, use an (((echo))) symbol around Jewish users’ or groups’ last names) and violent imagery.
“Our members report people sharing memes and photos from the Jim Crow era of black folks being lynched,” Feeney said. “They’re often used to intimidate people, even though there is no verbal statement. It’s still a clear threat to someone, to dig up historical terror.”
Is hate speech illegal?
It is not. Because of free speech protections, non-threatening hate speech is nearly impossible to police. “In the United States, hate speech is not illegal—in fact it is constitutionally-protected speech, under the First Amendment,” Benesch wrote.
Law enforcement can address specific threats of violence that go beyond hate speech — “I’m going to kill you,” “I have your address and am coming to your home,” etc. — but non-violent racist or otherwise biased statements, objectionable as they may be, fall outside the law’s purview.
Since platforms like Facebook and Twitter are run by private companies, however, they can make certain forms of hate speech a violation of their terms of service. Still, there are limitations. Twitter’s enforcement policy is infamously loose, though they do have one:
“Twitter is reflective of real conversations happening in the world and that sometimes includes perspectives that may be offensive, controversial, and/or bigoted to others. While we welcome everyone to express themselves on our service, we will not tolerate behaviour that harasses, threatens, or uses fear to silence the voices of others,” spokesperson Katie Rosborough said.
Facebook (and Instagram, which shares Facebook’s policies) defines hate speech “as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation,” according to the platform’s Community Standards.
But, as a Facebook spokesperson told us, it’s hard to figure out exactly what constitutes hate speech out of context. As per the Community Standards, “Our systems may recognise specific words that are commonly used as hate speech, but not the intentions of the people who use them.”
So Facebook relies on a combination of technologies that detect hate speech, user reports, and teams that review those reports to determine whether something violates its terms of service.
In fact, Katherine Lo, a researcher at University of California, Irvine who specialises in online moderation, says most platforms rely on user reports to moderate hate speech. “There’s not a single platform I can think of that can police all of their content,” she said. “A lot of these systems rely on reporting.”
Which brings us to:
How do you report hate speech directly to platforms?
It’s fairly easy to report hate speech on major social media platforms. It’s less easy to get the platforms to take action against purported hate speech. But we’ll get to that in a second.
On Twitter, you can report tweets by clicking on the little arrow on the upper right hand side of the tweet itself. Click “Report tweet”, then provide Twitter with some information about the tweet’s offensive content, like whether it’s harmful or abusive. Twitter may also ask you to report additional tweets on the abusive user’s timeline.
On Facebook, you can report posts by clicking on the three dots in the upper right hand side of the post itself, then clicking “Give feedback on this post.” You can specifically report the post as hate speech.
“I’ve found that for the hard-Nazi stuff, the Hitler pics and gas chambers, it’s pretty easy and fast,” Lightstone wrote in an email. “The challenge is more of the grey area — people accusing me of being a fake Jew trying to take control of the entirety [of] Brooklyn — where it often begins to get complicated.”
The process can also be frustratingly slow. “I just received a report now that Twitter had removed some hateful tweets that I reported over a month ago,” says Lightstone.
In the meantime, you can block or mute the user you find offensive. Sometimes, this is the only thing you can do. “If, when you first report, you get a message [from the platform] that says, ‘We didn’t remove them,’ muting or blocking the user is basically your only course of action moving forward,” Lo said.
Note that when it comes to reporting, there’s often power in numbers. “If the platform doesn’t take action, you can make multiple reports,” Billoo said. “If a single user is reported on multiple times, or if multiple users report a single user, public awareness can move a platform to take action if one report wouldn’t. These are companies that respond to customers.”
For instance, last year Twitter finally banned Alex Jones and his InfoWars website, after massive outcry over the right-wing conspiracy theories hawked by his accounts, including posts that were deemed violent and harassing. Both Facebook and YouTube had previously restricted Jones’s accounts, but Twitter moved late, reportedly prompted by “numerous reports from users regarding a video Jones posted of himself harassing a reporter outside of hearings with social-media executives on Capitol Hill,” according to Bloomberg.
Should you get law enforcement involved?
When it comes to hate speech online, it’s tricky to know when to seek outside help. Sometimes, it’s enough to block a user or get them booted from a platform, but if you feel genuinely threatened, you may need to weigh your options.
“I’m always hesitant to tell targeted communities to call law enforcement. But a threat is the kind of thing where someone might make the decision to involve the local police department,” Billoo said. “While most online trolling is indeed online, sometimes it spills over into real life.”
Notably, Pittsburgh synagogue shooter Robert Bowers had posted a number of threatening racist and anti-Semitic comments on Gab—a now-shuttered alt-right social media platform—before allegedly murdering 11 people at the Tree of Life Synagogue last summer.
And Cesar Sayoc, accused of sending pipe bombs in the mail to vocal Trump critics last fall, repeatedly (and sometimes violently) threatened Democrats on social media.
Billoo notes that threats need to be specific if you plan to involve law enforcement. “The thing that gets tricky with threats of violence is general threats of violence versus specific ones,” she said. “‘I want to kill all Muslims’ isn’t actionable. But where there is a specific targeted threat, I know this isn’t just an egghead on Twitter. Those are the things I would definitely treat as an escalation and report.”
If, for instance, someone specifically threatens to kill or hurt you or people you love, or publicly posts your address, it’s worth filing a police report. Do note that it still may be difficult to take action, even with law enforcement involved. “If someone is being directly targeted with [a] threat, you should absolutely contact law enforcement. But there’s very little legal recourse for harassing content online,” Feeney said.
If you report something to the police, they will likely have you fill out a report, then assign detectives to determine if the harassing post indeed constitutes a crime. Police departments vary, but some have units that specifically target bias acts — the NYPD, for example, has something called the Hate Crime Task Force, which investigates alleged hate crimes.
Again, police are only likely to pursue the case if there is an actual, specific threat of violence; if you are merely dealing with harassment, there’s not much they can do.