Webster University media hackathon - tackling hate speech

3179/1-D06 (CERN)



Alison Langley (Webster University)

An 8-week intensive course on media-related concept prototype development. This is the kickoff session organized at IdeaSquare.

Challenge: Tackle hate speech on the web to make the Internet a safer space for all without dampening free expression. 

Facebook, Twitter, YouTube, 4chan and others have often become megaphones for hate and bullying and incubators for extremism. At best, vitriol and outsized emotion hinder reasoned and rational discourse; at worst, they can radicalize readers to violence. Social media gives extremist groups a platform to amplify their size and potency: it is easy to use and well suited to anonymous communication. ISIL, neo-Nazi and far-left posters use social media to link videos, photos, messages and press releases to uncontrolled and unsupervised sites such as Justpaste.it or archive.org.
The EU, NATO, the Organization for Security and Co-operation in Europe and the UN, among others, have documented how extremist groups have used social media to attract members. It appears to be working: statistics from Europe, the US and many parts of Asia and Africa show that hate crimes, spurred on by social media, are on the rise. There was a spike in anti-immigrant crimes right after the Brexit vote, just as there was a jump in the US after Trump won the presidential election. Germany, the Netherlands, Sweden, Austria and other countries are also reeling from arson attacks, grave desecrations and beatings of minorities, as well as bomb attacks, by a variety of extremist groups.

Hate speech also has a chilling effect on free speech, especially for women, minorities and LGBT communities, who bear the brunt of verbal violence, according to multiple studies, including an important one by the EU's Fundamental Rights Agency. The spread of hate speech harms the groups it targets and hurts those who speak out for freedom, tolerance and nondiscrimination in an open society. If you think you will be cut down – either figuratively or literally – you are less likely to speak up.

Most perplexing is that social media emboldens many people to write things they would not likely express outside the virtual world. One victim of hate speech points out that no one has ever told her to her face that they want to rape her. Social media gives angry people a new way to reach victims in their own homes. Campaigns of prejudice often have online dimensions that are more effective than any offline strategy, simply because they can reach far more people.

Trolls focus on the identity of the victim – race, gender, sexuality and so on – to make the abuse as hurtful and personalized as possible. The attacks tend to focus more on who victims are than on what they have said. The onslaught can also target entire groups, ridiculed for the deeds of a few.

As hubs for content, social media companies have become the battleground in the struggle to balance the right to free speech – even the caustic and acrid kind – with the right to live in dignity. Both rights are written into the UN's Universal Declaration of Human Rights and the EU Charter of Fundamental Rights. What happens when they conflict? Without free expression, democracy cannot exist. Without dignity, what is the point of democracy?

In the US, with its strong free speech tradition, the public appears willing to tolerate postings glorifying Nazism, racism and hatred of groups. It is seen as the cost of freedom: citizens must accept the bad in order to protect the good, the logic goes. One person's hate speech might be another's poetry. Attempts to block free speech, as seen recently, become a rallying point for extremists: they see themselves as victims.

In many European countries, however, memories of Nazism are too recent for hate speech to be ignored. The deadly effect of whipping up sentiment against a class of people remains a raw and bitter part of their history, and they fear another rise of fascism if hate speech goes unchecked. In 2008, the EU adopted a framework decision obligating member states to criminalize Holocaust denial and public incitement to hatred.

In 2017, the German government passed the "Netzwerkdurchsetzungsgesetz," which requires social media companies to delete comments that are hateful, insulting or incite violence within 24 hours of notification. Offenders face fines of up to 50 million euros under the law, which is not without its critics.

The EU differentiates between speech that is merely offensive and speech that incites hatred and violence. Shocking and disturbing speech, the European Court of Human Rights has ruled, must be legal: "such are the demands of … pluralism, tolerance and broadmindedness without which there is no 'democratic society'."

In a later ruling, the court stated that equal dignity for all human beings is a foundation of democracy; it may therefore be necessary in certain societies to sanction or prevent "all forms of expression which spread, incite, promote or justify hatred based on intolerance," provided that the restrictions, penalties and punishments are proportionate.

That last part – proportionality – is important. It is meant to prevent rulers from using hate speech laws as a pretext to imprison or otherwise silence critics. A reasonable remedy for illegal hate speech, according to European courts, typically means deleting or removing the post.

So what should social media companies do when the philosophy of some democratic countries – all speech should be free – conflicts with that of others – all speech except hatred should be free? How should hate be defined? Who should define it? Should it be banned or not? And who should decide these questions in a world where speech crosses borders but the virtual world has no unified law?
