Monitoring, Regulating and Limiting Hate Speech
Dr Andre Oboler
CEO, Online Hate Prevention Institute
@onlinehate | facebook.com/onlinehate © Andre Oboler, 2015
2 December 2015, United Nations, NY
Point 1: Social Media & Search are special
• Three separate hate speech problems:
– Hate speech on the internet
– Hate speech in social media
– Hate speech found via search engines
• One can allow freedom of expression on the Internet, while still denying hate speech access to the tools to go viral or to mislead.
Mainstream media
How do the news sites rank?
• BBC Online: #58 at 1.795%
• CNN: #70 at 1.478%
• Huffington Post: #93 at 1.284%
• The New York Times: #118 at 0.191%
Compare this to #2 Facebook at 42.981%, or #5 Wikipedia at 12.633%
Hate speech, technology, and regulation
Prof. Jeremy Waldron (New York University School of Law): Hate speech:
– Undermines the ‘public good of inclusiveness’ in society
– Becomes embedded in ‘the permanent visible fabric of society’, and the victim’s ‘assurance that there will be no need to face hostility, violence, discrimination, or exclusion by others’ in going about their daily life vanishes
Prof. Lawrence Lessig (Harvard Law School):
– “unless we understand how cyberspace can embed, or displace, values from
our constitutional tradition, we will lose control over those values. The law in cyberspace—code—will displace them”
Let’s combine these ideas...
Point 2: The Fabric of Online Space
The Internet is a space whose fabric is speech. Hate speech embeds itself in the very fabric of this space.
Some of these spaces are vital public spaces, others are more private. In a world where space is made of speech, when the public spaces are built of hate and become harmful to some, it denies them access to what should be a right for all.
The environment itself can become exclusionary. In this environment, the distinction between hate speech and hate acts is illusory.
Point 3: A technological accelerant for hate
The Internet, and particularly social media, is a technological accelerant for memes, including messages of hate and extremism.
– An accelerant is a term usually used in firefighting: any substance that can accelerate the development of a fire. It's a fitting term.
– A meme is a broader concept than the familiar internet meme consisting of an image and text. A meme is an idea, a unit of culture, which can spread like a virus and morph as it does. It is a concept developed by Richard Dawkins in his book The Selfish Gene back in 1976. Racism, xenophobia, and antisemitism in particular are all memes.
The idea of a technological accelerant for memes can be amusing if the meme is Grumpy Cat, but downright scary if the meme is the sort of hate that has inspired genocides.
Just as the car accelerated movement, and new laws (i.e. road rules) had to be created in response, so too are some laws needed to halt, or at least slow down, the viral spread of hate online.
So if we need to monitor and remove hate, how do we do it?
Response 1: Report on examples compiled by experts
Reports available online by theme: http://ohpi.org.au/
Response 2: Briefings on specific items of hate in social media
Briefings available online by theme: http://ohpi.org.au/
Expert work: Breakdown of 191 Examples
50 Facebook pages | 249 images | 191 excluding reposts
Security Threat / Threat to Public Safety: (42)
Cultural Threat (29)
Economic Threat (11)
Dehumanising or demonising Muslims (37)
Incitement & general threats (24)
Targeting Refugees (12)
Other Forms of Hate (36)
Access via: http://ohpi.org.au/anti-muslim-hate/
This doesn’t scale...
• YouTube – 2,056,320 videos are uploaded each day
• Facebook – 350,000,000 images are uploaded each day
• Even if only a small percentage of this is hate speech, that is still a huge volume of content every day, and it is being seen by a huge audience (a rough calculation is sketched below).
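To make the scale concrete, here is a quick back-of-envelope calculation. The upload figures are those quoted above; the 0.1% hate-speech rate is purely an illustrative assumption, not a measured value.

```python
# Back-of-envelope calculation of potential daily hate-speech volume.
# Upload figures are from the slide above; the 0.1% rate is an
# illustrative assumption, not a measurement.
DAILY_UPLOADS = {
    "YouTube videos": 2_056_320,
    "Facebook images": 350_000_000,
}
ASSUMED_HATE_RATE = 0.001  # 0.1%, hypothetical

for platform, uploads in DAILY_UPLOADS.items():
    flagged = uploads * ASSUMED_HATE_RATE
    print(f"{platform}: {uploads:,} uploads/day -> ~{flagged:,.0f} items/day at 0.1%")
```

Even at one item in a thousand, that would be roughly 2,000 videos and 350,000 images a day needing review, far beyond what a small team of experts can handle manually.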
The FightAgainstHate.com Approach
• The problem of monitoring and analysis at scale was first raised in 2009 in the online antisemitism working group of the Global Forum to Combat Antisemitism
• In 2011 a software proposal was discussed. The key aspects of this approach were:
– Crowdsourcing reports from the public
– Artificial intelligence (AI) for quality control of the reports
– AI to be calibrated to experts' opinions (see the sketch after this list)
– A platform to provide sharing of data between experts to enable further analysis
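The slides do not describe how FightAgainstHate.com implements this calibration, so the following is only a minimal sketch of the general idea, using invented reporter names, items, and a naive agreement score: public reports on items that experts have also labelled are used to estimate each reporter's reliability, which then weights their future reports.

```python
# Minimal sketch of "calibrating crowd reports to expert opinions".
# NOT the FightAgainstHate.com implementation; data and scoring are invented.
from collections import defaultdict

# (reporter_id, item_id, is_hate) -- crowd-sourced reports
crowd_reports = [
    ("alice", "item1", True), ("alice", "item2", True),
    ("bob",   "item1", True), ("bob",   "item3", False),
    ("carol", "item2", False),
]

# Expert ground truth for a calibration subset of items
expert_labels = {"item1": True, "item2": True}

# 1. Score each reporter by agreement with the experts
agreement = defaultdict(lambda: [0, 0])  # reporter -> [matches, total]
for reporter, item, is_hate in crowd_reports:
    if item in expert_labels:
        agreement[reporter][1] += 1
        agreement[reporter][0] += int(is_hate == expert_labels[item])

weights = {r: m / t for r, (m, t) in agreement.items() if t}

# 2. Weight future reports by reporter reliability
def weighted_vote(item_id):
    votes = [(weights.get(r, 0.5), is_hate)  # 0.5 = unknown reporter
             for r, i, is_hate in crowd_reports if i == item_id]
    score = sum(w if v else -w for w, v in votes)
    return score > 0

print(weights)                 # {'alice': 1.0, 'bob': 1.0, 'carol': 0.0}
print(weighted_vote("item3"))  # False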
[Diagram: Responding, Reporting, Transparency and Accountability, showing the respective roles of the Public and of Experts]
So we have Response 3: Monitoring & Analysis, Transparency & Accountability
At the Global Forum to Combat Antisemitism in May we released a report based on data from the FightAgainstHate.com reporting tool. Here are some of the results:
Final report to be released Jan 27, 2016
[Pie chart: Antisemitism by social media platform. Segments: 23%, 41%, 36%; YouTube is one of the platforms shown.]
[Pie chart: Antisemitism by classification sub-types: promoting violence against Jews; Holocaust denial; traditional antisemitism (not Israel-related); new antisemitism (Israel-related). Segments: 5%, 12%, 49%, 34%.]
Sample size: 2024 items
Drilling deeper, the results are even more startling. We see that different kinds of antisemitism are more prevalent on different platforms. Prevalence is a combination of what users upload and what action the platform is taking to remove such content.
[Bar charts: counts of reported items per platform for each sub-type; YouTube is one of the platforms shown.
Promoting violence against Jews: 16, 27, 72
Holocaust denial: 42, 105, 44
New antisemitism: 214, 253, 120
Traditional antisemitism: 137, 433, 167]
Final report to be released Jan 27, 2016
Forthcoming data
• Removal rates range from 2% (new antisemitism on YouTube) to 50% (promoting violence on Facebook); a minimal example of how such a rate is calculated is sketched below
• The final report will provide a full breakdown by platform and hate type
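For context on how such figures are derived: a removal rate is simply the share of reported items that are no longer online when re-checked. The counts in the sketch below are invented to reproduce the 2% and 50% examples; they are not figures from the forthcoming report.

```python
# Illustrative removal-rate calculation. The counts are invented for the
# example and are NOT figures from the forthcoming report.
counts = {
    ("YouTube",  "new antisemitism"):   {"reported": 200, "removed": 4},
    ("Facebook", "promoting violence"): {"reported": 100, "removed": 50},
}

for (platform, hate_type), c in counts.items():
    rate = 100 * c["removed"] / c["reported"]
    print(f"{platform} / {hate_type}: {rate:.0f}% removed")
# YouTube / new antisemitism: 2% removed
# Facebook / promoting violence: 50% removed
```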
More on the SAMIH campaign at: http://fightagainsthate.com/samih/
Spotlight on Anti-Muslim Hate Report
Based on a sample of 1111 items of anti-Muslim hate speech.
Anti-Muslim hate classification sub-types:
• Muslims as a cultural threat: 33%
• Demonising Muslims: 17%
• Muslims as a security risk: 19%
• Inciting anti-Muslim violence: 9%
• Xenophobia / anti-refugee: 7%
• Muslims as dishonest: 3%
• Undermining Muslim allies: 5%
• Socially excluding Muslims: 3%
• Other anti-Muslim hate: 4%
Draft report to be released Dec 10, 2015. Full Report Feb 2016.
Take down rates so far (Spotlight on Anti-Muslim Hate Report)
[Pie charts, Facebook items still online vs offline:
• Demonising Muslims: 69% online / 31% offline
• Xenophobia / anti-refugee: 94% online / 6% offline
• Muslims as a security risk: 80% online / 20% offline]
Draft report to be released Dec 10, 2015. Full Report Feb 2016.
These items have been reported to the platforms through the usual reporting mechanisms. We will be offering senior management the list we are using, and allowing them time to review the items, before publishing the final report.
The Big Picture
Contact details
• Websites: oboler.com / ohpi.org.au / fightagainsthate.com
• Twitter: @oboler / @onlinehate
• Facebook: facebook.com/onlinehate
• E-mail via: http://ohpi.org.au/contact-us/
Help promoting FightAgainstHate.com will enable us to collect and share better data. NGOs and government agencies can endorse it (39 organisations have endorsed it so far).