December 12, 2016
The Commission has published its first evaluation of how Facebook, Google (YouTube), Twitter and Microsoft have applied the Code of Conduct on countering illegal online hate speech. The Code, agreed with the Commission on 31 May 2016, sets out how these companies should respond to online hate speech.
Initial results show that 28% of all notifications of alleged illegal online hate speech led to the removal of the flagged content. However, only 40% of all notifications are currently reviewed within 24 hours, while the aim of the Code of Conduct is to review the majority within that time.
Commissioner Věra Jourová said: “It is our duty to protect people in Europe from incitement to hatred and violence online. This is the common goal of the code of conduct. The last weeks and months have shown that social media companies need to live up to their important role and take up their share of responsibility when it comes to phenomena like online radicalisation, illegal hate speech or fake news. While IT Companies are moving in the right direction, the first results show that the IT companies will need to do more to make it a success”.
As part of the Code, the IT companies pledged to review valid removal notifications in less than 24 hours, assessing them against their community guidelines and, where necessary, national laws transposing the Framework Decision on combating racism and xenophobia, and to remove or disable access to the content where required. Twelve NGOs based in nine EU countries analysed the responses to notifications over a period of six weeks.
A second monitoring exercise will take place in 2017 to assess progress and decide on next steps. The Commission's full press release and its assessment are available on the Commission's website.