The European Union is once again asking Facebook, Google,
Twitter, and other web companies to crack down on hate speech and
speech inciting violence and terrorism — but this time, it’s taking
things a step further. The European Commission has issued guidelines
for web companies to follow, and it’s warning the companies that, if
they don’t comply, the Commission may pass legislation. And that
legislation, of course, could lead to some huge fines.
There are a handful of guidelines so far. The Commission
recommends that web companies appoint a dedicated point of contact whom
law enforcement can reach when illegal content is discovered. It wants
web companies to allow third-party “trusted flaggers” with “specific
expertise in identifying illegal content” to come in and monitor
potentially illegal posts. And it asks web companies to invest in
technologies that can automatically detect potentially illegal posts and
speech.
The Commission would also like companies to do more to
prevent illegal content from being reposted after it’s been taken down.
And the Commission says time frames may need to be established for how
quickly illegal content is taken down once it’s discovered. Web
companies should issue public guidelines, the Commission says, so that
users know how takedown requests are treated and what kind of content
gets removed.
It sounds like a lot, but it mostly boils down to this:
web companies should remove illegal content faster and invest in tools
and employees to make it happen.
Web companies still take over a week to remove illegal
content in more than a quarter of cases, says Mariya Gabriel,
commissioner for the digital economy and society. “The situation is not
sustainable,” Gabriel says in a statement. “Today we provide a clear
signal to platforms to act more responsibly.”
And there’s a good chance web companies will take steps toward following what the European Commission suggests.
For one, the
European Union is known for levying enormous fines on tech companies — like the €2.4 billion fine on Google
— and those companies would certainly like to head off any new
legislation they could one day be found in violation of. But these
companies have also already been working with the EU to reduce hate
speech.
And several European countries have already passed or considered passing their own laws on hate speech that web companies have to comply with.
A year ago, Facebook, Google, Twitter, and Microsoft all agreed to hate speech rules,
which required the companies to review “the majority of” hateful
content within 24 hours of becoming aware of it.
As a result of that partnership, the companies later teamed up
on a shared database of images and videos identified as promoting
terrorism, helping each platform quickly pull down content that
another company had already flagged as illegal.
In today’s announcement, Vera Jourová, commissioner for
justice and consumers, refers back to that agreement, saying it’s proof
that asking web companies to more strictly regulate hate speech on their
own can work.
“The code of conduct I agreed with Facebook, Twitter,
Google, and Microsoft shows that a self-regulatory approach can serve as
a good example and can lead to results,” Jourová said. But she also
warned that “if the tech companies don’t deliver, we will do it.”
The Commission says it plans to “carefully” monitor web
companies’ progress in implementing these recommendations and assess
whether further action needs to be taken. That assessment is due
by next May.
“Follow-up initiatives will depend on the online
platforms’ actions to proactively implement the guidelines,” the
Commission writes. Further actions, the announcement says, include
“possible legislative measures to complement the existing regulatory
framework.”