Once social media companies and websites began acquiescing to EU Commission demands for content takedown, the end result was obvious. Whatever was already in place would continually be ratcheted up. And every time companies failed to do the impossible, the EU Commission would appear on their virtual doorsteps, demanding they be faster and more proactive.
Facebook, Twitter, Google, and Microsoft all agreed to remove hate speech and other targeted content within 24 hours, following a long bitching session from EU regulators about how long it took these companies to comply with takedown orders. As Tim Geigner pointed out late last year, the only thing tech companies gained from this acquiescence was a reason to engage in proactive censorship.
Because if a week or so, often less, isn’t enough, what will be? You can bet that if these sites got it down to 3 days, the EU would demand it be done in 2. If 2, then 1. If 1? Well, then perhaps internet companies should become proficient in censoring speech the EU doesn’t like before it ever appears.
Even proactive censorship isn’t enough for the EU Commission. It has released a new set of recommendations [PDF] for social media companies that sharply shortens the mandated response window. The Commission believes so-called “terrorist” content should be so easy to spot that companies will have no problem staying in compliance.
Given that terrorist content is typically most harmful in the first hour of its appearance online and given the specific expertise and responsibilities of competent authorities and Europol, referrals should be assessed and, where appropriate, acted upon within one hour, as a general rule.
Yes, the EU Commission wants terrorist content vanished in under an hour and proclaims, without offering any evidence, that the expertise of government agencies will make compliance un-impossible. The Commission also says it should be easy to keep removed content from popping up somewhere else, because it’s compiled a “Database of Hashes.”
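The report doesn’t explain how this hash matching is supposed to work, but the general mechanism — fingerprint removed content, block anything that matches on re-upload — can be sketched in a few lines. This is a minimal illustration, not the Commission’s actual system (the function names are hypothetical, and real deployments use perceptual hashes rather than exact digests, precisely because exact hashing is trivially defeated):

```python
# Illustrative sketch of hash-based re-upload blocking, using exact SHA-256
# digests. NOTE: production systems use perceptual hashes, which tolerate
# re-encoding and cropping; an exact digest matches only byte-identical files.
import hashlib

known_hashes = set()  # digests of content already removed

def register_removed(content: bytes) -> None:
    """Record the digest of taken-down content so re-uploads can be matched."""
    known_hashes.add(hashlib.sha256(content).hexdigest())

def is_known_removed(upload: bytes) -> bool:
    """Check an incoming upload against the database of removed-content hashes."""
    return hashlib.sha256(upload).hexdigest() in known_hashes

register_removed(b"previously removed video bytes")
print(is_known_removed(b"previously removed video bytes"))  # True: exact match
print(is_known_removed(b"same video, re-encoded"))          # False: any byte change defeats exact hashing
```

The second check is the whole problem in miniature: change a single byte and the exact-match database misses it, which is why “it should be easy” overstates what hash databases can actually do.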
Another bad idea that cropped up a few years ago makes a return in this Commission report. The EU wants to create intermediary liability for platforms under the concept of “duty of care.” It would hold platforms directly responsible for not preventing the dissemination of harmful content. This would subject social media platforms to a higher standard than that imposed on European law enforcement agencies involved in policing social media content.
In order to benefit from that liability exemption, hosting service providers are to act expeditiously to remove or disable access to illegal information that they store upon obtaining actual knowledge thereof and, as regards claims for damages, awareness of facts or circumstances from which the illegal activity or information is apparent. They can obtain such knowledge and awareness, inter alia, through notices submitted to them. As such, Directive 2000/31/EC constitutes the basis for the development of procedures for removing and disabling access to illegal information. That Directive also allows for the possibility for Member States of requiring the service providers concerned to apply a duty of care in respect of illegal content which they might store.
This would apply to any illegal content, from hate speech to pirated content to child porn. All of it is treated equally under certain portions of the Commission’s rules, even when there are clearly different levels of severity in the punishments applied to violators.
In accordance with the horizontal approach underlying the liability exemption laid down in Article 14 of Directive 2000/31/EC, this Recommendation should be applied to any type of content which is not in compliance with Union law or with the law of Member States, irrespective of the precise subject matter or nature of those laws…
The EU Commission not only demands the impossible with its one-hour takedowns, but also holds social media companies to a standard they cannot possibly meet. On one hand, the Commission is clearly pushing for proactive removal of content. On the other hand, it wants tech companies to shoulder as much of the blame as possible when things go wrong.
Given that fast removal of or disabling of access to illegal content is often essential in order to limit wider dissemination and harm, those responsibilities imply inter alia that the service providers concerned should be able to take swift decisions as regards possible actions with respect to illegal content online. Those responsibilities also imply that they should put in place effective and appropriate safeguards, in particular with a view to ensuring that they act in a diligent and proportionate manner and to preventing [sic] the unintended removal of content which is not illegal.
The Commission follows this by saying over-censoring of content can be combated by allowing those targeted to object to a takedown by filing a counter-notice. It then undercuts this by suggesting certain government agency requests should never be questioned, but rather complied with immediately.
[G]iven the nature of the content at issue, the aim of such a counter-notice procedure and the additional burden it entails for hosting service providers, there is no justification for recommending to provide such information about that decision and that possibility to contest the decision where it is manifest that the content in question is illegal content and relates to serious criminal offences involving a threat to the life or safety of persons, such as offences specified in Directive (EU) 2017/541 and Directive 2011/93/EU. In addition, in certain cases, reasons of public policy and public security, and in particular reasons related to the prevention, investigation, detection and prosecution of criminal offences, may justify not directly providing that information to the content provider concerned. Therefore, hosting service providers should not do so where a competent authority has made a request to that effect, based on reasons of public policy and public security, for as long as that authority requested in light of those reasons.
These recommendations will definitely cause all kinds of collateral damage, mainly through proactive blocking of content that may not violate any EU law. They shift all of the burden (and the blame) to tech companies, with the added bonus of EU fining mechanisms kicking into gear 60 minutes after a takedown request is sent. The report basically says the EU Commission will never be satisfied by social media company moderation efforts. There will always be additional demands, no matter the level of compliance. And this is happening on a flattened playing field where all illegal content is pretty much treated as equally problematic, even if the one-hour response requirement is limited to “terrorist content” only at the moment.