As the European Union readies for its upcoming elections, one of its top priorities has become accelerating the fight against online disinformation: false information spread with the intent to mislead people, whether to cause public harm or for profit. While disinformation is not a new phenomenon, malicious actors use today’s online platforms to spread disinformation with greater speed, quantity, and reach than in the past. The challenge facing European democracies is significant, and solving it will require cooperation from industry, academia, and government, including voluntary practices and soft regulation.
Many policymakers place the blame for this problem squarely on the shoulders of online platforms. They believe today’s problems are an outgrowth of online platforms “intentionally and knowingly” violating privacy and competition laws, and their knee-jerk response is to demand tougher regulations and stricter enforcement for platforms. Countries should update their laws on election transparency and foreign interference to ensure they address emerging threats relating to digital content and remain on par with the rules governing offline activities. But attempts to regulate the algorithms online platforms use to serve ads or display news will likely fall short, both because the threat is rapidly evolving and because Europe still lacks evidence on the nature of online disinformation, its agents, its amplification structures, and its impact. Even the European Commission’s High-Level Expert Group on Disinformation recommended further study of the information landscape. Moreover, regulations that require online platforms to remove ambiguous categories of speech, such as disinformation, will lead platforms to aggressively remove all kinds of content, even lawful and legitimate content, to avoid penalties.
A better approach is self-regulation. Some scholars and politicians scoff at self-regulatory approaches because they believe tech companies are uninterested in solving this problem. But such views overlook the initiatives the tech industry has undertaken to combat online disinformation over the last two years. Facebook, for instance, removed 1.5 billion fake accounts in just six months last year. And it is precisely through voluntary measures that a number of leading tech companies have been making rapid and significant progress against the threat of online disinformation. In 2018, Facebook, Google, Twitter, and Mozilla developed a self-regulatory “Code of Practice on Disinformation” with the European Commission. EU policymakers are monitoring the signatories’ commitments through monthly compliance reports.
The first reports, released last month, show that each platform has been making measured progress. Having no interest in seeing disinformation campaigns spiral out of control or in losing the trust of their users, platforms have been partnering with independent fact-checkers and deploying a range of new features, such as AI systems, to verify online content and to detect and reduce user interaction with fraudulent material. Platforms now provide more contextual information to users through fact-check labeling, and they have improved verification standards for political ads. For example, Google is deploying a host of online safeguards, such as a process to verify the identity of EU election advertisers before authorizing them to post content. These safeguards will apply to all ads that mention a political party or feature a candidate and that run in the EU. Both Facebook and Google will introduce transparency tools for promoted political content ahead of the elections so users can see who pays for these ads and to whom they are targeted. Google will also provide an election-ads transparency report and a searchable ad library.
The EU could do more to support the industry’s efforts to face down online disinformation. The current design of EU policies is adequate for yesterday’s challenges but unfit to anticipate those of tomorrow. Policymakers should be focusing on the next generation of disinformation, which will involve more sophisticated strategies, more data, and better algorithms. Addressing these challenges will require even closer collaboration with the tech community and greater investments in public and private R&D, for example to develop technologies that automatically evaluate the trustworthiness of new content and identify deep fakes. But the EU’s financial commitments will also need to match the scope of the disinformation problem. The European Commission announced that the budget to fight disinformation will increase from €1.9 million to €5 million in 2019, a sign of growing awareness. Yet this would still leave the EU on the weaker side of this new form of asymmetric information warfare: Russia invests billions in propaganda and other disinformation campaigns.
The EU cannot solve this problem alone. It will need to work with its allies, both in industry and other democracies, to develop a coordinated response to this complex global problem. But simple-minded attempts to blame online platforms, rather than those knowingly creating disinformation to mislead the public, undercut efforts at industry-government collaboration and sow distrust among stakeholders who should be working together to address this difficult problem.
Image credit: Flickr