In the fight against COVID-19, many online companies have swiftly adopted voluntary measures to address disinformation and other harmful content on their platforms. But their actions have attracted critics who argue the European Commission should subject platforms to heavier regulation when it takes up the Digital Services Act later this year. Imposing more rules on digital platforms would be misguided; instead, EU policymakers should leverage existing structures such as co-regulatory solutions and step up efforts to increase digital literacy.
In the past few weeks, digital platforms have adopted strong responses to tackle disinformation about the virus. For example, Google has banned ads capitalizing on the pandemic, redirected COVID-19 queries on YouTube to authoritative sources like the World Health Organization, and removed videos making false claims about how to fight the virus. Facebook has removed exploitative ads and posts with false claims about treatments, and has established a system for alerting users who have engaged with misinformation about the virus. Instagram has blocked unofficial COVID-19 accounts from appearing in its recommendation listings, and WhatsApp has limited message forwarding to deter users from spreading viral messages. Amazon has banned thousands of seller accounts that raised prices on scarce items such as protective masks.
These actions have created ammunition for opponents of digital platforms. Some argue that the platforms have gone too far in resorting to automated filtering, accidentally taking down legitimate news outlets and websites and thereby censoring speech. Others say these efforts are proof that the companies can do more, and that their measures should "carry over to other areas" beyond COVID-19 misinformation. British politician Julian Knight cited "shocking examples in recent weeks" of false information about COVID-19 on social media and said digital platforms are "morally responsible for tackling disinformation on their platforms and should face penalties if they don't." Former Member of the European Parliament Marietje Schaake called on policymakers to "restrict the power of online platforms," asserting that what is needed are rules "forcing tech companies" to fight disinformation.
But EU policymakers should not lend their ears to critics who are using the current crisis as an opportunity to advance positions they already held before it. When updating the rules for online platforms and content moderation, EU policymakers should not overreact; rather than penalizing Internet companies, they should build on existing frameworks and pursue co-regulatory solutions. The European Commission itself recognized that some platforms were able to act quickly because of the code of practice on disinformation, an initiative between the Commission and signatories such as Facebook, Google, and Twitter under which the companies deploy a set of voluntary measures to tackle disinformation. This co-regulatory framework gives platforms the flexibility to respond swiftly with exceptional measures during a crisis and, in normal circumstances, to strike a balance between consumer protection, free speech, and other goals.
This does not mean EU governments cannot play a more active role in fighting disinformation. Policymakers should do their part by investing more in digital and media literacy, which is one of the most effective ways to empower individuals to navigate misinformation.
The COVID-19 crisis shows that the major online platforms are responsive actors that work with policymakers to act responsibly in the interest of public health. The Digital Services Act should build on these proactive voluntary actions rather than enact stringent rules that would undermine Internet companies and the freedom of European citizens.
Image credits: Flickr user World Economic Forum