Recap: Can the EU Lead in AI After the Arrival of the GDPR?
The EU is investing significantly in artificial intelligence (AI), yet the implementation of the EU’s General Data Protection Regulation (GDPR) in May poses a threat to the development and use of AI in Europe, according to a new report from the Center for Data Innovation. The GDPR imposes a series of obligations on firms, such as requirements to explain algorithmic decisions to customers, to have humans review those decisions, and to erase copies of personal data, and firms face large fines for non-compliance. The report argues that the GDPR will discourage the use of AI to automate processes, hurting the EU economy and consumers. On Tuesday, March 27, the Center for Data Innovation gathered a panel of experts to discuss the new report, the implications of the GDPR for AI in Europe, and steps EU policymakers can take to better support AI development and use.
Nick Wallace, a senior policy analyst at the Center for Data Innovation and the event moderator, began the event by describing the negative effects of the GDPR’s specific requirements on algorithmic decision-making. The GDPR requires that any algorithmic decision with legal or similarly significant effects be subject to human review. As a result, some decisions cannot be fully automated. According to Wallace, the right to have algorithmic decisions explained is particularly troublesome because sophisticated algorithms are complex and difficult to interpret. He noted, “Our fear is that this requirement in the GDPR, especially in the most significant circumstances, will either push companies to not use AI at all or to use substandard algorithms that are easier to explain but may produce less accurate and therefore potentially less fair decisions.”
Despite these challenges, the panelists stressed that there are still significant opportunities for AI development in Europe. Richard Middleton, managing director and co-head of the policy division at the Association for Financial Markets in Europe, noted that AI can help improve compliance in finance by flagging suspicious transactions within large volumes of data. AI also powers smart devices, and Hugues Bersini, an AI researcher at the Université Libre de Bruxelles, noted that “Where you have ‘smart’ somewhere, you can put AI in it.” Additionally, there is no shortage of AI companies in Europe.
Yet Wallace doubts that any small European companies are destined to be the next Google or Facebook. Corinna Schulze, the director of EU Government Relations at SAP, noted that U.S.-based technology companies, like Chinese ones, have access to huge domestic markets. That is why she believes the lack of a single digital market in the EU is the “eternal problem we still need to solve.” Bersini added that the EU needs to foster collaboration, and Schulze agreed, pointing to the need for a constant dialogue and a common approach to research among member states. None of the panelists suggested more regulation was needed, however. As Middleton summed up, policy cannot keep up with the pace of technological change when lawmakers legislate specific uses.
Wallace pointed out that some of the GDPR’s requirements, such as the right to human review, stem from ethical concerns about algorithms making decisions. According to Bersini, algorithmic bias generally results from incorrect data or from correct data that is extrapolated too far. Yet the GDPR states that consumers have a right to meaningful information when algorithms, but not humans, make critical decisions. As Wallace said, “It seems odd that we are holding algorithms to a higher standard.” Moreover, as Middleton mentioned, algorithms can help produce consistent and fair decisions in areas such as loan rates and insurance costs. Erasing data will also degrade machine learning capabilities, because machine learning relies on input from new data. Consequently, the removal of information can undermine the integrity of existing algorithms.
Panelists also discussed the challenges of interpreting the GDPR moving forward. As Schulze said, many more individuals are likely to request access to their data to understand what information is being processed about them. One result could be class action lawsuits where hundreds of individuals request access to their data or demand that organizations delete data collected about them. However, companies will need more clarity on exactly what they must do to comply with such requests. While Schulze believes the principles in the GDPR of protecting an individual’s data are sound, she stated the actual legislation leaves room for interpretation. “To me, the law is not written in stone. There is a lot of discussion that needs to be had about how do we shape these rights.”
There is also a concern that many companies are ill-prepared to comply with the GDPR. Victoria de Posson, a public policy consultant at FTI Consulting, is worried that AI companies—many of them startups—are unaware of the regulations. In addition, a recent survey by Thomson Reuters found that half of companies globally already fail to comply with data and privacy regulations. Organizations that do not comply can be fined four percent of their total annual revenue or €20 million ($24.8 million), whichever is larger. Thus, the GDPR could hurt smaller companies the most, because €20 million is often far larger than their entire global revenue.
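The fine structure described above can be sketched in a few lines of code. This is an illustrative calculation based only on the figures quoted in this article (4 percent of annual revenue or a flat €20 million, whichever is larger); the function name and revenue figures are hypothetical, and the actual GDPR sets these as maximum penalties rather than automatic amounts.

```python
def max_gdpr_fine(annual_revenue_eur: float) -> float:
    """Illustrative maximum fine: the greater of 4% of total
    annual revenue or a flat EUR 20 million (figures as quoted
    in the article; real penalties are capped at these levels)."""
    return max(0.04 * annual_revenue_eur, 20_000_000)

# For a hypothetical startup with EUR 5M in revenue, the flat
# EUR 20M floor applies -- four times its entire revenue.
print(max_gdpr_fine(5_000_000))      # 20000000.0

# For a hypothetical large firm with EUR 2B in revenue, the
# 4% figure dominates instead.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

The asymmetry is the point the panel raised: the percentage term scales with a company’s size, but the flat €20 million term does not, so the maximum exposure weighs far more heavily on small firms.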
AI can help bring consistency and efficiency to decision making, and the EU has invested in its development and adoption. The GDPR, however, may leave the EU further behind the United States and China in AI. The GDPR’s complexity, requirements, and vagueness make it likely the EU will not fully realize the benefits of AI.