In a panel discussion hosted by the Center for Data Innovation, policymakers and industry representatives examined the European Commission’s white paper on AI, the policy options it outlines for a legal framework, and the challenges facing the EU as it aims to lead in innovation.
Irina Orssich, who leads a team working on AI at DG Connect, shared preliminary results of the Commission’s public consultation on the white paper. (The Commission has since published more details.) Of the consultation’s 1,215 responses, only 3 percent say that current EU legislation sufficiently addresses concerns regarding AI, and 33 percent favor changing existing legislation. In addition, 43 percent of respondents say that compulsory requirements should apply only to high-risk AI, while 85 percent favor ex-ante assessment measures. Most respondents believe that regulations should cover biometric tools. After analyzing the public feedback, the Commission will release a legislative proposal on AI. In the interim, the Commission is planning a virtual event for the European AI Alliance at the beginning of October.
The consultation’s responses also recommend that the EU cooperate closely with member states. Eline Chivot, senior policy analyst at the Center for Data Innovation, noted that many national AI strategies remain aspirational, lack detail on execution and clarity on performance measures, and ignore funding realities. Panelists Renaud Vedel, prefect and coordinator of France’s national strategy for AI at the French Ministry of the Economy and Finance, and Kees van der Klauw, coalition manager of the Netherlands AI Coalition, both stated that individual member states have a key role in implementing the EU’s AI approach, and that they should build on each other’s efforts by harmonizing strategies, pooling resources, and sharing results, so that less technologically advanced member states benefit as well.
But the EU faces more fundamental challenges that may prevent it from leading in AI. Vedel cautioned that the EU’s digital single market has yet to be fully realized. In addition, the EU is suffering a brain drain, and the U.S. market is more attractive to EU AI start-ups and scale-ups than the fragmented EU ecosystem.
A critical point of discussion was the obstacles raised by the GDPR, particularly regarding access to personal data. Vedel recalled that the EU’s privacy law was written before today’s AI wave. While data should be protected, it is also key to AI development. Stringent regulation risks limiting AI and machine learning applications in the EU in rapidly advancing areas such as computer vision and natural language processing, while the EU’s competitors have more freedom to invest, innovate, and deploy these applications on the market. The EU and member states should relax some of the GDPR’s binding rules, which restrict the collection and use of personal data for AI systems, and address the implications of divergent interpretations of these laws. A legal framework that lacks pragmatism and agility, and that burdens companies, will prevent EU start-ups from growing and competing internationally. Janne Elvelid, policy manager for EU affairs at Facebook, added that regulation should support, rather than obstruct, the many benefits AI can deliver for innovation, society, and economic growth.
Panelists advocated a gradual approach to future AI legislation, aligned with the GDPR, since that law already covers some aspects of automated decision-making. Orssich responded that the Commission is working to identify and address gaps in the EU’s extensive body of legal regimes that already cover AI systems. She emphasized that the Commission is aware of the need for a clear framework that promotes innovation, and pointed to existing protocols that support controlled environments for testing and experimentation.
Van der Klauw and Vedel seconded Orssich’s comment. For example, to catch up with the United States and China in the global AI race and implement a vision for AI, the Dutch government supports a learning approach and a pragmatic mentality. The EU should create environments that allow innovators to “make mistakes first and apologize later,” for instance through digital innovation hubs and field labs. The French AI strategy calls for companies to proactively use a clause in the GDPR that allows personal data to be repurposed, and to maximize its reuse where it serves the public interest, in sectors such as healthcare, defense, the environment, and transport; but this remains an isolated effort. Panelists concurred that as the technology and the legal environment change, it is necessary to adapt existing regulation and allow stakeholders to share, test, and experiment with technical knowledge, and to develop useful, marketable AI applications (e.g., for mobility or healthcare).
Finally, panelists discussed China’s ambitious AI strategy and its increasing involvement in standard-setting bodies. The EU cannot ignore China’s efforts to influence the development of international standards and to promote its own technology standards globally, for instance in AI, 5G, and cybersecurity. China is a formidable competitor: it is investing massively in AI, making strides in computer vision, natural language processing, and automated vehicles, and commands a market larger than Europe’s. The EU’s vision for AI, based on fundamental rights, contrasts with China’s approach. While the EU cannot compete on funding and investment, it can offer an alternative, including through its industrial strategy, a human-centric approach, and by seizing opportunities for global collaboration, particularly transatlantic partnerships.