BRUSSELS—In response to the release of a report from the High-Level Expert Group on Artificial Intelligence (AI) offering policy and investment recommendations, the Center for Data Innovation released the following statement from Senior Policy Analyst Eline Chivot:
The report includes a range of appropriate solutions to support the development and uptake of AI, including talent retention and mobility strategies, the identification of key sectors for applied AI research, regulatory sandboxes, a better transfer of research results to the market to facilitate the commercialization of AI systems, the integration of existing research networks, and the increased availability of large data sets. The report also constructively recommends policymakers avoid “unnecessarily prescriptive regulation” and “cumulative regulatory interventions at the sectoral level” which could have a chilling effect on innovation, and instead suggests using broad principles as guidance.
Despite these well-advised recommendations, the group endorses the Commission’s view that Europe’s competitive advantage lies in developing “trustworthy AI.” The problem with this view is that there is little empirical evidence that AI systems made outside Europe are untrustworthy, that Europe has a unique ability to produce more ethical AI systems, or that there is a significant market for AI systems marketed as ethical-by-design.
In addition, the report recommends that the Commission consider introducing a “mandatory obligation to conduct a trustworthy AI assessment” for some AI systems developed by the private sector. Companies should be able to develop their own assessments, standards, or codes of practice voluntarily—many have already done so and regularly report on their progress. Impact assessments should be voluntary and provide companies that participate some degree of liability protection (e.g., through the presumption of no ill-intent).
Finally, although the group rightly recommends policymakers first examine existing EU laws relevant to AI rather than issuing new regulation, it treats this existing framework—including the GDPR and ePrivacy Directive—as the standard of reference without objectively evaluating its negative impact on the development and use of AI systems. Without reforming the GDPR, the EU will find itself constrained by this regulatory anchor and will struggle to compete globally on AI.