BRUSSELS—In response to the release of the European Commission’s High-Level Expert Group on Artificial Intelligence’s “Ethics Guidelines for Trustworthy AI,” the Center for Data Innovation issued the following statement from its senior policy analyst, Eline Chivot:
The new ethics guidelines are a welcome alternative to the EU’s typical “regulate first, ask questions later” approach to new technology. They also reflect a number of improvements over the draft released in December. For example, the new document acknowledges the trade-off between enhancing a system’s explainability and increasing its accuracy. It rightly acknowledges that its principles remain abstract, does away with the poorly defined “principle of beneficence,” and no longer associates “nudging” with “risks to mental integrity.” In addition, it is particularly important that this document does not include recommendations urging the Commission to regulate AI.
However, the document falls short in a number of areas. Most importantly, it incorrectly treats AI as inherently untrustworthy and argues that the principle of explicability is necessary to promote public trust in AI systems, a claim unsupported by evidence. These shortcomings should be addressed in the next report.
More fundamentally, the belief that the EU’s path to global AI dominance lies in beating the competition on ethics rather than on value and accuracy is a losing strategy. Pessimism about AI will only breed more opposition to using the technology, and a hyper-focus on ethics will make the perfect the enemy of the good.
The HLEG’s report does not reflect the official view of the European Commission, although it does reflect the conventional wisdom of many European policymakers. Therefore, we encourage the Commission to move past this report and take concrete steps to meaningfully support the development and deployment of AI with additional policy and investment decisions.