On November 28, 2017, the Center for Data Innovation submitted comments to the Article 29 Working Party, the advisory body of European privacy regulators, on its guidelines regarding algorithmic decision-making and the General Data Protection Regulation (GDPR).
The guidelines go beyond the requirements of the GDPR in ways that will further chill the development and use of artificial intelligence (AI) in the EU. The requirement that a human reviewing an algorithmic decision consider “all the available input and output data” will prompt those using AI to limit the sophistication of algorithmic decisions and the data they draw on, so as to minimize the considerable labor costs of a human review. Similarly, the recommendation that companies be prepared to explain any decision, whether or not it has legal or similarly significant effects, will encourage them to limit their use of AI in all cases.
The guidelines appear to rely on the flawed assumption that a decision made by an algorithm is more likely to be indecipherable, biased, unfair, or damaging than one made by a human, and that human decisions are therefore preferable, particularly when dealing with vulnerable individuals. In fact, proper auditing can identify and control for bias in algorithms and the data they draw on, whereas human bias is much more difficult to track and prevent in an objective and systematic way, because human motivation is often indecipherable even from a subjective point of view. Algorithmic decisions are thus less prone to bias, and easier to rectify when biased, than human decisions. Far from being a threat, AI is a promising technology, and policymakers should accelerate its adoption to benefit society.