
The EU Should Not Stand In The Way Of Facial Recognition Technology

by Eline Chivot

The Swedish data protection authority has fined a municipality €20,000 under the General Data Protection Regulation (GDPR) for piloting the use of facial recognition technology to register student attendance at one of its high schools. Even though the school had obtained the students’ consent, the authority found that the consent was not valid because of the imbalance of power between the school and its students. The fine sets a precedent that will further discourage organizations from using facial recognition technology for fear of running afoul of the GDPR. Unfortunately, more restrictions may be in store, as facial recognition technology is likely to be a target of the new European Commission’s upcoming regulation on artificial intelligence.

Slowing the adoption of facial recognition technology is wrong for three reasons.

First, as the budgets of governments and their law enforcement agencies tighten, facial recognition technology can prove a powerful tool for improving cost-efficiency and stretching limited resources. Facial recognition systems are already deployed in American airports to reduce boarding time—Delta Air Lines estimates that they can save an average of nine minutes—and European airports are testing them. The Swedish school’s teachers, who spend 17,000 hours a year monitoring student attendance, could certainly benefit from a technology that speeds up the process and lets them dedicate more time to teaching. Indeed, the municipal authorities deemed the pilot’s results successful and the technology safe.

Second, facial recognition technology provides a solution to critical security challenges for both consumers and businesses. For instance, Apple has implemented its “Face ID” feature on its newer products, giving consumers more control over the security of their devices. The security industry is successfully applying facial recognition technology to detect tailgating, where someone holds a door open for a person without an access pass. Governments can also use facial recognition technology as a discreet way to monitor public areas and protect the public. The city of Nice in France, which was targeted by a terrorist attack on Bastille Day in 2016, recently submitted a report describing positive results after testing facial recognition with its CCTV cameras—results the French data protection authority, CNIL, remains reluctant to acknowledge. Sweden may find facial recognition technology rather apropos as well, given that the country is dealing with rising crime rates that are hurting its business climate.

Third, curtailing adoption will limit the development of better facial recognition systems that could allow European firms to compete with their foreign counterparts—an outcome directly at odds with EU policymakers’ goal of being more competitive in AI. The European Commission’s high-level expert group on AI recommends that “exceptional use of such technology, such as for national security purposes, must be evidence based,” which creates a catch-22: If its implementation is limited rather than tested in various environments and for various purposes, how can the use of the technology ever become “evidence-based” and improve?

The EU should hold off on any new regulations targeted exclusively at biometrics, especially without clear evidence of tangible harm. In the case of schools, for example, the biometric data they collect is innocuous, and schools often already have their students’ pictures—along with other personally identifiable information such as dates and places of birth, identification numbers, addresses, health and education records, and even indirect identifiers about relatives, such as family members’ names and professions. As long as schools protect biometric data the same way they protect other sensitive student information, there should be no issue.

European policymakers should also ease existing restrictions on facial recognition technology. The GDPR allows some government uses of facial recognition—a point a UK court recently upheld when it dismissed a lawsuit brought by privacy campaigners over the South Wales Police’s use of the technology—but the Swedish fine shows that the law still prevents some organizations from using it. EU policymakers should clarify and relax the GDPR’s requirements so that the law no longer exposes organizations testing facial recognition technology to sanctions. Moreover, the GDPR should allow reasonable uses of facial recognition technology, such as taking attendance in schools, which presents no more risk when done by a computer than when a human performs the same task.

Facial recognition technology has raised concerns across various member states, but policymakers should focus on preventing clearly inappropriate uses, not on stopping organizations from using the technology for legitimate purposes.

Image credits: Wikipedia
