FACIAL RECOGNITION TECHNOLOGIES: with great power comes great responsibility

On 14 June 2023, the European Parliament adopted its negotiating position on the first European law on artificial intelligence (the “AI Act”), which, once approved by the EU institutions, is expected to come into effect from next year. This event further solidifies Europe as the centre of a heated debate regarding biometric technologies (also known as biometrics), i.e. “all automated processes used to recognize an individual by quantifying physical, physiological or behavioural characteristics […]”.

Within this category also falls facial recognition, which allows the automatic recognition of individuals “based on their face in order to authenticate or identify them”.

How facial recognition works

Facial recognition is based on capturing the image of an individual’s face and extracting its unique features to create a digital representation of it, called a biometric template, which is stored in a database. The system then compares the template obtained with those already present in the database to check for possible matches.

It is worth mentioning that facial recognition is a probabilistic technology: matches are established based on the probability that the “examined” person is indeed the person the system is looking for, never with absolute certainty.
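The probabilistic matching step can be sketched as a similarity comparison between template vectors. The cosine metric and the 0.6 threshold below are illustrative assumptions for the sake of the example, not features of any specific product:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two biometric templates (here: plain numeric vectors)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(probe, enrolled, threshold=0.6):
    # The decision is probabilistic: a match is declared only when the
    # similarity score exceeds a tunable threshold, never with certainty.
    return cosine_similarity(probe, enrolled) >= threshold
```

Lowering the threshold catches more true matches but also produces more false positives, which is precisely why the legal framework discussed below insists on safeguards around such systems.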

Applications of artificial intelligence

Notably, facial recognition systems can have two functionalities:

  1. Authentication of a subject: this function is also referred to as “one-to-one verification” since the system’s activity is aimed at verifying the identity of the subject. The system, in fact, compares the person’s “real-time” template with the one already stored in the database to verify whether the two match. An example of an authentication system is the facial recognition function that allows a user to unlock a smartphone.
  2. Identification of a subject: this function is also referred to as “one-to-many verification” since the system compares the templates of all the faces captured at a given moment against one or more specific templates already present in the database. The purpose of the system, in fact, is to identify a specific individual within a group of people or a geographic area. An example of this kind of system is the facial recognition system approved for the Olympic Stadium of Rome, which was supposed to allow the identification of offenders banned from public places such as sports arenas.
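The two modes above can be sketched side by side. This is a minimal illustration assuming templates are plain numeric vectors and a hypothetical cosine-similarity score; real systems use far more sophisticated models:

```python
import math

def similarity(a, b):
    # Illustrative cosine similarity between two template vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def verify(probe, enrolled_template, threshold=0.6):
    # One-to-one: does the probe match THIS template? (e.g. phone unlock)
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, database, threshold=0.6):
    # One-to-many: search the whole database for the best match above
    # the threshold (e.g. a watchlist scenario); None if nobody matches.
    best_id, best_score = None, threshold
    for subject_id, template in database.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = subject_id, score
    return best_id
```

The distinction matters legally as well as technically: verification answers a question the data subject has usually initiated, while identification scans people who may not even know they are being processed.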

Risks of artificial intelligence

In respect of point (2.) above, the AI Act would prohibit the use of “real-time” remote biometric identification systems in publicly accessible spaces, without exception. “Post” remote biometric identification systems would still be usable, however, for the prosecution of serious crimes and only with judicial authorisation.

The AI Act would provide for a system of rules divided into different levels of risk, setting obligations for providers and users:

  • In the highest category, that of unacceptable risk, artificial intelligence systems that pose a threat to individuals shall be prohibited. This includes cognitive behavioural manipulation of specific vulnerable individuals or groups, social scoring that classifies people based on behaviour, socioeconomic level, and personal characteristics, as well as real-time and remote biometric identification systems such as facial recognition.
  • AI systems that have a negative impact on security or fundamental rights will be classified as high-risk. High-risk corresponds to a “significant risk” of harm to health, safety, or fundamental rights. In this case, the AI Act does not prohibit their deployment but imposes specific requirements that must be fulfilled.
  • Artificial intelligence systems with limited risk will have minimum transparency requirements to enable users to make informed decisions. This includes AI systems that generate or manipulate image, audio, or video content.
  • In the case of low-risk scenarios, there is no legal obligation imposed.
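The four tiers above can be summarised as a simple lookup. The mapping below is an illustrative, simplified reading of the Parliament’s position (the spam filter is a hypothetical minimal-risk example, not taken from the AI Act text), and certainly not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to conformity requirements"
    LIMITED = "permitted with transparency obligations"
    MINIMAL = "no specific legal obligation"

# Illustrative examples drawn from the tiers described above
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "system affecting safety or fundamental rights": RiskTier.HIGH,
    "generated or manipulated image/audio/video": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,  # hypothetical minimal-risk example
}
```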

AI privacy and security

In any case, regardless of the activity performed by the system, facial recognition amounts to processing of personal data and, notably, of biometric data. It must be pointed out that “biometric data for the purpose of uniquely identifying a natural person” fall within the “special categories of data” defined by Article 9 of the General Data Protection Regulation (EU) 2016/679 (“GDPR”), which ensures enhanced protection for this kind of data because of its sensitive nature. For this reason, the adoption of a facial recognition system must necessarily be examined through the lens of the GDPR and the Law Enforcement Directive[1] (“LED”) in order to establish whether this kind of technology is compatible with the rights granted by Articles 7 and 8 of the Charter of Fundamental Rights of the European Union, i.e. the right to respect for private life and communications and the right to protection of personal data.

AI and GDPR

It is important to note that in order for these systems to comply with the GDPR (or the LED), their adoption must be strictly necessary and proportionate to the intended purposes and justified by one of the legal bases provided by the Regulation. Considering the nature of the data involved, one might assume that obtaining consent would be the safest and easiest way to ensure lawful processing. However, this is only true if the data controller is a private company.

If facial recognition is used by a public authority, a different justification for the processing activity is required, specifically “the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.” Nevertheless, this does not imply that the processing activities can be carried out in any circumstances without informing the data subjects properly. Doing so would not only violate data protection regulations but also create a widespread feeling of constant surveillance, which would infringe upon other fundamental rights such as the right to freedom of expression (Article 10 ECHR) and the right to freedom of association (Article 11 ECHR).

AI oversight

Another aspect that needs to be taken into account is that the data protection legal framework includes the right for individuals “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (Article 22 of GDPR and Article 11 of LED).

This provision ensures that individuals have the right to both human oversight of the processing activity and the ability to opt out of automated processing, allowing their data to be evaluated by a human being. This guarantee is particularly important due to two factors:

  • facial recognition systems handle sensitive data;
  • the processing activity can lead to profiling and potential discriminatory outcomes.
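As a sketch of what such human oversight could look like in practice (the function names, the cosine score, and the threshold are all hypothetical), a system might emit only candidate matches and defer every decision with legal effect to a human reviewer:

```python
import math

def similarity(a, b):
    # Illustrative cosine similarity between two template vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def screen(probe, enrolled, threshold=0.6):
    # The system never takes a final decision on its own: a positive
    # score only queues the case for review by a human operator,
    # in the spirit of Article 22 GDPR / Article 11 LED.
    score = similarity(probe, enrolled)
    if score >= threshold:
        return {"candidate_match": True, "score": score,
                "final_decision": "pending human review"}
    return {"candidate_match": False, "score": score,
            "final_decision": "no action"}
```

Structuring the output this way keeps the automated step advisory: no record is acted upon until a human has confirmed the match.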

In conclusion, it is worth noting that the existing legal framework on facial recognition will soon undergo changes with the imminent implementation of the AI Act. This new regulation takes a more stringent stance towards facial recognition technologies by categorizing them as “High-Risk Artificial Intelligence Systems” and imposing strict conformity requirements and compliance obligations.

[1]Directive (EU) 2016/680 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data.
