The Commission’s guidelines on prohibited artificial intelligence practices: general analysis and privacy aspects


The European Commission’s approval of the draft Guidelines on prohibited artificial intelligence practices, announced in the Communication of February 4, 2025, marks a significant step in the implementation of EU Regulation 2024/1689 (AI Act), the first comprehensive regulatory framework for artificial intelligence governance at the European level.

Although these guidelines are not legally binding, they play a crucial role in providing interpretative clarity on Article 5 of the AI Act, which lists prohibited practices deemed incompatible with the fundamental principles of the Union, including respect for human dignity, the protection of fundamental rights, and the safeguarding of public security.

The AI Act adopts a risk-based approach, classifying AI systems into four categories, ranging from minimal-risk systems (which are not subject to specific obligations) to unacceptable-risk systems, whose use is strictly prohibited. This regulatory framework introduces a prevention and oversight system aimed at ensuring that artificial intelligence applications operate in compliance with the values and rights enshrined in the European Union’s legal order.

This analysis outlines the regulatory context in which the prohibitions laid down by the AI Act are framed, focusing in particular on the most critical aspects and the potential implications for privacy and personal data protection.

The content of the guidelines

The Commission’s Guidelines aim to provide a uniform and consistent interpretation of Article 5 of the AI Act, clarifying its scope of application, the exceptions it allows, the parties it covers, and its coordination with other Union legislation.

Article 5 of the Regulation, in particular, establishes an absolute ban on certain AI practices, identified as incompatible with the fundamental principles of the EU, including the protection of personal data, non-discrimination, and the right to security.

Prohibited AI practices under Article 5 of the AI Act

The Commission’s document provides a detailed analysis of the eight categories of prohibited practices, clarifying their scope of application and possible exceptions:

  • Manipulation and deception (Art. 5(1)(a)): Prohibition of systems that use subliminal techniques or manipulative strategies to significantly distort individuals’ behavior, altering their decision-making abilities.
  • Exploitation of vulnerabilities (Art. 5(1)(b)): Ban on using AI to exploit vulnerabilities related to age, disability, or socio-economic conditions, leading users to make harmful or disadvantageous choices.
  • Social scoring (Art. 5(1)(c)): Prohibition of using AI to categorize individuals based on social, personal, or professional behavior, when this results in unjustified or discriminatory treatment.
  • Prediction of criminal risk (Art. 5(1)(d)): Ban on AI systems that assess the likelihood of committing crimes solely based on automated profiling or personal characteristics.
  • Massive facial image scraping (Art. 5(1)(e)): Prohibition of the indiscriminate, untargeted collection of biometric data (e.g., by scraping images from the internet or CCTV footage) to create facial recognition databases.
  • Emotion recognition (Art. 5(1)(f)): Ban on using AI to infer individuals’ emotions in workplace or educational contexts, except where justified by medical or safety reasons.
  • Biometric categorization for sensitive data (Art. 5(1)(g)): Prohibition of using AI to deduce sensitive characteristics, such as race, religion, political views, sexual orientation, or trade union membership.
  • Remote real-time biometric identification (Art. 5(1)(h)): Prohibition of using remote biometric recognition systems in public spaces for law enforcement purposes, with only limited and strictly regulated exceptions.

Operational implications and critical compliance aspects

Within the intricate regulatory framework outlined by the Guidelines, one point is particularly clear: companies that develop or deploy AI systems must conduct a thorough preliminary assessment of any potential incompatibility with the prohibitions set forth in Article 5 of the AI Act.

In this context, it is of paramount importance to adopt appropriate organizational and technical measures that ensure, from the design phase onward, the compliance of AI systems with regulatory requirements. This first and foremost requires implementing preliminary impact assessment procedures that not only address data protection aspects, already governed by the General Data Protection Regulation (GDPR), but also consider the additional risk dimensions identified by the AI Act. The relationship between the AI Act and personal data protection law is particularly important, given that many of the prohibited practices involve the processing of personal and biometric data, areas already governed by the GDPR and by Directive (EU) 2016/680 on the processing of data for law enforcement purposes.

The Guidelines also emphasize the need to establish mechanisms for continuously monitoring and verifying AI systems’ compliance with the regulatory prohibitions. As they clarify, the assessment of whether a prohibited practice exists must not be confined to the initial deployment phase but must continue throughout the system’s entire lifecycle.
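By way of illustration only, the minimal sketch below shows one way such a recurring Article 5 screening could be recorded and tracked in code. All class, function, and category names are hypothetical; neither the AI Act nor the Guidelines prescribe any particular tooling.

```python
# Minimal sketch of a recurring Article 5 screening record, assuming a
# hypothetical internal AI-system inventory. All names are illustrative:
# neither the AI Act nor the Guidelines prescribe any particular tooling.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The eight prohibited-practice categories of Article 5(1)(a)-(h).
ARTICLE_5_CATEGORIES = [
    "manipulation_or_deception",           # Art. 5(1)(a)
    "exploitation_of_vulnerabilities",     # Art. 5(1)(b)
    "social_scoring",                      # Art. 5(1)(c)
    "criminal_risk_prediction",            # Art. 5(1)(d)
    "facial_image_scraping",               # Art. 5(1)(e)
    "emotion_recognition_work_education",  # Art. 5(1)(f)
    "sensitive_biometric_categorisation",  # Art. 5(1)(g)
    "realtime_remote_biometric_id",        # Art. 5(1)(h)
]

@dataclass
class Article5Assessment:
    """One documented screening of a system against the Article 5 bans."""
    system_id: str
    assessed_at: datetime
    findings: dict[str, bool]  # category -> potential conflict identified?
    notes: dict[str, str] = field(default_factory=dict)

    @property
    def flagged(self) -> list[str]:
        """Categories that need escalation to legal review."""
        return [cat for cat, hit in self.findings.items() if hit]

def screen_system(system_id: str, answers: dict[str, bool]) -> Article5Assessment:
    """Record one screening; unanswered categories default to True (needs review)."""
    findings = {cat: answers.get(cat, True) for cat in ARTICLE_5_CATEGORIES}
    return Article5Assessment(system_id, datetime.now(timezone.utc), findings)

# Run at design time, again before deployment, and then on a recurring
# schedule, since the assessment must cover the system's whole lifecycle.
assessment = screen_system("support-chatbot-v2",
                           {cat: False for cat in ARTICLE_5_CATEGORIES})
print(assessment.flagged)  # [] -> nothing flagged in this illustrative run
```

Persisting such timestamped records over the system’s lifetime would also support the documentation and traceability expectations discussed later in this analysis.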

Scraping of biometric data and facial recognition

One of the most significant prohibitions concerns the creation of biometric databases through the massive scraping of images from public sources. This practice is already under close scrutiny by data protection authorities, as it conflicts with the principles of data minimization and purpose limitation established by the GDPR. The Guidelines, however, do not fully clarify the limits within which biometric images may lawfully be used to train AI models, leaving room for potential regulatory conflicts.

Social scoring and prediction of criminal risk

Algorithmic profiling techniques aimed at assessing social reliability or predicting criminal risk raise particularly complex issues regarding data protection, as well as the principles of proportionality and non-discrimination. These systems can embed and amplify automated bias, with significant repercussions for the presumption of innocence and the right to non-discrimination.

Remote biometric identification and privacy in public spaces

The ban on the use of real-time facial recognition systems in public spaces is one of the most restrictive provisions of the AI Act and aligns with the framework of protections outlined by the GDPR concerning the processing of biometric data.

However, the provision for exemptions related to the prevention of serious threats to public security raises difficult questions regarding the proper balance between public safety needs and the protection of privacy rights, particularly in terms of identifying objective criteria for assessing the proportionality of the measures adopted.

In this context, the requirement for specific documentation and recording of processing activities, as well as the need to conduct prior impact assessments on fundamental rights, becomes especially relevant, in accordance with Article 35 of the GDPR and the provisions of the AI Act concerning high-risk systems.

The sanctions regime

The AI Act adopts a particularly strict approach regarding sanctions for violations of the prohibitions under Article 5, stipulating the highest penalties within the Regulation. Specifically, violations of prohibited practices can lead to administrative fines of up to 35 million euros or, for companies, up to 7% of the total global turnover of the previous fiscal year, whichever is higher.
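As a simple illustration of the “whichever is higher” rule, the sketch below computes the applicable ceiling for two hypothetical turnover figures. All numbers are illustrative and the function name is invented for this example.

```python
# Illustrative computation of the Article 5 penalty ceiling: the higher of
# EUR 35 million or 7% of total worldwide annual turnover for the preceding
# financial year. Figures and names below are purely illustrative.

FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07  # 7% of worldwide annual turnover

def article5_fine_ceiling(annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for an Article 5 violation."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# EUR 400m turnover: 7% = EUR 28m < EUR 35m, so the fixed EUR 35m cap applies.
print(article5_fine_ceiling(400_000_000))    # 35000000
# EUR 2bn turnover: 7% = EUR 140m > EUR 35m, so the turnover-based cap applies.
print(article5_fine_ceiling(2_000_000_000))  # 140000000.0
```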

This sanctions framework, which reflects the seriousness the European legislator attaches to these violations, applies to both providers and deployers of AI systems, each within their own area of responsibility. For public entities, Member States retain some discretion regarding the application of administrative fines, but they must ensure that effective, proportionate, and dissuasive measures are imposed.

Also noteworthy is the provision whereby, in the case of repeated violations, the maximum fine may be increased by up to 2%, creating a clear deterrent against systematic breaches of the prohibitions.

Interpretative challenges and enforcement perspectives

While the Guidelines provide valuable clarifications regarding the scope of the prohibitions, they leave some significant interpretative questions open, particularly regarding:

  • The practical coordination between the competent AI regulatory authorities and the data protection authorities;
  • The criteria for evaluating the “significance” of the harm required to establish certain prohibited practices;
  • The identification of the boundaries between prohibited practices and permitted practices under the Regulation’s exemptions.

Businesses, therefore, face the need to navigate a particularly complex and detailed regulatory framework that requires a proactive approach to compliance in the field of AI. However, this challenge can also represent an opportunity to rethink AI development and implementation processes with a more responsible and sustainable mindset.

In this perspective, it is crucial to:

  • Adopt AI governance frameworks that integrate the regulatory requirements outlined in the AI Act from the design phase;
  • Implement continuous risk assessment procedures;
  • Invest in training personnel involved in the development and use of AI systems;
  • Implement documentation and traceability mechanisms for compliance assessments.

Thus, while the Commission’s Guidelines do not resolve all interpretative questions raised by the AI Act, they serve as an important reference point for businesses in adapting to the new regulations. They provide operational guidance that must be continuously updated based on practical experience and further clarifications from regulatory authorities.

The challenge in the coming months will be to translate the principles and prohibitions outlined by the European legislator into concrete operational practices, balancing the need for technological innovation with the necessity of fully respecting the fundamental rights of individuals involved in the use of AI systems.

AI Act implementation timeline

2 FEBRUARY 2025

  • Prohibitions under Article 5 become applicable
  • Compliance obligation for all AI systems, including those already in use

2 AUGUST 2025

  • Designation of national supervisory authorities
  • Sanctions provisions become applicable
  • Enforcement powers of authorities begin

2 AUGUST 2026

  • General application of the AI Act
  • Member States must operationalize AI regulatory sandboxes
  • Commencement of compliance obligations for high-risk AI systems

2 AUGUST 2027

  • Application of Article 6(1) to high-risk AI systems
  • Deadline for compliance of general-purpose AI models placed on the market before 2 August 2025
