AI Act: An Overview of the EU’s Regulation on Artificial Intelligence

Introduction

The European institutions have reached an agreement and adopted the new European Regulation laying down harmonized rules on Artificial Intelligence (the so-called “Artificial Intelligence Act” or “AI Act”). The proposal, published in April 2021 by the European Commission, was the subject of extensive debate and became a key priority on the EU’s legislative agenda due to the rapid growth of Artificial Intelligence. The regulation aims to establish common rules that safeguard fundamental rights while facilitating the development of the internal market. Certain AI systems will be prohibited outright, while others will be subject to stringent requirements.

The new AI regulation

Artificial Intelligence is advancing rapidly, finding applications in various sectors of the economy and society. European institutions now aim to establish uniform rules for Artificial Intelligence based on the AI Act, published by the European Commission in April 2021. After extensive discussions among the European institutions and approval by the European Parliament on March 13, 2024, the Regulation was published in the Official Journal, making the EU the first jurisdiction to adopt cross-cutting legislation on Artificial Intelligence.

What is the purpose of the AI Act?

The regulation seeks to improve the functioning of the internal market by establishing a uniform legal framework for the development, commercialization, and use of AI systems. Several EU Member States have already implemented national laws to ensure AI develops in a manner that is safe and respectful of human rights. However, legislative fragmentation is counterproductive as it undermines the efficiency of the internal market by hampering the dissemination of these technologies and fails to provide a uniform level of protection for human rights.

What is the scope of the AI Act?

The AI Act applies uniformly across all sectors of the economy and society. Its rules apply to the following entities and subjects, inter alia:

  • Providers of AI systems, including general-purpose AI systems, within the European Union, regardless of whether they are established in the EU or in a third country;
  • AI system “deployers,” meaning individuals or legal entities located within the EU that use AI systems; and
  • Providers and deployers of AI systems located in third countries, where the system’s output is used within the EU.

The regulation also includes some exemptions, as it does not apply, for example, to AI systems developed or used exclusively for military purposes or those employed by public authorities under an international agreement related to crime prevention and judicial cooperation with the EU or its Member States.

Additionally, exemptions apply to research and development activities for AI systems before commercialization, AI systems developed solely for scientific research and development, and individuals using AI systems for purely personal, non-professional purposes.

High-risk artificial intelligence systems

High-risk AI systems are those that could negatively impact users’ rights and safety. The regulation permits their use provided that strict requirements and limits are met.

These requirements include, among others, ex-ante conformity assessments, the adoption of risk management measures, appropriate training procedures, and data governance frameworks.

The category of high-risk AI systems encompasses, for example, systems used as safety components in medical devices, systems affecting access to and enjoyment of essential private and public services, and those employed by law enforcement agencies.

The European legislator also specifies the general criteria for categorization, including the system’s purpose, its level of autonomy, and the nature and volume of personal data processed.

An AI system that would otherwise fall within a high-risk category can avoid that classification, and the corresponding regulatory requirements, if it meets at least one of the derogation criteria set by the regulation, such as being designed to perform a narrowly defined procedural task or to improve the result of a previously completed human activity.

Limited-risk artificial intelligence systems

Limited-risk systems are those that do not pose significant concerns for user safety or rights.

This category includes systems that interact with humans (e.g., chatbots) or generate or manipulate images, audio, or video content (e.g., generative AI).

Limited-risk AI systems are subject to specific transparency obligations. End users must be informed that they are interacting with an AI system or that their emotions or characteristics are being recognized through automated means. For AI systems generating or manipulating image, audio, or video content that resembles existing persons, objects, places, or events (e.g., deepfakes), the content must be disclosed as artificially generated or manipulated, with exceptions for legitimate purposes.

Low or minimal-risk artificial intelligence systems

AI systems that do not fall into the aforementioned categories and pose only low or minimal risks can be developed and distributed in the EU without additional legal obligations. However, the regulation encourages the voluntary adoption of codes of conduct under which providers apply some or all of the requirements that are mandatory for high-risk systems.

Future perspectives

The AI Act is an ambitious attempt to regulate the disruptive and revolutionary impacts of Artificial Intelligence on human rights, the economy, and society as a whole. It represents a delicate balance between safeguarding the EU’s fundamental values and rights and promoting technological progress without excessive constraints.

With its adoption, the next critical phase begins: ensuring compliance by affected entities before the regulation becomes generally applicable two years after its entry into force. This period will provide insight into how the legislation shapes AI usage and development, and how other global players in the sector, such as the United States, respond to it.
