The definition of Artificial Intelligence within the European regulatory framework.

Artificial intelligence (“AI”) is one of the most revolutionary technological innovations of our time, with implications that cut across numerous industries. However, the definition of what constitutes an «artificial intelligence system» remains a topic of debate, particularly within the European regulatory framework. The recent Regulation (EU) 2024/1689 (“AI Act”) introduced a harmonized definition aimed at providing legal certainty and safeguarding fundamental rights without hindering innovation.

To facilitate the practical application of this definition, the European Commission published specific guidelines (“Guidelines”) on February 6, 2025. These Guidelines clarify the meaning and scope of the AI Act, provide examples and evaluation criteria, and distinguish AI systems from traditional software.

The purpose of this analysis is to examine the definition of an AI system under the AI Act and its related Guidelines, highlighting its key elements, potential interpretative challenges, and practical implications for operators navigating compliance in this evolving regulatory landscape.

The regulatory framework and the definition of the AI Act

The AI Act, which came into force on August 1, 2024, establishes a harmonized regulatory framework for the development, commercialization, and use of artificial intelligence systems within the European Union. However, as a novel piece of legislation, it requires interpretative support for its practical application.

In this context, the Guidelines are not merely an explanatory document but rather an essential interpretative tool with legal relevance for the proper qualification of AI systems. Their contribution is articulated on multiple levels:

  • they outline objective parameters for identifying AI systems, enabling operators to assess with greater certainty the applicability of the regulation to their products or services;
  • they address gray areas in the definition through concrete cases, offering solutions for borderline scenarios that could generate interpretative uncertainties;
  • they elaborate on the requirement of operational autonomy as a distinguishing element of AI systems compared to conventional software, introducing qualitative and quantitative criteria for its evaluation;
  • they establish a conceptual demarcation between artificial intelligence systems and traditional software applications, with particular attention to data processing systems that, despite being complex, do not exhibit the distinctive characteristics of AI;
  • they propose an analytical method that considers the entire life cycle of the system, from design to implementation, deployment, and evolution of its functionalities.

The definition provided by the AI Act, analyzed through the lens of the Guidelines, is articulated into seven constitutive elements which, considered together, determine whether a system qualifies as «artificial intelligence» under European law:

Systemic and computational nature: The AI Act requires that the system be designed to operate through integrated hardware-software architectures capable of complex computational processing. The mere presence of algorithms is not sufficient; rather, a structured system that enables data processing and the execution of algorithmic decisions according to advanced computational logic is necessary.

Graduated decision-making autonomy: The regulation encompasses various degrees of operational autonomy, from minimal levels to more sophisticated forms. The Guidelines clarify that the distinguishing element is the system’s ability to operate without constant and direct human control, selecting among multiple options without immediate human supervision of each individual choice.

Post-implementation adaptive capacity: A particularly qualifying element is the system’s potential to evolve after its initial implementation. The Guidelines specify that such adaptability can manifest in various forms, from supervised learning to the ability to modify its parameters in response to new data or environmental stimuli. It is noteworthy that this capability does not necessarily have to be activated; the mere technical predisposition for adaptation is sufficient.
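
To make this notion of a merely latent adaptive capacity concrete, the following minimal sketch (the author's illustration, not an example drawn from the Guidelines; the toy data and choice of model are assumptions) shows an estimator whose parameters can still be updated after initial deployment:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Initial training, before the system is placed on the market.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_train, y_train, classes=[0, 1])

# Post-deployment: the same estimator can ingest new observations and adjust
# its parameters incrementally. On the Guidelines' reading, the mere existence
# of this update path may amount to adaptiveness, whether or not it is ever used.
X_new = rng.normal(size=(10, 2))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
model.partial_fit(X_new, y_new)  # parameters change in response to new data
```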

Predetermined or emerging purposes: The Guidelines make a fundamental distinction between systems with explicitly coded objectives set by designers and systems capable of developing emerging purposes based on the operational context. This latter category raises complex interpretative questions, as it implies an ex-post evaluation of the system’s teleological nature, which is not immediately discernible from the initial technical specifications.

Sophisticated inferential mechanisms: Perhaps the most defining characteristic of AI is its ability to perform complex inferences that go beyond deterministic algorithmic logic. The Guidelines emphasize that this requirement differentiates AI systems from traditional software based on predefined rules. Inference presupposes the capability to derive conclusions or predictions that do not follow from the premises through simple logical operations, but rather emerge through generalization, analogy, or pattern recognition.
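
By way of illustration only (the credit-assessment scenario, thresholds, and data below are invented), the following sketch contrasts a fully predetermined rule with a model that infers its decision logic from examples:

```python
from sklearn.tree import DecisionTreeClassifier

def rule_based_check(income: float, debt: float) -> bool:
    # Traditional software: the decision logic is exhaustively written out by
    # the programmer; identical inputs always follow the same explicit branches.
    return income > 30_000 and debt / income < 0.4

# Inference in the AI Act's sense: the mapping from inputs to outputs is
# generalized from past examples rather than enumerated as rules.
X = [[45_000, 5_000], [20_000, 15_000], [60_000, 10_000], [25_000, 20_000]]
y = [1, 0, 1, 0]  # toy labels representing past decisions

learned_model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(learned_model.predict([[40_000, 8_000]]))  # conclusion reached by generalization
```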

Output with informational or decision-making value: The system must be capable of generating content, predictions, recommendations, or decisions that significantly impact human decision-making processes or digital or physical environments. The Guidelines clarify that the informational value of the output must go beyond mere reproduction or processing of pre-existing data, adding predictive, interpretative, or decision-making elements with potential practical impact.

Transformative interaction with the environment: The final element concerns the system’s ability to concretely modify the state of physical or virtual environments. This requirement implies that AI output can translate into concrete actions, such as activating physical devices, modifying operational parameters, or creating/altering digital content with tangible effects on users or the surrounding environment.

The Guidelines emphasize that these elements do not all need to be simultaneously present, but they must be evaluated as a whole to determine whether a system falls under the AI definition. This interpretative flexibility, while allowing the regulation to adapt to technological evolution, also introduces discretionary margins that could generate application uncertainties for industry operators.

Critical analysis and practical implications

The definition of artificial intelligence adopted in the AI Act is the result of a complex balance between regulatory needs and the necessity not to hinder technological innovation. Read in light of the interpretative Guidelines, it presents significant critical issues that deserve careful consideration, particularly as regards the practical implications for industry operators.

The definition adopted by the European legislator is characterized by a semantic breadth that, while ensuring the applicability of the regulation to future technologies (so-called «future-proofing»), also risks generating qualification uncertainties. The Guidelines attempt to narrow this breadth by clarifying that not all advanced software systems automatically fall within the scope of the AI Act. However, there remains a concrete risk of over-inclusion of systems that, despite having sophisticated computational capabilities, do not exhibit the decision-making and adaptive autonomy that should characterize true artificial intelligence.

In particular, the criteria related to graduated autonomy and adaptive capacity are defined in qualitative rather than quantitative terms, leaving operators with the burden of a discretionary evaluation that could lead to heterogeneous and conflicting approaches.

One of the most problematic aspects emerging from the analysis of the Guidelines concerns the distinction between AI systems and traditional software. While the differences appear clear in theory, in practice the distinction becomes blurred, especially for so-called «borderline systems» that exhibit hybrid characteristics.

The Guidelines attempt to provide distinguishing criteria based mainly on inferential capacity and decision-making autonomy, but rapid technological evolution is already putting these criteria under strain. Software implementing advanced optimization algorithms, or traditional expert systems, could fall into a gray area, creating uncertainty as to which rules apply.
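
As a hypothetical illustration of this gray area (the cost vector and constraints below are invented), a linear-programming routine is computationally sophisticated and appears to "choose" among options, yet its behaviour is fully determined by hand-written constraints rather than inferred from data:

```python
from scipy.optimize import linprog

# Minimize 2x + 3y subject to x + y >= 10 and x <= 8, with x, y >= 0.
# Every element of the "decision" is explicitly specified by the developer:
# the same inputs always yield the same output, with no learning involved.
result = linprog(c=[2, 3],
                 A_ub=[[-1, -1], [1, 0]],
                 b_ub=[-10, 8],
                 bounds=[(0, None), (0, None)])
print(result.x)  # deterministic optimum: [8., 2.]
```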

A particularly critical element is the absence of quantitative parameters for assessing the autonomy and adaptability required to qualify a system as AI. The Guidelines provide only qualitative indications, without identifying objective minimum thresholds.

This approach, while ensuring interpretative flexibility, also imposes complex evaluations on operators that may vary significantly depending on the application context and industry sector. The absence of quantitative benchmarks could result in heterogeneous assessments by regulatory authorities across different Member States, risking fragmentation of the digital single market.

The qualification of a system as «artificial intelligence» under the AI Act is not merely a theoretical exercise but determines the applicability of a complex system of obligations differentiated based on the associated risk level.

The Guidelines clarify that the definition of AI constitutes the logical and legal premise for applying the risk-based classification system provided by the AI Act. This classification, in turn, determines the compliance obligations for developers, providers, and users, with significant implications in terms of costs, responsibilities, and required investments.

The position of systems at the borderline of the definition appears particularly critical, as qualification uncertainty translates into uncertainty regarding regulatory requirements, with potential market and competition distortions.

Toward a dynamic interpretation of the AI definition

The definition of artificial intelligence introduced by the AI Act represents a balance between the need to regulate complex technological phenomena and the necessity not to hinder innovation. The European Commission’s Guidelines serve as an essential interpretative tool, providing operational criteria for identifying AI systems and distinguishing them from traditional software.

However, uncertainties remain, particularly concerning borderline systems and the qualitative assessment of decision-making autonomy and adaptability. Such uncertainties could lead to heterogeneous approaches in applying the regulation, potentially distorting the digital single market.

Constant monitoring of technological developments, and of how national competent authorities interpret the definition, is therefore desirable to ensure uniform application and legal certainty for all stakeholders involved in the development and use of artificial intelligence systems.

Thus, the definition of AI in the European legal framework is not a static concept but a dynamic legal notion, destined to evolve alongside technological progress and the practical applications that will emerge in the coming years.
