The relationship between artificial intelligence and personal data protection

Artificial Intelligence (AI) has profoundly transformed various sectors, offering unprecedented opportunities. However, the integration of AI systems raises complex issues regarding the protection of personal data. In Europe, the General Data Protection Regulation (GDPR) and the recent AI Act establish the regulatory framework to ensure that technological innovation respects individuals’ fundamental rights.

REGULATORY FRAMEWORK

The advent and widespread adoption of AI systems in contemporary economic and social contexts have significantly transformed the methods of collecting, processing, and using personal data. These changes have posed considerable regulatory challenges, prompting the European legislator to adopt a multi-layered regulatory approach. This approach integrates the GDPR with the newly introduced AI Act, recently approved by the European Parliament as part of the EU’s broader digital strategy.

This strategy, rooted in the EU’s commitment to fostering technological innovation while safeguarding fundamental rights, is structured through a comprehensive system of interconnected rules. These include not only the GDPR and AI Act but also the Data Governance Act, Data Act, Digital Services Act, and Digital Markets Act. Together, they create an integrated legal framework aimed at establishing a unified digital market in Europe, characterized by high standards for protecting individual rights.

Within this intricate regulatory context, the interaction between the GDPR and AI Act is particularly significant. AI systems inherently require the processing of vast amounts of personal data, both for training algorithms and their operational use. The complexity of this interaction is evident when analyzing how these two frameworks integrate and complement one another. While there are elements of potential overlap, their overall regulatory approaches are characterized by substantive complementarity.

RISK-BASED APPROACH AND PREVENTIVE ASSESSMENT MECHANISMS

Both the GDPR and the AI Act are rooted in a risk-based regulatory approach; while the two methodologies share similarities, they are distinct in their specific applications.

Under the GDPR, obligations are scaled according to the level of risk associated with data processing activities. Data controllers are required to assess the impact of processing operations on individuals’ rights and freedoms using Data Protection Impact Assessments (DPIAs).

Conversely, the AI Act establishes a four-tier risk classification for AI systems (unacceptable, high, limited, and minimal risk), each corresponding to a different set of obligations and responsibilities. The Act also introduces the Fundamental Rights Impact Assessment (FRIA), a preventive assessment tool that, although resembling the DPIA in structure, differs in scope and purpose. While the DPIA focuses specifically on privacy-related risks, the FRIA adopts a broader perspective, evaluating the potential impact of AI systems on a wide range of fundamental rights protected under European law.
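As a purely illustrative sketch (not part of the Act’s text), the four-tier classification can be modeled as a mapping from risk tier to the kind of obligations the Act attaches to each. The tier names follow the Act; the obligation summaries below are paraphrased and deliberately non-exhaustive.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-tier risk classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strictest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Paraphrased, illustrative summary of obligations per tier
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from being placed on the market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "human oversight",
        "fundamental rights impact assessment (FRIA) for certain deployers",
    ],
    RiskTier.LIMITED: ["transparency duties (e.g. disclosing AI interaction)"],
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (paraphrased) obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the scaling itself: obligations grow with the tier, so classifying a system correctly is the first compliance step.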

This apparent overlap between the DPIA and FRIA reflects the EU legislator’s deliberate choice to create a system of complementary and synergistic protections. The DPIA addresses privacy-specific risks, such as data breaches, proportionality of data collection, or adequacy of security measures, while the FRIA considers broader rights, including non-discrimination, freedom of expression, human dignity, and children’s rights. Together, these assessments ensure that diverse risks are captured, acknowledging that an AI system could be privacy-compliant but still discriminatory or vice versa.

The EU legislator has also established mechanisms to coordinate the two assessments. For high-risk AI systems involving personal data processing, the FRIA may incorporate elements from the DPIA to avoid unnecessary duplication. This integrated approach optimizes compliance efforts while ensuring a comprehensive evaluation of risks across multiple dimensions of protection.

IMPLEMENTATION CHALLENGES IN THE INTEGRATED MANAGEMENT OF GDPR AND AI ACT

The coordinated implementation of the GDPR and AI Act presents significant operational challenges, particularly concerning the principle of transparency. While the GDPR mandates full and detailed transparency regarding data processing activities, the AI Act must address the technical limitations of AI systems, particularly those based on deep neural networks, whose «black-box» nature can complicate the explainability of decision-making processes.

This tension is especially pronounced in automated decision-making processes. Article 22 of the GDPR grants individuals the right not to be subject to decisions based solely on automated processing with significant legal effects. This must be interpreted alongside the AI Act’s requirements for human oversight in high-risk AI systems. Developing integrated operational procedures to meet both sets of requirements is crucial.

Moreover, implementing data subjects’ rights under the GDPR in the context of AI systems presents additional challenges, particularly regarding the rights to data erasure and rectification. The nature of machine learning algorithms, which embed training data in ways that are not easily reversible, poses technical difficulties in fulfilling these rights. This requires innovative solutions that balance the protection of individuals’ rights with the inherent characteristics of AI systems.

GOVERNANCE OF AI SYSTEMS: TOWARD AN INTEGRATED SUPERVISION MODEL

The complexity of interactions between personal data protection and AI regulation highlights the need for a governance system capable of ensuring effective supervision of both frameworks. The AI Act introduces a multi-level governance model, establishing new supervisory bodies at the European level, such as the European Artificial Intelligence Board, while assigning specific responsibilities to national authorities. This model intersects with the existing GDPR framework, which relies on the European Data Protection Board (EDPB) and national data protection authorities.

The primary challenge lies in developing harmonized interpretations of the two regulations to avoid inefficient overlaps and ensure comprehensive protection of fundamental rights. The adoption of interpretative guidelines by competent authorities will play a pivotal role, particularly regarding the integration of DPIA and FRIA, management of data subjects’ rights in AI contexts, and implementation of transparency and explainability requirements.

The designation of national authorities responsible for enforcing the AI Act is a critical decision that will impact the governance system’s effectiveness. The choice between assigning these responsibilities to existing data protection authorities, which have significant experience with complex technological issues, or creating new AI-specific authorities will influence the overall success of the governance model.

CONCLUSION AND FUTURE PERSPECTIVES

Analyzing the interactions between the GDPR and AI Act highlights the practical challenges organizations will face in the coming years. Companies developing or using AI systems will need to implement new compliance processes that address both regulations. For instance, a company deploying AI-based recruitment software must ensure GDPR compliance for processing candidates’ data while meeting the AI Act’s requirements for algorithmic transparency and non-discrimination. This necessitates specialized teams with expertise in privacy, AI, and regulatory compliance.

Operational challenges will extend to documentation and procedures. Organizations must redesign their risk assessment models, integrating DPIA and FRIA into a unified, efficient process. Templates and workflows must be updated to address both privacy and AI-specific aspects, such as algorithm robustness and potential biases. Continuous monitoring systems will also be essential to maintain compliance over time, given the dynamic risk management approaches required by both regulations.
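To make the idea of a unified DPIA/FRIA process concrete, here is a minimal sketch of what an integrated assessment record might look like. All field names and the example risks are hypothetical; neither regulation prescribes this structure.

```python
from dataclasses import dataclass, field

@dataclass
class IntegratedAssessment:
    """Hypothetical unified record combining DPIA and FRIA dimensions."""
    project: str
    # DPIA-style privacy risks (GDPR Art. 35 territory)
    privacy_risks: list[str] = field(default_factory=list)
    # FRIA-style fundamental-rights risks (AI Act territory)
    rights_risks: list[str] = field(default_factory=list)
    # Risk -> documented mitigation
    mitigations: dict[str, str] = field(default_factory=dict)

    def open_items(self) -> list[str]:
        """Risks identified in either assessment that still lack a mitigation."""
        return [r for r in self.privacy_risks + self.rights_risks
                if r not in self.mitigations]

assessment = IntegratedAssessment(
    project="AI recruitment tool",
    privacy_risks=["excessive candidate data retention"],
    rights_risks=["gender bias in ranking model"],
    mitigations={"excessive candidate data retention": "retention limited to 6 months"},
)
print(assessment.open_items())  # → ['gender bias in ranking model']
```

The sketch captures the article’s central point: a single record tracks both privacy and fundamental-rights risks, so a system can be flagged as unresolved on one dimension even when the other is fully mitigated.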

The ability to navigate these regulatory requirements effectively will depend on developing practical procedures and operational tools, such as integrated checklists for project evaluations, collaborative workflows involving all relevant functions, and advanced logging and documentation systems. By adopting this pragmatic approach, organizations can turn regulatory challenges into opportunities for responsible innovation.
