Data & Technology Innovation | November 2025 Insight

Contents

Generative AI: the EDPS sets the rules for compliant and responsible use

Data protection & artificial intelligence

On October 28, 2025, the European Data Protection Supervisor (EDPS) published an updated version of its guidelines on the use of generative artificial intelligence (GenAI) by European institutions. The document serves as a reference not only for EU bodies but also for all public and private organizations wishing to integrate GenAI systems in compliance with European data protection regulations.

The guidelines – significantly strengthened compared to the previous version from July 2023 – provide a clear framework of principles, precautions, and requirements to follow when designing and using generative AI tools, particularly when they process personal data or could potentially impact the rights and freedoms of individuals.

Key aspects include:

  • the obligation for a prior impact assessment (DPIA), especially when the system’s output may lead to decisions or profiling;
  • the adoption of the «data protection by design and by default» principle, starting from the development and training phase of the model;
  • attention to risks related to algorithmic hallucinations, lack of transparency in the output, and the use of incorrect or outdated data;
  • the importance of documenting the purposes, sources, prompts, datasets, and methods of interaction with the systems to ensure verifiability and accountability;
  • the need to avoid using GenAI for surveillance, manipulation, or high-impact automated decisions (such as hiring, firing…), which are considered highly intrusive.

The EDPS also emphasizes that the adoption of GenAI solutions requires an integrated approach, where technological, organizational, and legal aspects are jointly evaluated. Particular attention is given to the internal use of models, which should include: controlled access, training based on solid legal foundations, anonymization measures, and periodic reviews.

In this context, it is increasingly crucial for organizations to adopt internal policies on the use of generative AI, clearly defining roles, responsibilities, usage limits, and operational precautions. Only through well-structured internal rules – shared and updated – can organizations ensure the safe, compliant, and transparent use of GenAI solutions, preventing technological or legal pitfalls.

The Data & Technology Innovation team at LEXIA is available to support companies and organizations in defining internal policies and frameworks for the use of generative AI, integrating legal, ethical, and operational aspects into a tailored, compliance-oriented approach.

Video surveillance: the Data Protection Authority reiterates the obligation to comply with GDPR information requirements and workers’ protection regulations

Data protection

The Italian Data Protection Authority continues to closely monitor the use of video surveillance systems, particularly where the protection of company assets intersects with workers’ rights.

In its recent decision No. 493, dated September 11, 2025, the Authority fined a business in the food and beverage sector, reaffirming the fundamental principles of lawfulness, fairness, and transparency imposed by Article 5 of Regulation (EU) 2016/679 («GDPR») and labor law, particularly Article 4 of Law 300/1970 («Workers’ Statute»).

During its investigation, triggered by a report from the local police, the Authority confirmed the presence of an active video surveillance system in the commercial establishment. The processing of personal data in this case did not involve video recording, but only the collection and real-time remote viewing of images via a smartphone. The processing was nonetheless found unlawful on two distinct grounds.

Firstly, the Authority identified a violation of personal data protection regulations due to the absence of adequate informational signage (first-level notice) and extended privacy notices (second-level notice). It stressed that even the mere remote viewing of images of customers or employees in the commercial establishment constitutes processing of personal data, and therefore recalled the principles of lawfulness, fairness, and transparency set forth in Article 5 of the GDPR, as well as the information obligations towards data subjects under Article 13 of the GDPR.

The second ground of unlawfulness concerned labor law. Under Article 4 of the Workers’ Statute, the installation and use of video surveillance systems are subject to a teleological limit (such systems may be used solely for organizational and production purposes, workplace safety, and the protection of company assets) and a procedural limit (prior agreement with union representatives or, failing that, authorization from the competent Labor Inspectorate). The Authority’s investigation, however, revealed that the video surveillance system was primarily aimed at «monitoring employees working in food and beverage service». The Authority held that this processing was in direct conflict with Article 4 of the Workers’ Statute and therefore constituted unlawful processing under Article 88 of the GDPR and Article 114 of Legislative Decree 196/2003 («Privacy Code»).

The commercial establishment was therefore sanctioned by both the Labor Inspectorate and the Data Protection Authority (the outcome could have been much more severe, considering that, under Article 171 of the Privacy Code, a violation of Article 4 of the Workers’ Statute can also be a criminal offense, punishable under Article 38 of the Workers’ Statute). This decision further underscores the importance of strictly adhering to the labor law safeguards for the installation and use of systems that could involve remote monitoring of employees, as well as the need to comply with personal data protection regulations.

For further information on privacy and labor law compliance related to video surveillance systems and their implementation in accordance with regulatory frameworks, the Data & Technology Innovation team at LEXIA is available to provide analysis and specialist support.

Deepfake and AI: a new crime is introduced in the criminal code

Artificial intelligence

With the entry into force of Law No. 132 of September 23, 2025, Italy became the first EU Member State to adopt a comprehensive national framework on artificial intelligence. The text, published in the Official Gazette on September 25, 2025, aligns with Regulation (EU) 2024/1689 («AI Act») and contains provisions and delegations to the Government to regulate the use of AI systems in strategic sectors such as healthcare, public administration, labor, justice, and security.

Among the most significant innovations is the introduction into the criminal code of a new offense concerning the unlawful dissemination of deepfakes. Article 612-quater punishes, with imprisonment from one to five years, anyone who disseminates, without consent, AI-generated or AI-altered images, videos, or audio that appear authentic and are capable of deceiving. This is the first explicit criminalization of the deepfake phenomenon in Italy. The offense is prosecutable upon complaint by the victim, unless the victim is a minor or incapacitated, or the defamatory content concerns a public authority: in such cases, prosecution is initiated ex officio.

The legislator has also provided for:

  • a new aggravating circumstance under Article 61 of the Italian Penal Code, applicable when a crime is committed using AI tools that make the act more insidious, hinder its repression, or worsen its consequences;
  • a specific aggravating circumstance for the crime of attacks on citizens’ political rights (Article 294 of the Penal Code): if committed using AI systems to influence voting or other political rights, the penalty rises to imprisonment from two to six years.

This legislative intervention comes at a time when the misuse of generative AI is becoming a global issue: according to the latest international data, fraud and disinformation cases using deepfakes have increased tenfold between 2022 and 2023, with potentially devastating impacts on reputation, democracy, and security.

However, the effectiveness of the reform will depend on the legislature’s ability to translate the delegations into concrete implementing measures capable of regulating a continuously evolving technological landscape in an agile yet effective manner. Operational responsibility will also fall on companies, professionals, and administrations, who will need to update policies, control systems, and compliance frameworks.

The Data & Technology Innovation team at LEXIA is available to support organizations in complying with the new obligations introduced by Law No. 132/2025, providing assistance in drafting internal policies, managing risks associated with content obtained through AI-generated processes, and implementing governance models for the ethical, compliant, and secure use of artificial intelligence.

The EDPB and the interaction between DSA and GDPR: public consultation concluded

Data protection

On October 31, 2025, the public consultation launched by the European Data Protection Board (“EDPB”) on guidelines 3/2025 (the “Guidelines”) regarding the interaction between Regulation (EU) 2022/2065 (“DSA”) and Regulation (EU) 2016/679 (“GDPR”) concluded. The Guidelines, adopted on September 11, 2025, aim to ensure a consistent and harmonized application of both regulations, clarifying that the DSA does not override the GDPR but rather complements its application.

Below are the key points of contact between the two regulations addressed in the Guidelines:

  • Systemic risks: Very Large Online Platforms (“VLOP”) and Very Large Online Search Engines (“VLOSE”), as defined by the DSA, must assess the impact on fundamental rights. The presence of risks to such rights makes it likely that a DPIA (Data Protection Impact Assessment) is required under Article 35 of the GDPR.
  • Voluntary investigations for detecting illegal content (Art. 7 DSA): the EDPB identifies legitimate interest (Article 6(1)(f) GDPR) as the most suitable legal basis for processing personal data, also requiring the necessity of the processing and a positive balance with the rights and freedoms of the data subjects.
  • Deceptive design patterns (Art. 25 DSA): although the DSA’s ban on using deceptive design patterns (also known as “dark patterns”) does not apply to practices already covered by the GDPR, the EDPB considers such patterns misleading to users, as they contradict the principles of lawfulness, fairness, and transparency under Article 5(1)(a) GDPR.
  • Advertising and special categories of personal data: the EDPB reiterates the absolute ban on online platforms presenting targeted advertising based on special categories of personal data (Art. 26(3) DSA), as defined by the GDPR.
  • Recommender systems and minors: VLOP and VLOSE must always offer at least one non-profiled recommendation option, avoiding mechanisms that push users to choose the more invasive option. For minors (Art. 28 DSA), age verification must be necessary and proportionate, excluding solutions that enable unique identification or permanent storage of age/age group.

The EDPB will review the submitted observations and assess whether any additions or changes to the Guidelines are necessary.

The Data & Technology Innovation Team at LEXIA is available to provide further insights into the Guidelines and the related obligations, including governance and accountability, DPIAs on systemic risks, legal bases for voluntary investigations, advertising and profiling, compliant design, and age verification solutions.

The new EU rules on political advertising come into effect on October 15

Marketing & Communication

As of October 15, 2025, Regulation (EU) 2024/900 applies, introducing new rules to make paid political advertising more transparent, fair, and secure from external interference. The goal is clear: to ensure that European citizens can make informed electoral choices, with clear information about the origin, purpose, and dissemination of political messages.

The European legislator intends to strengthen trust in the democratic debate accompanying political elections, combating information manipulation and opaque micro-targeting practices, particularly those based on the misuse of personal data. In this sense, the Regulation builds on the regulatory framework already outlined by Regulation (EU) 2016/679 («GDPR») and Regulation (EU) 2022/2065 («Digital Services Act»), sharing the goal of protecting the fundamental right to personal data protection and the freedom of opinion and expression.

The scope of the Regulation is broad and covers any form of political advertising made by, for, or on behalf of a political entity, as well as any communication intended to influence the outcome of an election, referendum, or legislative or regulatory process, whether at the European, national, regional, or local level. Communications that come from or refer to political actors but do not aim to influence voting or decision-making processes are not considered political advertising. Excluded from the scope of the new rules are, in particular:

  • private or commercial communications;
  • messages issued by competent authorities regarding the organization of elections or referenda;
  • institutional information disseminated by the European Union or national authorities in the exercise of their functions;
  • the mere presentation of candidates in public or media contexts compliant with sectoral regulations;
  • personal expressions of opinion;
  • content transmitted under the editorial responsibility of media outlets (such as interviews, debates, or political commentary), where no specific payment is made for its dissemination.

One of the key elements of the new regulatory framework is transparency: every political advertisement must be clearly identifiable as such and accompanied by a notice indicating the responsible party for payment, the political event it refers to, any use of targeting or algorithmic distribution techniques, and other relevant information such as total amounts spent and the source of the funds. This information must be easily accessible via an online European archive, which will allow for systematic monitoring of spending and the origin of digital political messages.

The regulation on targeting and personalized distribution of political advertisements is particularly stringent, as the use of personal data will only be permitted if the data is collected directly from the data subject and with explicit, separate consent for the purpose of political advertising. Furthermore, to protect the integrity of the electoral process, a ban on political advertising from sponsors based outside the Union has been introduced in the three months preceding elections or referenda.

Advertising operators, digital platforms, and political actors are now called to assess the impact of the new provisions on their communication and data management processes, as well as to implement compliance mechanisms to ensure adherence to the transparency, responsibility, and accountability obligations established by the Regulation. The Data & Technology Innovation Team at LEXIA is available to provide support.


