AGCOM Approves Guidelines and Code of Conduct for Influencers
Digital Compliance
By resolution approved on July 23, 2025, the Italian Communications Authority (“AGCOM”) adopted the Guidelines and the Code of Conduct for influencers, following a participatory process that began with a technical roundtable and a subsequent public consultation (see Resolution 472/24/CONS).
The new regulatory framework falls within the scope of the Consolidated Law on Audiovisual Media Services (known as “TUSMA”), extending obligations similar to those imposed on media service providers to influencers who meet certain relevance criteria (i.e., a minimum threshold of 500,000 followers or 1 million average monthly views on at least one platform). These individuals are expressly assigned editorial responsibility for the content they publish on social media and video-sharing platforms.
The Code of Conduct establishes rules to ensure (i) transparency in commercial communications (e.g., influencers must prevent the spread of fake news and provide accurate information), (ii) content recognizability, (iii) protection of minors, (iv) safeguarding of fundamental rights (e.g., prohibiting the publication of content that could harm personal dignity or amount to hate speech), and (v) respect for intellectual property. “Relevant” influencers will be listed in a public registry managed by AGCOM and must comply with the provisions within six months of the publication of the Code.
The measure introduces a monitoring and enforcement system, with administrative fines ranging from €250,000 to €600,000 in the event of violations. It also bans hidden advertising and mandates clear disclosure of promotional collaborations, in line with the principles of fairness and consumer protection.
AGCOM’s intervention marks a turning point in the regulatory framework for influencer marketing, introducing stringent obligations for influencers and relevant content creators, now equated to actual editorial entities. These operators are assigned direct responsibility for the content disseminated, aiming to enhance user protection and harmonize digital dynamics with standards already set for other audiovisual sector players.
For an in-depth analysis of the impacts of the new Code of Conduct, the Data & Technology Innovation Team at LEXIA is available for consultation.
AI Act: GPAI Code of Practice and EU Guidelines
Artificial Intelligence
On July 10, 2025, the GPAI Code of Practice (Code of Good Practices for General Artificial Intelligence Models) was adopted, aiming to guide providers of general AI systems – i.e., those designed for integration into a wide range of downstream applications and capable of performing multiple tasks – in complying with the obligations established in Chapter V of Regulation (EU) 2024/1689 (“AI Act”).
The Code of Practice is a voluntary soft-law instrument developed by independent experts through a multi-stakeholder process. It aims to support the general AI sector in ensuring compliance with the legal obligations set out in Articles 53 and 55 of the AI Act, particularly regarding safety, transparency, and copyright.
Targeted at providers of general AI models, the Code of Practice facilitates their compliance with the AI Act, reducing the legal and administrative burden of compliance activities and offering greater legal certainty than alternative, less clearly defined methods of demonstrating compliance.
The Code of Practice is structured into three chapters: Transparency, Copyright, and Safety and Security. The Transparency and Copyright chapters offer all providers of general AI models a way to demonstrate compliance with the obligations under Article 53 of the AI Act. The Safety and Security chapter applies only to the narrower subset of providers of general AI models posing systemic risk, as defined in Article 55 of the AI Act.
To date, according to the EU Commission, major international players such as Amazon, Anthropic, Fastweb, Google, IBM, Microsoft, and OpenAI, providers of the most widely used general artificial intelligence models, are among the signatories of the Code of Practice.
Supporting this tool, on July 18, the EU Commission approved, under Article 96 of the AI Act, guidelines for providers of general AI models, complementing the Code of Practice.
To assist all actors in the general AI model ecosystem, the EU Commission, within the guidelines, has provided interpretive guidance that clarifies:
- when a model is considered general-purpose AI and when it exceeds the threshold to be classified as a systemic risk model;
- criteria for identifying the “providers” of general-purpose AI models and the specific obligations they must comply with;
- exemptions for open-source AI model providers.
However, the Code of Practice and the new guidelines also raise questions about the effectiveness and practicality of the proposed regulatory tools and about their voluntary nature as a legal instrument.
The Data & Technology Innovation Team at LEXIA is available to assist providers and deployers of AI systems in progressively complying with the AI Act, offering tailored consultancy, operational support, and specialized training programs.
Digital Identity and Trust Services: What Changes with the New eIDAS Regulations
Digital Identity
On July 30, 2025, three implementing regulations were published in the Official Journal of the European Union, completing the implementation of eIDAS 2.0 and introducing technical references and operational procedures for digital identity, electronic signatures, and electronic seals. This represents an important step toward a safer, interoperable, and reliable European digital ecosystem, thanks to the integration of common ETSI standards and binding technical requirements for qualified providers.
The regulations define, in particular, the reference standards for the verification of digital attributes, security rules for remote electronic signature and seal devices, as well as procedures for qualified electronic attestation of attributes, which will become a key element in future European digital wallets.
For trust service providers, this involves the obligation to comply with the new specifications by the transition deadline set for August 2027, updating infrastructures, processes, and documentation.
This update does not only concern the European framework: the introduced changes intersect with national regulations contained in the Digital Administration Code (CAD), which has regulated trust services in Italy for years, from digital archiving to identity management. The CAD will need to be harmonized with the new EU standards, avoiding overlaps and inconsistencies. Specifically, interventions will be required on digital preservation rules, electronic register management systems, and identification mechanisms to ensure full compatibility with the ETSI technical standards referenced in the European regulations.
This convergence process will require coordinated effort: on one side, the regulatory update at the national level, with the intervention of AgID and the awaited reform of the CAD; on the other side, concrete actions by qualified providers and operators, who will need to carry out technical audits, revise policies and contracts, and plan a gradual adjustment path to the new rules.
The Data & Technology Innovation Team at LEXIA is available to support businesses and trust service providers in analyzing the impacts in relation to the CAD, evaluating ETSI requirements, and defining technical and regulatory adjustment plans consistent with the new eIDAS framework.
Privacy and AI: Training is No Longer Optional (and Why Ignoring It Can Be Costly)
Compliance
The growing complexity of the regulatory context regarding personal data protection and artificial intelligence is making compliance with training obligations ever more central for businesses, public entities, and individuals who process personal data or employ automated technologies.
Regarding data protection, Article 29 of Regulation (EU) 2016/679 (GDPR) requires that anyone acting under the authority of the data controller or processor must be trained on the proper ways to process personal data. Additionally, the accountability principle (Article 5(2) and Article 24 GDPR) mandates that the data controller demonstrate the adoption of appropriate technical and organizational measures, including essential staff training activities.
On the AI front, Regulation (EU) 2024/1689 (AI Act) introduces new training obligations for providers, distributors, and deployers of AI systems, in particular:
(i) Article 4 requires operators to ensure that personnel involved in managing AI systems receive adequate training on their functioning, risks, and limitations;
(ii) For high-risk systems, Article 26 imposes specific training measures for deployers, including understanding the context in which the systems are used, the data processed, and the potential implications for fundamental rights;
(iii) Regarding general artificial intelligence models, the EU Commission’s guidelines (July 2025) emphasize the importance of adequately training technical and legal teams on the governance of the AI model lifecycle, even when adopting open-source or pre-trained systems.
This is not merely a recommendation, but a genuine legal obligation, non-compliance with which can lead to sanctions and liability in cases of data breaches, misuse of technology, or failure to meet transparency and control requirements.
The Data & Technology Innovation Team at LEXIA designs and implements customized training programs on personal data protection and artificial intelligence, targeting DPOs, legal teams, HR, IT, marketing, and business management, in line with national and EU standards.
Contact us to build a training path that is not merely a compliance requirement but a genuine risk management and value-creation tool.