Generative artificial intelligence has now reached a level of maturity that requires a specific and comprehensive regulatory framework. In this context, the European Union has developed a pioneering approach through the adoption of the Code of Conduct for general-purpose AI (“GPAI”) models (the “Code”), a legal instrument that forms part of the broader framework established by Regulation (EU) 2024/1689 (the “AI Act”) and that constitutes a unique development in the international regulatory landscape.
In addition, the new Guidelines issued by the European Commission (C(2025) 5045 final, published on 18 July 2025) clarify the scope of application and the content of the obligations for GPAI providers set out in the AI Act (the “Guidelines”), offering interpretative support that complements and strengthens the provisions of the Code.
This paper analyses the legal implications of the Code of Conduct for GPAI models, focusing on transparency, safety, and copyright, with particular attention to systemic risk management, which is the key mechanism for demonstrating compliance with the obligations under the AI Act.
Regulatory framework
GPAI models are subject to specific obligations set out in Articles 53 and 55 of the AI Act: Article 53 establishes obligations for all providers of GPAI models, while Article 55 introduces additional requirements for models presenting “systemic risks” (defined in Article 3(65) as specific risks associated with high-impact capabilities that have significant effects on the Union market).
The Code is not a mere sectoral self-regulation initiative but rather the soft-law instrument expressly envisaged and encouraged by Article 56, which governs the “development, adoption and approval” of codes of conduct aimed at ensuring the proper application of the obligations imposed on GPAI providers (Arts. 53 and 55).
Adherence to the Code is formally voluntary; however, Article 56(1) clarifies that it constitutes the “main instrument” through which providers may demonstrate their compliance. A provider that does not adhere must instead prove, “through other suitable means,” full compliance with all obligations, a path that carries a clear evidentiary and reputational cost.
The Guidelines highlight that the Code represents the preferred instrument for demonstrating compliance under Article 56, whereas in the absence of adherence, providers must demonstrate compliance with the obligations through “other suitable means” (Guidelines, Section 5.1).
Article 56(2) identifies four areas that a code must mandatorily cover:
maintenance and updating of technical documentation (Art. 53(1)(a)-(b));
definition of the appropriate level of detail for the training-data summary (Art. 53(1)(d));
identification of the type, nature and sources of systemic risks of GPAI at Union level;
proportionate assessment and mitigation procedures for such risks.
The Code duly covers these four areas through its chapters on “Transparency,” “Copyright” and “Safety and Security.”
The Transparency Chapter: Documentation and Accountability
Documentation obligations
The Transparency Chapter of the Code represents the operational implementation of the obligations set out in Article 53(1)(a) and (b) of the AI Act: it consists of a set of commitments—referred to as Measures—that GPAI providers adhering to the Code undertake to ensure that the technical documentation of the models is not only complete and up-to-date but also accessible to relevant stakeholders.
These Measures are not mere formal requirements: they constitute a genuine accountability framework, aimed at ensuring that all actors in the value chain, from supervisory authorities to downstream providers, have the information they need to integrate, assess, and monitor GPAI in accordance with European standards on safety and fundamental rights protection.
This documentation is intended for three categories of recipients:
AI Office (AIO): recipient of the most detailed information, provided upon request;
National Competent Authorities (NCAs): recipients of specific information necessary for the exercise of supervisory functions;
Downstream Providers (DPs): recipients of the information required for the integration of models into their AI systems.
The Guidelines reinforce this interpretation, clarifying that the technical documentation must be kept up to date and made available throughout the entire lifecycle of the model, with differentiated modalities for the AI Office, national authorities, and downstream providers (Guidelines, Section 2.2).
This tripartite structure reflects the principle of proportionality: the information proactively provided to downstream providers is general in nature and functional for the integration of the model, whereas information intended for authorities is provided only in response to formalized requests, which must specify the legal basis and purpose of the processing.
On this point, it is worth noting that one of the central challenges of the Transparency Chapter is the protection of trade secrets and confidential information: Article 78 of the AI Act requires recipients of information (AIO, NCAs, DPs) to respect the confidentiality of the data received and to adopt appropriate cybersecurity measures to protect its confidentiality.
Measure 1.1: preparation and updating of documentation
The first Measure requires signatories to prepare, at the time the model is placed on the market, comprehensive technical documentation covering every significant aspect of the GPAI. This documentation, compiled in the Model Documentation Form, includes information on the provider’s identity, the architectural characteristics of the model, technical and design specifications, training processes (including details on the methodologies used and design logics), the data processed (types, sources, curation methodologies, measures for detecting bias and inappropriate content), and finally, energy and computational consumption.
The dynamic nature of this Measure requires that the documentation be continuously updated to reflect any changes or updates to the model, with an obligation to retain previous versions for a period of ten years.
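By way of illustration only, the following minimal Python sketch shows how a provider might maintain a versioned, machine-readable counterpart of the Model Documentation Form. The class name, field set and retention logic are assumptions made for this example and do not reproduce the Form actually defined by the Code.

```python
# Illustrative sketch only: a hypothetical, machine-readable counterpart of the
# Model Documentation Form. The class name and field set are assumptions made
# for this example; the actual Form is defined by the Code, not by this code.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDocumentationRecord:
    provider: str                      # provider's identity
    model_name: str
    model_version: str
    architecture: str                  # architectural characteristics
    training_methodology: str          # training processes and design logic
    data_sources: list[str]            # types and sources of data processed
    bias_mitigation_measures: list[str]
    energy_consumption_mwh: float      # energy consumption
    training_compute_flop: float       # computational consumption
    last_updated: date = field(default_factory=date.today)

# Measure 1.1 requires the documentation to be kept up to date and prior
# versions to be retained for ten years: each update appends a new record.
documentation_history: list[ModelDocumentationRecord] = []
```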
Measure 1.2: communication of information to stakeholders
The second Measure concerns making the information contained in the Model Documentation Form available to three distinct recipients:
The AI Office and National Competent Authorities, which may request access to detailed information to exercise their supervisory functions, always in compliance with the principles of necessity and proportionality;
Downstream providers, who must be able to access the information necessary to understand the capabilities and limitations of the model in order to responsibly integrate it into their AI systems.
This communication must take place within a reasonable timeframe and in any event no later than 14 days for requests from downstream providers, except in exceptional circumstances.
Measure 1.3: quality, integrity and security of documentation
The third Measure requires signatories to ensure that the documented information is not only accurate but also protected against unintentional alterations and unauthorized access. To this end, they are encouraged to adopt quality protocols and established technical standards, thereby reinforcing trust in the robustness and integrity of the shared data.
Balancing transparency and trade secrets
The Transparency Chapter addresses one of the most delicate tensions for GPAI providers: the need to ensure a high level of transparency towards authorities and commercial partners without compromising the confidentiality of strategic information. Article 78 of the AI Act imposes strict confidentiality obligations on recipients of such data and requires them to implement appropriate cybersecurity measures to protect intellectual property rights and trade secrets.
The Code adopts a modular and calibrated approach: the information to be proactively provided to downstream providers is limited to what is strictly necessary to enable the safe and compliant integration of the model; the most sensitive information intended for the AI Office and NCAs is instead communicated only upon a justified request and limited to what is strictly necessary for the exercise of supervisory functions. This system reflects the European Union’s aim to create a dynamic balance between transparency and the protection of industrial competitiveness, while promoting a more trustworthy and responsible AI ecosystem.
The Copyright Chapter: policies, safeguards and liability
The obligation to adopt a copyright policy
The Copyright Chapter of the Code represents the operational response to Article 53(1)(c) of the AI Act, which requires providers of GPAI models placed on the Union market to adopt a policy to ensure compliance with EU copyright and related rights law. This obligation arises from the need to ensure that AI models are not trained on protected content in violation of applicable law and that the outputs generated do not, in turn, result in acts of infringement.
The copyright policy set out in the Code, in line with Article 53(1)(c) and the Guidelines (Section 2.2), goes beyond abstract principles and defines concrete actions that providers must undertake to ensure copyright compliance throughout the entire lifecycle of the model.
The measures of the Copyright Chapter: a due diligence framework
Measure 1.1 – Draft, update, and implement a copyright policy
Signatories commit to developing and maintaining an up-to-date corporate copyright policy that governs how GPAI models are trained and used in compliance with copyright laws. This policy must define internal responsibilities for its implementation and provide for verification and monitoring mechanisms, thereby becoming a key element of internal intellectual property governance. In addition, providers are encouraged to publish a summary of their policy to enhance transparency towards external stakeholders.
Measure 1.2 – Lawful access to protected content
The Code requires providers to ensure that the extraction of data and content from the web through crawling takes place only with respect to materials to which lawful access has been obtained. This entails a prohibition on circumventing effective technological measures (e.g., paywalls or subscription models) and an obligation to exclude from crawling processes websites identified by authorities as repeatedly infringing copyright on a commercial scale.
Measure 1.3 – Identification and respect of reservations of rights
In line with Article 4(3) of Directive (EU) 2019/790, signatories must adopt state-of-the-art technologies—including machine-readable solutions such as the robots.txt protocol—to identify and exclude content where rights holders have expressed reservations regarding its use for text and data mining purposes. This measure underscores the principle that respecting reservations of rights is an essential element of the lawfulness of the training process.
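As a purely illustrative example of what a machine-readable check can look like in practice, the following Python sketch verifies a site's robots.txt before fetching a page. The user-agent string and URLs are hypothetical; a real training pipeline would also need to honour other rights-reservation mechanisms and the exclusion lists of infringing domains referred to in Measure 1.2.

```python
# Minimal sketch, for illustration only: checking a site's robots.txt before
# crawling a page. "ExampleGPAIBot" and the URLs are hypothetical; a real
# pipeline would also honour other machine-readable rights reservations and
# exclusion lists of infringing domains.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleGPAIBot"  # hypothetical crawler identifier

def may_crawl(page_url: str, robots_url: str) -> bool:
    """Return True only if the site's robots.txt allows this user agent to fetch the page."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(USER_AGENT, page_url)

if __name__ == "__main__":
    page = "https://example.com/articles/some-page.html"
    robots = "https://example.com/robots.txt"
    if may_crawl(page, robots):
        print("Allowed: the page may be fetched for text and data mining")
    else:
        print("Disallowed: the rights reservation must be respected; skip the page")
```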
Measure 1.4 – Prevention of infringements in generated outputs
Equally important is the measure requiring providers to prevent, through technical safeguards, the generation of outputs by GPAI models that unlawfully reproduce protected content. In addition, there is an obligation to include in the model’s terms of use, or in accompanying documentation for open-source models, an explicit prohibition on uses that infringe copyright.
Measure 1.5 – Points of contact and complaint handling
Finally, signatories must designate an electronic point of contact for rights holders and establish a mechanism for receiving and handling complaints relating to alleged infringements. This measure aims to ensure a direct dialogue channel with affected parties and to strengthen providers’ accountability towards the creative ecosystem.
The Guidelines further clarify that providers of open-source GPAI models may benefit from an exemption from the documentation obligations of Article 53(1)(a) and (b) only if they meet specific conditions (no monetization, public availability of the model parameters and architecture, and release under a free and open-source license), whereas the copyright policy and the training-data summary remain due in any event (Guidelines, Section 4).
Template for the training data summary
In parallel with the Code, the AI Office is developing a standardized template for the training data summary that providers must make publicly available pursuant to Article 53(1)(d) of the AI Act. This template is structured into three main sections, illustrated in the sketch that follows the list:
1. General information: identification of the model and the provider, dates of placement on the market, general characteristics of the training data;
2. List of data sources: detailed categorization of sources (public datasets, third-party data, crawled data, synthetic data) with specific dimensional and temporal indications;
3. Relevant aspects of data processing: measures to ensure copyright compliance, removal of undesirable content, and other relevant processing aspects.
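The sketch below renders the three sections as a machine-readable Python structure. It is purely hypothetical: the field names and values are invented for illustration and do not reproduce the AI Office template.

```python
# Purely hypothetical sketch of a machine-readable draft of the public
# training-data summary (Art. 53(1)(d)). Field names and values are invented
# for illustration and do not reproduce the AI Office template.
training_data_summary = {
    "general_information": {
        "model_name": "example-gpai-1",            # hypothetical model
        "provider": "Example Provider Ltd.",       # hypothetical provider
        "placed_on_market": "2025-09-01",
        "overall_data_characteristics": "multilingual web text, code, licensed corpora",
    },
    "data_sources": [
        {"type": "public_dataset", "name": "example-open-corpus", "approx_tokens": 2e12},
        {"type": "third_party_licensed", "name": "example-news-archive", "period": "2000-2024"},
        {"type": "crawled_data", "crawler": "ExampleGPAIBot", "period": "2023-2025"},
        {"type": "synthetic_data", "generator": "earlier in-house model"},
    ],
    "data_processing": {
        "copyright_compliance": ["robots.txt honoured", "TDM rights reservations excluded"],
        "undesirable_content_removal": ["illegal-content filters", "deduplication"],
    },
}
```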
The Safety and Security Chapter: future-proof systemic risk governance
An integrated framework for GPAI with systemic risk
The Safety and Security Chapter of the Code of Conduct provides the practical implementation of Article 55 of the AI Act, which requires providers of GPAI models presenting systemic risk to establish technical and organizational governance capable of monitoring, assessing, and mitigating risks throughout the entire lifecycle of the model.
The Guidelines specify that providers must promptly notify the Commission when a model's cumulative training compute reaches the threshold of 10²⁵ FLOP, unless they can demonstrate that the model nevertheless does not present systemic risks (Guidelines, Section 2.3.2). Risk assessment and mitigation must be carried out continuously throughout the entire lifecycle of the model (Guidelines, Section 2.2).
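For orientation only, the following sketch applies the common “6 × parameters × training tokens” rule of thumb for estimating training compute against the 10²⁵ FLOP threshold. Both the approximation and the figures used are assumptions made for illustration; they are not the estimation method prescribed by the AI Act or the Guidelines.

```python
# Back-of-the-envelope sketch of the 10^25 FLOP notification threshold.
# The "6 x parameters x training tokens" rule of thumb is a common approximation
# for dense transformer training compute; it is NOT the estimation method
# prescribed by the AI Act or the Guidelines, and the figures are invented.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption threshold under Art. 51(2) AI Act

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate cumulative training compute as ~6 * N * D FLOP."""
    return 6.0 * parameters * training_tokens

if __name__ == "__main__":
    n_params = 7e10    # hypothetical 70-billion-parameter model
    n_tokens = 1.5e13  # hypothetical 15 trillion training tokens
    compute = estimate_training_flop(n_params, n_tokens)
    print(f"Estimated training compute: {compute:.2e} FLOP")
    if compute >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        print("At or above the threshold: notification to the Commission is triggered")
    else:
        print("Below the threshold (on this estimate)")
```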
This framework does not consist merely of abstract requirements but takes the form of ten interrelated commitments, each accompanied by operational measures designed to guide signatories towards the adoption of state-of-the-art practices.
A modular and proportionate structure
The ten commitments represent the pillars on which the Safety and Security Framework of GPAI providers is based:
Some are strategic in nature, such as the obligation to establish governance procedures and designate internal personnel responsible for risk management.
Others are technical-operational in focus, imposing cybersecurity measures, post-market monitoring systems, and incident response plans.
Finally, some commitments promote a corporate risk culture through training, internal audits, and continuous review mechanisms.
The overall objective is twofold: on the one hand, to ensure that GPAI models do not become vectors of systemic threats to health, public safety, or fundamental rights; on the other, to strengthen providers’ resilience in the face of an evolving threat landscape.
The main measures: a narrative of technical and organizational due diligence
Without reducing the richness of the framework to a static list, it is possible to highlight some key areas:
Risk governance (Commitments 1–3): these provide for the establishment of internal policies, the appointment of a security officer, and the definition of criteria for risk assessment.
Assessment and mitigation (Commitments 4–6): these include the obligation to conduct systemic risk analyses, employ advanced testing techniques (e.g., red-teaming), and adopt corrective measures to maintain risks within acceptable levels.
Operational security (Commitments 7–9): these govern protection against external and internal threats, the physical and cybersecurity of infrastructures, and business continuity.
Accountability and transparency (Commitment 10): this requires the drafting of a Safety and Security Model Report, to be shared with the AI Office and published in summary form to inform the public, balancing transparency and the protection of trade secrets.
Independent external assessments
An innovative element of the Code concerns the obligation to provide access to independent external evaluators to facilitate post-market monitoring. Signatories must provide a sufficient number of independent external evaluators with free and adequate access to:
The most capable versions of the model with respect to systemic risk;
The model’s chain-of-thought, where available;
The versions of the model with the fewest security mitigations implemented.
Such access may be provided via API, on-premise access, access through hardware provided by the signatory, or by making the model parameters publicly available for download.
Critical issues, future perspectives and systemic considerations
The Code of Conduct presents a contradiction that cannot be overlooked: it is proclaimed as voluntary but in practice operates as if it were mandatory. Article 56 of the AI Act has created a mechanism that is both ingenious and insidious: it identifies the Code as the main evidentiary tool for demonstrating compliance with regulatory obligations, effectively transforming what should be an act of good faith into a commercial necessity. The result is paradoxical: while not legally binding, adherence to the Code becomes practically unavoidable. Those who opt out face the alternative evidentiary burden of demonstrating compliance through other means—a path that is inevitably more costly, complex, and uncertain in outcome. This is a form of soft coercion: elegant, yet effective.
This configuration raises uncomfortable questions about democratic legitimacy. We are witnessing the creation of de facto binding rules developed outside traditional democratic decision-making processes. The multi-stakeholder process, however technically sophisticated and representative, cannot replace the legitimacy derived from ordinary legislative procedures. A democratic shortfall emerges that calls for more robust parliamentary oversight over the content of the Code and its future updates.
But the challenges do not end there. The Code is caught in an almost impossible mission: reconciling the irreconcilable. On the one hand, copyright holders and civil society organizations demand total transparency; on the other, technology companies must protect trade secrets and multibillion-euro investments in research and development. The European legislator has sought to resolve this tension with regulatory balancing of uncertain effectiveness. The Model Documentation Form epitomizes this tension: the differentiation of information for different stakeholders attempts to satisfy everyone but risks pleasing no one. The outcome may ultimately prove unsatisfactory for both sides: too opaque for civil society and too intrusive for providers, as also emphasized in the Guidelines, which call for a dynamic balance between transparency, security, and protection of trade secrets (Guidelines, Section 6).
Even more problematic is the framework for managing systemic risks. How can one define ex ante thresholds of acceptability for technologies that evolve in discontinuous and unpredictable leaps? The “systemic risk tiers” envisaged by the Code risk becoming mere formal compliance exercises, incapable of capturing those emerging risks that, by definition, escape current assessment methodologies. It is akin to trying to predict the future using tools of the past.
The absence of direct sanctions for breaches of commitments does not imply a lack of legal consequences—quite the opposite. According to the Guidelines (Section 5.2), the Commission, through the AI Office, will have exclusive enforcement competence and may impose fines of up to 3% of global annual turnover for violations relating to GPAI and up to 7% for systemic-risk GPAI.
An indirect enforcement system also takes shape, operating across multiple dimensions and creating a network of subtle yet pervasive pressures. Reputational enforcement presupposes a market where consumers are capable of assessing and penalizing non-compliant behavior—an unlikely scenario in the B2B GPAI market, where purchasers are often as technologically sophisticated as providers. Contractual enforcement depends on the bargaining power of the parties involved and may prove a blunt instrument in relationships characterized by significant power imbalances. Finally, regulatory enforcement requires continuous and specialized supervision by the AI Office, whose success will depend on the human and technical resources actually allocated—a variable far from guaranteed.
An underestimated but potentially explosive aspect concerns the implications for European competition law. The standardization of operational practices through the Code could facilitate collusive or otherwise anti-competitive behavior, which is particularly dangerous in an already highly concentrated market. The obligation to provide access to independent external evaluators could also create information asymmetries between incumbents and new entrants, reinforcing existing dominant positions. This is a side effect that could transform a regulatory tool into a mechanism for protecting incumbents’ market positions.
Paradoxically, the transparency and documentation requirements could foster the emergence of new operators specializing in compliance and assessment services, creating new competitive dynamics. The balance between these opposing effects will largely depend on the practical application of the Code and the interpretative choices of the AI Office—a considerable discretionary power that warrants careful monitoring.
The Code is a testing ground for a new model of regulating technological innovation, moving away from the traditional command-and-control approach in favor of a system based on principles, objectives, and processes. It is a fascinating but risky experiment: while offering greater flexibility and adaptability to technological evolution, it raises fundamental questions about legal certainty and the predictability of legal consequences. The main challenge lies in maintaining an adequate level of regulatory clarity while allowing continuous adaptation to technological changes—a balance that requires almost surgical precision in calibration.
The European approach is poised to significantly influence the evolution of international AI regulation. Europe’s leadership, reinforced by the first-mover advantage of the AI Act, could promote the emergence of global standards based on the principles and methodologies developed within the EU. However, international regulatory convergence will need to contend with very different approaches: the U.S. model of self-regulation and the Chinese model of strong state control. The success of the European model will depend on its ability to demonstrate effectiveness in balancing innovation and rights protection while avoiding negative impacts on the competitiveness of European businesses.
Ultimately, the Code of Conduct represents a historic regulatory experiment, whose success will depend on the system’s ability to concretely demonstrate its effectiveness in achieving its stated objectives without compromising technological innovation or creating undue competitive barriers. The development of monitoring and evaluation mechanisms that allow the objective measurement of the Code’s impact and the introduction of necessary adjustments will be crucial.