With the publication of the first draft of the General-Purpose AI Code of Practice, we stand at a pivotal moment for the future of regulating artificial intelligence (AI). The initiative, promoted under the AI Act (Regulation (EU) 2024/1689), represents an ambitious attempt to balance technological innovation, the protection of fundamental rights, and the safety of AI systems.
THE REGULATORY CONTEXT
The Code of Practice, currently under consultation, is part of the regulatory framework defined by the AI Act, which entered into force on August 1, 2024. Its primary objective is to guide providers of general-purpose AI models toward standards of transparency, safety, and reliability. To this end, the document draws on the principles of the European Union as enshrined in the Charter of Fundamental Rights and the EU treaties.
The document emphasizes the importance of regulation that accounts for the systemic risks associated with the use of general-purpose AI models, as defined in Article 3, point 63 of the AI Act, with particular attention to models classified as presenting systemic risk under the criteria established in Article 51, paragraph 1.
The draft code is structured around four key areas:
- transparency and copyright rules: detailed documentation obligations to ensure model traceability and compliance with copyright regulations
- identification and mitigation of systemic risks: tools to assess and manage risks associated with models with systemic potential, such as large-scale manipulation or misuse in critical domains
- risk governance: oversight and reporting mechanisms involving competent authorities and other stakeholders
- proportionality: differentiation of measures based on the size of the provider, with simplified requirements for SMEs and startups
LIABILITY PROFILES
The provisions regarding the responsibility of general-purpose AI model providers are of particular significance. In accordance with Article 53 of the AI Act, the Code outlines a comprehensive system of obligations (sketched in code after this list), which includes:
- the preparation and continuous updating of the model’s technical documentation, including details of the training and testing process
- the implementation of compliance policies aligned with Union copyright law
- the publication of a detailed summary of the content used for model training
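To make these documentation duties concrete, the following minimal Python sketch shows one way a provider might keep the Article 53 items as a single machine-readable record. It is purely illustrative: the AI Act prescribes what the documentation must contain, not a data format, and every field name below is an assumption.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: field names are assumptions, not terms
# prescribed by the AI Act or the draft Code.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    last_updated: date                  # documentation must be kept continuously up to date
    training_and_testing_process: str   # description of how the model was trained and tested
    copyright_policy_url: str           # policy aligned with Union copyright law
    training_content_summary_url: str   # public summary of content used for training

    def is_publication_ready(self) -> bool:
        # A provider might gate a release on the public-facing items being in place.
        return bool(self.copyright_policy_url and self.training_content_summary_url)
```

Kept under version control, such a record would also give auditors a history of the continuous updating that the Code demands.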
For providers of models with systemic risk, Article 55 introduces additional substantial obligations, including:
- the assessment and mitigation of systemic risks at the Union level
- the monitoring, documentation, and reporting of serious incidents, as specified in Article 55, paragraph 1, letters (a) to (c)
TECHNICAL AND OPERATIONAL MEASURES
In compliance with Article 56 of the AI Act, the Code mandates the adoption of a Safety and Security Framework (SSF), a critical tool for risk management and the documentation of implemented security measures. This framework must include, as specified in Section 2 of Annex XIII, detailed procedures for continuous assessment and mitigation measures tailored to the severity of identified risks.
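As a sketch of how an SSF could tailor mitigation to the severity of identified risks, consider the tiering below. The severity levels and the measures attached to them are assumptions chosen for illustration; neither the AI Act nor the draft Code specifies them in this form.

```python
from enum import Enum

class RiskSeverity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4

# Hypothetical mapping from assessed severity to mitigation measures;
# an actual SSF would define its own tiers and evidence requirements.
MITIGATIONS: dict[RiskSeverity, list[str]] = {
    RiskSeverity.LOW: ["periodic re-evaluation"],
    RiskSeverity.MODERATE: ["periodic re-evaluation", "enhanced monitoring"],
    RiskSeverity.HIGH: ["enhanced monitoring", "staged deployment", "independent review"],
    RiskSeverity.CRITICAL: ["deployment pause", "independent review", "notification to the AI Office"],
}

def required_measures(severity: RiskSeverity) -> list[str]:
    """Return the mitigation measures tied to an assessed severity tier."""
    return MITIGATIONS[severity]
```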
The monitoring and reporting system, governed by Article 55, paragraph 1, letter (c), establishes specific obligations for documentation and prompt reporting to the AI Office and national competent authorities. Of particular importance is the definition of a serious incident under Article 3, point 49 of the AI Act.
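A provider could back this prompt-reporting obligation with a simple internal incident log, sketched below. The structure and the internal deadline are hypothetical assumptions for illustration, not values drawn from the regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SeriousIncidentRecord:
    # Hypothetical fields: the AI Act defines 'serious incident' (Art. 3,
    # point 49) but does not mandate this record layout.
    occurred_at: datetime
    description: str
    affected_domains: list[str]
    corrective_measures: list[str] = field(default_factory=list)
    reported_to_ai_office: bool = False

    def overdue(self, now: datetime, max_days: int = 15) -> bool:
        # Flag incidents still unreported after an assumed internal deadline
        # (15 days here is an arbitrary placeholder, not a statutory term).
        return not self.reported_to_ai_office and (now - self.occurred_at).days > max_days
```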
CRITICAL ASPECTS AND OPPORTUNITIES
One of the most significant areas of interest lies in the definition and application of transparency criteria. For instance, implementing Article 53 of the AI Act, the draft Code requires providers to disclose detailed information about the datasets used, the training methods, and key model parameters. However, reconciling these obligations with the protection of industrial know-how and trade secrets remains an unresolved challenge.
Another critical aspect is the recognition of the need for international cooperation in developing interoperable standards. This issue is particularly relevant for open-source models and transnational technologies. It is worth noting the differentiated regime outlined in Article 53, paragraph 2, for open-source models, which exempts them from certain provisions while maintaining fundamental obligations related to systemic risks, as specified in recitals 102 and 103.
The final version of the Code is expected by May 2025. The success of its implementation will largely depend on the ability to balance regulatory flexibility with the need to ensure high standards of safety and the protection of fundamental rights, as required by Article 1 of the AI Act. In this context, developing effective mechanisms for cooperation among competent authorities, as outlined in Article 74, paragraph 10, will be essential. At the same time, a proportionate approach must be adopted to address the specific needs of small and medium-sized enterprises (SMEs) and innovative startups.