{"id":57808,"date":"2025-07-17T18:27:54","date_gmt":"2025-07-17T16:27:54","guid":{"rendered":"https:\/\/www.lexia.it\/?p=57808"},"modified":"2025-07-21T17:38:58","modified_gmt":"2025-07-21T15:38:58","slug":"code-of-conduct-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.lexia.it\/en\/2025\/07\/17\/code-of-conduct-artificial-intelligence\/","title":{"rendered":"The Code of Conduct for general-purpose artificial intelligence"},"content":{"rendered":"\n<p><em>Generative artificial intelligence has now reached a level of maturity that requires a specific and comprehensive regulatory framework. In this context, the European Union has developed a pioneering approach through the adoption of the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/contents-code-gpai\">Code of Conduct for general-purpose AI models <\/a>(\u201c<strong>GPAI<\/strong>\u201d), a legal instrument that forms part of the broader framework established by Regulation (EU) 2024\/1689 (\u201c<strong>AI Act<\/strong>\u201d) and which constitutes a unique development in the international regulatory landscape (the \u201c<strong>Code<\/strong>\u201d).<\/em><\/p>\n\n\n\n<p><em>In addition, the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act\">new Guidelines issued by the European Commission <\/a>(C(2025) 5045 final, published on 18 July 2025) clarify the scope of application and the content of the obligations for GPAI providers set out in the AI Act (the \u201c<strong>Guidelines<\/strong>\u201d), offering interpretative support that complements and strengthens the provisions of the Code.<\/em><\/p>\n\n\n\n<p><em>This paper analyses the legal implications of the Code of Conduct for GPAI models, focusing on transparency, safety, and copyright, with particular attention to systemic risk management, which is the key mechanism for demonstrating compliance with the obligations 
under the AI Act.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Regulatory framework<\/h2>\n\n\n\n<p>GPAI models are subject to specific obligations set out in<strong> Articles 53 and 55<\/strong> of the AI Act: Article 53 establishes obligations for all providers of GPAI models, while Article 55 introduces additional requirements for models presenting \u201csystemic risks\u201d (defined in Article 3(65) as specific risks associated with high-impact capabilities that have significant effects on the Union market).<\/p>\n\n\n\n<p>The Code is not a mere sectoral self-regulation initiative but rather the<strong> soft-law<\/strong> instrument expressly envisaged and encouraged by Article 56, which governs the \u201c<em>development, adoption and approval<\/em>\u201d of codes of conduct aimed at ensuring the proper application of the obligations imposed on GPAI providers (Arts. 53 and 55).<\/p>\n\n\n\n<p>Adherence to the Code is formally voluntary; however, Article 56(1) clarifies that it constitutes the \u201c<em>main instrument<\/em>\u201d through which <em>providers<\/em> may demonstrate their compliance. In the absence of adherence, the provider bears the burden of proving\u2014\u201cthrough other suitable means\u201d\u2014full compliance with all obligations, entailing a clear evidentiary and reputational burden.<\/p>\n\n\n\n<p>The Guidelines highlight that the Code represents the preferred instrument for demonstrating compliance under Article 56, whereas in the absence of adherence, providers must demonstrate compliance with the obligations through \u201cother suitable means\u201d (Guidelines, Section 5.1).<\/p>\n\n\n\n<p>Article 56(2) identifies four areas that a code must mandatorily cover:<\/p>\n\n\n\n<p>maintenance and updating of technical documentation (Art. 53(1)(a)-(b));<\/p>\n\n\n\n<p>definition of the appropriate level of detail for the training-data summary (Art. 
53(1)(d));<\/p>\n\n\n\n<p>mapping of sources of systemic risk of GPAI in the Union;<\/p>\n\n\n\n<p>proportionate assessment and mitigation procedures for such risks.<\/p>\n\n\n\n<p>The Code duly fulfils these four functions through its chapters on \u201cTransparency,\u201d \u201cCopyright\u201d and \u201cSafety &amp; Security.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Transparency Chapter: Documentation and Accountability<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Documentation obligations<\/h3>\n\n\n\n<p>The Transparency Chapter of the Code represents the operational implementation of the obligations set out in Article 53(1)(a) and (b) of the AI Act: it consists of a set of commitments\u2014referred to as Measures\u2014that GPAI providers adhering to the Code undertake to ensure that the technical documentation of the models is not only complete and up-to-date but also accessible to relevant stakeholders.<\/p>\n\n\n\n<p>These <strong>Measures<\/strong> are not mere formal requirements: they constitute a genuine <strong>accountability framework<\/strong>, aimed at ensuring that all actors in the value chain\u2014from supervisory authorities to <em>downstream providers<\/em> (the \u201cfornitori a valle\u201d in the Italian version of the Regulation)\u2014have the information they need to integrate, assess, and monitor GPAI in accordance with European <em>standards<\/em> on safety and fundamental rights protection.<\/p>\n\n\n\n<p>This documentation is intended for three categories of recipients:<\/p>\n\n\n\n<p><strong>AI Office (AIO)<\/strong>: recipient of the most detailed information, provided upon request;<\/p>\n\n\n\n<p><strong>National Competent Authorities (NCAs)<\/strong>: recipients of specific information necessary for the exercise of supervisory functions;<\/p>\n\n\n\n<p><em><strong>Downstream<\/strong><\/em><strong> Providers (DPs)<\/strong>: recipients of the information required for the integration of models into their AI 
systems.<\/p>\n\n\n\n<p>The Guidelines reinforce this interpretation, clarifying that the technical documentation must be kept up to date and made available throughout the entire lifecycle of the model, with differentiated modalities for the AI Office, national authorities, and downstream providers (Guidelines, Section 2.2).<\/p>\n\n\n\n<p>This tripartite structure reflects the principle of proportionality: the information proactively provided to downstream providers is general in nature and functional for the integration of the model, whereas information intended for authorities is provided only in response to formalized requests, which must specify the legal basis and purpose of the processing.<\/p>\n\n\n\n<p>On this point, it is worth noting that one of the central challenges of the Transparency Chapter is the protection of trade secrets and confidential information: Article 78 of the AI Act requires recipients of information (AIO, NCAs, DPs) to respect the confidentiality of the data received and to adopt appropriate cybersecurity measures to protect its confidentiality.<\/p>\n\n\n\n<p><strong>Measure 1.1<\/strong>: preparation and updating of documentation<br>The first Measure requires signatories to prepare, at the time the model is placed on the market, comprehensive technical documentation covering every significant aspect of the GPAI. 
This documentation, compiled in the Model Documentation Form, includes information on the provider\u2019s identity, the architectural characteristics of the model, technical and design specifications, training processes (including details on the methodologies used and design logics), the data processed (types, sources, curation methodologies, measures for detecting bias and inappropriate content), and finally, energy and computational consumption.<\/p>\n\n\n\n<p>The dynamic nature of this Measure requires that the documentation be continuously updated to reflect any changes or updates to the model, with an obligation to retain previous versions for a period of ten years.<\/p>\n\n\n\n<p><strong>Measure 1.2<\/strong>: communication of information to stakeholders<br>The second <em>Measure<\/em> concerns making the information contained in the Model Documentation Form available to the three categories of recipients identified above:<\/p>\n\n\n\n<p>The <em>AI Office<\/em> and National Competent Authorities, which may request access to detailed information to exercise their supervisory functions, always in compliance with the principles of necessity and proportionality;<\/p>\n\n\n\n<p><em>Downstream providers<\/em>, who must be able to access the information necessary to understand the capabilities and limitations of the model in order to responsibly integrate it into their AI systems.<\/p>\n\n\n\n<p>This communication must take place within a reasonable timeframe and in any event no later than 14 days for requests from <em>downstream<\/em> <em>providers<\/em>, except in exceptional circumstances.<\/p>\n\n\n\n<p><strong>Measure 1.3<\/strong>: quality, integrity and security of documentation<br>The third <em>Measure<\/em> requires signatories to ensure that the documented information is not only accurate but also protected against unintentional alterations and unauthorized access. 
To this end, they are encouraged to adopt quality protocols and established technical standards, thereby reinforcing trust in the robustness and integrity of the shared data.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Balancing transparency and trade secrets<\/h3>\n\n\n\n<p>The Transparency Chapter addresses one of the most delicate tensions for GPAI providers: the need to ensure a high level of transparency towards authorities and commercial<em> partners<\/em> without compromising the confidentiality of strategic information. <strong>Article 78 <\/strong>of the AI Act imposes strict confidentiality obligations on recipients of such data and requires them to implement appropriate cybersecurity measures to protect intellectual property rights and trade secrets.<\/p>\n\n\n\n<p>The Code adopts a modular and calibrated approach: the information to be proactively provided to downstream providers is limited to what is strictly necessary to enable the safe and compliant integration of the model; the most sensitive information intended for the AI Office and NCAs is instead communicated only upon a justified request and limited to what is strictly necessary for the exercise of supervisory functions. This system reflects the European Union\u2019s aim to create a dynamic balance between transparency and the protection of industrial competitiveness, while promoting a more trustworthy and responsible AI ecosystem.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Copyright Chapter: policies, safeguards and liability<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">The obligation to adopt a <em>copyright policy<\/em><\/h3>\n\n\n\n<p>The Copyright Chapter of the Code represents the operational response to Article 53(1)(c) of the AI Act, which requires providers of GPAI models placed on the Union market to adopt a policy to ensure compliance with EU copyright and related rights law. 
This obligation arises from the need to ensure that AI models are not trained on protected content in violation of applicable law and that the outputs generated do not, in turn, result in acts of infringement.<\/p>\n\n\n\n<p>The <em>copyright<\/em> policy set out in the Code, in line with Article 53(1)(c) and the Guidelines (Section 2.2), goes beyond abstract principles and defines concrete actions that providers must undertake to ensure copyright compliance throughout the entire lifecycle of the model.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The measures of the Copyright Chapter: a due diligence framework<\/h3>\n\n\n\n<p><strong><em>Measure<\/em> 1.1<\/strong> \u2013 Draft, update, and implement a <em>copyright policy<\/em><br>Signatories commit to developing and maintaining an up-to-date corporate copyright policy that governs how GPAI models are trained and used in compliance with copyright laws. This policy must define internal responsibilities for its implementation and provide for verification and monitoring mechanisms, thereby becoming a key element of internal intellectual property governance. In addition, providers are encouraged to publish a summary of their policy to enhance transparency towards external<em> stakeholders<\/em>.<\/p>\n\n\n\n<p><strong><em>Measure<\/em> 1.2<\/strong> \u2013 Lawful access to protected content<br>The Code requires providers to ensure that the extraction of data and content from the web through crawling takes place only with respect to materials to which lawful access has been obtained. 
This entails a prohibition on circumventing effective technological measures (e.g., paywalls or subscription models) and an obligation to exclude from crawling processes websites identified by authorities as repeatedly infringing copyright on a commercial scale.<\/p>\n\n\n\n<p><strong><em>Measure <\/em>1.3<\/strong> \u2013 Identification and respect of reservations of rights<br>In line with Article 4(3) of Directive (EU) 2019\/790, signatories must adopt state-of-the-art technologies\u2014including machine-readable solutions such as the robots.txt protocol\u2014to identify and exclude content where rights holders have expressed reservations regarding its use for <em>text and data mining<\/em> purposes. This measure underscores the principle that respecting reservations of rights is an essential element of the lawfulness of the <em>training<\/em> process.<\/p>\n\n\n\n<p><strong><em>Measure <\/em>1.4<\/strong> \u2013 Prevention of infringements in generated outputs<br>Equally important is the measure requiring providers to prevent, through technical safeguards, the generation of outputs by GPAI models that unlawfully reproduce protected content. In addition, there is an obligation to include in the model\u2019s terms of use, or in accompanying documentation for open-source models, an explicit prohibition on uses that infringe copyright.<\/p>\n\n\n\n<p><strong><em>Measure<\/em> 1.5<\/strong> \u2013 Points of contact and complaint handling<br>Finally, signatories must designate an electronic point of contact for rights holders and establish a mechanism for receiving and handling complaints relating to alleged infringements. 
This measure aims to ensure a direct dialogue channel with affected parties and to strengthen providers\u2019 accountability towards the creative ecosystem.<\/p>\n\n\n\n<p>The Guidelines further clarify that open-source GPAI providers may benefit from exemptions from these obligations only if they meet specific conditions (no monetization, public availability of model parameters, and compliance with an <em>open-source<\/em> license \u2013 Guidelines, Section 4).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Template for the training data summary<\/h3>\n\n\n\n<p>In parallel with the Code, the AI Office is developing a standardized template for the training data summary that providers must make publicly available pursuant to Article 53(1)(d) of the AI Act. This template is structured into three main sections:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li>General information: identification of the model and the provider, dates of placement on the market, general characteristics of the training data;<\/li><li>List of data sources: detailed categorization of sources (public datasets, third-party data, crawled data, synthetic data) with specific dimensional and temporal indications;<\/li><li>Relevant aspects of data processing: measures to ensure copyright compliance, removal of undesirable content, and other relevant processing aspects.<\/li><\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">The Security and Systemic Risk Management Chapter: future-proof risk governance<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">An integrated framework for high-risk GPAI<\/h3>\n\n\n\n<p>The Security Chapter of the Code of Conduct provides the practical implementation of Article 55 of the AI Act, which requires providers of GPAI models presenting systemic risk to establish technical and organizational governance capable of monitoring, assessing, and mitigating risks throughout the entire lifecycle of the model.<\/p>\n\n\n\n<p>The Guidelines specify that GPAI providers with systemic risk must promptly notify the Commission when they reach the threshold of 10\u00b2\u2075 FLOP, unless they can demonstrate the absence of systemic risks (Guidelines, Section 2.3.2). 
Risk assessment and mitigation must be carried out continuously throughout the entire lifecycle of the model (Guidelines, Section 2.2).<\/p>\n\n\n\n<p>This framework does not consist merely of abstract requirements but takes the form of <strong>ten<\/strong> interrelated<strong> commitments<\/strong>, each accompanied by operational measures designed to guide signatories towards the adoption of state-of-the-art practices.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">A modular and proportionate structure<\/h3>\n\n\n\n<p>The ten commitments represent the pillars on which the Safety and Security Framework of GPAI providers is based:<\/p>\n\n\n\n<p>Some are strategic in nature, such as the obligation to establish governance procedures and designate internal personnel responsible for risk management.<\/p>\n\n\n\n<p>Others are technical-operational in focus, imposing cybersecurity measures, post-market monitoring systems, and incident response plans.<\/p>\n\n\n\n<p>Finally, some commitments promote a corporate risk culture through training, internal audits, and continuous review mechanisms.<\/p>\n\n\n\n<p>The overall objective is twofold: on the one hand, to ensure that GPAI models do not become vectors of systemic threats to health, public safety, or fundamental rights; on the other, to strengthen providers\u2019 resilience in the face of an evolving threat landscape.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The main measures: a narrative of technical and organisational due diligence<\/h3>\n\n\n\n<p>Without reducing the richness of the <em>framework<\/em> to a static list, it is possible to highlight some key areas:<\/p>\n\n\n\n<p><strong>Risk governance (Commitments 1\u20133): <\/strong>these provide for the establishment of internal policies, the appointment of a security officer, and the definition of criteria for risk assessment.<\/p>\n\n\n\n<p><strong>Assessment and mitigation (Commitments 4\u20136):<\/strong> these include the obligation to conduct systemic risk 
analyses, employ advanced testing techniques (e.g., red-teaming), and adopt corrective measures to maintain risks within acceptable levels.<\/p>\n\n\n\n<p><strong>Operational security (Commitments 7\u20139):<\/strong> these govern protection against external and internal threats, the physical and cybersecurity of infrastructures, and business continuity.<\/p>\n\n\n\n<p><strong>Accountability and transparency (Commitment 10):<\/strong> this requires the drafting of a Safety and Security Model Report, to be shared with the AI Office and published in summary form to inform the public, balancing transparency and the protection of trade secrets.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Independent external assessments<\/h3>\n\n\n\n<p>An innovative element of the Code concerns the obligation to provide access to independent external evaluators to facilitate post-market monitoring. Signatories must provide a sufficient number of independent external evaluators with free and adequate access to:<\/p>\n\n\n\n<p>The most capable versions of the model with respect to systemic risk;<\/p>\n\n\n\n<p>The model\u2019s chain-of-thought, where available;<\/p>\n\n\n\n<p>The versions of the model with the fewest security mitigations implemented.<\/p>\n\n\n\n<p>Such access may be provided via API, on-premise access, access through hardware provided by the signatory, or by making the model parameters publicly available for <em>download<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Critical issues, future perspectives and systemic considerations<\/h2>\n\n\n\n<p>The Code of Conduct presents a contradiction that cannot be overlooked: it is proclaimed as voluntary but in practice operates as if it were mandatory. 
Article 56 of the AI Act has created a mechanism that is both ingenious and insidious: it identifies the Code as the main evidentiary tool for demonstrating compliance with regulatory obligations, effectively transforming what should be an act of good faith into a commercial necessity. The result is paradoxical: while not legally binding, adherence to the Code becomes practically unavoidable. Those who opt out face the alternative evidentiary burden of demonstrating compliance through other means\u2014a path that is inevitably more costly, complex, and uncertain in outcome. This is a form of soft coercion: elegant, yet effective.<\/p>\n\n\n\n<p>This configuration raises uncomfortable questions about democratic legitimacy. We are witnessing the creation of de facto binding rules developed outside traditional democratic decision-making processes. The multi-stakeholder process, however technically sophisticated and representative, cannot replace the legitimacy derived from ordinary legislative procedures. A democratic shortfall emerges that calls for more robust parliamentary oversight over the content of the Code and its future updates.<\/p>\n\n\n\n<p>But the challenges do not end there. The Code is caught in an almost impossible mission: <strong>reconciling the irreconcilable<\/strong>. On the one hand, copyright holders and civil society organizations demand total transparency; on the other, technology companies must protect trade secrets and multibillion-euro investments in research and development. The European legislator has sought to resolve this tension with regulatory balancing of uncertain effectiveness. The <em>Model Documentation Form<\/em> epitomizes this tension: the differentiation of information for different stakeholders attempts to satisfy everyone but risks pleasing no one. 
The outcome may ultimately prove unsatisfactory for both sides: too opaque for civil society and too intrusive for providers, as also emphasized in the Guidelines, which call for a dynamic balance between transparency, security, and protection of trade secrets (Guidelines, Section 6).<\/p>\n\n\n\n<p>Even more problematic is the framework for managing systemic risks. How can one define ex ante thresholds of acceptability for technologies that evolve in discontinuous and unpredictable leaps? The &#8220;systemic risk tiers&#8221; envisaged by the Code risk becoming mere formal compliance exercises, incapable of capturing those emerging risks that, by definition, escape current assessment methodologies. It is akin to trying to predict the future using tools of the past.<\/p>\n\n\n\n<p>The absence of direct sanctions for breaches of commitments does not imply a lack of legal consequences\u2014quite the opposite. According to the Guidelines (Section 5.2), the Commission, through the AI Office, will have exclusive enforcement competence and may impose <strong>fines<\/strong> of up to 3% of global annual turnover for violations relating to GPAI and up to 7% for systemic-risk GPAI.<\/p>\n\n\n\n<p>An <strong>indirect enforcement <\/strong>system also takes shape, operating across multiple dimensions and creating a network of subtle yet pervasive pressures. Reputational enforcement presupposes a market where consumers are capable of assessing and penalizing non-compliant behavior\u2014an unlikely scenario in the B2B GPAI market, where purchasers are often as technologically sophisticated as providers. Contractual enforcement depends on the bargaining power of the parties involved and may prove a blunt instrument in relationships characterized by significant power imbalances. 
Finally, regulatory enforcement requires continuous and specialized supervision by the AI Office, whose success will depend on the human and technical resources actually allocated\u2014a variable far from guaranteed.<\/p>\n\n\n\n<p>An underestimated but potentially explosive aspect concerns the implications for European competition law. The standardization of operational practices through the Code could facilitate collusive or otherwise anti-competitive behavior, which is particularly dangerous in an already highly concentrated market. The obligation to provide access to independent external evaluators could also create information asymmetries between incumbents and new entrants, reinforcing existing dominant positions. This is a side effect that could transform a regulatory tool into a mechanism for protecting incumbents&#8217; market positions.<\/p>\n\n\n\n<p>Paradoxically, the transparency and documentation requirements could foster the emergence of new operators specializing in compliance and assessment services, creating new competitive dynamics. The balance between these opposing effects will largely depend on the practical application of the Code and the interpretative choices of the AI Office\u2014a considerable discretionary power that warrants careful monitoring.<\/p>\n\n\n\n<p>The Code is a testing ground for a new model of regulating technological innovation, moving away from the traditional command-and-control approach in favor of a system based on principles, objectives, and processes. It is a fascinating but risky experiment: while offering greater flexibility and adaptability to technological evolution, it raises fundamental questions about legal certainty and the predictability of legal consequences. 
The main challenge lies in maintaining an adequate level of regulatory clarity while allowing continuous adaptation to technological changes\u2014a balance that requires almost surgical precision in calibration.<\/p>\n\n\n\n<p>The European approach is poised to significantly influence the evolution of international AI regulation. Europe&#8217;s <em>leadership<\/em>, reinforced by the <em>first-mover advantage<\/em> of the AI Act, could promote the emergence of global standards based on the principles and methodologies developed within the EU. However, international regulatory convergence will need to contend with very different approaches: the U.S. model of self-regulation and the Chinese model of strong state control. The success of the European model will depend on its ability to demonstrate effectiveness in balancing innovation and rights protection while avoiding negative impacts on the competitiveness of European businesses.<\/p>\n\n\n\n<p>Ultimately, the Code of Conduct represents a historic regulatory experiment, whose success will depend on the system&#8217;s ability to concretely demonstrate its effectiveness in achieving its stated objectives without compromising technological innovation or creating undue competitive barriers. The development of monitoring and evaluation mechanisms that allow the objective measurement of the Code&#8217;s impact and the introduction of necessary adjustments will be crucial.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Generative artificial intelligence has now reached a level of maturity that requires a specific and comprehensive regulatory framework. 
In this context, the European Union has developed a pioneering approach through the adoption of the Code of Conduct for general-purpose AI models (\u201cGPAI\u201d), a legal instrument that forms part of the broader framework established by Regulation &hellip; <a href=\"https:\/\/www.lexia.it\/en\/2025\/07\/17\/code-of-conduct-artificial-intelligence\/\">Continued<\/a><\/p>\n","protected":false},"author":13,"featured_media":56000,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[665],"tags":[],"area":[625,624],"collana":[],"competenza":[621],"class_list":["post-57808","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-publications","area-law-en","area-innovation-en","competenza-data-technology-innovation"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Code of Conduct for General-Purpose Artificial Intelligence - LEXIA<\/title>\n<meta name=\"description\" content=\"Legal analysis of the EU Code of Conduct for generative AI: transparency, security, copyright, and systemic risk management.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.lexia.it\/en\/2025\/07\/17\/code-of-conduct-artificial-intelligence\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Code of Conduct for General-Purpose Artificial Intelligence - LEXIA\" \/>\n<meta property=\"og:description\" content=\"Legal analysis of the EU Code of Conduct for generative AI: transparency, security, copyright, and systemic risk management.\" \/>\n<meta property=\"og:url\" 
[Page metadata: "The Code of Conduct for General-Purpose Artificial Intelligence" — LEXIA, by Christian, published 17 July 2025, last modified 21 July 2025, estimated reading time 18 minutes.]