On July 22, 2024, the Agency for Digital Italy (AgID) published the full text of the Italian Strategy for Artificial Intelligence 2024-2026. The strategic plan outlines Italy’s vision for developing and deploying AI over the coming years, focusing on four key areas: research, public administration, business, and education.
Research and development
The Strategy places strong emphasis on research and development, particularly on advancing large language models (LLMs) and large multimodal models (LMMs) for Italian and multilingual contexts. From a legal perspective, this raises critical intellectual property issues: companies and research institutions developing these models will have to navigate a complex landscape of patents and copyright.
For instance, the development of an Italian LLM might involve using extensive corpora of copyrighted texts for training purposes. Clear guidelines on fair use of such materials will be essential, akin to those established by the U.S. Court of Appeals for the Second Circuit in Authors Guild v. Google (2015), which held that scanning books for search and analysis purposes constituted fair use.
Moreover, public-private research collaborations pose questions about the ownership of resulting inventions. Clear agreements on intellectual property will be necessary, similar to the Bayh-Dole Act in the United States, which governs patent rights for federally funded inventions.
AI implementation in public administration
The adoption of AI in public administration promises to improve the efficiency and quality of services offered to citizens. However, this raises significant legal concerns regarding privacy and data protection.
The use of AI to automate processes and provide responses to citizens must comply with the EU General Data Protection Regulation (GDPR). In particular, Article 22 of the GDPR, which addresses automated decision-making, will be critical. Public administrations will need to ensure that significant decisions are not made solely by automated systems unless specific conditions are met.
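One way this requirement could be operationalised is a gate that prevents the system from finalising any legally significant decision on its own. The sketch below is purely illustrative (the `decide` function, the confidence threshold, and the outcome labels are hypothetical assumptions, not drawn from the Strategy or the GDPR): it routes decisions with significant effects, and low-confidence ones, to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    automated: bool

def decide(confidence: float, legally_significant: bool,
           threshold: float = 0.7) -> Decision:
    # Decisions with significant legal effects are never finalised
    # solely by the system (cf. GDPR Art. 22); routine requests
    # auto-resolve only above a confidence threshold (hypothetical).
    if legally_significant or confidence < threshold:
        return Decision("refer_to_human_reviewer", automated=False)
    return Decision("auto_resolved", automated=True)
```

The design choice here is that human referral is the default path: automation must be affirmatively earned by both the nature of the decision and the system's confidence.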
An example of implementation requiring careful legal consideration could be the use of AI to evaluate social benefit applications. While AI could expedite the process, it will be crucial to ensure the system does not inadvertently discriminate against certain categories of citizens, in line with the non-discrimination principle outlined in Article 21 of the Charter of Fundamental Rights of the European Union.
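A safeguard of this kind could be backed by periodic statistical audits of outcomes. The following sketch is a minimal illustration, not a legal compliance tool: the grouping of applicants, the 0.2 disparity threshold, and the function names are all hypothetical assumptions. It compares approval rates across groups and flags the system for review when the gap is too wide.

```python
from collections import defaultdict

def approval_rates(decisions):
    # decisions: iterable of (group_label, approved: bool) pairs
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparity_flagged(decisions, max_gap=0.2):
    # Flag for human review when the gap between the highest and
    # lowest group approval rates exceeds max_gap (hypothetical
    # criterion; real non-discrimination analysis is far richer).
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap
```

A check like this does not prove compliance with Article 21 of the Charter, but it illustrates the kind of measurable, documentable control an administration could build into an AI-assisted benefits process.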
Business support
The Strategy includes tax incentives and support programs for businesses investing in AI solutions. From a legal perspective, it will be critical to clearly define what qualifies as an “AI investment” for these incentives.
One potential approach could mirror the Italian Patent Box, introduced in the 2015 Stability Law, which offers tax benefits for income derived from intangible assets. An “AI Box” could provide similar fiscal advantages for investments in AI technologies.
However, companies benefiting from these incentives must also demonstrate compliance with emerging AI regulations. In particular, alignment with the recently adopted EU AI Act, which takes a risk-based approach to AI regulation, will be essential. Businesses will need to implement risk management systems, ensure the transparency and robustness of their AI systems, and prepare for potential audits.
Education and skills development
The Strategy emphasizes education and skill development in AI. From a labor law perspective, this raises important issues regarding the rights and responsibilities of employers and employees in an increasingly digitalized economy.
Upskilling and reskilling programs under the Strategy may require amendments to existing employment contracts. Employers might be obliged to provide training to employees whose roles are at risk of automation, in line with the right to vocational training enshrined in Article 14 of the Charter of Fundamental Rights of the EU.
Additionally, the introduction of AI technologies in the workplace may necessitate a review of health and safety policies. For instance, using cobots (collaborative robots) in industrial settings will require new risk assessments and safety protocols, in compliance with the EU Machinery Directive (2006/42/EC).
Conclusion
The Italian Strategy for Artificial Intelligence 2024-2026 represents a significant opportunity for the country, but it also poses complex challenges for the legal system. The task will be to keep pace with technological advancement while ensuring the legal framework remains relevant and effective in a field that evolves as rapidly as artificial intelligence.
In practical terms, implementing the Strategy will require legal efforts in the following areas:
- AI compliance audits: Developing and implementing audit protocols to assess AI systems’ compliance with existing and future regulations (e.g., creating checklists to ensure GDPR and AI Act adherence).
- Corporate guidelines: Assisting businesses in drafting internal policies for ethical and compliant AI use, including codes of conduct for developing and deploying AI systems.
- Technological due diligence: Creating AI-specific due diligence protocols for use in extraordinary transactions involving tech companies.
- Corporate training: Organizing workshops and training sessions for clients on the legal implications of AI, including data protection considerations.
- Technology mediation: Developing expertise in mediating disputes related to AI, such as intellectual property conflicts over AI-driven innovations or disputes over data usage for training AI models.
- Contractual clauses: Crafting model contractual clauses tailored to AI, applicable in various contexts such as licensing agreements for AI software, AI development contracts, or data-sharing agreements for AI model training.
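The compliance-audit checklists mentioned above could be kept in a simple machine-readable form so that open items are easy to track across engagements. The sketch below is a hypothetical illustration only: the item identifiers and descriptions are invented for this example, and a real GDPR/AI Act checklist would be far more extensive.

```python
# Hypothetical checklist items (illustrative, not exhaustive).
CHECKLIST = [
    ("gdpr_art22", "Significant decisions subject to human review (GDPR Art. 22)"),
    ("ai_act_risk_mgmt", "Risk management system documented (EU AI Act)"),
    ("data_provenance", "Training-data licensing and provenance recorded"),
    ("transparency", "Users informed when interacting with an AI system"),
]

def open_items(answers):
    # answers maps item ids to True (satisfied) / False;
    # missing ids are treated as unsatisfied.
    return [desc for key, desc in CHECKLIST if not answers.get(key, False)]
```

Treating unanswered items as open errs on the side of caution, which mirrors how an audit protocol would typically handle undocumented controls.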