AI Literacy and European Commission FAQs: how to build an AI Literacy plan

The new Regulation (EU) 2024/1689 on Artificial Intelligence (“AI Act”), which entered into force on August 1, 2024, introduces a specific obligation concerning Artificial Intelligence literacy (“AI Literacy”) under Article 4. This provision is among the first rules of the Regulation to apply, becoming effective on February 2, 2025.

In line with this provision, on May 13, 2025, the European Commission published a series of Frequently Asked Questions (FAQs) on AI Literacy, providing interpretative guidance and practical recommendations to facilitate the implementation of Article 4 of the AI Act. These FAQs follow and align with the guidelines adopted on February 6, 2025, concerning the definition of AI systems.

What does the AI Act establish regarding AI Literacy?

Article 4 of the AI Act introduces a specific obligation for providers and deployers of artificial intelligence systems to adopt appropriate measures to ensure, as far as possible, an adequate level of AI literacy among their personnel and anyone acting on their behalf in the use or operation of AI systems.

These measures must be tailored by taking into account:

  • the technical skills, experience, education, and training of the individuals involved;
  • the application context of the AI systems; and
  • the groups of people on whom such systems are intended to have an impact.

The legal definition of AI Literacy is provided in Article 3, point 56 of the AI Act, which describes it as:
“the skills, knowledge and understanding that enable providers, deployers and affected persons, taking into account their respective rights and obligations under this Regulation, to make informed decisions regarding the deployment of AI systems, as well as to become aware of the opportunities and risks of AI and the potential harm it may cause.”

This provision therefore serves a dual purpose: on the one hand, it identifies the entities required to implement AI literacy initiatives (providers and deployers); on the other, it defines the scope of the recipients, including both internal staff and any third parties tasked with using the systems (such as consultants, contractors, or service providers).

What level of literacy is required to ensure compliance?

The AI Act does not establish a uniform level of literacy in artificial intelligence but requires that the level be “sufficient,” to be assessed in relation to the specific organizational context. This approach aligns with other European regulations, such as the GDPR, where proportionality and adequacy are calibrated to the level of risk.

The FAQs published by the European Commission emphasize the need for a flexible and modular approach, taking into account both the diversity of the AI systems used and the rapid pace of technological innovation. Nevertheless, some minimum criteria can be identified for properly structuring training programs.

In particular, providers and deployers are encouraged to ask themselves a series of preliminary questions regarding context and purpose, from which concrete measures can be derived.

To structure an effective AI Literacy plan, it is necessary to:

  • ensure a general understanding of AI within the organization, depending on the type of AI systems used and the operational context;
  • consider the role the organization plays (provider or deployer);
  • identify the specific risks that personnel should be aware of in relation to the above factors.

Once objectives and context have been clarified, a training plan must be developed that takes into account (i) the varying levels of technical expertise, experience, education, and training of company staff, and (ii) the context in which AI systems are used and the individuals they affect.

The FAQs recommend that AI Literacy not be limited to technical aspects but also include modules on legal and ethical issues, with particular attention to the applicable regulatory framework (including the AI Act, GDPR, and other sector-specific regulations).

A standardized, “one size fits all” approach is therefore not appropriate, as it may be excessive for some organizations and insufficient for others. For example, if a deployer uses AI systems classified as high-risk (under Chapter III of the AI Act), it will be appropriate to implement additional training measures to ensure greater awareness in managing such tools and mitigating risks, going beyond simply reading the manufacturer’s instructions.

Who is AI Literacy intended for?

According to the FAQs, the AI literacy obligation applies to the personnel of AI system providers and deployers. However, indirectly, this measure is also aimed at protecting affected individuals, namely those on whom the systems have an impact.

The FAQs clarify that companies whose staff use generative tools—such as ChatGPT for editorial or translation tasks—are also required to train their employees on the risks associated with such uses.

Article 4 of the AI Act further provides that the AI literacy obligation extends to individuals involved in the operation and use of AI systems on behalf of providers or deployers, thereby including, as specified in the FAQs, third parties such as contractors, service providers, or clients.

When does the AI Literacy obligation apply?

The obligation to ensure a sufficient level of AI Literacy takes effect from February 2, 2025, as established by Article 113 of the AI Act. From that date, providers and deployers of AI systems must begin adopting concrete measures to ensure adequate literacy among their staff and any individuals operating on their behalf. This early phase allows organizations to structure training programs and internal processes before the Regulation’s enforcement framework becomes applicable on August 2, 2025.

In case of non-compliance, the competent national authorities may take corrective measures and impose sanctions, pursuant to Chapter XII of the Regulation, which governs penalties.

How to comply with AI Literacy obligations?

To properly comply with the obligation, it is advisable to develop an internal compliance plan, taking into account the phased entry into force of the AI Act. The FAQs recommend a multidisciplinary approach that integrates technical, legal, and ethical expertise. Below are some suggested operational steps:

  • Classification: identify, through an internal assessment, whether the organization operates as a provider or a deployer, and which AI systems are involved.
  • Risk analysis of the involved AI systems: assess the risks that staff need to be aware of, such as biases contained in training datasets, algorithmic hallucinations, the black-box effect, and the self-fulfilling prophecy effect.
  • Staff evaluation: assess the level of technical skills, experience, and training of the personnel targeted by the training; for example, the training cannot be the same for a technician and a corporate legal professional.
  • Development of the AI Literacy program: design a literacy plan tailored to the context, the type of AI, and the composition of the personnel. It may include (a) lessons with technical, legal, and ethical experts; (b) simulations and practical workshops; and (c) external training initiatives.
  • Documentation and monitoring: although not mandatory, documenting training activities is recommended; it is also advisable to repeat the training periodically, given the ongoing evolution of the subject.

The European Commission has also established a living repository of AI literacy practices, containing examples of measures adopted by various companies, which can serve as guidance for developing customized solutions.
