Data & Technology Innovation | June 2025 Insight

Contents

GDPR: Proposal for Simplification and Reform Presented

Data protection

On May 21, 2025, the European Commission presented a proposal to amend Regulation (EU) 2016/679 (“GDPR”), with the aim of introducing significant changes, particularly to benefit small and medium-sized enterprises and so-called “small mid-cap” companies. The initiative is part of the “Omnibus IV” legislative package, launched in February 2025, which seeks to make the European regulatory framework more proportionate and sustainable, while maintaining strong protection of data subjects’ fundamental rights.

Among the key elements of the proposal is the introduction of new definitions in Article 4 of the GDPR, which broaden the eligibility for simplified measures to a wider range of economic operators. In addition to micro, small, and medium-sized enterprises as identified by Recommendation 2003/361/EC, the category of small mid-cap companies—those with small market capitalizations but exceeding SME thresholds—is also recognized. This move aims to avoid a disproportionate compliance burden for growing businesses that exceed SME size thresholds, potentially hindering their development and competitiveness.

One of the most significant changes concerns the rules on the record of processing activities. The proposal raises the employee threshold for the obligation to maintain such records from the current 250 to 750 employees, except where the processing is likely to result in a high risk within the meaning of Article 35 of the GDPR.

Also noteworthy is the encouragement to develop codes of conduct and certification mechanisms that reflect the specific needs not only of SMEs but also of small mid-caps.

The hope is that these amendments will help build a more balanced regulatory ecosystem—one that reconciles the protection of data subjects’ rights with the needs for growth and innovation among European businesses.

For a tailored analysis and operational support in aligning with data protection regulations, contact the Data & Technology Innovation Team.

AI Chatbots and Privacy Compliance: The Replika Case

Data protection & artificial intelligence

In May 2025, the Italian Data Protection Authority sanctioned the developer of the chatbot Replika for serious violations of the GDPR—an event which, beyond the significance of the ruling itself, prompts a broader reflection on the compliance of AI-based chatbots, which are increasingly used in sensitive and high-risk contexts such as customer service, psychological support, education, and HR.

The use of conversational AI systems raises several critical issues from a data protection perspective. First and foremost, interaction with a chatbot often implicitly involves the collection and analysis of highly personal information—ranging from user preferences and consumption habits to health data or emotional states. It is therefore essential that processing is carried out on a solid legal basis and for well-defined purposes.

Another fundamental aspect is transparency: users must be made aware that they are interacting with an artificial intelligence, and must understand which data is being processed and for what purpose. This is especially important when AI simulates a “human” relationship, as in the case of Replika, generating emotional or personalized responses that may affect the psychological well-being of the user.

The issue becomes even more complex when minors are involved: here too, the Authority’s ruling emphasized the lack of age verification mechanisms, highlighting the need to adopt effective technical solutions to restrict access to inappropriate content.

Lastly, one must consider the often-overlooked issue of automated decision-making. Where the chatbot takes decisions based solely on automated processing that produce legal effects on the user or similarly significantly affect them (e.g., assigning profiles, restricting access to services, or steering the user experience), the safeguards of Article 22 of the GDPR apply—requiring specific protections, including the right to obtain human intervention.

The lesson is clear: conversational AI can offer significant opportunities, but only when integrated within a framework of clear, documented accountability focused on the rights of data subjects. Superficial compliance is no longer sufficient—it’s time to design chatbots responsibly, with a comprehensive vision that balances innovation, ethics, and data protection.

LEXIA’s Data & Technology Innovation team is available to support businesses and developers in risk assessment, privacy-by-design development of AI solutions, and in updating privacy documentation in line with the latest regulatory guidance.

Artificial Intelligence Literacy: The European Commission’s FAQs for Proper Implementation of the AI Literacy Requirement

Artificial Intelligence

The entry into force of Regulation (EU) 2024/1689 (“AI Act”) marks a turning point in the European regulatory approach to artificial intelligence. Among the provisions of the AI Act already applicable as of 2 February 2025 is Article 4, which introduces a specific AI literacy requirement for providers and deployers of AI systems.

The goal is clear: to promote an organizational culture based on awareness, understanding, and accountability in the use of artificial intelligence technologies.

According to the definition set out in Article 3, point 56 of the AI Act, AI literacy goes far beyond mere technical knowledge: it includes the ability to understand the risks, opportunities, ethical impacts, and legal implications of AI systems. Its implementation requires the development of differentiated training programs tailored to business functions, skill levels, and usage contexts.

In this regard, the European Commission provided concrete guidance through its FAQs published on 13 May 2025, emphasizing the importance of a modular and proportionate approach.

There is no one-size-fits-all model. An organization using high-risk AI systems, for example, must adopt enhanced training programs that include risk analysis, practical simulations, and modules on regulatory requirements. The obligation also extends to third parties—such as consultants, suppliers, and contractors—dealing with AI systems on the provider’s or deployer’s behalf.

As legal and innovation professionals, it is now essential to support organizations in thoroughly mapping the technologies in use, defining their organizational role (as provider or deployer), assessing risks, and designing documentable training plans. Complying with the AI literacy requirement is not merely a matter of regulatory adherence—it is a strategic tool to mitigate liability, prevent sanctions, and ensure ethical and trustworthy use of AI. A value for all.

LEXIA’s Data & Technology Innovation team is available to support AI system providers and deployers in structuring AI literacy plans appropriate to their legal entity type and in aligning with the AI Act’s provisions according to applicable deadlines.

Speak with our experts