The Italian Data Protection Authority fines an energy company for unlawful telemarketing: focus on “omnibus” consent
Data protection
On February 27, 2025, the Italian Data Protection Authority (Garante per la protezione dei dati personali) imposed a €300,000 fine on a national energy company for violations in managing telemarketing and telesales activities. Our article provides an in-depth analysis of the case.
The main issues identified include the use of invalid consent (so-called “omnibus” consent), failure to comply with objections registered in the Public Objections Register (Registro Pubblico delle Opposizioni – RPO), and a lack of adequate oversight in managing promotional contact lists, including outsourced operations.
According to the Authority, it is crucial to distinguish between telemarketing (promotional phone activities) and telesales (direct sales via phone). Both activities fall under data protection regulations, even when the phone contact is only a preliminary step toward finalizing a contract.
The decision also highlights the illegality of broad and generic consent, which does not differentiate between purposes, contact methods, or product categories. Since such consent is invalid, it cannot override RPO registration and does not nullify the “opt-out” regime established through this registration. Instead, valid consent must be:
- freely given, meaning optional and not conditional;
- specific, tailored to each distinct purpose;
- granular, differentiating between contact channels (phone, SMS, email) and product categories.
Privacy authorities in France, Spain, and Germany have also adopted strict approaches to this issue. Notable measures include CNIL’s requirement for granular consent for partners and channels, AEPD’s quarterly consent verification, and German case law that invalidates overly broad consent.
In summary, the Italian Authority reiterates a key principle: consent to the processing of personal data for marketing purposes must be genuinely free, specific, and targeted. Implementing a compliant telemarketing and telesales process—including proper qualification of external providers—is essential for conducting lawful promotional activities, avoiding penalties, and safeguarding consumer trust and brand reputation.
Our Data & Technology Innovation team is available to assist companies and professionals in planning telemarketing and telesales campaigns in compliance with current data protection regulations and in implementing operational best practices.
Senate approves AI delegation bill
Artificial intelligence
On March 20, 2025, the Italian Senate approved the government’s bill (Senate Act No. 1146), originally presented on May 20, 2024, titled “Provisions and delegations to the Government on artificial intelligence” (the “AI Bill”). This legislation regulates the use of artificial intelligence (“AI”) in areas left to the discretion of member states under Regulation (EU) 2024/1689 (the “AI Act”).
The AI Bill will now move to the Chamber of Deputies for final approval. Its goal is to promote responsible, transparent, and human-centered AI use, in alignment with the AI Act. Key provisions include:
- general principles: Ensuring data protection and privacy, which require special safeguards in AI applications.
- national AI authorities: The Agenzia per l’Italia Digitale (AgID) and the Agenzia per la Cybersicurezza Nazionale (ACN) are designated as Italy’s AI regulatory bodies.
- sector-specific regulations: AI applications in healthcare, disability services, employment, intellectual professions, public administration, and the judiciary—where AI may assist in organization but cannot replace judges in decision-making.
- government delegation for legislative decrees: Empowering the government to adopt further legislative measures in various areas.
- amendments to the Penal Code: Introducing new aggravating circumstances and offenses, including those related to harm caused by the illegal distribution of AI-generated content.
- copyright protection: Addressing one of AI’s biggest challenges—recognizing AI-assisted works as copyrightable only when they result from the intellectual effort of a human author.
This initiative comes amid rapid global regulatory developments. On February 4, 2025, the European Commission approved draft guidelines on prohibited AI practices, followed by the February 6 release of specific guidelines defining AI systems under the AI Act.
LEXIA’s Data & Technology Innovation team closely monitors all regulatory developments in this space.
Artificial intelligence and hallucinations: no liability under Article 96 c.p.c.
Artificial intelligence
In its March 14, 2025 ruling, the Florence Court clarified that the use of fabricated case law citations generated by artificial intelligence in legal pleadings does not give the victorious party the right to seek “aggravated liability” under Article 96 of the Italian Code of Civil Procedure (c.p.c.). In other words, referencing non-existent legal precedents created by ChatGPT does not, in itself, amount to bad faith or gross negligence in court proceedings.
While the Court acknowledged the “reprehensibility of failing to verify the actual existence of case law results produced by AI”, it ruled that the references were merely a reinforcement of an already established defense strategy rather than an attempt to litigate in bad faith. Therefore, Article 96 c.p.c. could not be applied.
The ruling further explains that AI-generated errors can be classified as “hallucinations,” a phenomenon where AI “invents non-existent results but then confirms them as truthful upon further queries.” In this case, the AI allegedly fabricated fictitious case numbers attributed to Italian Supreme Court decisions. The mistake, caused by an unverified use of AI by a law firm associate, led the winning party to request aggravated liability penalties against the opposing side.
The Florence Court dismissed the request, emphasizing that the incorrect case law citations were used only to support an argument already presented in the first instance and were not intended to mislead the Court but to reinforce known legal reasoning.
This ruling serves as a reminder that, in a world where AI is reshaping every sector, developing AI literacy is no longer optional—it is essential for maintaining both competence and accountability. To better understand AI, its impacts, and its applications, contact LEXIA’s Data & Technology Innovation team for further insights.
AI literacy and internal policies: key tools for responsible AI adoption
Artificial intelligence & Data protection
The rapid spread of generative AI tools—often adopted independently by employees—requires companies to take immediate action in terms of awareness, governance, and accountability. In this context, AI literacy is not only a strategic asset for competitiveness but also a prerequisite for regulatory compliance and responsible risk management.
Italian businesses, including SMEs, should adopt a dual approach:
- conduct an internal assessment of AI systems used or accessible within the organization, whether at the corporate or individual level. This includes widely available tools such as ChatGPT, Copilot, or Gemini.
- implement corporate AI policies that define roles, application areas, safeguards, and controls, integrating them with existing cybersecurity, privacy, and IT usage policies.
Internal guidelines should cover:
- authorization criteria and access rules for AI tools.
- verification procedures for AI-generated outputs, with special attention to AI hallucinations.
- restrictions on inputting personal data or confidential information without adequate legal and technical safeguards.
- compliance with GDPR in the use and storage of AI-generated content.
- clarifications on intellectual property, individual responsibilities, and internal controls.
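Parts of such guidelines can be expressed as a machine-checkable configuration. The sketch below is a minimal, hypothetical example of how an authorization list, an output-verification rule, and a personal-data restriction might be encoded and enforced; the tool names and rules are illustrative assumptions, not a compliance template.

```python
# Hypothetical internal AI-usage policy expressed as a machine-checkable
# configuration. Tool names and rules are illustrative only.
AI_POLICY = {
    "authorized_tools": {"ChatGPT", "Copilot", "Gemini"},
    "require_output_review": True,        # outputs must be human-verified
    "allow_personal_data_input": False,   # no personal data without safeguards
}


def check_usage(tool: str, contains_personal_data: bool,
                output_reviewed: bool) -> list[str]:
    """Return the list of policy violations for a proposed AI interaction."""
    violations = []
    if tool not in AI_POLICY["authorized_tools"]:
        violations.append(f"tool '{tool}' is not on the authorized list")
    if contains_personal_data and not AI_POLICY["allow_personal_data_input"]:
        violations.append("personal data may not be submitted without safeguards")
    if AI_POLICY["require_output_review"] and not output_reviewed:
        violations.append("AI-generated output must be verified before use")
    return violations
```

Encoding the policy this way lets it be embedded in internal tooling (for example, a gateway in front of AI services), so the guidelines are applied consistently rather than relying on each employee's recollection.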
Several Italian companies have already adopted internal AI usage policies, demonstrating how clear and well-structured guidelines can serve as a reference framework for the safe and effective use of AI. These policies align with the AI Act and the principles of responsibility and transparency emphasized in the AI Bill currently under review in the Italian Parliament.
The LEXIA Data & Technology Innovation team is available to assist businesses in mapping AI tools within their organization, drafting tailored AI policies, and updating internal privacy and compliance documentation to tackle the challenges of artificial intelligence with expertise.