Generative AI: The Technology Revolutionizing Artificial Intelligence


Generative AI is the new frontier of artificial intelligence, and it is transforming the world. The technology rests on complex algorithms that learn to recognize and replicate patterns in existing data in order to generate entirely new information, opening up new opportunities to create diverse content (text, images, music, and more).

However, these technologies raise ethical and security issues that cannot be ignored. In this article, we will explore how generative AI works, the risks (especially ethical ones) of its use, and the development of sentient artificial intelligence.


The rapid development of Artificial Intelligence increasingly relies on generative AI, which is capable of producing ever new and original outputs. While this technology offers numerous new possibilities, the associated issues and risks cannot be overlooked. To fully appreciate both the benefits and the potential drawbacks of generative artificial intelligence, it is necessary to understand how it operates.

Generative Artificial Intelligence represents an advanced frontier in AI systems. These systems, using complex machine learning and deep learning models, emulate human creativity. This is achieved through processing and analyzing vast volumes of data, including images, sounds, and texts. The models learn to recognize patterns, styles, and structures within this data, allowing them to create new content that can be perceived as original or creative.

The learning process of these AIs is iterative and progressive: initially, the models may generate rudimentary results, but over time, and with the addition of new data, they improve their production capabilities. For example, in deep neural networks such as Generative Adversarial Networks (GANs), two neural networks work in tandem: one generates new data, while the other evaluates its quality, in a continuous feedback loop that refines the generative network’s ability to produce increasingly convincing and polished results.
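The adversarial loop described above can be sketched in miniature. The toy example below is an illustration under heavy simplifying assumptions, not a real GAN implementation: both "networks" are single linear or logistic units, the data is a one-dimensional Gaussian, and the helper name `real_samples`, the learning rate, and the target distribution are all invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a 1-D Gaussian the generator must imitate.
# (The target distribution is an arbitrary choice for this toy example.)
def real_samples(n):
    return rng.normal(4.0, 1.25, n)

# Generator: a linear map of noise, G(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    xr = real_samples(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # Gradients of the binary cross-entropy loss (labels: real=1, fake=0).
    grad_w = np.mean((dr - 1.0) * xr) + np.mean(df * xf)
    grad_c = np.mean(dr - 1.0) + np.mean(df)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    # d/dx of -log D(x) is -(1 - D(x)) * w; chain rule through xf = a*z + b.
    grad_a = np.mean(-(1.0 - df) * w * z)
    grad_b = np.mean(-(1.0 - df) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, the generator's samples should drift toward the real
# distribution, though this toy model is far from a deep GAN.
fake = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean: {fake.mean():.2f}, std: {fake.std():.2f}")
```

The key point the sketch captures is the feedback loop from the text: the discriminator's parameters improve its ability to tell real from generated samples, and the generator's gradient flows *through* the discriminator, so each side's progress forces the other to adapt.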

This ongoing evolution has allowed generative AI to excel in various fields. In the legal sector, for example, it can be used to generate drafts of legal documents or analyze extensive volumes of case law and legislation to assist in legal research. In the music field, it can compose original pieces or facilitate the creative process by suggesting variations and harmonizations. In art, it can create visual works imitating the style of famous artists or develop new artistic styles.

The sophistication of the results is directly linked to the quality and variety of the data fed into the system. The broader and more diversified the datasets, the better the AI will be able to “understand” and replicate the subtle nuances that characterize human creativity.

Generative Artificial Intelligence: What are the Risks?

The availability of a tool capable of creating increasingly sophisticated content raises ethical issues, such as respecting copyright and the possibility of creating deceptive or false content with great realism. These issues require careful consideration and regulation, particularly concerning intellectual property.

The aspects to consider regarding works produced by a generative AI system are primarily their originality and authorship. It is imperative to establish whether the output provided by the system is genuinely original or whether, on the contrary, it could somehow violate another party’s copyright. An AI system’s training dataset is, after all, a library of works produced by other authors, from which the system extracts the information needed to develop its creative ability.

The second question raised by the contents produced by generative AI concerns their authorship, i.e., identifying the subject holding the intellectual property rights connected to such works. Current regulations adopt an anthropocentric perspective, not considering the possibility of a non-human author.

Issues related to the protection of personal data that users provide to AI, more or less consciously, are also of particular relevance. The measure adopted by the Italian Data Protection Authority against ChatGPT in April 2023 and the notification of personal data breach recently sent to OpenAI demonstrate how the data processing by such technologies can violate European data protection regulations.

GPT and Sentient Artificial Intelligence: The Future of AI

GPT-3, along with cutting-edge technologies like Bard and the latest Gemini, marks significant progress toward the long-term goal of developing artificial intelligence that approaches the concept of sentience. These platforms significantly advance the possibilities of AI in machine learning, enabling the creation of texts, images, and sounds with an increasingly refined level of precision.

The goal of sentient AI is to facilitate interaction that is as natural and indistinguishable from human interaction as possible, both in terms of understanding and emotional-cognitive response. Despite these advances, the path to AI that can completely emulate the human brain in its most sophisticated functions and interaction capabilities remains complex and fraught with technical and ethical challenges.

The ability of these technologies to create highly realistic and persuasive content, as seen in deepfakes, introduces risks of information manipulation that can influence public opinion and alter the perception of reality. Such false or misleading representations can serve various malicious purposes: spreading fake news with the intention of deceiving or misinforming, running targeted defamation campaigns that harm individuals through compromising content that appears authentic, or violating privacy by producing material that invades personal space without consent.

While there are currently no AI systems that can be defined as fully sentient in the true sense of the term, technological progress could lead to systems exhibiting behaviors increasingly similar to those of human or animal intelligence, raising unprecedented questions about the autonomy, awareness, and morality of machines.

In this context, it is essential not only to develop responsible and secure technologies but also to establish legal and regulatory frameworks regulating the use and distribution of content generated by AI. A careful assessment of legal responsibilities in case of misuse of these technologies is required, as well as the implementation of content detection and verification mechanisms to prevent the spread of harmful material. Furthermore, it is crucial to educate the public about the capabilities and limitations of generative AI, aiming to promote critical understanding that can help mitigate the negative impacts of these powerful technologies on society.

Institutional Responses

Faced with the new scenarios posed by technological evolution, it is now imperative for institutions to collaborate in identifying new regulatory solutions.

In this regard, the European Union has taken several measures to respond to the needs posed by technology, which is now becoming an integral part of our daily lives, and to guide its development in accordance with the principles underpinning the Union.

The new regulation on Artificial Intelligence (the so-called AI Act) proposed by the European Union represents a historic moment for the regulation of artificial intelligence and the first attempt to strike a balance between the development and deployment of AI on the one hand, and the need to protect privacy, human rights, and security on the other. At present, the AI Act has come through a series of intense negotiations and has been approved by all 27 EU Member States; only the formal approval of the European Parliament remains, scheduled for April 2024 and widely regarded as a formality. The regulation is essential to define the responsibilities of AI developers and users, and it also provides for significant sanctions for non-compliance, of up to 7% of a company’s annual global turnover.

On a complementary level, the Directive proposed by the European Commission on non-contractual civil liability for Artificial Intelligence systems, still awaiting approval, will also come into play.

In an era of rapid and sometimes uncontrollable changes, the European Union positions itself as a guiding light, demonstrating its commitment not only at the regulatory level but also as a promoter of constructive dialogue between technological progress and the protection of human rights, aiming for a sustainable balance that can serve as a model globally.

Speak to our experts