AI Law - International Review of Artificial Intelligence Law
G. Giappichelli Editore

01/03/2024 - Some preliminary notes about Artificial Intelligence Law

Topic: Comments - Legal Technology

Generative AI highlights the potential impacts of artificial intelligence and the complex legal challenges it presents, especially in areas like intellectual property and data protection. This necessitates an interdisciplinary approach for the development of AI law. The text delves into the author's exploration of AI's legal, technical, and technological dimensions, emphasizing the importance of engaging with AI technology to understand its legal implications effectively. It discusses the variety of AI technologies, their distinct purposes, and the philosophical considerations that underpin legal theories related to AI. Furthermore, it addresses the contentious issue of liability in AI use, the difficulties in determining responsibility, and the challenges of applying traditional legal concepts to AI, given its complexity and the rapid technological advancements. The piece advocates for ongoing collaboration among legal scholars, technologists, ethicists, and other stakeholders to create a legal framework that keeps pace with AI's development, ensuring its responsible and equitable application.


written by Marco Perilli

Upon discovering ChatGPT over a year ago, I quickly appreciated the significant impact artificial intelligence (AI) could have, as well as the complex legal challenges it might present. Due to my focus on legal issues concerning technology, especially intellectual property and data protection, I recognized early on that AI would bring about new considerations in legal theory, demanding an interdisciplinary approach for the development of what could be termed the law of artificial intelligence.

In the time since, I have devoted a substantial part of each day to studying both the legal publications and the technical and technological facets of artificial intelligence, a field in which I am an avid user, particularly in one of its forms, the generative kind. I believe that to discuss AI's legal aspects effectively, one should engage with the technology to the extent possible and stay informed about its continuous advancements.

Herein, I wish to share, in the manner of somewhat unstructured notes, some of the principal personal insights from this first year of study. I begin with what may be a truism for those versed in this topic, but one I believe worth reiterating: intelligence is neither a singular entity nor a precise concept. The latest draft of the AI Act, leaked in January 2024 and on track for final approval, defines an AI system as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." The breadth of this definition raises concerns regarding the principle of legality, a foundational concept underpinning legal systems in both the civil law (Continental) and common law (Anglo-Saxon) traditions. That principle, essential in criminal law, international human rights law, and international humanitarian law, mandates that laws be sufficiently clear and predictable: legal entities should be able to ascertain with reasonable certainty whether they fall within the purview of a given regulation, such as the AI Act.


There exist many types and categories of artificial intelligence, classified on the basis of their operational methods and, most importantly, their purposes and objectives; so much so that we speak of a family of technologies whose common denominator is the performance of tasks which, if undertaken by human beings, would require intelligence. We have generative AI (to simplify: ChatGPT and DALL·E by OpenAI, Midjourney, Gemini by Google, Claude by Anthropic, to name but a few systems developed in the United States) as well as predictive, emotional, perceptive, analytical, interactive, and autonomous AI, to mention just a few. Current AI systems, as they operate today, do not mirror human intelligence, and they are particularly distant from any notion of consciousness. This distinction is crucial, as the debate on consciousness is not inherently tied to the functionality or capabilities of these systems. My prior academic work on animal rights has unexpectedly provided a valuable foundation for navigating the philosophical dimensions of law, which are equally relevant in the context of artificial intelligence.

One of the most debated issues in the legal doctrine of AI is that of liability arising from its use. Who will be responsible for the civil and criminal consequences of its operation: the producer, the distributor, the provider, the user? The AI Act is not particularly helpful here, as it regulates the safety and compliance of AI systems according to EU principles rather than allocating liability. Every proposed solution I have encountered thus far has seemed inadequate in some respect, especially when measured against the technology itself. From my perspective, two principal problems make a univocal solution to the issue of liability elusive: 1) the inscrutability of the internal processes of AI systems, even to their creators; 2) the immense complexity arising from human interactions with AI systems, such that one cannot determine with a sufficient degree of reliability the causal efficacy of each individual interaction in relation to the system's final behavior.

Moreover, the framework of liability derived from artificial intelligence is further complicated by questions of jurisdiction and applicable law. These arise not only in identifying producers, but also in identifying all the entities whose interactions with the systems and their training may have contributed causally to any damages caused by the use of artificial intelligence, a question that strains traditional legal concepts of jurisdiction and applicable law.

Furthermore, the idea I have occasionally encountered of pursuing Artificial Intelligence for its misdeeds as if it were a single entity, with a "name and surname" and a fixed place of residence, reveals not only a naively simplistic approach to the problem but also a lack of understanding of the technological realities of AI systems. This perception underestimates the distributed, networked, and often international character of AI development and deployment, making the attribution of legal personhood or location-based responsibility to AI both impractical and misguided. For instance, since November 2023 OpenAI has enabled users to freely create and share custom versions of ChatGPT, specialized for a wide array of functions (I have already created about thirty of these). This development may sideline doctrinal speculations centered on the exclusive responsibility of the AI system producer. A legal analysis that ignores the rapid technological evolution of AI is thus likely to be flawed from the outset.

Ultimately, an interdisciplinary approach that involves legal scholars, technologists, ethicists, and other stakeholders is essential to develop a legal framework equipped to handle AI's unique characteristics. Continuous dialogue between these disciplines will be crucial as we seek to understand and shape the evolving relationship between AI and the law. As AI continues to advance, so too must our legal systems. It is a journey that requires us to be agile, thoughtful, and collaborative, harnessing the benefits of AI while minimizing its risks and safeguarding justice and fairness in its application.