WHY THE AI ACT IS DOOMED TO FAIL
The text of the AI Act was unanimously endorsed last Friday by the Member States of the EU. The regulation has not yet been formally and definitively approved, nor do we know its effects, but we can form an idea of its impact by considering the nature of Artificial Intelligence, its challenges, and its advancements.
The AI Act can be analyzed from various angles, but in this brief analysis I will focus on whether the current regulation and legislative structures are prepared to address Artificial Intelligence.
To begin with, we are immersed in a legislative process for the regulation that has spanned several years. This process has been the setting for numerous modifications, intense negotiations among European institutions, and meticulous debates concerning the inclusion and wording of specific articles.
Meanwhile, the realm of Artificial Intelligence has not halted its rapid advancement; during this same period, we have witnessed the launch of significant innovations such as ChatGPT. Three years of legislative process can span several advancements in Artificial Intelligence, or even a single innovation that completely transforms the landscape. The evolution of AI is not governed by negotiations or the timelines of the legislative process. Crafting regulations in a timely and appropriate manner is an exceedingly complicated task: regulating AI too soon may be counterproductive, and doing so too late may mean offering solutions to problems that no longer admit one. I am not suggesting that the slowness of the process is the problem per se, but that even swift regulation would face the question of whether the moment is the right one.
In the current context, where legislative processes tend to be bureaucratic and sometimes opaque, it becomes extremely challenging to effectively intervene in the field of Artificial Intelligence to ensure its proper functioning. This situation is exacerbated by the problem identified by Friedrich Hayek regarding the capacity of central legislators: the need for precision and a detailed understanding of specific circumstances that central planners simply do not possess. The inherent complexity of Artificial Intelligence, along with its rapid evolution, demands a more agile and adaptive regulatory approach, capable of responding to the specific and often unpredictable dynamics of this advanced technology.
Secondly, continuing with Hayek's analysis in his work "Law, Legislation and Liberty", the dispersed and fragmented nature of information permeating society is highlighted, underscoring the challenge legislators face in capturing and understanding this diversity in their regulatory efforts. This challenge is further intensified in the context of Artificial Intelligence, whose dynamic, unpredictable, and exponentially growing nature further complicates regulatory complexity. In the realm of privacy and data exchange, where each individual values and manages their personal information based on a unique set of preferences and risk perceptions, AI adds additional layers of complexity. Individuals make decisions reflecting their autonomy and their own cost-benefit evaluations, creating a mosaic of behaviors and expectations that defy any attempt at uniform regulation. The tendency of legislators to impose 'top-down' uniform regulations, based on standard European values, carries the risk of generating inefficiencies and a lack of adaptation to specific circumstances, especially in a constantly evolving field like AI.
Thirdly, how do we know that the risk-based approach is suitable for regulating AI? This approach can be analyzed from various angles, but I will focus on one: the problem of managing diverse risks. In the study "A Risk to a Right? Beyond Data Protection Risk Assessments," the authors point out that risk assessment methodologies rest on the assumption that the future can be predicted from statistics and probabilities. However, this comes at the cost of reducing the full range of uncertainties to the more comforting illusion of controllable, probabilistic but deterministic processes. The choices regarding risk criteria will determine what is considered a risk in the first place.
This simplifying approach may fail to grasp the inherent complexity and dynamic nature of AI, leading to a deceptive perception of control and a potential underestimation of risks. Moreover, the subjectivity in defining and prioritizing risks unveils a dilemma about the suitability of such methods to address the specific and constantly evolving challenges of AI.
As Luhmann points out in "The Morality of Risk and the Risk of Morality", the distinction between danger and risk reveals how technological development, despite containing few intrinsic dangers, increases risks by transforming dangerous situations into risk-based decisions. In the realm of AI, this is manifested in the inevitable emergence of new risks as the technology progresses. Luhmann underscores the importance of assuming and experiencing these risks, as only through direct interaction and the gathering of experiences can we define, evaluate, and ultimately develop strategies to mitigate them. Active management and practical experience are crucial for understanding and handling the risks associated with the continuous evolution of Artificial Intelligence.
It would be problematic if regulation were to hinder experimentation with these risks, as such a restriction would hamper our ability to fully understand and respond to the unique challenges presented by AI. By not allowing experimentation and controlled risk-taking, valuable information needed to develop effective mitigation and prevention measures would be lost. This could result in regulations that are both overly restrictive and ineffective, limiting innovation and technological progress without providing adequate protections against real and significant risks. In short, regulation that does not allow a certain degree of experimentation and risk-taking risks being both ineffective and detrimental to the advancement and safe application of Artificial Intelligence.
Jesús Fernández Villaverde's paper, "Simple Rules for a Complex World", examines the role of simple rules in the era of Artificial Intelligence. “The simple rules were the product of an evolutionary process. Roman law, the Common law, and Lex mercatoria were bodies of norms that appeared, over centuries thanks to the decisions of thousands and thousands of agents (Berman, 1983). Roman law, for example, became predominant in Western Europe outside England in the late Middle Ages not because kings and dukes liked it (in fact, they did not), but because armies of lawyers and business people saw that it solved their problems. Good law is nothing more than applied optimal mechanism design. The forces of evolution, by trial and error, led us to the optimal solution to such a mechanism design problem, not always tidily, but inexorably.
Jurists and agents, through a combination of reasoning and experience, saw what worked and what did not. Those rules that led to Pareto improvements survived and thrived. Those that did not, dwindled."
The AI Act markedly differs from the evolutionary legal development process described in "Simple Rules for a Complex World". Unlike historical legal systems, the AI Act is the result of a top-down regulatory approach. It is formulated and imposed by legislative bodies, with regulatory changes also decided from the top, contrasting the natural, grassroots evolution of historical legal frameworks.
Moreover, the regulation faces the challenge of maintaining that simplicity and clarity in the complex and rapidly evolving field of AI. The current regulatory proposal is far from simple and clear, offering a wide margin for interpretation. This interpretation ultimately falls to the public bodies themselves, leaving little room for innovation in this area by the private sector.
My concern lies in the fact that the current regulation seems to be configured not so much to adapt to the real-world experience of society with Artificial Intelligence, but rather to shape AI to fit the contours of a pre-established regulation. This dynamic does not lead us to properly prepare for the future of AI; on the contrary, it seems to primarily direct us to comply with the demands of Europe. However, these European guidelines do not seem to comprehensively address the complex and constantly evolving needs of Artificial Intelligence.
Works Cited
Fernández-Villaverde, J. (2020). Simple Rules for a Complex World. University of Pennsylvania.
Hayek, F. A. (n.d.). Law, Legislation and Liberty (Spanish ed.: Derecho, Legislación y Libertad, 2nd ed.). Unión Editorial.
Luhmann, N. (2010). The Morality of Risk and the Risk of Morality. New York University.
van Dijk, N., & Gellert, R. (2015). A risk to a right? Beyond data protection risk assessments. Computer Law & Security Review.
Image generated by DALL·E.