
Shang Quan Recommends丨Cheng Long: From Method to Topic: The Distinction between Empirical Jurisprudence and Artificial Intelligence Jurisprudence

Author: Shang Quan Law Firm

In the article "Review and Return of the Conceptual Terminology of 'Empirical Jurisprudence': A Literature-Based Path toward Integrating Evidence-Based Empirical Legal Research" (hereinafter "Xiong's article"), Professor Xiong Moulin systematically reviews the origins, transmission, and evolution of empirical research in mainland legal academia over the past half century, using concepts and terminology as the focal point to show the past, present, and future of empirical jurisprudence as both a method and an emerging discipline. In analyzing the many concepts derived from empirical jurisprudence, Xiong's article also attributes the first proposal of the concept of "artificial intelligence jurisprudence" to the present author. In light of the author's further research and reflection on artificial intelligence jurisprudence in recent years, and in particular his critical rethinking of the concept itself, a fuller and deeper discussion of the topics of artificial intelligence jurisprudence is therefore in order. The discussion below begins with the methodological differences between artificial intelligence jurisprudence and empirical jurisprudence, clarifying the author's view that the two are incommensurable; it then turns to the topic of "algorithmic procedural justice" in criminal procedure law to set out the author's critical views on artificial intelligence jurisprudence.

1. The incommensurability of empirical jurisprudence and artificial intelligence jurisprudence

From a conceptual point of view, empirical jurisprudence, especially empirical research based on data and statistical analysis, does have a natural affinity with legal artificial intelligence technology. In essence, legal artificial intelligence is an analytical algorithm built on large amounts of data: its core is to discover patterns through data analysis, form an algorithm from those patterns, simulate human reasoning, and ultimately assist or even replace human judicial activity. Xiong's article likewise points out that computational law is a legal method based on computer technology, yet still belongs within empirical research. It is therefore natural to equate empirical jurisprudence with artificial intelligence jurisprudence and to treat the latter as part of the former's territory.

Beyond their shared emphasis on data, empirical jurisprudence and AI jurisprudence also share a logical starting point: attention to judicial practice. In terms of academic history, mature and systematic empirical research began to emerge in mainland legal academia roughly in the second half of the first decade of the twenty-first century. The core methodological idea of these studies is the recognition of the enormous gap between "law in books" and "law in action." This idea can be traced back even further, to the earlier "native resources of the rule of law" research represented by Su Li and to law-and-social-science scholarship. All of these noted the limitations of statutory law, namely that the normativity of statutory law does not guarantee its effectiveness in practice. The difference is that empirical legal research is more inclined to reveal "what actually is," whereas social-science legal research may focus more on "why reality is not so"; empirical legal research prefers large samples and big data, whereas social-science legal research emphasizes thick description of individual cases. In short, fixing its gaze on judicial practice, and seeking to transform judicial practice by studying it, has become the "original aspiration" of empirical jurisprudence. In terms of technical premises, AI jurisprudence likewise has a strong tendency to observe judicial practice rather than legal provisions. A basic example: the "feed" on which machine learning runs and from which algorithms are formed can only be judicial data, not the text of the law. At the same time, once artificial intelligence algorithms have been formed, they can in turn assist and even influence the formation of judicial decisions. On this logic, empirical jurisprudence and AI jurisprudence are indeed related to a considerable degree. This is also an important reason why, after 2016, the so-called "first year of artificial intelligence," a large number of empirical legal researchers began to pay attention to, or even turned to, legal artificial intelligence research. Nevertheless, the author still believes it is necessary to attend to the incommensurability between empirical jurisprudence and AI jurisprudence.

First, empirical jurisprudence and AI jurisprudence use data for different purposes. Empirical jurisprudence uses data to reveal judicial reality and thereby show the gap between the law's normativity and its effectiveness; the data themselves are the object of empirical research. Figures such as the rate of witnesses appearing in court, the appeal rate in plea cases, the arrest rate, and the in-court time of public prosecutors after the reform requiring prosecutors to appear in summary-procedure cases are themselves the "findings" of empirical research. In legal AI, by contrast, data are merely "feed," the basic raw material of machine learning. Legal AI cannot tell whether the judicial data it is fed are normatively proper and legitimate; rather, the point of using large amounts of judicial data is to enable the legal AI system to achieve "like cases decided alike." In other words, the data in AI legal research are probably not the same thing as the data in empirical jurisprudence. In empirical jurisprudence data are the object of research, whereas in AI jurisprudence data are only a technical premise, the basic material from which a legal AI system is formed. AI jurisprudence attaches great importance to data not because data matter in themselves as objects of research, but because data are an indispensable technical foundation for building legal AI systems.
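To make this contrast concrete, the following minimal sketch, which is purely illustrative (all field names and figures are invented and it describes no actual system), uses the same hypothetical set of plea-case records twice: once in the empirical mode, where a descriptive statistic such as the appeal rate is itself the research finding, and once in the legal-AI mode, where the same records serve only as "feed" for fitting a predictive model.

```python
# Illustrative sketch: the same hypothetical case records, used in two different roles.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Invented mini-dataset of plea-case records (1 = yes, 0 = no).
cases = [
    {"plea": 1, "detained": 1, "prior_record": 0, "appealed": 0},
    {"plea": 1, "detained": 0, "prior_record": 1, "appealed": 1},
    {"plea": 0, "detained": 1, "prior_record": 1, "appealed": 1},
    {"plea": 1, "detained": 1, "prior_record": 0, "appealed": 0},
    {"plea": 0, "detained": 0, "prior_record": 0, "appealed": 1},
    {"plea": 1, "detained": 0, "prior_record": 0, "appealed": 0},
]

# Empirical-jurisprudence use: the statistic itself is the research finding,
# reported and interpreted against the law's normative expectations.
plea_cases = [c for c in cases if c["plea"] == 1]
appeal_rate = sum(c["appealed"] for c in plea_cases) / len(plea_cases)
print(f"Appeal rate of plea cases (the empirical 'finding'): {appeal_rate:.0%}")

# Legal-AI use: the same records are only raw material ("feed"); what matters is the
# fitted model and its prediction for a new case, regardless of whether the
# underlying judicial practice was normatively proper.
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform([{k: v for k, v in c.items() if k != "appealed"} for c in cases])
y = [c["appealed"] for c in cases]
model = LogisticRegression().fit(X, y)
new_case = vectorizer.transform([{"plea": 1, "detained": 0, "prior_record": 1}])
print(f"Predicted appeal probability for a new case: {model.predict_proba(new_case)[0][1]:.2f}")
```

The point is not the toy model but the division of roles: in the first use the data answer a research question; in the second they merely calibrate a tool whose output is a prediction.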

Second, empirical jurisprudence and AI jurisprudence take different attitudes toward the law in force. If social-science legal studies and legal anthropology are also counted within empirical legal studies, it becomes clear that empirical jurisprudence itself carries a function of criticizing current legislation. From "Qiu Ju's lawsuit" to the Yan'an "pornographic disc" case, the mainstream position of social-science jurisprudence has been to affirm the wisdom of practice and to criticize legal dogmatism by exposing the gap between legal practice and legal texts. Empirical jurisprudence based on data analysis may be somewhat less critical, but it still exhibits a certain opposition between action and text: even if it does not deny the normativity of legislation, it at least reveals the "powerlessness" of that normativity in reality. Legal artificial intelligence, by contrast, is a tool for assisting the judiciary; by design it cannot contradict the law in force, and it cannot give rise to any "seeking of law outside the law." Contrary to empirical jurisprudence, the role of legal AI is to reduce or even eliminate the gap between "law in action" and "law in books," and, by reasonably limiting judicial discretion, to avoid arbitrariness in the application of law. As Professor Ji Weidong has said, "Any legal expert system software implies a purely legal-positivist presupposition." On this basis, AI jurisprudence assumes that the law in force can and must be fully and accurately applied, and aims to achieve "like cases decided alike" through legal AI systems; it will not and cannot "declare war" on the law in force.

Third, empirical jurisprudence and AI jurisprudence differ in the scope of their research. Empirical jurisprudence can be used to study all legal behavior and legal practice; in recent years, empirical legal research has extended beyond adjudication into legislation, law enforcement, legal compliance, and many other fields. By contrast, the applications of artificial intelligence in law on which AI jurisprudence rests are still confined to the single dimension of assisting adjudication. As a result, the research scope of AI jurisprudence is largely limited to the judicial field and is difficult to extend to the full range of legal research covering legislation, law enforcement, and legal compliance.

Finally, empirical jurisprudence and AI jurisprudence draw on different knowledge traditions. When empirical jurisprudence emerged, legal scholars generally regarded it as an "interdisciplinary research method," because it required an adequate grounding in mathematics, statistics, sociology, anthropology, ethnology, and other disciplines. Those disciplines, however, still belong to the traditional system of human knowledge, whereas the extra-legal knowledge involved in AI jurisprudence lies at the very frontier of science. Artificial intelligence technology is now advancing at a dizzying pace, and products such as ChatGPT are emerging explosively, which makes the relevant knowledge both fast-changing and unfamiliar. Constrained by the humanities character of the legal discipline, most legal scholars' understanding of AI technology remains at the level of "discourse": the principles and methods of AI are in fact difficult for them to grasp, and algorithms, machine learning, and the like remain objects of "imagination" and "observation." To put it bluntly, most researchers in AI jurisprudence have never even touched a machine used for machine learning, and the computers on which they write their AI-law papers are probably not fitted with RTX 4090 graphics cards. If the mathematical demands of empirical research have already deterred many, then the learning cost that jurists must pay to master the advanced knowledge structure of AI jurisprudence is hard even to imagine.

In summary, although empirical jurisprudence and AI jurisprudence appear intuitively close, they differ in many essential respects, and as the two types of research mature they may diverge further or even "part ways."

2. Reflections on the topics of artificial intelligence jurisprudence: algorithmic procedural justice as an example

The development of AI jurisprudence has prompted legal scholars to consider the impact of legal AI technology on the traditional system of legal theory, and the legal regulation of algorithms has attracted particular attention. Almost every legal AI system has an algorithm at its core, using the algorithm to simulate human thinking in handling legal questions; as a result, the algorithm itself becomes the de facto "arbiter." As an adjudication technique, however, the algorithm strains traditional theories of procedural justice: algorithmic bias undermines the neutrality of the adjudicator, the algorithmic black box undermines the openness of proceedings, and algorithmic monopoly undermines procedural parity. At the same time, traditional procedural justice theory has many limitations and struggles to regulate algorithms: it has blind spots in virtual digital space, it neglects technical personnel and other de facto participants in the proceedings, it lacks rules for digital procedure, and the acceptability of adjudication declines. Scholars have therefore proposed that algorithms be regulated within a new theoretical framework of algorithmic due process, whose content would include: (1) the openness and transparency of algorithms; (2) the interpretability of algorithms; and (3) the opportunity to be heard, to raise questions, to obtain hearings, and to seek corrections.

Recognizing the impact of legal AI technology on the current legal system is a sign that AI jurisprudence is maturing. Yet it still requires careful thought whether we should step outside the existing legal framework, "start from scratch," and create a new legal system for artificial intelligence. Because law is inherently conservative and lags behind, the emergence of any new technology may exceed the existing system of legal concepts and generate new regulatory needs. Twenty years ago, for example, the appearance of online virtual property raised the question of its place in civil and criminal law. The academic community at the time, however, did not set out to create a "virtual property law" or a "virtual property rights system" wholly different from the traditional legal order; instead it brought virtual property within the traditional system of real rights or property rights and stretched the boundaries of traditional theory to regulate it. This is not to say that the law never needs remaking: the emergence of the bankruptcy system gave rise to bankruptcy law, and the growth of international trade brought about by easier transportation gave rise to international economic law. When a new thing or a new form of conduct simply cannot be regulated by the traditional legal system, it is indeed necessary to create new legal concepts and a new normative system. The question, however, is whether the development of legal artificial intelligence has really reached that point.

In the author's view, it is difficult to give an affirmative answer. Take AI-assisted adjudication as an example. If AI itself had already profoundly shaped the current judiciary, and the existing procedural law system could not regulate AI adjudicators, then a theory of "algorithmic procedural justice" or even "artificial intelligence procedural justice" should indeed be developed. But this premise faces two "fatal" questions: first, should artificial intelligence become the adjudicator at all? Second, can AI's participation in adjudication be regulated by existing law?

First, neither artificial intelligence nor algorithms should be adjudicating subjects, whether now or in the foreseeable future. The theory of algorithmic procedural justice in effect presupposes the subjectivity of AI in adjudication: it holds that the algorithm itself has become part of the reasoning and the formation of the judge's inner conviction, posing a major challenge to the traditional judge-centered litigation process, so that a new model of procedural justice must be established to regulate it. It is true that the mainland judiciary, and criminal justice in particular, has become heavily dependent on artificial intelligence. Although legal AI is still characterized in theoretical discourse as an "assistant" in adjudication, the scope, space, and depth of that assistance are difficult to delineate precisely. If such assistance has become the main basis of the judge's inner conviction, and the judge firmly subscribes to "technology supremacy," then assistance is in fact decision-making. At this level, the concerns behind algorithmic procedural justice are not unreasonable. The problem, however, is that reality does not equal legitimacy, and practice does not equal norm. Professor Chen Jinghui has argued that, owing to the comprehensiveness and supremacy of law, algorithms cannot become law but can only be objects of law. The greatest objection facing algorithmic due process is that it unreflectively treats the claim that algorithms have in fact become adjudicating subjects as if it were an argument that algorithms may become adjudicating subjects. On the contrary, the algorithm itself should not be an adjudicating subject, nor even an auxiliary adjudicating subject. On the one hand, that would blatantly violate the mainland's Constitution and laws: judicial power on the mainland belongs exclusively to the courts and judges, and neither algorithms nor the technical personnel and commercial institutions behind them should ever become subjects that shape judicial adjudication. On the other hand, the influence of algorithms on adjudication would strike at basic guarantees of fair trial, such as the adjudicator's direct, personal participation, human judgment, and sensitivity to context, thereby detracting from the legitimacy of adjudication in ways that people today would find hard to accept. Moreover, the accuracy and scientific validity of algorithms are themselves highly contested; to regulate them now as one of the subjects of adjudication would likely obscure the discussion of the proper status of algorithms in judicial practice.

Second, the impact of AI and algorithms on the administration of justice should indeed be regulated, and it can be regulated within traditional legal frameworks. To deny that AI and algorithms are de facto adjudicating subjects is not to ignore their influence on justice; it is to insist that this influence should not carry a decision-making function, though it may serve as a source of decision-making information. Moderate theorists of algorithmic procedural justice are likewise building new theory around how algorithms, as sources of decision-making information, enter the courtroom. The author nonetheless believes that the existing legal system is fully capable of regulating algorithms. As noted above, the theory of algorithmic procedural justice essentially demands that algorithms be transparent, explainable, and open to challenge. If algorithms are regarded as matter to be adjudicated upon, these demands can be fully accommodated within the existing litigation system. The core idea is to treat the conclusions generated by legal AI as evidence. First, the evidentiary character of the opinions generated by the algorithm should be clarified, so that the algorithm in effect plays the role of a forensic appraiser or a person with specialized knowledge. In this way, the scientific validity and reliability of the algorithm can be tested and its applicability clarified, which is equivalent to determining the expert qualification of the algorithm as a digital appraiser or expert assistant. Second, the process by which the algorithm itself was formed should be disclosed so as to satisfy the requirements of openness and transparency, which is equivalent to reviewing and confirming the professional qualifications of appraisers and expert assistants. Third, the litigant who relies on an algorithmic opinion should bear the burden of proving its reliability and fully explaining how the opinion was produced and on what scientific principles it rests, just as an appraiser must establish the scientific validity and reliability of an appraisal or specialized opinion. The opposing litigant has the right to cross-examine, may engage experts to assist in challenging the algorithm and its opinion, and may even rerun the legal AI system to generate a fresh algorithmic opinion. Finally, the court should form its inner conviction on whether to adopt the algorithmic opinion only after hearing both parties' cross-examination, and must not rely on an algorithmic opinion that has not been cross-examined. Seen in this light, the algorithm and the opinions it generates can be treated as a new type of appraisal opinion or expert report and handled under the relevant provisions of evidence law and procedural law; there is no need to erect a new theory of algorithmic procedural justice to regulate them.

3. The essence of artificial intelligence jurisprudence: rejecting "packaging"

As the person identified in Xiong's article as the first to propose the concept of "artificial intelligence jurisprudence," the author has, through his thinking and research in recent years, gradually come to "rebel" against that concept. The core point is this: on the one hand, the theoretical foundations and problem orientations of AI jurisprudence and empirical jurisprudence differ, so the view that AI jurisprudence can grow out of empirical jurisprudence and become its representative in a new era may not hold; on the other hand, AI jurisprudence as a body of topics may also exaggerate the impact of AI on existing legal theory. So long as the premise of necessity cannot be properly demonstrated, the assumption and ambition of AI jurisprudence, namely to build on the basis of practice a wholly different system of legal theory, faces the twin doubts of legitimacy and necessity. An AI jurisprudence packaged in "high-end" technology is destined to find little acceptance in the academy or in practice. As Professor Xiong Moulin has put it, "Any high-tech expression that is divorced from the legal education of jurists and law students, or detached from the abilities of legal researchers, is destined to drift away from law; however cool and plausible it may sound, its excessively high threshold will leave fewer and fewer jurists able to participate, until entry is finally barred by a fear of method."

Source: Huxiang Law Review, 2024, Issue 2 (overall No. 12), "Special Writing" column

Author: Cheng Long, Doctor of Law, Associate Professor, School of Law, Yunnan University
