
How to balance innovation and security in risk governance when AI legislation is in progress?

Author: 21st Century Business Herald

21st Century Business Herald reporter Zhong Yuxin and intern Liu Yuexing report from Beijing

Since the advent of ChatGPT at the end of last year, generative AI has set off a new round of technological revolution, profoundly changing social productivity and production relations. As the technology evolves rapidly, associated risks such as data breaches, copyright disputes, and discrimination and bias have also emerged.

A global competition over the discourse of AI governance is under way, and the formulation of an artificial intelligence law has been included in the State Council's 2023 legislative work plan. In this context, how will China contribute a distinctively Chinese path of AI governance through legislation?

On December 18, the "Seminar on Artificial Intelligence Security Risks and Legal Regulation" was held in Beijing, sponsored by the Institute of Law of the Chinese Academy of Social Sciences and co-organized by the Network and Information Law Research Office of the Institute of Law, the Emerging and Interdisciplinary Discipline of Digital Law of the Chinese Academy of Social Sciences, and the Nancai Compliance Technology Research Institute.

At the meeting, officials from the Legislative Affairs Commission of the Standing Committee of the National People's Congress, the Department of Policies and Regulations of the Ministry of Science and Technology, and other relevant departments delivered speeches; scholars from the Party School of the Central Committee, Tsinghua University, the Institute of Scientific and Technical Information of China, Beijing Institute of Technology, the University of International Business and Economics, Tongji University, Guangdong University of Finance and Economics, and other institutions exchanged views; and enterprises including Tencent, Baidu, Douyin, JD.com, Alibaba, Ant, Sina, Xiaomi, NetEase, Qianxin, 360, and iFLYTEK shared industry practices.

Security risks such as AI loss of control require special attention

"The two lines of technological evolution and legal-policy evolution are intertwined, which has had a chain of far-reaching effects on the development of artificial intelligence and even society as a whole. It can be said that mankind has once again reached a crossroads," said Zhou Hanhua, deputy director and researcher of the Institute of Law of the Chinese Academy of Social Sciences.

Around the world, countries are accelerating AI governance. Zhou Hui, deputy director of the Network and Information Law Research Office of the Institute of Law of the Chinese Academy of Social Sciences, introduced recent progress in AI legislation in the United States and the European Union, drawing on his recent research visit to the United States. He pointed out that AI governance in the United States is characterized by decentralization and a strong tendency toward market dominance. The executive order on artificial intelligence issued by the White House in October this year is forward-looking, and a "Washington effect" may emerge in the future.

In addition, with the provisional agreement on the EU's Artificial Intelligence Act, the "Brussels effect" will ripple further into the field of AI. The Act aims to ensure that fundamental rights, democracy, the rule of law, and environmental sustainability are protected from high-risk AI, while promoting innovation and making Europe a leader in AI governance.

Yao Zhiwei, a professor at the Law School of Guangdong University of Finance and Economics, analyzed the current situation of artificial intelligence regulation from the perspective of domestic law. In terms of algorithms and models, domestic laws provide for filing, security assessment, and discriminatory risk regulation. In terms of data, domestic laws mainly regulate from the perspectives of data source security, data content security, and data labeling security.

Feng Jue, editor at the Institute of Law of the Chinese Academy of Social Sciences and deputy editor-in-chief of Legal Research, spoke about the risk of artificial intelligence running out of control: "When the AI singularity moment arrives, will the principles humans have set for machines still be effective? How can we ensure that humans retain the ability to press the pause button? Will AI completely replace human labor, and will it reshape our interpersonal relationships? All these questions are worth pondering."

Zhang Xin, an associate professor at the Law School of the University of International Business and Economics, believes that compared with security in the traditional sense, AI security has the characteristics of multi-subjectivity, full cycle, dynamics, and consensus on global governance.

She said that an AI security governance framework should be built from a life-cycle perspective, covering secure design, secure development, secure deployment, and secure operation and maintenance across the entire cycle. In addition, AI security needs to be pursued along multiple dimensions: technical standards must be established on the one hand, and a rule system for security responsibility must be clarified at the legal level on the other.

Chen Tianhao, an associate professor at the School of Public Policy and Management at Tsinghua University, focused on the shortcomings of, and room for improvement in, the Measures for the Review of Science and Technology Ethics (Trial) in dealing with AI risks. From the perspective of incentive alignment, he pointed out that enterprises face strong profit demands and competitive pressure, have their own incentive functions, and are reluctant to "step on the brakes". In addition, operating a science and technology ethics review committee is costly, and how to assess risks, how to involve the public, and how to determine the degree of openness remain challenges.

"The ethical review of science and technology has changed from an ethical obligation into a legal obligation. How AI companies can comply with the law and how the ethics review mechanism can be improved still need attention. A possible path forward is agile and responsive governance," Chen Tianhao said.

At the seminar, a number of industry experts from Tencent, Douyin, Ant, Taotian, 360, Baidu, Alibaba, Anyuan AI, Meizu, Qianxin, Xiaomi, JD.com, iFLYTEK, Sina, China Mobile, Kunlun Digital Intelligence and other enterprises discussed issues such as AI value alignment and copyright disputes over large model training. Some experts pointed out that after the emergence of GPTs, "agents" will be the main direction of the future development of large models. In the context of continuous iterative upgrading of large models, if the goals pursued by large models are inconsistent with the values of human society, it may bring catastrophic risks, and it is especially necessary to pay attention to the problem of "AI out of control".

In the view of industry experts, in the AIGC era, enterprises need not only technical and algorithm teams to develop large models, but also risk teams to handle the compliance governance of large models. At the same time, agile and collaborative governance by regulatory authorities is very important in the field of artificial intelligence; fragmented supervision and overlapping rules would put greater compliance pressure on enterprises. "With the rapid development of artificial intelligence, technology governance, industry self-discipline, and government regulation are all indispensable."

Zhu Yue, an assistant professor at Tongji University Law School, believes that the classification and grading of AI security risks follow three logics: capability logic, structural logic, and application logic. Particular attention should be paid to the coherence among these logics and to resolving and bridging conflicts between laws.

Transparency and explainability face difficulties

AI security differs from that of traditional fields, encompassing corpus security, model security, and the accuracy and reliability of generated content, among other concerns. On AI security assessment, some industry experts pointed out that the assessment mechanism should be internationally interoperable and able to accurately identify the extreme risks posed by underlying models, so that enterprises can genuinely implement it. Overall, a balance should be struck between developing AI's potential and ensuring its safety, taking into account the different needs of regulators, businesses, and the public.

"Models and data are the core assets of AI companies and are easy targets for attackers," industry experts said. Enterprises should pay attention to issues such as sensitive-data desensitization and privacy protection throughout data collection, cleaning, and processing; at the same time, data poisoning, as well as errors and biases in manual labeling, also needs to be considered in security assessments.

In addition, industry experts have called for further clarification of standards and timelines for AI security assessment mechanisms, so that developers can have more stable expectations.

Transparency and explainability have become consensus principles of AI development, but controversies and difficulties remain in their implementation. Wang Jun, chief researcher of the Nancai Regtech Research Institute and deputy director of the compliance news department of the 21st Century Business Herald, drew the following distinction: explainability is an objective attribute of an algorithm model, namely whether its technical architecture allows the algorithm to be interpreted; transparency concerns the relationship between an algorithm's results and subjective expectations, namely the extent to which an explanation of an algorithm's application can reveal the internal logic of its decisions and the actual influence of specific factors, so that the results match users' subjective cognition and expectations.

She noted that achieving transparency and explainability is no easy task. "Different entities have different requirements for transparency, and from the perspective of system design and compliance costs, AI developers and providers cannot be required to disclose all information. Overemphasizing explainability can increase costs and raise the risk of leaking trade secrets and users' personal information."

Xu Xu, an associate professor at the School of Law at the University of International Business and Economics, stressed that the transparency and explainability of AI must be distinguished from the transparency and explainability under the Personal Information Protection Law. In addition, transparency and explainability are both tools for information regulation, and the purpose of information disclosure is to hold AI accountable. He believes that accountable AI should be built, and that the core of companies' implementation of the principles of transparency and explainability lies in information disclosure to regulatory authorities, while information disclosure to the public should be incentivized and voluntary.

Wang Lei, a researcher at the Intelligent Technology Law Research Center of Beijing Institute of Technology, believes that transparency and explainability produce different results in different scenarios and on different platforms, such as social media versus e-commerce platforms. "Enterprises have both endogenous compliance needs and external regulatory requirements, and different entities hold different cognitive standards for transparency and explainability."

Industry experts suggest that AI companies adhere to the basic principle of intellectual property protection while promoting transparency and explainability, and adopt a classified, tiered approach, such as making appropriate disclosures for low-risk scenarios and avoiding lengthy textual explanations. At the same time, users should be able to reliably predict how AI processes their data and affects their rights.

AI legislation in progress

"With the rapid development of artificial intelligence technology and the rapid popularization of related applications, governance capacity and the governance system cannot keep up with the pace of technological development," pointed out Mo Jihong, director and researcher of the Institute of Law of the Chinese Academy of Social Sciences. Promoting AI legislation, improving the AI governance system, and strengthening the rule of law for artificial intelligence are urgent tasks.

"It is hoped that legislation will improve the AI governance system, so that the development and application of AI can be effectively regulated, abuse or improper use of the technology can be prevented, and the fruits of scientific and technological innovation will not become a source of social inequality, social insecurity, or damage to people's interests. At the same time, forming a scientific governance mechanism through legislation can also encourage different subjects, especially relevant stakeholders, to establish collaborative mechanisms against potential threats, and ensure the healthy development of AI technology through rule-of-law thinking and methods," Mo Jihong said.

The Institute of Law of the Chinese Academy of Social Sciences attaches great importance to legal research on artificial intelligence and has achieved outstanding results in both research and teaching. It has established the Network and Information Law Research Office, and a Department of Digital Law has been set up at the School of Law of the University of Chinese Academy of Social Sciences. In recent years, under the leadership of deputy directors Zhou Hanhua and Zhou Hui, fruitful research results have been achieved, and courses such as cyber law, artificial intelligence ethics, and introduction to the rule of law have been offered at the School of Law.

In June this year, the State Council issued its 2023 Legislative Work Plan, which includes an Artificial Intelligence Law. On August 15, the seminar "China's Plan for Artificial Intelligence Legislation under the Global Governance Discourse Competition" was held in Beijing, sponsored by the Institute of Law of the Chinese Academy of Social Sciences and co-organized by the Network and Information Law Research Office of the Institute of Law and the Nancai Compliance Technology Research Institute. The Model Law has had a wide impact in academic and practical circles at home and abroad.

The Model Law consists of six chapters, covering the basic principles of artificial intelligence, measures to promote development, the risk management system, the allocation of responsibilities, the design of governance mechanisms, and legal liability. It adheres to a distinctively Chinese approach to governance, pursuing development while holding the bottom line of safety.

The Model Law designs its system around the three roles of developer, provider, and user, moves beyond specific application scenarios, proposes a negative-list management mechanism, and clarifies the responsibilities and corresponding obligations of AI actors in light of industrial development and risk levels.

The Nancai Regtech Research Institute pays close attention to AI development, continuously tracks the progress of AI legislation, and participated in the drafting of the Model Law. It has launched the special reports "AI Contract Theory" and "AI Legislation in Progress", and has followed relevant developments with more than 200 articles of interpretation and analysis. It is also working jointly with the Institute of Law of the Chinese Academy of Social Sciences on version 2.0 of the Model Law, continuously adapting to the security situation of AI development and better promoting consensus on law-based AI development and governance.
