laitimes

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

Author | vanilla

Edit | Li Shuiqing

Zhidong reported on July 24 that today, the Safety Governance Committee of the Chinese Artificial Intelligence Industry Development Alliance (AIIA) held a results conference. At the meeting, Shi Lin, director of the Security and Metaverse Department of the Institute of Artificial Intelligence of the China Academy of Information and Communications Technology, interpreted the organizational structure formed by the committee since its establishment six months ago, the work of the two working groups of security and governance, and launched the AI security guardian plan, and released the safety evaluation results of three categories.

The AIIA Security Governance Committee was established at the end of December 2023, and after half a year of operation, there are two working groups, the governance group and the security group, and nearly 100 units have joined.

Among them, the work of the governance group focuses on AI governance framework, compliance governance, and enabling governance, and the security group mainly conducts research and benchmarking on large model security and compliance.

In June this year, the China Academy of Information and Communications Technology (CAICT) launched the "Artificial Intelligence Security Guardian Plan" based on the committee, including the establishment of a threat information sharing mechanism, the implementation of AIGC real content source trust, and the establishment of an AI insurance mechanism.

1. Nearly 100 units have joined in the first half of the year since its establishment, forming two working groups of security and governance

The AIIA Security Governance Committee was established at the end of December 2023, and after half a year of operation, the AIIA Security Governance Committee has organized two working groups, the existing governance group and the security group, and is also responsible for the operation of the Security Alignment Partner Program, the Trusted Face Application Guardian Program, and the Content Technology Industry Promotion Phalanx.

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲The overall situation of the AIIA Security Governance Committee

At present, nearly 100 units have joined the AIIA Security Governance Committee, which is composed of the director unit, the deputy director unit, the expert committee and the office, as well as the working group and partner program set up around the business direction.

Among them, the director unit is led by the China Academy of Information and Communications Technology, and the deputy director units include vivo, Baidu, Tencent, 360, Huawei, China Mobile, Alibaba, Zhejiang University and Ant Group.

The Expert Committee is responsible for checking the overall work of the AIIA Security Governance Committee, while the two working groups and the Partnership Program are responsible for carrying out AI-related research and promoting AI security governance among all parties in the industry chain.

According to Shi Lin, the current work progress of the governance team focuses on three aspects, including AI governance framework, AI compliance governance, and AI empowerment governance.

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲The progress of the work of the governance group

Specifically, the governance working group focuses on the research and drafting of the overall AI governance framework, and compares it with international benchmarks, such as ISO/IEC 42001, and inputs international excellent risk management experience.

At the same time, the governance group focuses on typical applications such as face recognition, supports local cyberspace authorities to carry out compliance practices for the cultural and tourism industry, and forms a research report on the governance of face recognition.

In addition, focusing on AI-enabled governance, the governance group carried out discussions on technical standards and specifications related to legal large models, and the relevant specifications were formally finalized.

The security team mainly carries out security benchmarks based on hot topics such as large model security and compliance, and promotes the compilation and technical exchange of a number of AI-native security specifications.

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲The progress of the work of the security group

At present, the security team has carried out two batches of AI security benchmark tests around the security compliance and technical research of large models, and has incorporated more common attack methods in the second quarter, including inducement attacks, prompt word injection attacks, content generalization attacks, etc., which can more comprehensively and intuitively reflect the security situation of closed-source models at home and abroad.

In the first half of this year, the security team held more than 20 online and offline exchanges and seminars, and carried out standard formulation, testing and evaluation, including large model security, AI network security large model specifications, and AIGC detection specifications.

2. Launched the AI security guardian plan, and released the results of the three major security evaluations

Shi Lin said that in the practice of the two working groups, the committee found that the separate working groups each work from a rule or technology perspective around security or governance, and in this process, technology and rules need to be integrated. Therefore, it is important to establish cross-group security capabilities.

In June this year, the China Academy of Information and Communications Technology (CAICT) launched the "Artificial Intelligence Security Guardian Program", or AI Guard, relying on AIIA, with the goal of uniting multiple forces to improve the level of AI technology and governance capabilities in mainland China and promote the healthy and orderly development of the industry.

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲Artificial intelligence security protection plan

First of all, the plan will establish an AI threat information sharing mechanism, from infrastructure such as AI chips to vulnerabilities in data, algorithms, applications, etc., through timely warning through mutual assistance and co-governance at the committee level, so as to improve security prevention capabilities, so as to deal with AI security threats and other problems.

Secondly, the plan will carry out the work of trusting the authentic source of AIGC's real content, and build the ability of content traceability through the establishment of a unified content standard platform. At present, the implicit watermarking method is mainly used to establish a mutual recognition mechanism for multi-modal content such as images, audio, and video. There are still certain technical difficulties in the implementation of text content, and breakthroughs will continue to be made in the future.

In addition, the plan will provide relief to relevant personnel and units through the AI insurance mechanism, and provide compensation strategies.

Finally, Shi Lin released the results of the evaluation of the security risk prevention capability of the large model, the content security prevention capability evaluation of the multi-modal graphic and text large model, the special evaluation of face recognition security and the evaluation of the security risk prevention ability of the code large model, and vivo, Ant, Alibaba Cloud, Baidu, iFLYTEK, SenseTime, etc. obtained the certificate as representatives.

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲The results of the evaluation of the security risk prevention capability of the large model and the content security prevention capability of the multi-modal graphic and text large model

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲The results of the special evaluation of face recognition security

AI Security Guardian Program Launched! The AIIA Security Governance Committee released three types of model security evaluations

▲The evaluation results of the security risk prevention ability of the code model

Conclusion: Promote the development of AI in a safe, reliable and controllable direction

Large models are developing rapidly, but the seriousness and urgency of AI security issues cannot be ignored, and security challenges have expanded from the traditional security issues of the technology itself to multiple aspects, and countries and regions have basically formed local frameworks. For example, United States launched a risk management framework, the European Union built a risk classification governance plan, Singapore launched a governance model framework and proposed nine dimensions, and Japan issued guidelines to formulate a code of conduct.

In China, the AIIA Security Governance Committee is actively promoting the construction of cutting-edge technology governance tools for precise governance, and has achieved certain results in governance framework, risk management, health and safety, and security applications. In the future, with the joint efforts of all units, it is expected that the mainland can form a complete security risk mechanism to ensure the development of AI in a safe, reliable and controllable direction.

Read on