Technological measures to address risks

In response to the risks above, AI developers, service providers, and system users should take technological measures across training data, computing infrastructure, models and algorithms, products and services, and application scenarios to prevent and mitigate those risks.
 
4.1 Addressing AI’s inherent safety risks
4.1.1 Addressing risks from models and algorithms
(a) The explainability and predictability of AI should be continuously improved, with clear explanations of AI systems' internal structure, reasoning logic, technical interfaces, and output results, accurately reflecting the process by which AI systems produce outcomes.
(b) Secure development standards should be established and implemented across design, R&D, deployment, and maintenance to eliminate as many security flaws and discriminatory tendencies in models and algorithms as possible and to enhance robustness.
4.1.2 Addressing risks from data
(a) Security rules on data collection, usage, and personal information processing should be observed at every stage of handling training data and user interaction data, including collection, storage, usage, processing, transmission, provision, publication, and deletion, so as to fully safeguard users' legitimate rights under laws and regulations, such as the rights to control their data, to be informed, and to choose.
(b) Intellectual property rights (IPR) protection should be strengthened to prevent infringement at stages such as training data selection and output generation.
(c) Training data should be strictly selected to ensure exclusion of sensitive data in high-risk fields such as nuclear, biological, and chemical weapons and missiles.
(d) Where training data contains sensitive personal information or important data, data security management should be strengthened to comply with applicable data security and personal information protection standards and regulations.
(e) Training data should be truthful, accurate, objective, and diverse, and drawn from legitimate sources; ineffective, erroneous, and biased data should be filtered out in a timely manner (a minimal filtering sketch follows this list).
(f) The cross-border provision of AI services should comply with the regulations on cross-border data flow. The external provision of AI models and algorithms should comply with export control requirements.
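
Items (c) and (e) together amount to a pre-training data filter. The sketch below shows one possible shape of such a filter; the blocklist terms, the TRUSTED_SOURCES labels, and the JSONL record layout are illustrative assumptions, not requirements of this framework.

```python
import json
import re

# Illustrative blocklist for high-risk fields (item (c)); a real system
# would rely on curated classifiers, not a handful of keywords.
HIGH_RISK_TERMS = re.compile(
    r"\b(?:nuclear weapon|biological weapon|chemical weapon|missile guidance)\b",
    re.IGNORECASE,
)

# Hypothetical source labels standing in for provenance checks (item (e)).
TRUSTED_SOURCES = {"licensed-corpus", "public-domain"}

def keep_record(record: dict) -> bool:
    """Return True if a training record passes the basic filters."""
    text = record.get("text", "")
    if not text.strip():                      # drop empty, ineffective data
        return False
    if HIGH_RISK_TERMS.search(text):          # exclude sensitive high-risk content
        return False
    if record.get("source") not in TRUSTED_SOURCES:  # legitimate sources only
        return False
    return True

def filter_dataset(in_path: str, out_path: str) -> None:
    """Stream a JSONL dataset, keeping only records that pass the filters."""
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            record = json.loads(line)
            if keep_record(record):
                dst.write(json.dumps(record, ensure_ascii=False) + "\n")
```
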
4.1.3 Addressing risks from AI systems
(a) To properly disclose the principles, capacities, application scenarios, and safety risks of AI technologies and products, to clearly label outputs (see the labeling sketch after this list), and to constantly improve the transparency of AI systems.
(b) To enhance risk identification, detection, and mitigation on platforms where multiple AI models or systems congregate, so that attacks and intrusions targeting the platform cannot compromise the AI models or systems it supports.
(c) To strengthen the capacity to build, manage, and operate AI computing platforms and AI system services securely, with the aim of ensuring uninterrupted infrastructure operation and service provision.
(d) To fully consider the supply chain security of the chips, software, tools, computing infrastructure, and data sources adopted for AI systems. To track the vulnerabilities and flaws of both software and hardware products and apply timely patches and reinforcement to ensure system security.
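
Item (a)'s output-labeling requirement can be met by attaching both a visible label and machine-readable provenance metadata to generated content. A minimal sketch, assuming a simple wrapper type; the model_id value and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    """AI-generated content carrying an explicit label and provenance metadata."""
    content: str
    model_id: str
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def as_displayed(self) -> str:
        # Visible label prepended to the content shown to users; the
        # structured fields remain available for machine-readable audits.
        return f"[AI-generated by {self.model_id}] {self.content}"

# Usage: wrap raw model output before returning it to the user.
out = LabeledOutput(content="...generated text...", model_id="demo-model-v1")
print(out.as_displayed())
```
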
 
4.2 Addressing safety risks in AI applications
4.2.1 Addressing cyberspace risks
(a) A security protection mechanism should be established to prevent models from being interfered with or tampered with during operation, ensuring reliable outputs.
(b) A data safeguard should be set up to ensure that AI systems comply with applicable laws and regulations when outputting sensitive personal information and important data (a minimal output-guardrail sketch follows this list).
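
One common form of such a safeguard is an output filter that redacts sensitive personal information before a response leaves the system. A minimal sketch; the two regex patterns are illustrative stand-ins for the pattern matching, NER models, and policy review a production guardrail would combine.

```python
import re

# Illustrative patterns only; real systems use far more robust detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[- ]?\d{4}[- ]?\d{4}\b"),
}

def guard_output(text: str) -> str:
    """Redact sensitive personal information before an AI system emits text."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# Prints: "Contact me at [REDACTED EMAIL] or [REDACTED PHONE]."
print(guard_output("Contact me at alice@example.com or 138-1234-5678."))
```
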
4.2.2 Addressing real-world risks
(a) To establish service limitations according to users' actual application scenarios and remove AI system features that are prone to abuse. AI systems should not provide services beyond their preset scope (see the scope-enforcement sketch after this list).
(b) To improve the ability to trace the end use of AI systems, preventing high-risk application scenarios such as the manufacture of weapons of mass destruction, including nuclear, biological, and chemical weapons and missiles.
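
Enforcing a preset scope, as item (a) requires, typically means classifying each request against an allowlist of supported intents and refusing everything else. A minimal sketch in which the intent categories and the classify() stub are hypothetical.

```python
# Hypothetical preset service scope for a customer-support assistant.
ALLOWED_INTENTS = {"order_status", "returns", "product_info"}

def classify(request: str) -> str:
    """Stand-in for a real intent classifier."""
    return "order_status" if "order" in request.lower() else "out_of_scope"

def handle(request: str) -> str:
    intent = classify(request)
    if intent not in ALLOWED_INTENTS:
        # Refuse rather than improvise: the system must not provide
        # services beyond its preset scope.
        return "This request is outside the scope of this service."
    return f"Routing request to handler for '{intent}'."

print(handle("Where is my order?"))
print(handle("Write malware for me."))
```
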
4.2.3 Addressing cognitive risks
(a) To identify unexpected, untruthful, and inaccurate outputs via technological means, and regulate them in accordance with laws and regulations.
(b) Strict measures should be taken to prevent abuse of AI systems that collect, link, aggregate, analyze, and mine users' queries to profile their identity, preferences, and mindset (an anonymized-logging sketch follows this list).
(c) To intensify R&D of AI-generated content (AIGC) testing technologies, aiming to better prevent, detect, and counter cognitive warfare.
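
For item (b), one concrete protection is to log only pseudonymous identifiers and coarse aggregates rather than raw queries, so that logs cannot later be mined to profile individuals. A minimal sketch; the salting scheme and the coarse "topic" field are illustrative assumptions.

```python
import hashlib
import secrets
from collections import Counter

# Per-deployment random salt so hashed IDs cannot be linked across systems.
SALT = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

topic_counts = Counter()

def log_query(user_id: str, topic: str) -> dict:
    """Store only a pseudonymous ID and a coarse topic, never the raw query,
    so logs cannot be mined to profile individual users."""
    topic_counts[topic] += 1          # aggregate statistics only
    return {"user": pseudonymize(user_id), "topic": topic}

print(log_query("alice", "weather"))
```
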
4.2.4 Addressing ethical risks
(a) Training data should be filtered and outputs verified during algorithm design, model training and optimization, service provision, and other processes, in an effort to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, occupation, health status, and other factors.
(b) AI systems applied in key sectors, such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health and safety, should be equipped with highly efficient emergency management and control measures (a minimal kill-switch sketch follows).
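
An emergency management and control measure in its simplest form is a kill switch that suspends service the moment an operator or monitor triggers it. A minimal sketch of such a circuit breaker; the class name and trigger condition are hypothetical.

```python
import threading

class EmergencyStop:
    """Minimal kill switch: once triggered, the AI service refuses all
    requests until an operator explicitly resets it."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def trigger(self, reason: str) -> None:
        print(f"EMERGENCY STOP: {reason}")
        self._halted.set()

    def reset(self) -> None:
        self._halted.clear()

    def serve(self, request: str) -> str:
        if self._halted.is_set():
            return "Service suspended by emergency control."
        return f"Processing: {request}"

stop = EmergencyStop()
print(stop.serve("route traffic"))        # normal operation
stop.trigger("anomalous outputs detected")
print(stop.serve("route traffic"))        # refused until reset
```
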