Framework for AI safety governance

2025-02-06
Based on the notion of risk management, this framework outlines control measures that address different types of AI safety risks through technological and managerial strategies. As AI research, development, and application evolve rapidly, the forms and impacts of safety risks, and our perception of them, change as well. Control measures must therefore be updated continuously, and all stakeholders invited to refine the governance framework.
 
2.1 Safety and security risks
By examining the characteristics of AI technology and its application scenarios across various industries and fields, we identify the safety and security risks inherent in the technology itself, as well as those arising from its application.
 
2.2 Technical countermeasures
Regarding models and algorithms, training data, computing facilities, products and services, and application scenarios, we propose targeted technical measures to improve the safety, fairness, reliability, and robustness of AI products and applications. These measures include secure software development, data quality improvement, enhanced security of construction and operations, and evaluation, monitoring, and reinforcement activities.
 
2.3 Comprehensive governance measures
In accordance with the principle of coordinated efforts and joint governance, we clarify the measures that all stakeholders, including technology research institutions, product and service providers, users, government agencies, industry associations, and social organizations, should take to identify, prevent, and respond to AI safety risks.
 
2.4 Safety guidelines for AI development and application
We propose safety guidelines for developing and applying AI technology, addressed to AI model and algorithm developers, AI service providers, users in key areas, and general users.