Principles for AI safety governance
2025-02-06
- Commit to a vision of common, comprehensive, cooperative, and sustainable security, while placing equal emphasis on development and security
- Prioritize the innovative development of AI
- Take effectively preventing and defusing AI safety risks as the starting point and ultimate goal
- Establish governance mechanisms that engage all stakeholders, integrate technology and management, and ensure coordinated efforts and collaboration among them
- Ensure that all parties involved fully shoulder their responsibilities for AI safety
- Create a whole-process, all-element governance chain
- Foster safe, reliable, equitable, and transparent AI technology research, development, and application
- Promote the healthy development and regulated application of AI
- Effectively safeguard national sovereignty, security, and development interests
- Protect the legitimate rights and interests of citizens, legal persons, and other organizations
- Guarantee that AI technology benefits humanity
1.1 Be inclusive and prudent to ensure safety
We encourage development and innovation, taking an inclusive approach to AI research, development, and application. We also make every effort to ensure AI safety and will take timely measures to address any risks that threaten national security, harm the public interest, or infringe upon the legitimate rights and interests of individuals.
1.2 Identify risks with agile governance
By closely tracking trends in AI research, development, and application, we identify AI safety risks from two perspectives: the technology itself and its application. We propose tailored preventive measures to mitigate these risks. We follow the evolution of safety risks, swiftly adjusting our governance measures as needed. We are committed to improving the governance mechanisms and methods while promptly responding to issues warranting government oversight.
1.3 Integrate technology and management for coordinated response
We adopt a comprehensive safety governance approach that integrates technology and management to prevent and address various safety risks throughout the entire process of AI research, development, and application. Within this chain, it is essential that all relevant parties, including model and algorithm researchers and developers, service providers, and users, assume their respective responsibilities for AI safety. This approach fully leverages governance mechanisms involving government oversight, industry self-regulation, and public scrutiny.
1.4 Promote openness and cooperation for joint governance and shared benefits
We promote international cooperation on AI safety governance and share best practices worldwide. We advocate establishing open platforms and work to build broad consensus on a global AI governance system through dialogue and cooperation across disciplines, fields, regions, and nations.