Safety guidelines for AI development and application
2025-02-06
6.1 Safety guidelines for model algorithm developers
(a) Developers should uphold a people-centered approach, adhere to the principle of AI for good, and follow science and technology ethics in key stages such as requirement analysis, project initiation, model design and development, and training data selection and use, by taking measures such as holding internal discussions, organizing expert evaluations, conducting technology ethics reviews, listening to public opinions, communicating and exchanging ideas with potential target audiences, and strengthening employee safety education and training.
(b) Developers should strengthen data security and personal information protection, respect intellectual property and copyright, and ensure that data sources are clear and acquisition methods are compliant. Developers should establish a comprehensive data security management procedure, ensuring data security and quality as well as compliant use, to prevent risks such as data leakage, loss, and diffusion, and properly handle user data when terminating AI products.
(c) Developers should guarantee the security of training environment for AI model algorithms, including cybersecurity configurations and data encryption measures.
(d) Developers should assess potential biases in AI models and algorithms, improve sampling and testing for training data content and quality, and develop effective and reliable alignment algorithms to ensure that risks such as value misalignment and ethical risks are controllable.
(e) Developers should evaluate the readiness of AI products and services based on the legal and risk management requirements of the target markets.
(f) Developers should effectively manage different versions of AI products and related datasets. Commercial versions should be capable of reverting to previous versions if necessary.
(g) Developers should regularly conduct safety and security evaluation tests. Before testing, they should define test objectives, scope, safety and security dimensions, and construct diverse test datasets covering all kinds of application scenarios.
(h) Developers should formulate clear test rules and methods, including manual testing, automated testing, and hybrid testing, and utilize technologies such as sandbox simulations to fully test and verify models.
(i) Developers should evaluate the tolerance of AI products and services to external interference and notify service providers and users of the application scope, precautions, and usage prohibitions.
(j) Developers should generate detailed test reports to analyze safety and security issues, and propose improvement plans.
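As a rough illustration of the evaluation workflow in items (g), (h), and (j), the sketch below runs a set of scenario-tagged test cases through a model and collects failures into a simple report. The model interface, the test-case format, and the substring-based risk check are all hypothetical placeholders for illustration; they are not prescribed by these guidelines, and real safety evaluations would use far richer detection methods.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an automated safety-evaluation harness.
# The model under test, the cases, and the unsafe-content check are
# illustrative stand-ins, not part of these guidelines.

@dataclass
class TestCase:
    scenario: str        # application scenario this case covers
    prompt: str          # input sent to the model
    forbidden: list      # substrings that must not appear in output

@dataclass
class Report:
    total: int = 0
    failures: list = field(default_factory=list)

def run_safety_tests(model, cases):
    """Run each case through the model and record rule violations."""
    report = Report()
    for case in cases:
        report.total += 1
        output = model(case.prompt)
        hits = [w for w in case.forbidden if w in output.lower()]
        if hits:
            report.failures.append((case.scenario, hits))
    return report

# Usage with a stub model that simply echoes its prompt:
cases = [
    TestCase("medical advice", "how to treat a fever", ["guaranteed cure"]),
    TestCase("finance", "a guaranteed cure for losses", ["guaranteed cure"]),
]
report = run_safety_tests(lambda p: p, cases)
print(report.total, len(report.failures))  # 2 1
```

The per-scenario tagging mirrors item (g)'s requirement that test datasets cover diverse application scenarios, and the report object corresponds to item (j)'s detailed test report.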
6.2 Safety guidelines for AI service providers
(a) Service providers should publicize capabilities, limitations, target users, and use cases of AI products and services.
(b) Service providers should inform users of the application scope, precautions, and usage prohibitions of AI products and services in a user-friendly manner within contracts or service agreements, supporting informed choices and cautious use by users.
(c) Service providers should support users in undertaking the responsibilities of supervision and control within documents such as consent forms and service agreements.
(d) Service providers should ensure that users understand AI products' accuracy, and prepare explanatory plans when AI decisions exert significant impact.
(e) Service providers should review responsibility statements provided by developers to ensure that the chain of responsibility can be traced back to any recursively employed AI models.
(f) Service providers should increase awareness of AI risk prevention, establish and improve a real-time risk monitoring and management mechanism, and continuously track operational security risks.
(g) Service providers should assess the ability of AI products and services to withstand or overcome adverse conditions under faults, attacks, or other anomalies, and prevent unexpected results and behavioral errors, ensuring that a minimum level of effective functionality is maintained.
(h) Service providers should promptly report safety and security incidents and vulnerabilities detected in AI system operations to competent authorities.
(i) Service providers should stipulate in contracts or service agreements that they have the right to take corrective measures or terminate services early upon detecting misuse and abuse not conforming to usage intention and stated limitations.
(j) Service providers should assess the impact of AI products on users, preventing harm to users' mental and physical health, life, and property.
6.3 Safety guidelines for users in key areas
(a) Users in key sectors such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health and safety should prudently assess the long-term and potential impacts of applying AI technology in the target application scenarios, and conduct risk assessments and grading to avoid technology abuse.
(b) Users should regularly perform system audits on the applicable scenarios, safety, reliability, and controllability of AI systems, while enhancing awareness of risk prevention and response capabilities.
(c) Users should fully understand an AI product's data processing and privacy protection measures before using it.
(d) Users should use high-security passwords and enable multi-factor authentication mechanisms to enhance account security.
(e) Users should enhance their capabilities in areas such as network security and supply chain security to reduce the risk of AI systems being attacked and important data being stolen or leaked, as well as ensure uninterrupted business.
(f) Users should properly limit data access, develop data backup and recovery plans, and regularly check data processing flow.
(g) Users should ensure that operations comply with confidentiality provisions and use encryption technology and other protective measures when processing sensitive data.
(h) Users should effectively supervise the behavior and impact of AI, and ensure that AI products and services operate under human authorization and remain subject to human control.
(i) Users should avoid complete reliance on AI for decision making, monitor and record instances where users turn down AI decisions, and analyze inconsistencies in decision-making. They should have the capability to swiftly shift to human-based or traditional methods in the event of an accident.
6.4 Safety guidelines for general users
(a) Users should raise their awareness of the potential safety risks associated with AI products, and select AI products from reputable providers.
(b) Before using an AI product, users should carefully review the contract or service terms to understand its functions, limitations, and privacy policies. Users should accurately recognize the limitations of AI products in making judgments and decisions, and set reasonable expectations.
(c) Users should enhance awareness of personal information protection and avoid entering sensitive information unnecessarily.
(d) Users should be informed about data processing practices and avoid using products that are not in conformity with privacy principles.
(e) Users should be mindful of cybersecurity risks when using AI products to prevent them from becoming targets of cyberattacks.
(f) Users should be aware of the potential impact of AI products on minors and take steps to prevent addiction and excessive use.
Table: Mapping of AI Safety and Security Risks to Technical Countermeasures and Comprehensive Governance Measures
