• An Interpretation of the Interim Measures on the Administration of Human-like Interactive Artificial Intelligence Services

    Recently, the Cyberspace Administration of China released the Interim Measures on the Administration of Human-like Interactive Artificial Intelligence Services (Draft for Public Comment), marking a critical step forward in China’s governance of artificial intelligence. The Measures not only provide a timely response to emerging business models, but also offer important guidance for the healthy, orderly, and ethically sound development of the artificial intelligence industry.

    2026-01-26
  • Laying a Solid Policy and Institutional Foundation for Further Advancing the "AI+" Initiative

    The State Council issued the Opinions on Further Implementing the "AI+" Initiative, laying a solid policy and institutional foundation for the in-depth integration of artificial intelligence with all fields of China's economy and society.

    2026-01-25
  • DeepSeek Shot to Fame: What Does the US Government Think?

    With DeepSeek rapidly gaining widespread attention over the past few days, the trend of public discourse in the US has begun to shift. Initially marked by surprise, admiration, and praise, discussions have increasingly been accompanied by skepticism, resentment, and even hostility. 

    2025-05-19
  • Can China and Europe Cooperate in the Field of AI?

    Recently, the University of Oxford hosted a seminar that attracted universities, think tanks, and AI enterprises from China, the UK, and the EU. The theme of the seminar was: "Can China and Europe Cooperate in the Field of AI?" This topic holds significant practical relevance for China's AI industry today. Currently, many companies, whether large corporations or startups, are seriously considering "going global."

    2025-05-19
  • Li v. Liu, Case of Infringement against the Right of Authorship and the Right to Network Dissemination of Information

    A Civil Judgment of the Beijing Internet Court in Li v. Liu (Case of Infringement against the Right of Authorship and the Right to Network Dissemination of Information)

    2025-02-07
  • Large Language Model Security Testing Method

    The "Large Language Model Security Testing Method," developed and issued by the World Digital Technology Academy (WDTA), represents a crucial advancement in the ongoing commitment to ensuring the responsible and secure use of artificial intelligence technologies. As AI systems, particularly large language models, become increasingly integral to various aspects of society, the need for a comprehensive standard to address their security challenges becomes ever more pressing.

    2025-02-06