Experiencing the AI Action Summit 2025
Reposted from 东不压桥研究院
Here are some of my thoughts on my participation in the Summit:
- During the AI Safety Summit 2023 (Bletchley Summit), I was also present. The Bletchley Summit, along with the subsequent AI Seoul Summit 2024 (Seoul Summit), primarily focused on AI safety risks and measures to prevent them. The biggest headlines revolved around which companies signed the Frontier AI Safety Commitments. Nominally, the AI Action Summit 2025 (Paris Summit) was a continuation of the previous two, but its primary focus had shifted from AI safety to AI innovation and development. While safety remained a topic, it was largely marginalized. The French government had initially planned several panel discussions on AI safety at the Grand Palais, but these were ultimately removed from the agenda. An international panel of experts led by Yoshua Bengio released the 298-page International AI Safety Report, although it remains doubtful how many people will read it thoroughly. Moreover, the discussion of the report was downgraded to a side event. The final summit declaration barely retained any text from the Bletchley Declaration, which displeased the UK and led to its refusal to sign the final statement.
- Generally, there is no consensus on AI safety. In particular, there is significant skepticism about whether AI genuinely poses a threat to humanity, as scientists, ethicists, policymakers, and the public perceive AI safety risks very differently. Even within the scientific community, opinions vary widely. For instance, Yoshua Bengio and Yann LeCun hold completely opposing views: Yann LeCun believes Artificial General Intelligence (AGI) is still far off and unlikely to cause a loss of control, since humans will continue to shape the engineering design of advanced AI. The next AI Summit is set to be held in India (specific dates yet to be confirmed). Possibly in response to criticism that the Paris Summit excessively downplayed safety issues, India has reintroduced the word "safety" into the summit's title. India is nevertheless expected to prioritize AI development and innovation, focusing on its model ecosystem and infrastructure. Currently, relations between China and India appear to be moving toward rapprochement; barring new border conflicts or other unexpected events, India is likely to invite China to participate.
- In stark contrast, development and innovation were the dominant themes of the summit. European Commission President Ursula von der Leyen announced the "InvestAI" program, the EU's version of "Stargate", pledging to raise €200 billion for AI development, including a new €20 billion fund to establish four "AI super factories". More than 20 investment firms, including Blackstone, KKR & Co., and EQT, committed to investing €150 billion in AI development over the next five years. European Commission Executive Vice President Henna Virkkunen stated that efforts are underway to streamline AI regulations to foster technological innovation. Since the Draghi Report, the EU appears to have realized that it cannot catch up with China and the US in the AI race through regulation alone; substantial financial investment is essential. However, realizing this does not guarantee execution. As former Google CEO Eric Schmidt remarked, for the EU to truly develop AI it must first eliminate the bureaucratic hurdles that impede reform, such as excessive regulation of the financial and investment sectors, which makes AI funding difficult.
- Whether in formal summit sessions or various side events, DeepSeek unsurprisingly became a central topic. Public remarks, made primarily by Europeans, were on the whole positive toward DeepSeek, regarding it as a genuine innovation and a significant opportunity for Europe. Arthur Mensch, co-founder of Mistral, widely regarded as the rising star of European AI, referred to DeepSeek as "China's Mistral", affirming that its success validated the open-source approach and provided inspiration for Mistral's continued development.
- Many American AI policy experts also attended the summit. I asked nearly everyone I met whether the US would ban DeepSeek. The consensus was that a ban was unlikely, with three main reasons cited:
1) it is not a social media platform (and thus lacks public opinion attributes and social mobilization capabilities in the Chinese context);
2) its user base remains relatively small;
3) its open-source model also benefits American companies.
A conversation with the head of government affairs for Microsoft France revealed that Europe's favorable stance on DeepSeek is largely due to its open-source nature, which aligns with European values of inclusion, sharing, and transparency. This raises an interesting question: If Chinese AI companies aim to minimize geopolitical and national security concerns when expanding internationally, could embracing open-source strategies be an effective approach?
- Chinese participants also discussed DeepSeek. Most were objective and rational, though some overpraised it. DeepSeek is undoubtedly a source of national pride for China, but claims that it has surpassed US models or that China no longer fears US technological restrictions are exaggerated. Several key facts must be acknowledged:
1) DeepSeek still lags behind the most advanced US models, and even its claimed $6 million training cost depends on specific measurement criteria—some analyses suggest that, under equivalent standards, Gemini 2.0 might be even cheaper.
2) While DeepSeek is innovative, it does not represent a paradigm shift—it remains within the transformer architecture. Notably, Elon Musk has publicly announced in the Middle East that Grok 3, which he claims will be "more powerful and eerily intelligent" than GPT and DeepSeek, will launch on February 18.
3) Even if China continues along the path pioneered by DeepSeek, upgrading its large models will still require US-regulated GPUs and massive computing-power investments. Avoiding complacency and maintaining a realistic perspective are crucial for continued success, particularly in international AI discourse, where overstating one's strengths serves no purpose.
- The global governance of AI may undergo a reshuffling. On paper, the most universally representative AI governance platform should be the United Nations, which China has consistently supported. However, the US believes that UN-led governance would benefit China, as China can often secure enough votes to override Western positions. The UK seized the opportunity to launch the Bletchley process, not so much out of concern for AI safety as a strategic move to position itself as a governance hub despite its technological and industrial limitations compared to the US and China. By leveraging post-Brexit governance flexibility, the UK sought to attract global AI investment to London.
- The evolution of the Bletchley process has underscored a hard truth: leading global AI governance requires robust technological capabilities, a powerful industrial base, and substantial financial resources. The UK, despite being the birthplace of DeepMind, ultimately saw Google claim the prize. And while the UK has world-class AI research institutions such as the Turing Institute, Oxford, and Cambridge, its lack of industry integration and application scenarios has cost it its technological edge. The UK cannot compete with China and the US in AI industry development, nor can it match the investment capacity of countries like the United Arab Emirates. France, also eager to attract AI investment, stole some of the UK's spotlight, leading to a somewhat sour conclusion to the summit. The claim that the 2025 summit "weakened the safety agenda" may simply be a face-saving narrative for the UK. But can France truly become the new leader in AI governance? If Macron were confident in that role, he would not have felt the need to align with India for support. The disorganized arrangements and poor visitor experience at the Grand Palais seem to have foreshadowed the summit's failure.
- The ultimate direction of global AI governance may still depend on China and the US. The American national instinct has always been to set rules for others to follow. The US was inherently uncomfortable with the UK initiating a summit on AI governance, but given its traditional alliance with the UK, it refrained from outright opposition. Before Vice President Harris traveled to London for the Bletchley Summit, President Biden publicly signed an executive order on AI, demonstrating the US approach to AI governance. Upon arrival, Harris held a press conference at the US Embassy in the UK, officially announcing the establishment of the US AI Safety Institute (AISI) and thereby overshadowing the UK's AI Safety Institute. Subsequently, the US sent only low-level officials to the Seoul Summit as a formality, while independently hosting the first meeting of the International AISI Network, a global AI safety gathering in San Francisco, signaling its intention to supplant the UK.
- With Trump, who has little fondness for Europe, now in office, even these diplomatic niceties have been dispensed with. This time, Vance brought no representatives from the AISI and stated bluntly that the US is, and will remain, the leader in AI; that it rejects Europe's AI regulatory framework, which in his view only stifles American companies; and that the world must follow US standards. After making these statements, Vance left abruptly, ignoring the displeased expressions of European leaders and skipping the group photo session. Ultimately, the US refused to sign the declaration, showing no concern for diplomatic courtesies.
- Compared to the US, China has taken a more measured and courteous approach. Although not entirely endorsing these summits, China still participated at a high level in the Bletchley Summit and the Paris Summit, showing particular respect for Macron and the French government. However, China's participation appears to be more about maintaining bilateral relations with the UK and France than a strong endorsement of their AI governance frameworks. China has its own Global AI Governance Initiative and has established a comprehensive domestic regulatory system, including the Interim Measures for the Management of Generative AI Services, the AI Security Governance Framework, and various technical standards and guidelines. Coupled with rapidly advancing large models and diverse application scenarios, China is well positioned to compete for leadership in global AI governance. At the summit, China also promoted the upcoming World Artificial Intelligence Conference in Shanghai in July, warmly inviting countries around the world to participate.
- At the conference, China announced the establishment of the China AI Safety & Development Association (CnAISDA), positioned as a counterpart to the AI safety institutes in the US and the UK. Currently, the US, the UK, Japan, Singapore, and France have all established AI safety institutes. Moreover, American and British institutes have even collaborated on evaluations of models such as OpenAI's GPT-4o and Anthropic's Claude 3.5. For some time, Western analysts have speculated whether China would establish a similar institution and have examined potential candidates. The naming of CnAISDA explicitly reflects a balance between AI development and safety. The association includes almost all of China's leading AI research institutions. Its primary function appears to be engaging in dialogue and cooperation with international AI safety organizations, and it remains to be seen whether it will eventually conduct model evaluations like its US and UK counterparts.
- Chinese companies were also present at the summit. Following Zhipu AI's signing of the "Frontier AI Safety Commitment" at the Seoul Summit last year, MiniMax and 01.AI also joined as signatories. However, the organizers took an unusually low-key approach, updating the commitment list on their website just two days before the summit without any publicity. Additionally, Nvidia and Magic, a startup from San Francisco, also signed the commitment. Moreover, Baidu and Lenovo joined as founding members of the "Coalition for Environmentally Sustainable Artificial Intelligence", a French-led initiative to promote environmentally sustainable AI development, alongside 35 other technology firms from Europe and the US.
- AI and geopolitics were also major topics at the summit. From the US's statements, it is clear that AI and geopolitics will remain interconnected for the foreseeable future. In his speech, Vance spent considerable time making indirect references to China, reiterating familiar concerns such as foreign adversaries, software weaponization, surveillance and censorship, AI-enhanced military intelligence, data security, and foreign propaganda. He emphasized that the US would "protect AI and semiconductor technology from theft and misuse" and work with allies and partners to restrict adversarial nations' access to AI capabilities. Vance also cited surveillance and 5G equipment as cautionary examples, warning other countries against adopting China's "cheap technology". He stated, "If a deal looks too good to be true, it follows the Silicon Valley adage: if you're not paying for the product, you are the product." Some US policy experts noted that Vance left the summit's state dinner hosted by Macron after Chinese representatives voiced support for the UN's role in global AI governance.
- Considering Vance's stance and recent key appointments at the US Department of Commerce, such as the nomination of Landon Heid as Assistant Secretary for Export Administration, it is likely that US semiconductor export controls and AI diffusion policies toward China will not only persist but may tighten further. His remarks about "unpaid products" could be interpreted as a reference to open-source AI models. Given that DeepSeek is widely integrated by Western companies and operates on a free-access basis, it remains uncertain whether the US government will impose sanctions on it.
- However, some argue that AI safety should be an area of US-China cooperation, akin to climate change. Notably, former Google CEO Eric Schmidt, a prominent voice in US AI policy, has emphasized the importance of monitoring open-source models to prevent China from leading their development. Yet, he also advocates for cooperation on AI safety, asserting that "the West should collaborate with China on AI safety because all nations face similar challenges with this powerful technology. Providing them with information to enhance the security of their models is not detrimental to us."
- With the EU reassessing its AI strategy and joining the race for AI development, global competition for AI computing power, talent, and capital is expected to intensify. The focus on AI safety may enter a period of relative dormancy, with scientists continuing to study AI-related risks but requiring substantial empirical evidence to convince policymakers and the public of the imminent dangers posed by AI. The US has consistently sought to isolate China in AI global governance, with Biden's administration attempting to establish an "AISI Network" to unify Western AI security standards and create de facto market barriers that exclude Chinese AI technology and firms. The Trump administration's stance on this initiative remains unclear, but its overall strategic direction is unlikely to change significantly. In response, China may either establish its own parallel AI governance framework or seek closer cooperation with countries like the UK and France that are less inclined to align with US containment strategies. As a result, global AI governance may increasingly resemble past trends in internet and data governance, marked by growing geopolitical fragmentation and polarization. This is far from an ideal scenario for technological progress, industry development, and globalization, posing significant challenges ahead.
Finally, sincere respect must be given to the Chinese experts and institutions who traveled great distances to participate in the summit and organize various side events. Engaging with public figures such as Ambassador Fu Ying, Dean Xue Lan, Vice Dean Liang Zheng, and Academician Andrew Chi-Chih Yao, as well as numerous experts and staff from Chinese AI research institutions and enterprises, has reinforced the conviction that China will neither fall behind nor be absent from this transformative technological revolution.