Anthropic Case Shows Two Contrasting Visions for Governing AI
U.S. AI company Anthropic has obtained a temporary restraining order from a California court, blocking a U.S. Department of Defense ban imposed over claims that the firm poses a "supply chain risk."
The dispute centers on Anthropic's refusal to allow its AI technology to be used in autonomous lethal weapons systems without human oversight. The judge observed that the government's action appeared to be "unconstitutional retaliation" against the company's ethical position.
This case is far more than a commercial dispute. It exposes a fundamental tension of the AI era: Should the development of frontier technologies be guided by shared human ethics and the well-being of humanity, or be reduced to tools serving the strategic ambitions and security anxieties of a single state?
The U.S. government's response reveals a logic that treats technology primarily as an instrument of power. A company, motivated by concern for humanity's future, attempts to draw ethical boundaries around the military use of its innovations. But the result is not recognition of those concerns. Instead, the government swiftly places the company on a "risk" list and applies administrative pressure that threatens to disrupt its business.
This approach — "those who comply prosper; those who resist suffer" — expands the notion of "national security" without limit, allowing it to be used to suppress independent voices that challenge the trajectory of military and surveillance applications.
This reality stands in stark contrast to the United States' frequent advocacy of "responsible AI" in international forums. The gap between rhetoric and practice highlights a deeper contradiction: technological nationalism dressed in the language of ethics.
By contrast, China has proposed and advanced a governance philosophy built on a "people-centered approach in developing AI for good." This is no mere slogan: the concept has been developed into a systematic framework that spans both domestic policy design and international cooperation.
In October 2023, China released the Global AI Governance Initiative, which for the first time outlined at the international level a vision of an open, fair and inclusive AI governance system, opposing technological monopolies and hegemonic practices.
In July 2025, the Global AI Governance Action Plan translated these principles into 13 measures, emphasizing respect for national sovereignty, secure and controllable development, and international cooperation to help developing countries build computing infrastructure and narrow the digital divide.
These principles are increasingly reflected in practical cooperation projects. In Southeast Asia, a China-Laos AI innovation cooperation center is helping Laos systematically enhance its technological capabilities for the intelligent era. Malaysia's national AI infrastructure strategy, launched in 2025, adopted Chinese AI chips and open-source models, allowing data to be stored domestically and systems to be operated locally — strengthening what policymakers there describe as "AI sovereignty."
In Africa, the Tanzania National ICT Broadband Backbone project, built with Chinese assistance, has significantly reduced telecommunications costs and expanded connectivity in remote regions, enabling more people to access the digital economy. Such initiatives demonstrate that the ideas of "sovereign AI" and equitable technological development can translate into tangible benefits.
The White Paper on the Development of Global Sovereign Large Models, released at the 2026 Zhongguancun Forum Annual Conference on March 27, furthers this cooperative pathway. The report proposes an open, collaborative, controllable and inclusive framework, offering technical architectures ranging from open-source foundation models to full-stack solutions.
The goal is to enable countries to build AI capabilities aligned with their own languages, cultures and development priorities — providing an alternative path for the Global South to avoid technological monopolies and potential forms of digital colonialism.
Seen in this broader context, the significance of the Anthropic case becomes clear. That a company must turn to the courts to defend its refusal to build "autonomous killing machines" is a striking commentary on one model of technological governance.
The approach China advocates — from international initiatives to concrete projects — illustrates a different possibility. AI need not become another instrument of geopolitical rivalry or technological domination. Instead, it can be a force for shared development, enabling countries to pursue innovation while respecting sovereignty and promoting global well-being.