China’s AI Ascent and the Balance of Promise, Peril, and Historical Lessons

By Michael Megarit 


Few could have envisioned during the Cold War that the most catastrophic nuclear incident would unfold at a little-known power plant in Ukraine. The Chornobyl disaster of 1986 stemmed from a combination of defective reactor design and human error. While the escalating superpower arms race commanded the world’s attention, this ostensibly civilian technology posed dangers of its own. Surprisingly, even against the backdrop of crises like the 1962 Cuban missile standoff, it was a lapse in basic safety, compounded by bureaucratic mishandling, that released roughly 400 times the radiation of the atomic bomb dropped on Hiroshima in 1945. The aftermath of Chornobyl is still felt: estimates of radiation-induced deaths vary widely, and an exclusion zone twice the size of London remains largely uninhabited.

Today, as China and the United States engage in a new competitive phase, the race in artificial intelligence (AI) technology stirs anxieties mirroring the nuclear era. The possibility of an AI-driven world, with autonomous weaponry and warfare at machine speed, poses formidable threats. China’s deployment of AI in its actions against the Uyghur population in Xinjiang stands as a dark testament.

However, an equally pressing issue is the potential for inadvertent AI-related mishaps with catastrophic outcomes. While AI systems don’t erupt like nuclear reactors, the havoc they could wreak spans from designing lethal pathogens to compromising vital infrastructure, including power grids and pipelines. China’s relaxed stance on technological perils and its history of mishandling crises amplify these risks. The threats demand attention: their repercussions could ripple far beyond Chinese shores, and they warrant a reevaluation of how the AI field assesses risk.

Understanding the AI Threat Landscape

The discourse on AI dangers has grown louder, with some prophesying AI as a future existential menace and others dismissing such predictions. But even if we sideline dystopian scenarios, there’s ample evidence pointing to the immediate risks of unintended AI-triggered disasters.

For instance, rapid AI-driven interactions in the financial world could inadvertently crash markets, as exemplified by the 2010 “flash crash,” in which algorithmic trading briefly erased roughly a trillion dollars in stock value. Demonstrations that AI can generate tens of thousands of candidate chemical-weapon compounds within hours highlight the technology’s dual-use nature. Advanced AI-enabled cyberattacks could inadvertently destabilize vital societal systems, much as the 2017 NotPetya attack, launched by Russia against Ukraine, eventually spread across the globe. With AI advancements outpacing safety protocols, the challenge only intensifies.

While many Americans might be unfamiliar with the intricacies of these dangers, there is broad consensus about the risks of embedding potent new technologies into vital systems. A 2022 Ipsos survey found that merely 35% of Americans believe AI’s advantages outweigh its pitfalls, placing the U.S. among the most skeptical nations regarding AI. This caution extends to American AI professionals. Geoffrey Hinton, an AI luminary, has even exited the industry, urging researchers to temper their advancements until the technology’s controllability is better understood.

In stark contrast, China brims with optimism about AI, with an overwhelming majority believing in its benefits. Unlike the U.S., where skepticism toward an aggressive tech-development ethos is growing, China continues to champion a bolder approach. Chinese tech magnates laud their nation’s readiness to navigate AI’s uncertain terrain, a stance that, as veteran AI specialist and Chinese tech executive Kai-Fu Lee points out, would unnerve more risk-averse Western politicians.

Unchecked Advancements

China, known for its aggressive approach to innovation, is on a determined mission to establish itself as the world’s foremost hub of artificial intelligence innovation by 2030.

Since the unveiling of Xi’s “Made in China 2025” vision in 2015, AI’s significance in China’s defense strategy, surveillance apparatus, and authoritarian governance has grown remarkably. This commitment has seen billions of dollars channeled annually into the AI sector, coupled with efforts to acquire technological secrets from foreign corporations.

The results are evident. China now produces a considerable portion of the world’s elite AI engineers, surpassing the United States. The nation’s dominance in AI research publication is also on the rise, as it leads the world in citations in AI journals. If the current trajectory persists, bodies such as the U.S. National Security Commission on Artificial Intelligence predict that China could overtake the United States in AI supremacy within the next decade.

Historically, rapid technological races often lead to compromises on safety, especially when the competitors have differing risk appetites. Beijing’s haste in various domains has, in the past, resulted in catastrophic events. From the ill-fated Great Leap Forward, which caused the worst famine in history, to the demographic imbalance instigated by the one-child policy, China’s record is littered with rushed initiatives that backfired. Even its forays into space and infrastructure projects under the Belt and Road Initiative have been marred by significant setbacks.

A History of Crises

China’s ambitious AI endeavors have not yet precipitated a crisis. But past experience suggests that if one does emerge, the regime’s response might make it worse. Centralized governments like China’s often mishandle emergencies, suppressing early warning signs rather than addressing problems head-on. The famine that followed Mao’s Great Leap Forward serves as a stark reminder.

Modern-day China still exhibits this troubling pattern. The HIV epidemic of the 1990s, spread through contaminated blood sales and transfusions, was actively concealed by local officials. The SARS outbreak that began in 2002 saw similar administrative obfuscation, creating a global health threat. The initial handling of the COVID-19 pandemic, too, was characterized by delays and censorship, which contributed to the virus’s global spread.

Urgency and Caution

Skeptics of an AI crisis scenario in China might point to Beijing’s regulatory framework for AI. However, these regulations appear designed primarily to ensure companies’ allegiance to the state rather than genuine safety. And while the U.S. still leads in breakthrough AI technologies, the adaptation of existing systems could yield dangerous outcomes. Given China’s expertise in adapting and enhancing foreign technologies, the concern is well founded.

AI-related risks are not exclusive to China. Given AI’s ubiquitous reach, incidents stemming from unintended consequences, such as self-driving car accidents and facial recognition errors, are rising globally. But China’s particular blend of ambitious technological pursuits, a casual approach to potential risks, and a history of mishandled crises makes its AI trajectory especially fraught with danger.

To counter these challenges, the U.S. could prioritize the establishment of global AI safety standards, monitor global AI labs for potential threats, and restrict the flow of weaponizable AI research to China. Engaging the international community in these efforts could be pivotal.

Understanding and prioritizing the safety risks of AI is essential, especially in centralized regimes. In the face of technological competition, avoiding an AI-induced disaster must be at the forefront of global considerations.