Bind AI Safe, Find AI Safe? How Generative AI Might Threaten National Security

“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” ~Stephen Hawking

Artificial intelligence, though it still largely falls short of the human brain, has demonstrated tremendous impact on almost every aspect of human life. Despite all the benefits it has brought, AI poses severe challenges to human society in areas such as data privacy, labor rights, government surveillance, criminal justice, and even democratic institutions. What arouses the most concern is artificial general intelligence (AGI), as generative tools like ChatGPT sweep the tech community with their utility and versatility, powered by gigantic models. RAND Corporation’s latest report, authored by two AGI experts, reveals another area where AI might pose a severe challenge: national security.

The report opens by drawing a comparison between AGI and the atomic bomb (A-bomb). The key difference is that what the A-bomb could potentially do was clearly understood, while what AGI can potentially do remains beyond full understanding. That is why various AI labs are investing billions of dollars in model training: model performance scales with the size of datasets, and this scaling effect could push AGI’s cognitive performance to human or even superhuman levels. With this superhuman performance emerge five major problems that could severely undermine national security.

The first problem is that AGI might trigger an international race for first-mover advantage in a game-changing wonder weapon. Such a military advantage could come from crushing enemy cyber-defenses, launching cyber-attacks, fielding autonomous weapon systems, or constructing “fog-of-war” disinformation machines.

The second problem is that AGI might precipitate a systemic shift in the balance of national power and competitiveness. Regarding national power, AGI holds the potential to rewrite military playbooks in critical areas such as precision weapons and organizational command and control. Regarding national competitiveness, AGI could shake a nation’s democratic institutions and regulatory frameworks through its complexity, manipulativeness, and unpredictability. In addition, AGI could disrupt the current balance of economic power with unexpected boosts in industrial productivity and scientific discovery.

The third problem is that AGI might open a Pandora’s box of weapons of mass destruction (WMDs). “Foundation models,” one of AGI’s killer applications, could serve as “malicious mentors,” simplifying complex methods into step-by-step instructions for users to follow. This lowers the entry barrier for non-experts to develop WMDs such as biological pathogens or cyber malware.

The fourth problem is that AGI might metamorphose into a dangerous artificial agent with its own autonomy and agency. Humans’ over-reliance on AGI for optimization may lead to too much optimization and too little human control. At first, AGI may accidentally operate out of sync with its pre-set objectives; gradually, it may pursue objectives of its own that run counter to human intentions, largely undermining human agency.

The fifth problem is that AGI might generate unprecedented instability and uncertainty on a global scale. With AGI’s potential to wield unparalleled power and cause unthinkable damage, a new kind of arms race among sovereign states could become the new normal. Nations would grow concerned not merely about their own capabilities and intentions, but also about those of their rivals. The resulting misperceptions and distrust could easily provoke preemptive strikes and make the world a more tumultuous place.

What should American policymakers do to address these five problems? The authors argue that current U.S. policy focuses on maintaining America’s competitive advantage over China in critical technologies and on building a U.S.-led global technology ecosystem. This policy does not adequately consider the widespread implications of AGI’s offensive and defensive characteristics. The authors suggest that the U.S. make contingency plans for the security challenges AGI poses, while conducting scenario exercises to evaluate its security impacts.

In 2014, the great scientist Stephen Hawking warned that human-created AI might evolve on its own and eventually terminate its creators. Hawking’s fear was shared by American entrepreneur Elon Musk, who called AI “our biggest existential threat.” We can’t find AI safe unless we bind it safe. The promise of technology is to empower human beings, not to enslave them. It is the inescapable responsibility of AI engineers, entrepreneurs, and policymakers to keep that promise.

This article is based on RAND Corporation’s report “Artificial General Intelligence’s Five Hard National Security Problems,” authored by Jim Mitre and Joel B. Predd. Read the full report.
