The common conception of a technologically enabled apocalypse foresees a powerful artificial intelligence that, either deliberately or by accident, destroys human civilization. But as a new report from the RAND Corporation points out, the reality may be far subtler: As AI slowly erodes the foundations that made the Cold War possible, we may find ourselves hurtling towards all-out nuclear war.
There’s a “significant potential” for artificial intelligence to undermine the foundations of nuclear security, according to a new report published today by the RAND Corporation, a nonprofit, nonpartisan research organization. This grim conclusion was the product of a RAND workshop involving experts in AI, nuclear security, government, and the military. The workshop, part of RAND’s Security 2040 project, was convened to evaluate the impact of AI and advanced computing on nuclear security over the next two decades. In light of its findings, RAND is now calling for international dialogue on the matter.
At the very core of this discussion is the concept of nuclear deterrence, in which the guarantee of “mutually assured destruction” (MAD), or “assured retaliation,” prevents one side from launching its nuclear weapons at an equally armed adversary. It’s a cold, calculating logic that has, at least to this stage in our history, prevented an all-out nuclear war, with rational, self-preserving powers opting to fight a Cold War instead. As long as no nuclear power maintains significant first-strike capabilities, the MAD concept reigns supreme; if a weapons system can survive a first strike and hit back with equal force, assured destruction remains in effect. But this arrangement could weaken and become destabilized if one side loses its ability to strike back, or even if it starts to believe it runs the risk of losing that capability.
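To make that logic concrete, here is a minimal toy model of the first-strike calculation. This is our illustration, not anything from the RAND report, and the payoff numbers are arbitrary assumptions chosen only to show how the incentive shifts as second-strike survivability erodes:

    # Toy model of the deterrence calculation (illustrative only).
    # Payoff values are arbitrary assumptions, not from the RAND report.

    def expected_payoff_of_first_strike(survivability):
        """Attacker's expected payoff from striking first.

        survivability: probability (0..1) that the defender's arsenal
        survives the strike and retaliates in full.
        """
        WIN = 1.0        # adversary disarmed outright
        MUTUAL = -100.0  # assured destruction: both sides devastated
        return (1 - survivability) * WIN + survivability * MUTUAL

    STATUS_QUO = 0.0  # payoff of holding fire: tense, but intact

    for s in (0.5, 0.05, 0.0):
        strike = expected_payoff_of_first_strike(s)
        choice = "strike first" if strike > STATUS_QUO else "hold (deterred)"
        print(f"survivability={s:.2f}  E[strike]={strike:7.2f}  -> {choice}")

Even a small chance of devastating retaliation keeps the expected payoff of striking first below the status quo; the math only flips when survivability collapses toward zero. And, crucially for what follows, a state that merely believes survivability has collapsed faces the same incentives.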
This equation incentivizes state actors to avoid steps that could destabilize the current geopolitical equilibrium, but, as we’ve seen repeatedly over the past several decades, nuclear powers are still willing to push the first-strike envelope. See: the development of stealth bombers, nuclear-capable submarines, and, most recently, Russian President Vladimir Putin’s unveiling of a supposedly “invincible” ballistic missile.
Thankfully, none of these developments have truly ended a superpower’s ability to hit back after a first strike, but as the new RAND report makes clear, advanced artificial intelligence, in conjunction with surveillance technologies such as drones, satellites, and other powerful sensors, could erode the technological equilibrium that maintains the delicate Cold War balance. According to the report, AI could do this through the mass surveillance of an adversary’s security infrastructure, finding patterns invisible to the human eye and revealing devastating vulnerabilities.
“This isn’t just a movie scenario,” said Andrew Lohn, an engineer at RAND who co-authored the paper, in a statement. “Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”
An exposed adversary, suddenly aware of its vulnerability to a first strike, or aware that it could soon lose its ability to hit back, would be put in a very difficult position. Such a scenario might compel the disadvantaged actor to find ways of restoring a level playing field, and it may start to act like a wolverine backed into a corner. Advanced AI could introduce a new era of distrust and competition, with desperate nuclear powers willing to take catastrophic, and possibly even existential, risks.
Disturbingly, the pending loss of assured destruction could lead to a so-called preventive war, in which one side starts a war to stop an adversary from acquiring the capability to attack. In the years leading up to the First World War, for example, Germany watched with grave concern as its rival, Russia, began to emerge as a significant regional power. Its experts predicted that Russia would be able to defeat Germany in an armed conflict within 20 years, prompting calls for a preventive war. And in the immediate post-WWII era, some thinkers in the United States, including the philosopher Bertrand Russell and the mathematician John von Neumann, called for a preventive nuclear strike on the Soviet Union before it could develop its own bomb.
As these examples show, the period in which developments are poised to disrupt a military advantage or a state of equilibrium (i.e., MAD) can be a very dangerous time, prompting all sorts of crazy ideas. As the authors of the new RAND report point out, we may be heading into another one of these transition periods. Artificial intelligence has “the potential to exacerbate emerging challenges to nuclear strategic stability by the year 2040 even with only modest rates of technical progress,” write the authors in the report.
Edward Geist, an associate policy researcher at RAND and a co-author of the new report, says autonomous systems don’t need to kill people to undermine stability and make catastrophic war more likely. “New AI capabilities might make people think they’re going to lose if they hesitate,” he said in a statement. “That could give them itchier trigger fingers. At that point, AI will be making war more likely even though the humans are still in ‘control’.”
In conclusion, the authors warn of grim future scenarios, but concede that AI could also usher in an era of unprecedented stability. They write:
Some experts fear that an increased reliance on AI could lead to new types of catastrophic mistakes. There may be pressure to use it before it is technologically mature; it may be susceptible to adversarial subversion; or adversaries may believe that the AI is more capable than it is, leading them to make catastrophic mistakes. On the other hand, if the nuclear powers manage to establish a form of strategic stability compatible with the emerging capabilities that AI might provide, the machines could reduce distrust and alleviate international tensions, thereby decreasing the risk of nuclear war.
The authors said it’s impossible to predict which of these two scenarios will come to pass, but that the global community must act now to mitigate the potential risks. In terms of solutions, the RAND authors propose international discussions, new global institutions and agreements, acknowledgment of the problem by rival states, and the development of innovative technological, diplomatic, and military safeguards.
Such is the double-edged sword of technology. AI could either lubricate the gears of our doom or, as it did in films like Colossus: The Forbin Project (1970) and WarGames (1983), protect us from ourselves. In this case, it’s best to heed the old adage: hope for the best, and plan for the worst.