Integrating artificial intelligence into nuclear systems is risky, and the temptation to do so will only grow stronger.
The Terminator Conundrum
Imagine you could get a fresh-baked pizza from your favorite Italian joint delivered in less than eight minutes. As enticing as that sounds, depending on where you live, it’s probably impossible, at least without drone delivery.
Now consider this. Experts predict countries will soon be able to deliver nuclear weapons anywhere in the world in that same amount of time. Hypersonic weapons, stealthy cruise missiles, and automated systems that use artificial intelligence (AI) will make this possible.
Nuclear experts warn of the risks of integrating AI-enabled systems into early warning, nuclear command and control, and delivery systems. But the temptation to do so will only increase in the future.
AI-enabled systems would offer countries that adopt them a slight time advantage over others. In a nuclear conflict, having more time to make decisions could mean the difference between a nation’s survival and its annihilation.
If one country integrates AI into its nuclear systems, others might feel they must follow. However, there are some grave dangers. Such systems would transfer decision-making authority from humans to machines. This would significantly increase the risk of accidental use of nuclear weapons, unintended escalation, and nuclear war. What is intended to provide security and guarantee survival might end up killing us all.
Senior U.S. military leaders have referred to the dilemma of autonomous systems as the “Terminator Conundrum”. Even if countries understand the risks, the “logic” of nuclear deterrence may persuade some to go down the AI-enabled path.
The Need for Speed
Time has always been a critical element in shaping nuclear competition. During the Cold War, the U.S. Air Force boasted that Minuteman intercontinental ballistic missiles (ICBMs) could reach their target in thirty minutes or less. This was supposed to make U.S. adversaries too afraid to attack.
Before the U.S. deployed the solid-fuel Minuteman in 1962, extremely volatile liquid fuel had to be stored separately from the missiles. The fuel could only be pumped into the ICBMs shortly before launch. The minimum delay of fifteen minutes seemed like an eternity when every minute counted against the U.S. ability to retaliate against a nuclear attack. The Minuteman solved this problem.
The need for speed also underpins the “logic” of nuclear deterrence. To function properly, deterrence requires that adversaries coexist in a state of “mutually assured destruction”. In other words, each side must be capable of annihilating the other in order to prevent nuclear war. Any technology that enables one side to carry out a preemptive disarming attack removes the chance for retaliation and increases the risk of nuclear war. To prevent such a scenario, time is of the essence.
Since the dawn of nuclear weapons, strategic thinkers have worried about the speed of delivery, the speed of detection and warning, the speed of decision, and the speed of launch. Some experts have recently argued for the integration of AI-enabled systems into U.S. early warning, detection, and command and control systems to achieve additional gains in speed. In the future, the President may not have enough time to retaliate prior to the first detonation of a nuclear weapon on U.S. soil. To save time, the decision to retaliate with nuclear weapons would be made by a machine, not a human. It’s reminiscent of Stanley Kubrick’s doomsday device from the classic film, Dr. Strangelove. And most of us know how that movie ended.
Time’s Been Short for a While Now
But do these short decision times really matter? If so, why?
It’s hard to fathom that decisions made in thirty minutes are much better than those made in fifteen or eight minutes—that is, when they concern the life and death of millions of innocent civilians.
In reality, we’ve already lived with this for many decades: submarine-launched ballistic missiles (SLBMs) fired from submarines positioned near an adversary’s coast can reach their targets in less than eight minutes. Of course, using these second-strike weapons in a first-strike attack defies the logic of nuclear deterrence. The key point is this: timeframes for nuclear decision-making have been short for a long time.
If this is true, why do we think there’s a new problem? Stanley Kubrick explained it best in Dr. Strangelove: the problem exists because “deterrence is the art of producing in the mind of the enemy... the fear to attack.” It matters because we fear it.
Is a Doomsday Device Really Necessary?
Today, about four hundred Minuteman III ICBMs stand ready to launch in response to a nuclear attack. Due to the inherent vulnerabilities of ICBMs, the President would face incredible pressure to “use them or lose them”. In other words, the perceived need to launch ICBMs before they are destroyed by an adversary exacerbates the decision time crunch.
Wouldn’t the elimination of ICBMs get rid of this problem? In a world without ICBMs, decision-makers would no longer have to make life or death decisions in mere minutes.
Nuclear deterrence experts would remind us that ICBMs also help protect the U.S. from a preemptive decapitation strike. During the Cold War, a preemptive strike on just five targets within the U.S. would have jeopardized its ability to retaliate. In the future, shorter decision times will exacerbate the potential for a loss of command and control in a first strike. In such a scenario, the quick launch of ICBMs would ensure early retaliation before loss of leadership or communication.
However, in extremely compressed situations, there might not even be time for the U.S. President to reach a decision, give the retaliation order, and/or move to a safe location from which to launch a retaliatory strike.
In the past, the U.S. responded to this problem by strengthening communications and early warning systems and ensuring a massive second-strike capability. In the future, under the logic of deterrence, such situations might seem “to require” some type of doomsday device to ensure effective retaliation.
Instead of considering such extreme measures, wouldn’t it be more logical to eliminate nuclear weapons? In a world without nuclear weapons, there would no longer be a need to worry about decapitation strikes and no need to automate decisions to use nuclear weapons.
Nuclear deterrence experts would warn us that as long as U.S. adversaries retain their nuclear weapons, the U.S. needs a credible and effective nuclear deterrent to support its national security strategy and to prevent nuclear war. According to the logic of deterrence, eliminating nuclear weapons would actually increase the risk of their use if only one country decided to keep them.
When Deterrence Dictates Our Choices
Decades of U.S. nuclear strategy reveal a similar unbridled optimism in the logic of deterrence. It assumes rationality and the availability of information. It assumes accurate interpretation of signaling and calculations. It demonstrates a belief in the ability to control nuclear escalation. It takes as given the absence of panic or miscalculation on both sides in the midst of a nuclear attack. It ignores the potential risk of accidental launch, false alarms, or miscalculations.
Indeed, the logic of deterrence has been used to justify the development of Minuteman ICBMs, the production of tens of thousands of strategic nuclear warheads, the design of tactical nuclear weapons for use on the battlefield, provocative advancements such as multiple independent reentry vehicles, and much more. Today, such thinking supports the estimated trillion-dollar modernization of U.S. nuclear weapons systems, the development of a low-yield submarine-launched ballistic missile, and nuclear-capable hypersonic missiles. All of this is justified in the name of preventing nuclear war.
The U.S. Nuclear Triad
Someday, the logic of deterrence may also dictate the integration of AI-enabled systems with nuclear weapons. This will not happen overnight. Rather, it will occur gradually, under the radar and largely out of public view. We are entering an age where AI-enabled systems can do things faster than humans. They can absorb more data, detect nuances in patterns invisible to the human eye, and make complex decisions in nanoseconds. Human decision-makers will increasingly appear to be the slowest cogs in the system. We may even begin to see human involvement in the use of nuclear weapons as a security risk. And the day may come when leaders decide to replace fearful humans with logical AI-enabled systems that operate at high speeds. The logic will be dictated by deterrence.
The Courageous Choice
Can humans resist the allure of machine speed for nuclear weapons?
First, we must remember that the so-called “logic” of nuclear deterrence boils down to fear. And since fear is not logical, it will not be well understood by AI-enabled systems. By removing human fear, judgment, and gut instinct from nuclear decision-making, AI-enabled systems would increase the risk of false alarms, escalation, miscalculation, and unintended nuclear wars.
Second, we must work to eliminate these risks in the first place. To do this, we must pursue greater reductions in nuclear weapons, strengthen the nonproliferation norm, and keep humans in the loop of decision-making as long as nuclear weapons exist. At the height of tensions between the Soviet Union and the U.S., even President Ronald Reagan came to the courageous conclusion that eliminating nuclear weapons was the only truly logical solution to confronting their threat. We must have the courage to believe in a world without nuclear weapons and to take tangible steps toward that end. Only then will we be able to resist the allure of speed.