The Danger of Autonomous War Machines

By Henry Salem '20, Staff Writer

In a recent report, an anonymous scientist told the South China Morning Post that the Chinese navy is working to update its nuclear submarines with artificial intelligence (AI) systems. The effort reportedly aims to substantially reduce the need for human action and decision-making aboard these submarines, replacing much of it with a computer system that directs and operates the vessel autonomously.

Under the current mode of operations, hundreds of people are required to operate a nuclear vessel and make split-second decisions with potentially deadly consequences. By removing the need for humans to make these decisions, such as whether to steer the submarine away from dangerous waters or whether to fire a torpedo, China believes it can eliminate human error and make these vessels safer.

While reducing human error is a worthy goal, there is great danger in creating an autonomous ship with the capacity to fire deadly weapons. Implementing an AI “brain” on a submarine simply substitutes machine error for human error; any small malfunction or lapse in the information system could cause the autonomous submarine to behave dangerously and unpredictably. Furthermore, expecting a relatively primitive AI system to apply the same logic and reasoning as a team of humans seems dangerous. This raises the question: is applying AI to submarines really the safer option?

Beyond the potential for technological malfunction and unreliability, AI systems are capable of recursive self-improvement: systems like those planned for nuclear submarines can refine themselves using input from experience and simulations. Artificial intelligence that harnesses this capability has the potential to improve itself beyond human control. Having crew members aboard to check and shut off these AI systems would not matter; the systems are capable of exponential improvement, which, left unchecked, could lead to incredibly dangerous situations in the future.

China’s decision to devise and implement a central AI system on military submarines will undoubtedly begin an arms race to improve and apply artificial intelligence to other machines of war. The allure of an AI race is that modernizing a country’s military systems promises an advantage over other nations; the logic is that AI has the potential to outperform and outsmart humans, and thus weaponry and systems controlled by AI will outperform those controlled by humans.

While the belief is that AI-controlled military systems will create and promote security for a country, it is foolish to believe that a world flooded with autonomous militaries will be anything but dangerous. The logical next step after global military powers enhance their submarines with AI “brains” is to apply similar AI functions to nuclear weapons themselves and to other weapons of mass destruction. If that happens, the power to destroy the world many times over will rest with autonomous artificial intelligence.

The fear of a world war ignited by a technological malfunction should be enough to stop countries from introducing AI into military systems. If that is not enough, the fear of a futuristic apocalypse, in which AI-controlled military systems self-improve to the point that humans can neither control nor comprehend them, should suffice. While military modernization is important to most countries, the implementation of AI into their militaries should be an incremental and well-regulated process.
