Lethal Autonomous Weapon Systems and Artificial Intelligence in Future Conflicts

by Patrick Truffer. He has been working in the Swiss Armed Forces for more than 15 years, holds a bachelor’s degree in public affairs from the Swiss Federal Institute of Technology in Zürich (ETH Zurich), and a master’s degree in international relations from the Free University of Berlin.

According to critics, mankind will be a huge step closer to self-extinction in the not-too-distant future. More specifically, this work could be carried out by lethal autonomous weapon systems (LAWS) equipped with artificial intelligence (AI). But how realistic are such apocalyptic future scenarios? What is the current status of LAWS, and how likely is an international ban on such weapon systems?

More or less unnoticed by the public, systems with weak AI are evolving: they improve the results of Internet search queries and are used in speech recognition and machine translation, and in the near future such systems will also control drones and vehicles and increase efficiency in logistics, medicine and many other areas.

Through the use of “deep learning neural networks”, progress in the field of AI has been remarkable in recent years. Put simply, this approach first extracts abstract solution strategies from an extensive data collection on a specific problem. The solution strategies are then supplemented, expanded and improved using data known and unknown to the system – comparable to training. Finally, the perfected solution strategies are applied to specific problems. [1]
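The three stages described above can be illustrated with a deliberately minimal sketch: a single artificial neuron (logistic regression) rather than a real deep network, trained on an invented toy problem ("is x greater than 0.5?"). All names and parameters are chosen purely for illustration.

```python
import math
import random

random.seed(42)

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Stage 1: an extensive data collection on a specific problem
# (here: random numbers labelled 1 if greater than 0.5, else 0).
data = [(x, 1 if x > 0.5 else 0)
        for x in (random.random() for _ in range(200))]

# Stage 2: "training" -- gradient descent repeatedly nudges the
# weight and bias against the prediction error.
w, b = 0.0, 0.0
for _ in range(1000):
    for x, y in data:
        p = sigmoid(w * x + b)   # current prediction
        w -= (p - y) * x         # adjust weight against the error
        b -= (p - y)             # adjust bias against the error

# Stage 3: apply the learned rule to inputs unknown to the system.
def predict(x):
    return 1 if sigmoid(w * x + b) > 0.5 else 0
```

After training, `predict` classifies previously unseen values correctly as long as they are not extremely close to the decision boundary at 0.5 – a toy version of what large networks do with millions of parameters.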

This approach can also be found in systems used in the field of security. For example, there are pilot projects for the autonomous evaluation of surveillance camera footage that raises an alarm in the event of “conspicuous behaviour” (Jefferson Chase, “Facial Recognition Surveillance Test Extended at Berlin Train Station”, Deutsche Welle, 15.12.2017). It is therefore not surprising that armed forces are also interested in these new technologies.

Theoretically, the Phalanx Close-In Weapon System CIWS could be operated autonomously, without a human in or on the loop.

Automatic weapon systems have been in use in the armed forces for decades, but with operators always involved in the detection, identification and selection of targets or in the final decision on the use of (lethal) force. In autonomous systems, on the other hand, these processes take place almost without human involvement. Such systems do not have “free will”, but they are able to carry out certain tasks independently, without human interaction, under unforeseeable conditions, on the basis of their rules of engagement (Paul Scharre, “Army of None: Autonomous Weapons and the Future of War”, W. W. Norton & Company, 2018, 27ff). Theoretically, this is already the case with some defence systems, such as the Aegis Combat System, the Phalanx Close-In Weapon System CIWS, and modern air defence systems [2]. Currently, more than 30 states have such autonomous defence systems. This excludes, however, the drones currently in operational use, because they are remote-controlled and not autonomous, at least in the decisive phase of the use of force (Scharre, “Army of None”, 4).

The defence industry is advancing research in the field of LAWS, in which AI systems will play a decisive role. Test flights conducted between 2013 and 2015 demonstrated the ability of the Northrop Grumman X-47B to take off from and land on an aircraft carrier and to carry out aerial refuelling autonomously. Autonomous systems play an important role in the U.S. Third Offset Strategy, which aims to secure the technological lead of the U.S. armed forces in the long term. The Pentagon's Defense Advanced Research Projects Agency (DARPA), the Office of Naval Research and the U.S. Air Force have been experimenting for years with swarms of low-cost autonomous micro-drones. Initial approaches to such systems have already been tested, even if they have not yet been given a kill order. In 2016, the U.S. Air Force released 103 Perdix micro-drones from an airplane; they then went autonomously into formation and independently carried out various small missions (the associated video was released in early 2017). According to Stuart J. Russell, a British AI scientist at the University of California, the U.S. Armed Forces would be able to produce swarms of such drones cost-effectively within 18 months. When produced serially, micro-drones are expected to cost between 30 and 100 U.S. dollars apiece (Andreas Mink, “Wie Roboter uns töten werden”, NZZ am Sonntag, 02.12.2017).

Loitering munitions also have a high degree of autonomy. They are a type of guided weapon that is initially launched without a specific target, can orbit over a target area for a long time, and then uses its sensors to attack a target. One example is the IAI Harop, which comes in the form of a stealth drone. From its waiting position above the target area, in which it can stay for six hours, it can autonomously eliminate radar sources. Israel Aerospace Industries (IAI) exported the Harop to Azerbaijan, where it was used for the first time in the Nagorno-Karabakh region – albeit not as originally planned. The device hit an Armenian bus carrying militia soldiers who were being transported to the contested region; seven soldiers were killed (Thomas Gibbons-Neff, “Israeli-Made Kamikaze Drone Spotted in Nagorno-Karabakh Conflict”, Washington Post, 05.04.2016).

This shows that the U.S. is not alone in conducting research in the field of LAWS. With the Taranis, an autonomous combat drone from BAE Systems, the UK pursues a similar demonstrator program. The findings will be incorporated into the Franco-British Future Combat Air System together with those of the equivalent French project nEUROn, which also involves the Swiss RUAG Aerospace. In February 2017, the French Defence Minister Jean-Yves Le Drian announced that AI systems will play an increasingly important role for France in the development of new military technologies; France wants to ensure that it does not lose touch with the U.S. and the UK in this area (Jean-Yves Le Drian, “L’intelligence artificielle: un enjeu de souveraineté nationale”, in Intelligence artificielle: des libertés individuelles à la sécurité nationale, Eurogroup Consulting, 2017, 11–24). Chinese President Xi Jinping, for his part, wants to transform China into a “superpower of artificial intelligence” by 2030, with massive investments (Mink, “Wie Roboter uns töten werden”). Similar to the UK and France, but still lagging behind technologically, China is researching an autonomous reconnaissance and combat drone, the AVIC 601-S Sharp Sword. Russian President Vladimir Putin has also recognised the importance of AI systems. In September 2017, he said that whoever becomes the leading nation in AI would rule the world (“‘Whoever Leads in AI Will Rule the World’: Putin to Russian Children on Knowledge Day”, RT International, 01.09.2017). Despite lagging significantly behind the USA and China, Russia has intensified its development of autonomous systems (Vincent Boulanin and Maaike Verbruggen, “Mapping the Development of Autonomy in Weapon Systems”, SIPRI, November 2017, 97f).

In formation (from right to left): Dassault's nEUROn, Rafale and Falcon 7X.

The military use of autonomous systems and the proliferation of the corresponding technologies are associated with some risks. At the strategic level, it should not be underestimated that autonomous systems lack the ability to understand their own actions in the overall context. LAWS will be deployed with a set of rules that defines their actions. However, the context of a conflict can change rapidly. In such cases, the stubborn observance of preassigned rules and the lack of anticipation, empathy and gut feeling can lead to unforeseen or unwanted escalations. Every soldier knows that, in the course of a mission, the command received at the beginning may no longer coincide with the original intention of the superior. A soldier must be able to adapt to the situation, so that the superior's intention can be implemented at all times and under all unforeseeable circumstances. The absence of this consideration of the overall context can lead to instability at the international level. If LAWS had already existed during the Cuban Missile Crisis, a U.S. rule of engagement might have specified that Soviet ships (including submarines) be forcibly prevented from crossing the blockade line if necessary. A Soviet rule of engagement would probably have specified that the sinking of a strategic submarine be answered with a regionally limited nuclear retaliation. Had both systems begun to interact with each other uncontrollably on the basis of their respective guidelines, for example because the U.S. system detected a violation of the blockade line (whether rightly or not), this would have led to catastrophe at breakneck speed. [3]
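How quickly two sets of fixed rules of engagement can drive each other up an escalation ladder can be shown with a deliberately simplistic sketch. This toy simulation is invented purely for illustration and models no real system: each side follows a single rigid rule ("answer the opponent's last level of force with one level more"), and a single sensor error is enough to start the spiral, because neither side ever reconsiders the overall context.

```python
import random

def run(max_steps=10, false_alarm=0.3):
    """Two rule-based agents that only react to each other's force level.

    level 0 = calm; higher values = more force. `false_alarm` is the
    per-step probability that side A's sensor wrongly detects a violation.
    """
    level_a = level_b = 0
    for _ in range(max_steps):
        # Side A's rule: if a violation is detected (rightly or not),
        # respond to the last observed level with one level more.
        if level_b > 0 or random.random() < false_alarm:
            level_a = level_b + 1
        # Side B's rule: the mirror image of A's.
        if level_a > 0:
            level_b = level_a + 1
    return level_a, level_b
```

With `false_alarm=1.0` the spiral is deterministic: after ten steps the two sides stand at force levels 19 and 20, even though nothing actually happened. With a lower false-alarm rate the outcome merely depends on *when* the first erroneous detection occurs, not on whether escalation follows – once triggered, the rules never de-escalate.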

Interestingly, such considerations do not play a decisive role in the discussion on an international ban on LAWS. The reason for this is that the efforts to ban LAWS are being driven almost exclusively by NGOs active in the field of human rights and international humanitarian law. This creates two different camps which demand a ban for different reasons:

  • Consequentialists fear negative effects from the use of LAWS because these systems will not be able to comply with the principles of international humanitarian law. These include, for example, the distinction between combatants and non-combatants, the principle of proportionality and the prevention of unnecessary suffering. The problem with this argumentation is that if LAWS were to violate the principles of international humanitarian law, their use would already be prohibited today; an additional ban is not necessary. On the contrary, such a ban would benefit those states which do not recognise international law and would therefore not comply with the ban either. Also unanswered is the question of how consequentialists would proceed if advancing technology enabled a much more precise use of force through LAWS, incapacitating the opponent in a far more targeted and selective way. [4] Wouldn't LAWS then have to be used as a matter of priority?
  • Deontologists argue that the use of LAWS is unethical regardless of their effects, as is the case with torture, for example. Decisions on life and death would have to be made exclusively by people, for only they are in a position to morally weigh up such a decision in its full scope. Even if LAWS could reduce the number of deaths in wars, their use would violate human dignity (Scharre, “Army of None”, 285ff). The problem with this argument is that it is completely unrealistic – the days of a fight in which one human soldier faces another are long gone. Where is the dignity in being mowed down by a machine gun, shredded by a bomb, or killed by an infection? In wars, there is no right to a dignified death. In this way, deontologists do not criticise LAWS, but wars per se.

The Campaign to Stop Killer Robots, which has been active since 2013 and is coordinated by Mary Wareham of Human Rights Watch and supported by a number of NGOs, belongs to the camp of deontologists. The campaign is committed to a “pre-emptive and comprehensive ban on the development, production, and use of” LAWS. Although the campaign does not (yet) insist on a general ban on AI in weapon systems or remote-controlled or automated weapon systems in armed forces, it demands that such systems must in any case be controlled by humans.

In early April, the Group of Governmental Experts (GGE) formed by the Convention on Certain Conventional Weapons (CCW) addressed the topic for the second time. However, it would be premature to conclude that the campaign has been successful – the GGE is nothing more than a discussion forum without a negotiating mandate. A first decisive hurdle will be whether a generally accepted definition of LAWS can be formulated. Although 26 countries currently support a ban, with the exception of China these countries are not technological pioneers in the field of LAWS. With few exceptions, these states seem to be more interested in restricting the capacity of more powerful states to act than in humanitarian or even ethical considerations. This will make it difficult to get the USA, Russia, the UK and France, all of which have explicitly opposed such a ban, on board.

Even if, in the long term, the international community of states could impose such a ban, the genie can hardly be pushed back into the bottle. The developments that form the basis for LAWS are not military but civilian. The key to this is software development, which is largely independent of the future use and the eventually chosen hardware platform. The ability of an autonomous system to move around in an unknown building and to map and identify the rooms, equipment and people inside can be used positively in the hands of rescue forces, but negatively in the hands of terrorists (Scharre, “Army of None”, 121ff).

According to Frank Kendall, former U.S. Under Secretary of Defense for Acquisition, Technology and Logistics, the main threat posed by LAWS is not their application by armed forces, but uncontrollable proliferation, in which virtually anyone can access the corresponding technologies and use them for their own ends (Scharre, “Army of None”, 120). [5] What a future with killer drones in the wrong hands could look like was impressively demonstrated last November by the video “Slaughterbots”, which caused quite a stir.

Automatic weapon systems have been a reality in the armed forces for decades, although the ultimate power of decision has always been reserved for humans. LAWS go one step further in this respect: they are able to carry out certain tasks independently and without human interaction under unforeseeable conditions. This is not a very distant vision of the future – simple autonomous systems in clearly defined fields of application, for example in air defence, theoretically already exist today. But the progressive development and use of LAWS do not necessarily have to end in an apocalypse, even though the challenges such weapon systems pose in terms of ethics, international stability and proliferation are considerable. A ban on LAWS would not only try to push the genie back into the bottle, which will hardly be possible, but would also forgo potential opportunities: it cannot be ruled out that LAWS could make a much more targeted and precise use of force possible. Taking into account the development efforts for the underlying technology, which are not driven primarily by the military field, and the interests of the states involved in the discussion on a LAWS ban, a comprehensive international ban on LAWS seems rather unlikely.

[1] AlphaGo, the AI system that could beat the best human Go player, was originally fed and trained with data from 30 million Go games played by people (Scharre, “Army of None”, 125f). The following video series offers a good introduction to “deep learning neural networks”: Part 1 – “But What is a Neural Network?”; Part 2 – “Gradient Descent, How Neural Networks Learn”; Part 3 – “What Is Backpropagation Really Doing?”.
[2] Theoretically, because these systems have integrated human controls despite their autonomous capabilities (for example, with a decision on fire release).
[3] This example is not so far-fetched: on October 27, 1962, the U.S. aircraft carrier USS Randolph, together with 11 destroyers, forced the Soviet submarine B-59 to surface. To this end they used depth charges with a small explosive force (roughly equivalent to a hand grenade). Because the submarine could not maintain contact with Moscow, the authorisation to use the 15-kiloton nuclear warhead (equivalent to the explosive force of the Hiroshima bomb), which the submarine carried unbeknownst to the U.S. Navy, was delegated to the commander of the submarine, the political officer and the fleet commander. Under the impression that his boat was about to be sunk by the U.S. ships, the submarine commander, with the consent of the political officer, gave the order to fire the nuclear torpedo and thus destroy all U.S. ships in the vicinity in one fell swoop. In the end, only the fleet commander prevented the execution of the order (Scharre, “Army of None”, 310; “The Cuban Missile Crisis, 1962: Materials from the 40th Anniversary Conference”, The National Security Archive, 11.10.2002).
[4] According to Ronald C. Arkin, professor at the Georgia Institute of Technology and a U.S. scientist in the field of robotics, this is quite feasible (Thompson Chengeta, “Prof. Ron Arkin UN Debate on Autonomous Weapon Systems”, 2016; Ronald C. Arkin, Patrick Ulam, and Brittany Duncan, “An Ethical Governor for Constraining Lethal Action in an Autonomous System”, Defense Technical Information Center, 2009).
[5] The basic software for creating “deep learning neural networks” can already be freely downloaded from the Internet.
