Closing the Gaps:

Lethal Autonomous Weapons and Designer Responsibility

 

 

 

 

John Forge,

School of History and Philosophy of Science,

The University of Sydney

jjohn@tpg.com.au


 

Abstract

 

It has been said that agents are only responsible for the outcomes of their actions if they know what these outcomes will be and if they can control them. These two conditions are shown not in fact to be necessary for holding agents to account for what they do, and hence claims to the effect that the designers of advanced AI systems cannot be held responsible, because they cannot predict how their systems will behave, do not stand up. The focus of the paper is a particular kind of advanced AI system, namely Lethal Autonomous Weapons (LAWS), and the question as to whether there would be any ‘responsibility gaps’ were such systems designed and deployed. An attempt to show that there would be no gaps because of the distributed nature of military command responsibility is seen to fail. To address the problem, an account of the responsibility of weapons designers based on a three-fold taxonomy of uses/functions of artefacts is reviewed and then applied to the topic at hand. It is then established on this basis that designers would be responsible for whatever LAWS do. The moral of the story, however, is that designers should not engage in programmes to design LAWS.

 

 

 

 

 

 

Keywords: LAWS, Responsibility Gaps, Designer Responsibility, Autonomy, Just War.


There are, as far as we know, no lethal autonomous weapons (LAWS) in existence at present. But there are good reasons why technologically advanced states, such as the United States, would like to develop them. For instance, the high political cost of casualties for technologically advanced states engaged in the war on terror, and again we can single out the United States here, has led to a search for ways and means to minimise such casualties, such as drones and, one suspects, LAWS. Those who deplore weapons research, and believe that the expertise and resources devoted to the development of new ways to kill would be much better spent devising new ways to help rather than harm people, will be opposed to any project to make LAWS simply because this is a new kind of weapon. But there have also been appeals to ban the research and development of LAWS because of concerns specific to this particular kind of weapon, and in this paper I want to discuss one such concern, namely that deploying LAWS has the potential to lead to responsibility gaps.

 

One can speculate about just what kind of things LAWS will be. I will not go down this path, but will just think of LAWS on analogy with drones: drones that are capable of choosing and carrying out missions autonomously, without a pilot.[1] Such systems will have to have learning capabilities and flexible programming, but I will not speculate here about how this might be achieved because I want to focus on the conceptual issues, not the technology. There has been debate in the literature about responsibility gaps: about whether these would or could arise if LAWS were deployed, about whether this would mean that LAWS should not in fact be deployed or developed, and about where precisely the gap might be. The latter issue concerns various (human) candidates for standing in the responsibility relation, such as the designer, the operator/programmer, the military commander, etc. My focus of attention is the responsibility of the designer, although I will have something to say about military command structure. I begin by discussing the significance of responsibility gaps and why it is that the deployment of LAWS is thought to give rise to them, starting with a statement of a general problem, in the form of a dilemma, with advanced AI systems. I move on to consider an attempt to close the responsibility gap by appealing to the idea of command responsibility, which fails to avoid the dilemma. In the last section of the paper, I apply an account of designer responsibility that I have developed elsewhere to address issues to do with weapons research, and we will see that (at least) the designers of LAWS would be responsible for what their creations do.

 

Responsibility Gaps

 

We can take the following statement by Matthias (2004) as a summary of a general problem with advanced AI systems, systems that have the kind of learning capacity and flexible programming that we associate with LAWS.

 

Traditionally, the operator/designer of a machine is held (morally and legally) responsible for its operation. Autonomous learning machines … create a situation where the designer is in principle not capable of predicting the future machine behaviour and thus cannot be held morally responsible for it. The society must decide between not using this kind of machine any more … or facing a responsibility gap which cannot be bridged by traditional concepts of responsibility ascription (Matthias 2004: 175).

 

Matthias is presenting us with a dilemma: either admit to (the possibility of) there being responsibility gaps or forgo autonomous learning machines.

 

It is perhaps debatable whether the designer of a machine has always been held morally responsible for its operation in every circumstance. This would at least presuppose that the machine was used properly, and there may be other considerations that might persuade us to dispute this as a universal claim, but I will not worry about such matters here and will refer to this special sense of responsibility as designer responsibility. It will be necessary to see what underpins designer responsibility and the way in which it engages with LAWS. In the introduction to his paper – the quoted passage is his abstract – Matthias says that a necessary condition for responsibility is control, and hence if an agent is not able to predict what is going to happen as a consequence of her actions, then she is not in control of what is happening and so cannot be held responsible. She therefore has an excuse: the question of any blame or praise does not arise. We need to begin by examining this line of reasoning.

 

If agent P does not know about the outcomes or consequences of her action when she performs the action, then it may appear unfair if she is held accountable for them – the view that one is always accountable for everything one does is called strict liability, and the consensus is that agents are not strictly liable. So it seems we should accept: P did not know that by doing X, Y would come about, therefore P is not morally responsible for Y and so cannot be held to account if Y is harmful. This is part, but not all, of what is implicit in the passage from Matthias. That it is not all he has in mind becomes clear when we read more. At the very beginning of his paper, after the summary quoted above, Matthias states that “For a person to be rightly held responsible for her behaviour…she must have control over her behaviour” (Matthias 2004: 175). So if P does not know what will follow as a consequence of her doing X, then she has no control over Y and any other consequences, or so it may seem. We may note here that this second excusing condition, concerning lack of control, as well as the first, concerning ignorance, can be traced back to Aristotle and hence both have a venerable history. Aristotle, however, believed that the two conditions were singly sufficient to excuse; that is, it was not necessary both to be ignorant of consequences and to lack control over them (Aristotle 1962: 52-68). What he, and others, had in mind in regard to lack of control was coercion, where P knows full well that what she does will lead to Y, but has no choice but to do X. Having said this, Matthias is addressing a special case, where a designer provides the basis on which to construct an artefact. All of those involved in the use of the artefact, the operators and the designers in particular, need to make sure it does what it is supposed to do, and hence control seems central to this.

 

There are many occasions on which agents perform actions that have outcomes which are unforeseen, which are harmful and which the agents in question would much rather not have brought about. Many examples could be given. For instance, suppose my neighbour is running late for a plane and the only way he can catch it is if I give him a lift. I do, he gets his flight, but tragedy strikes and his plane crashes due to unprecedented bad weather with a loss of all life on board. I am responsible for his getting the flight, in that I caused him to get it because I gave him a lift and I intended that he get it, but I did not of course intend for him to perish. Since I am not responsible for what happened to him even though I made a causal contribution, it could be said that there is therefore a gap in responsibility. But to say that implies that I somehow should be responsible but am not, that something is somehow amiss here. So it is inappropriate to speak of a responsibility gap, assuming it has this connotation. I am assuming that it does have this connotation, as follows: doing X leads to Y, which is harmful or bad in some other way; apparently, P is not responsible for Y even though she did X, but she (or someone else) should be responsible for Y. Matthias, as we have seen, identifies this as a general problem with advanced AI systems, whose behaviour is in principle unpredictable, hence unforeseeable and hence uncontrollable. This is a problem because of designer responsibility: designers are, or should be, responsible for the proper functioning of their creations.

 

I do not think that either ignorance on the part of the agent or lack of control should always persuade us to excuse the agent, and here is an example that suggests why. P, for reasons of her own, decides to plant an explosive device with a timer that has only a low probability of detonating, depending on the operation of a random number generator, at a time in the future which is also controlled by the generator. Here is the scenario: P plants the device in a generic component for a personal appliance (phone, tablet, laptop, etc.): she has access to the factory where these are made but does not know whether her device will end up in a phone, or a tablet, or whatever, nor does she know where in the world it will be sent. Once the appliance is turned on, the random number generator determines whether the device will explode or not, and sets the condition for that to happen and the time in the future when it would do so. The condition is that a certain sequence of letters would have to be typed in a given time period, as might happen in a text message. P has set up the situation such that she cannot know whether the device will explode, when it will do so, or whether it will in fact harm anyone: she is both ignorant of the situation and not in control of it. And she does not actually intend that it harm others, for if that were her aim, then she would not have ensured that the detonation was a low probability event: her motivation is opaque, but we assume it is not to do with harming others. There is surely no doubt that if the device does explode and cause harm, P is responsible. This is true even though P would predict that the device would not explode, given that it is a low probability event. This thought experiment therefore shows the following: that lack of intention is not always sufficient to excuse, that lack of knowledge about an outcome is not always sufficient to excuse and that lack of control is not always sufficient to excuse, and so there is no responsibility gap here. I will come back to this example in the last section of the paper.
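
The structure of the scenario can be made concrete with a small programming sketch. It is purely illustrative: the particular numbers (the arming probability, the four-letter trigger sequence, the one-week window) are assumptions of mine, and nothing in the argument turns on them. The point the sketch brings out is that P fixes the mechanism while the generator fixes the outcome, so that P can neither predict nor control whether harm will occur.

import random
import string
import time

# Illustrative sketch only: the constants below are assumed for the example,
# not part of the scenario as stated in the text.
ARM_PROBABILITY = 0.01  # detonation is deliberately made a low-probability event

def configure_device(rng: random.Random):
    """On power-up, the generator decides whether the device is armed at all;
    if it is, it fixes a trigger sequence and a future time window."""
    if rng.random() > ARM_PROBABILITY:
        return None  # in most appliances the device stays inert
    trigger = "".join(rng.choices(string.ascii_lowercase, k=4))
    start = time.time() + rng.uniform(0, 365 * 24 * 3600)  # some moment within the next year
    return {"trigger": trigger, "window": (start, start + 7 * 24 * 3600)}  # one-week window (assumed)

def detonates(config, typed_text: str, now: float) -> bool:
    """True only if an armed device sees its trigger sequence typed within its window."""
    if config is None:
        return False
    start, end = config["window"]
    return start <= now <= end and config["trigger"] in typed_text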

 

LAWS and Responsibility Gaps

 

Just exactly what a lethal autonomous weapon would be like and what it could do has been discussed in the literature and at international meetings, and a range of views and positions have been developed. It is of course necessary to have a working definition of what a lethal autonomous weapon is if one is to argue that they are to be banned: calling for something to be banned presupposes that the thing to be banned is clearly identified. Much of the discussion around this issue, and about LAWS more generally, has, as we would expect, focussed on autonomy and the senses and ways in which LAWS could be autonomous. It is generally agreed that we cannot assume that LAWS are autonomous in the sense that human moral agents are autonomous, and the scenarios that need to be considered are not set in some future where strong AI projects have been realised. Sparrow (2007: passim) has proposed a spectrum of possible ways in which LAWS can be autonomous, Gubrud (2018) has suggested that landmines represent a good starting point for talking about LAWS because they are autonomous in the sense of not being triggered by their operator, and there have been other useful contributions.[2] For the present purpose it is not necessary to canvass these alternatives. It is sufficient to assume that a weapon is autonomous if it both selects its targets and launches whatever ordnance it is equipped with. In other words, something is a LAW if it decides who to kill. So, for instance, it could be something that functions in the same way as the current generation of drones in every way except that there is no pilot, no sensor analyst and no intelligence officer (see Gusterson 2016: 33). A responsibility gap would then open up if such a LAW attacked the wrong target. If a drone did this, then it might be that the pilot, the intelligence officer, or the sensor operator would be called to account, but these persons are out of the loop when it comes to LAWS.

 

The most prolific user of drones, the US, sets out the conditions under which it is legal to wage war in the Department of Defense Law of War Manual – the latest available edition was published in 2015. This is a comprehensive, well-written and interesting document; its principal tenets are supposed to be binding on all US military personnel, and its aim is to inform commanders what the laws of war are. Here we may note that the in bello just war principles of discrimination and proportionality appear (as expected). These principles are familiar, and we can think of them here as guiding the qualitative and quantitative dimensions of targeting respectively. Thus, discrimination means that only enemy combatants, and possibly enemy non-combatant support persons, can be targeted. It is therefore illegal to target civilians. The application of force must be proportionate to the objective or mission, which means that the minimum amount of force consistent with achieving the mission is required, which in turn implies that as few people as possible are to be killed. The point here is that the US, and other countries as well, explicitly sets up a framework of principles that specify what is the ‘correct’ way to wage war, and hence this serves as a basis for holding members of its armed forces responsible for what they do. Hence it would appear that those who set up such frameworks are committed to abide by them.

 

Holding members of the armed forces responsible and accountable does not only come into play when wrongdoing has been committed. If a country wishes to claim that it has fought a just war, with just cause and by means that conform to the principles of discrimination and proportionality, then its soldiers are assumed to have been accountable for what they did: they only engaged enemy combatants, did so at levels of force commensurate with the missions’ and operations’ objectives, and so forth. Killing in war is (supposed to be) justified if it is done in the prosecution of a just war, and if this is the case, then those who did the killing are accountable, for if they were not accountable, then their actions could not be justified – being justified implies that one is accountable. I mention this here, after having talked about the ‘moral framework’ that countries like the US erect around their military ventures, because even if LAWS never targeted civilians, never used disproportionate force and so on, we would still want to hold someone accountable for what they do.[3] Any argument to the effect that LAWS could never commit any wrongdoing on the battlefield does not make questions of responsibility gaps go away.

 

Now suppose that our drone operators make the wrong targeting decision and kill innocent civilians. Surely they are responsible for what they have done and must be called to account, assuming that there is nothing that would excuse them? This must be true if the US does indeed have rules that are binding on the members of its armed forces, and we assume that it does. In his discussion of the topic, Schulzke appeals to ideas about distributed responsibility, which encompasses the responsibility of commanders as well as soldiers on the ground. He says “Even in a relatively simple case, such as one soldier intentionally killing a civilian, many actors may have a share in the guilt depending on what they did or failed to do” (Schulzke 2013: 211). The ‘many actors’ Schulzke has in mind include the commanders of the soldier who intentionally kills the civilian. Here there are three possibilities: the commander ordered the soldier to kill the civilian, knew this was going to happen but did not order it, or did not know it would happen. In the first case, the commander is clearly responsible, and perhaps the soldier has an excuse. I will set the second possibility aside and focus on the third. Schulzke’s view here is that commanders must assess the suitability of the troops for the mission at hand, and if the troops fail to perform adequately and something goes wrong, then even though the commanders did not know or anticipate that this was going to happen, they are still responsible. This is part of what is meant by “command responsibility”.

 

When it comes to ‘robots’, Schulzke makes essentially the same point, drawing an analogy between commanding soldiers and ‘commanding’ robots: “Military and civilian leadership in a robot’s chain of command should be held responsible for how [L]AWS are used to wage war” (Schulzke 2013: 214). However, if the commanders do their duty with regard to human soldiers and do their best to ensure that the right people are chosen for the missions under their command, and if the soldier in the example cited above still intentionally kills the civilian, then someone is still responsible, namely the soldier – Schulzke acknowledges that commanders cannot always be right. But the whole point of the issue of responsibility gaps is that if a LAW, a ‘robot’, does the killing, then it seems that no one is responsible – the assumption here is that the commander has done everything she can to make sure the LAW is the right tool for the job. If she can never be so assured, then she can never send a LAW into battle. We are back to Matthias’ dilemma. By discussing command and distributed responsibility, Schulzke has identified the context and conditions under which LAWS would be deployed, but in doing so he has shown that there still exists the possibility of responsibility gaps. This is because he focuses on the commanders (military and civilian) who would use the LAWS, and his analogy with their responsibility for the actions of human soldiers implies that they cannot be held to account for everything a LAW might do.

 

Designer Responsibility

 

Designers are causally responsible for the artefacts they design. This is a truism, for by definition an artefact is something that is produced, and all artefacts are made on the basis of some kind of design, however rudimentary, so without a design there can be no artefact. But unless we subscribe to strict liability, it is unfair to hold a designer responsible – responsible in the sense of accountability, where there is the possibility of blame – for all the possible uses of an artefact. In order to distinguish those uses for which the designer is responsible, I have developed a three-fold taxonomy of the uses/purposes of an artefact.[4] Thus, the primary purpose of an artefact is the purpose for which it is intended by the designer: it is what the designer designs it to do. The primary purpose of an assault rifle, for example, is to kill or otherwise harm others, by firing single shots or in semi-automatic mode, and whether a particular use of an assault rifle is justified depends on the circumstances. If the threatened use of the weapon is sufficient to stop some incident, then I call this a derivative purpose, because it is contingent on the primary purpose (and not vice versa). Assault rifles are not intended to hold flowers, but a long-stemmed flower could be lodged down the barrel as a way to cheer up a barrack room. If so, this would be a secondary purpose of the weapon, and as such is not contingent on its primary purpose: it is quirky and fortuitous.

 

I have argued that designers are responsible for instances of the primary and (sometimes) derivative functions of the artefacts they enable, but not for the secondary purposes. The reason why this gives us the correct distribution of responsibility is that the primary purpose, as just mentioned, is what the designer intends the artefact to do: she invents a means to do something, something that cannot be done without the artefact or which can be done more efficiently, more cheaply, more easily, and so on, than otherwise. Thus when the artefact does what it is designed to do, the designer cannot claim that she did not intend this to happen. So when an assault rifle is used in combat, or in some other situation, and quickly kills a number of people, the designer cannot deny that this is what she intended her invention to do. She is therefore responsible. Turning to derivative purposes, these are uses that the artefact can have because of its primary purpose: derivative purposes of an artefact supervene on its primary purpose. A case can be made that the designer is responsible here as well, provided that she could have been expected to foresee that these would come about, so responsibility here depends on the case at hand. Secondary purposes, on the other hand, are fortuitous and therefore not the kind of things that the designer could be expected to know about, and for this reason she is not responsible for them. I will assume here that this account of designer responsibility can be accepted.[5] How does it apply to LAWS?

 

A lethal autonomous weapon is able to select and engage its targets; that is to say, it decides who to try to kill and then it tries to kill them. This is surely its primary purpose. A derivative purpose may be that, as a consequence of its deployment, the activities of insurgents are suppressed because they are fearful of being detected by the constantly patrolling LAWS, as has happened with drones. I will leave aside any secondary purposes. If the account is correct, then designers of LAWS are responsible when the robots select and engage targets. Are designers also responsible if these are the wrong targets, for instance, if LAWS fail to discriminate and kill civilians, or use disproportionate force? Or is there, once again, a responsibility gap: if LAWS are not supposed to select and engage the wrong targets and if this cannot be predicted by the designers (or anyone else), how can they be held responsible? But in this respect LAWS are no different from ‘standard’ weapons like the assault rifle, from the perspective of the designer. The designer does not know who will be the victims of her invention. Taking a real example, Mikhail Kalashnikov’s assault rifle was designed during the Second World War and came into production in 1947. Since that time, it has become the most widely produced weapon of all time and has killed more people since the end of that war than any other weapon. Kalashnikov could not have known all this, but I argue that he is still responsible because the weapon he designed did what it was intended to do (see Forge 2007).

 

There are two responses that can be made to this attribution of responsibility, and hence accountability, to the designer, as attempts to excuse her. The first accepts the taxonomy of purposes and the corresponding distribution of responsibility but denies that the primary function of LAWS has been correctly identified. The primary function of a LAW, according to this rejoinder, is to select and engage only enemies, insurgents for example and their non-combatant support personnel. If a LAW kills a civilian, then this was a mistake, and was therefore not something intended by the designer, who is therefore not responsible. Killing civilians should be classified as a secondary function of a LAW. This response is not persuasive. The designer does not assign a pre-determined target list to the LAWS, which are then deployed to kill those and only those on the list. LAWS are, by definition, autonomous: they decide, on the basis of their ‘flexible programming’, what is a legitimate target and what is not. If they were to be given a list of targets, then they would not be autonomous. The designer builds this autonomy into the system – we do not yet know how, but this is the assumption. The designer does not (of course) want the LAW to kill civilians. But the fact that this is not what she wants does not serve as an excuse. It does not serve as an excuse even if the designer was utterly convinced that her creation could not ‘malfunction’. It does not serve as an excuse because the designer should have known that it is possible that the LAW could choose the wrong target: the very fact that the weapon is flexible means that this was always a possibility. Therefore, the suggested revision to the designation of the primary function should be rejected.

 

The second response replays the original theme: if it is (highly) unlikely that LAWS will malfunction, and hence one would not predict that this would occur, and if the agent (the designer) is not in control of the artefact, then she cannot be held responsible. In view of the discussion in the first section, this response is not persuasive either. There we saw that the designer of the randomised explosive device would still be responsible for any harm that the device caused, even though that would be a highly unlikely outcome, and we can now see more clearly why this is. The designer deliberately and intentionally made a device that would only explode under conditions that were unlikely to occur, at an unknown location. She knew that the device could cause harm. Thus, if it does cause harm, then she is responsible for that harm. That is surely uncontroversial. So the fact that it is unlikely that a LAW will kill civilians is also no excuse, and we can conclude that as far as the designer is concerned, there are no responsibility gaps: the designers, among others perhaps, are responsible for everything their LAWS do.

 

Conclusion

 

It has been suggested that LAWS should not be created because, if they were, there would, or could, be responsibility gaps: no humans would be responsible for what the weapons would, or could, do. I have argued that there would be no such gaps in responsibility as far as the designers are concerned: whatever LAWS do, their designers are responsible. However, in common with all weapons designers, the designers of LAWS cannot know whether their creations will be used for good or ill, or both. But, as I have suggested here and argued much more fully elsewhere, since designers are responsible for what their weapons do, if they take these responsibilities seriously they should never undertake weapons research – they should look for other employment. Weapons design is unique in that it is the only activity that seeks to devise the means to harm others, and harming is wrong. Since weapons designers cannot know whether the things they make possible will only have justified uses, they should renounce their profession. LAWS are just one kind of weapon, but they raise questions about weapons research in general which we would do well to examine in more depth.

 

 

References

 

Aristotle (1962) Nicomachean Ethics. Trans. M. Ostwald. New York: Bobbs-Merrill.

Forge, J. (2007) “No Consolation for Kalashnikov”, Philosophy Now, 6-8.

Forge, J. (2008) The Responsible Scientist. Pittsburgh: University of Pittsburgh Press.

Forge, J. (2012) Designed to Kill: The Case Against Weapons Research. Dordrecht: Springer.

Forge, J. (2017) The Morality of Weapons Design and Development. Hershey, Penn.: IGI Global.

Gubrud, M. (2018) “The Ottawa Definition of Landmines as a Start to Defining LAWS”.  https://autonomousweapons.org/the-ottawa-definition-of-landmines-as-a-start-to-defining-laws/

Gusterson, H. (2016) Drone: Remote Control Warfare. Cambridge, Mass.: MIT Press.

Matthias, A. (2004) “The responsibility gap: Ascribing responsibility for the actions of learning automata”, Ethics and Information Technology, 6, 174-183.

Reaching Critical Will (2018) Documents from the 2018 CCW Group of Governmental Experts on lethal autonomous weapons systems. http://www.reachingcriticalwill.org/disarmament-fora/ccw/2018/laws

Schulzke, M. (2013) “Autonomous Weapons and Distributed Responsibility”, Philosophy and Technology, 26, 203-219.

Sparrow, R. (2007) “Killer Robots”, Journal of Applied Philosophy, 24, 66-77.

 

 



[1] At the time of writing this essay, a group of government experts is meeting to discuss LAWS (Reaching Critical Will 2018). One of the most important tasks, before any attempts can be made to limit their development, is to characterise the systems, to say just what LAWS are. Without making an attempt to summarise the contributions by both governments – included here are the US, Russia, China, the UK and France – and non-government organisations, a recurrent theme is that ‘genuine’ LAWS would be able not only to engage targets autonomously but also to select them autonomously.

[2] Gubrud refers specifically to the Ottawa Convention banning antipersonnel landmines, which defines them as follows: “‘Anti-personnel mine’ means a mine designed to be exploded by presence, proximity or contact by persons and that will injure, incapacitate or kill one or more persons” (Gubrud 2018: 1).

[3] Thus any claim to the effect that LAWS would never target civilians, or commit any other war crime, and hence that the issue of responsibility gaps could not arise in connection with their operation, is seen to be beside the point.

[4] I first formulated the taxonomy in my book on responsibility, Forge 2008: 156-59, developed it in more detail in Forge 2012: 142-149 and refined it further in Forge 2017: 34-43.

[5] I have argued at length elsewhere about the responsibility of weapons designers and how they are responsible for the harms that their creations enable. I cannot rehearse the arguments here, but must refer the reader to the work mentioned in the previous note.