The Case Against Weapons Research (28/8)

The following is a draft of my most recent paper on the topic of weapons research; I will remove this draft when the paper appears and replace it with a reference to the journal. I've omitted the title, abstract and other incidental bits, but I have left in the endnotes, bibliography and other essentials. I've included the paper here as it gives an outline of the argument of Designed to Kill, or as much of it as I could give in some 7000 words. Please get in touch if you have any comments! Here is the paper:


One of the most morally challenging, and most enduring, forms of technology is military technology: technology associated with the organised violence that has been the stock-in-trade of armed forces since ancient times. Military technology covers a broad area, including some forms that are similar if not identical to civilian technologies, the so-called dual-use technologies (Forge 2010). However, there are other forms that are unmistakably military, and these include all of those that enable weapons to be produced. My concern here is with the endeavour that leads to such technologies, what I call weapons research. The aim of weapons research is thus to produce the technology, or more directly the design, for a new or improved weapon, or for the ancillary structures, such as platforms, necessary for using a weapon (Forge 2012: 13-14). This topic has been almost entirely neglected by moral philosophers and others.[i] I suspect that this may be because it is thought to be subsumable under discussion of the morality of war, and that moral judgements about weapons research follow from moral judgements about war. I believe that this is wrong and that weapons research is a topic for discussion in its own right. Having said this, I do of course acknowledge that certain particular classes of weapons research, namely those directed towards weapons of mass destruction, especially nuclear weapons, have been discussed at length, and judgements have been made about the morality of such research. My own interest in the topic stemmed from my work on the responsibility for the use of the atomic bombs on Japan, but it has since expanded to cover weapons research as a whole.[ii]


It is surely clear that this is a topic which falls within the scope of the present journal, and my thesis, that weapons research is morally wrong and morally unjustifiable, is certainly a judgement in regard to the ‘impact of ethics on technological advance’. Indeed, if we look back to one of the seminal works on technoethics by Mario Bunge, we find a clear statement to the effect that not all technology is good, and he explicitly mentions military technology. Bunge writes: “Just think of thanatology or the technology of killing: the design of tactics and strategies of aggression, of weaponry” (Bunge 1977: 100). Moreover, Bunge makes it clear that those who design technologies must accept moral responsibility for the impacts of their work. I agree entirely with these sentiments. I also note that Bunge, and following him Luppicini (see Luppicini 2008: 1-2), stress that technoethics is a highly inter-disciplinary field. This accords with my own experience in regard to the morality of weapons research.


The purpose of this paper is to set out an argument that aims to establish that weapons research is both morally wrong and not morally justified.[iii] I have made this ‘case against weapons research’ on several occasions (Forge 2004, 2007a), but most fully in Designed to Kill: The Case against Weapons Research (Forge 2012). This paper is a sketch of the argument of that book and makes no claim to be anything more than an outline; it takes a whole book to present the case in full. One way to mount such a case is to begin by affirming some form of pacifism, and then maintain that if war is wrong, so is weapons research. But this is not a good option, even granted that we could come up with a coherent version of pacifism. If weapons research provided the means for robust defence, then this might keep the peace by deterring war. In general, it is hard to see why weapons research must be wrong simply because fighting is wrong. A better option is to address what I believe to be the assumption behind the assimilation of weapons research to questions about the morality of war, namely that war and all that is needed for fighting wars is justified by appeal to defence and deterrence. The only justified war, or just war, is war which resists aggression. Hence weapons research is justified for the purposes of defence and, even better because it prevents war, for deterrence. I refer to this as the standard justification for war and for all forms of defence spending, weapons procurement, etc. However, I believe that while the standard justification can apply to certain wars, it does not serve to justify weapons research. The case against weapons research can be seen in this sense as an argument against the standard justification.


As a final comment by way of introduction, I claim that weapons research is not something relatively new, something that came about when scientific theory was applied to weapons design. The best-known instance of the latter, indeed of any episode of weapons research, is the Manhattan Project. It is clear that the very idea of the atomic bomb, let alone its detailed design, could not have been thought up without the experimental discoveries in, and theories about, nuclear physics that became available in the 1930s. Science had been applied to weapons design a century earlier, but hardly at all before that. However, systematic research has informed weapons design since at least the fourth century BCE. I have in mind here the designs of torsion catapults developed by Greek engineers in Sicily, Athens, Rhodes and elsewhere in the ancient world, designs that were codified in manuals and expressed by mathematical equations (see the example below). The work is both interesting and remarkable (Marsden 1969, Rihill 2007). Given a broad definition of “research”, this work qualifies, and I see no reason why we are not at liberty to adopt such a definition and hence to see weapons research as having a long history.[iv] I note that torsion catapults were the first ‘heavy’ projectile weapons: projectile weapons, and I include free-fall bombs as well as ballistic and guided ordnance, artillery, etc., are the truly dangerous and harmful ones. I will therefore proceed as follows: I will identify a number of key propositions or premises, and discuss them as much as I can in the space available, and this will serve as a summary of my case against weapons research. The first three premises are concerned with the (prima facie) wrongfulness of weapons research, and the remainder with the justification, or lack thereof, of the activity.
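
To give a flavour of this codification, consider the calibration rule for stone-throwing engines that Marsden (1969) reconstructs from the surviving manuals; the rendering below, in modern notation, is my own sketch of that rule, offered for illustration only. The diameter d of the spring-hole, the bore that held the torsion spring, was fixed by the weight m of the shot:

d = 1.1 × ∛(100m)

where d is measured in dactyls and m in minas. In other words, ancient weapons design was already mathematised to the point of yielding general design formulae.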


The Case Against Weapons Research


1. It is morally wrong to harm moral subjects, members of our species and other sentient creatures, without justification.


This is a moral principle, and it is surely not contentious: any system of morality that denied 1 would not be acceptable. However, not every moral system will use 1 explicitly as a starting point. For example, standard act consequentialism maintains that moral agents should maximise ‘the good’, and there are different views on what is good. My own preference is for a moral system that makes 1 explicit at the outset, and for this reason I have adopted Common Morality, the moral system developed by Bernard Gert (Gert 2004, 2005). The core of Common Morality is a set of ten rules, eight of which prohibit specific harms – for instance, the first two rules are “Do not kill” and “Do not cause pain”.[v] These rules are not absolutely binding: it is permissible to break or ‘violate’ them if there is adequate justification. In essence, an adequate justification establishes that one anticipates that ‘comparable’ harm will be prevented by the act in question, and moreover that any moral agent could accept that the violation of the moral rule in question be publicly allowed.[vi] 1 therefore amounts to a one-sentence description of Common Morality. Returning for a moment to consequentialism, the demand to maximise ‘the good’ must be understood to incorporate its counterpart, the minimisation of what is bad, and, clearly, harm is bad. Right action for the consequentialist cannot entail, for instance, making some happy while harming many more. So while the question of the justification of acts that harm is separate from judgements about moral wrongdoing on non-consequentialist systems such as Common Morality, it is part and parcel of consequentialist judgements of right and wrong action. I have argued elsewhere that in the situations I am concerned with, namely making judgements about weapons research, act and especially rule consequentialists will address the same sorts of issues when judging the rightness and wrongness of such activities as I do when appealing to Common Morality to consider whether these activities are justified.[vii]


Now consider


2. It is morally wrong to provide the means to harm moral subjects, members of our species and other sentient creatures, without justification.


2 is more contentious. Suppose person P designs household items like knives and scissors and these are used, on occasion, to harm. Setting aside for the moment whether it is correct to say that a designer ‘provides’ the artefact she designs, it seems that we should not class P’s actions as wrong because of such deviant or unintended use. P expected the artefacts she designs to be used for mundane household tasks, not for harming innocents. I agree with this response, so I would not assert 2 without qualification. My qualification is that what is provided is intended to be the means to harm, to be a weapon.[viii] Whatever else can be said about 2, for instance how it might figure in discussions of dual-use items, I believe it applies to weapons research.


In that case one must accept:


3. The primary purpose of a weapon is to harm and (hence) weapons are the means to harm.


The notion that artefacts can only be used in ways that their creators intend when they design and make them is not generally accepted – Don Ihde calls it the “Designer Fallacy” and gives many examples of things that have had quite different, sometimes entirely unanticipated, uses from what their designers had in mind (Ihde 2009). I agree with much of what Ihde and others have to say on this subject, but I think there has been some confusion about, or lack of reflection on, what counts as a use, and for this reason I prefer to talk about purposes. What I understand by the primary purpose of an artefact is what it is intended to do. It may seem that this is just what Ihde and others deny, namely that artefacts can have anything like a ‘primary’ purpose. It will help to explain what I have in mind if I use one of his favourite examples, the typewriter. Ihde tells us that this was originally designed to enable unsighted persons to write, but of course it has come to be used by sighted people as well. On my account, however, the primary purpose of a typewriter is to produce symbols, such as letters, on paper. That is what the typewriter mechanism was designed to achieve. That it then enables unsighted persons to write is what I call a derivative purpose, something for which the primary purpose of the artefact is necessary but not conversely (Forge 2012: 142-47). Typically, derivative purposes are contextual: a function of the context, situation or occasion of the use of the artefact. A good example of this is deterrence. That one country feels it needs to deter another is something that occurs at some time and in some place – it is what we might call an historical state of affairs – but circumstances can change and the need for deterrence may pass. So for a weapon to be used for the ends of deterrence, certain (contingent) matters of fact need to obtain. However, weapons can only be used for deterrence, or for defence or making threats or to coerce, if they can be used to harm. But the converse is not true, which is why all of these other purposes weapons can be used for are derivative.


Going back to the previous example, I would say that the primary purpose of the household scissors is to enable all manner of domestic cutting up, and that their use as a weapon is derivative. Following on from this, my view is that designers are committed to the primary purpose of the artefacts they create, in the sense that they have responsibility for those instances in which the primary purpose is realised, but not necessarily so for derivative purposes, as these depend in addition on further considerations and the designer may not be able to anticipate these. I claim that the primary purpose of a weapon is to harm: this is what weapons do, and when they are designed, they are designed in such a way as to be an efficient means to carry out this function. For example, when a projectile weapon is designed, its designer (and her team of experts) works out how to send the projectile as accurately as possible to its target, given the constraints specified by the brief she has been given. What she is doing could not be anything but undertaking research into the means to harm – we have seen that the use of weapons for defence and deterrence presupposes that they are the means to harm. Indeed, in order to defend, weapons must be capable of harming.


The standard justification for weapons research states that defence justifies all forms of weapons procurement, including weapons research. That defence is a derivative purpose of a weapon, combined with one other aspect of weapons research that I shall come to in a moment, is crucial for the case against weapons research. However, it is necessary first to address a response to the effect that even if it is accepted that weapons are the means to harm in the sense that this is what they really ‘are’, nevertheless some of these are defensive weapons, weapons that can only be used for defence. For that class of weapons, it would be necessary to qualify the primary purpose to something like “means to harm, but only in defence”. I deny that there are any weapons which are defensive in the requisite sense, and it is important to be quite clear on what this sense is. Suppose w is a weapon that is very well-suited to defend an asset; perhaps that is the only purpose w could fulfil. But this is not a defensive weapon in the requisite sense, for what one is looking for is a class of weapons that cannot be used to aid any type of aggression. Those who embark on aggressive wars need to defend their assets, both at home and in the field, and hence need weapons that are suitable for this purpose.[ix] The requisite sense is therefore that of what one may call an inherently defensive weapon, one that cannot aid an aggressive war in any way. I claim that there can be no such weapon, except in science fiction. For example, suppose there were a weapons system that was totally effective, in that it defended a country from any form of attack, but which was only activated when the borders of the country were crossed by attackers. A state which had such a weapon would be able to attack others without fear of retaliation, and in this sense it would really be the ultimate offensive weapon.[x] It is only if every single state in the world had such a weapon that there could be no aggression. But this is science fiction for three reasons: there can (surely) be no such weapon; if there were such a weapon, not everyone would be able to build it; and if everyone could build it, there would be so much international upheaval as states tried to frustrate others’ programmes that they would never be built.


Premises 2 and 3 imply that we should judge weapons research to be morally wrong and hold to that judgement unless and until there is adequate justification. My aim in formulating the premises was (of course) to arrive at this conclusion. As I have said already, it is only really possible here to explain briefly what they amount to; for an adequate discussion and defence of them, I can only offer the reader further references. Recall that the case against weapons research divides into two parts: first it is necessary to show that we can make a preliminary judgement to the effect that the activity is morally wrong, and in the second place it is necessary to show that that judgement can be maintained in the face of attempted justifications. I now need to outline this second part of the case, and to this end I put forward four more propositions. A quick comment before I proceed: when I claim that there is no justification for weapons research, I mean that there is no adequate or acceptable justification, that there is nothing that would persuade a reasonable person to revoke the judgement that weapons research is morally wrong. Of course, all manner of attempted justifications are possible, and those with vested interests in weapons research will no doubt be inclined to accept one of them. I claim, however, that an unbiased person will not.


The other aspect of weapons research which I mentioned above concerns the nature of its product, namely


4. Weapons research produces knowledge, in the form of designs for weapons, and as such is to be sharply distinguished from weapons manufacture.


Weapons research has this in common with all other forms of research, including basic or ‘pure’ research. There are different kinds of design, depending on how finely specified the instructions are for making the artefact in question. For example, what is known as an engineering specification is a set of instructions which gives precise and minute details for all the relevant parameters needed to realise the artefact. Recently, there has been a qualitative change in the relationship between design and production, with the advent of computer-driven three-dimensional printers which can be programmed to print artefacts directly; it is no surprise (to me at any rate) that some of the first examples have involved the ‘printing’ of plastic guns. At the other end of the scale, one might include under this heading much less detailed specifications, which require more work before they can be realised. For example, Szilard had the idea of a chain reaction in a fissile substance being used in a bomb of terrible power before much of the background physics had been done. This is perhaps not so much a design itself as an idea for a design, one that became progressively articulated up until 1945, when it was realised. There is even a sense in which an artefact itself can represent its own design. An artefact that has been bought, stolen or captured could be reverse-engineered by experts to work out how it was made, and subsequent copies produced. To summarise, we can say this: a design is whatever information allows the ‘production unit’ to produce the artefact in question (Forge 2012: 17-24). What counts as a design is therefore contingent on there being a production unit available with the requisite skills and materials.


It follows from 4 that


5. Unlike the artefacts that they enable, designs do not decay, wear out and so become useless.


Because they are items of knowledge, designs are, if not immortal, potentially very long-lived.[xi] Nowadays, storage and transmission of all forms of data is extremely easy, and hence designs can be readily copied and shared. It is therefore much less likely than it once was that designs will be lost. What this means is that the relationship of designer to design, and hence to the artefacts that the design realises, will not be such that the designer can exert a great deal of control or influence – unless she never releases her design and tells no one about it. This would be true even if the designer has a patent (which will not normally be the case), because patent rights lapse, because of theft, and for other reasons. Indeed, the norm will be that she works for others, often a state-owned enterprise, which holds the rights to the design. The most celebrated case of designers losing control of their work is once again the atomic bomb. Once they had enrolled in the Manhattan Project, Szilard and his colleagues were eventually only able to offer advice on how their invention might be used; they had no actual say in the matter.[xii]


Some of the Manhattan scientists agreed with the atomic bombing of Japan, so the fact that a designer may have no control over the products of her work does not mean that she will always disagree with how they are used. And it would be most surprising if such disagreement were common in run-of-the-mill civilian design and R&D. However, weapons research is unique in that it produces the means to harm, so if someone engages in weapons research and the products of her work are used to harm people that she thinks should not be harmed, then she has contributed to something that she believes to be wrong, indeed believes to be morally wrong. This of course was why many of the Manhattan scientists were troubled after Hiroshima and Nagasaki: they did not believe atomic bombs should have been dropped on civilians. Not everyone is moved by moral concerns; some dismiss such things as irrelevant in the ‘real world’ or are simply too selfish and egotistical to care. I have nothing to say here, and little to say elsewhere, about why one should be moral.[xiii] But I am assuming that to say that (one believes that) something is morally wrong is a very good reason not to do it, and to find out that one has done something that is morally wrong may place a burden on one’s conscience, as it did for Szilard, Oppenheimer, Franck, Rotblat and many others who worked on the atomic bomb project. However, there are examples of weapons researchers who have designed weapons in situations where they could anticipate and completely endorse the ways in which their weapons would be used, and it seems that they were quite justified in these expectations and beliefs (unlike the Manhattan Project scientists who endorsed the atomic bombing of Japan). For instance, the improvements to the designs of the T-34 tank and the Yakovlev fighters in World War Two (WW2) helped the Red Army defeat the Wehrmacht. This was a just war on the part of the Soviet Union, and their weapons researchers helped them win it.


Weapons research intended to produce new systems, and not (marginal) improvements of existing systems, takes a lot of time. For instance, while very minor improvements were made to the T-34 between the German attack on the Soviet Union in midsummer 1941 and the Battle of Stalingrad – such as a new hatch and rails for ‘tank riders’ – it took another two years to upgrade to an 85mm gun. In contrast, the Kalashnikov assault rifle was not ready until 1947, hence its name, even though work on it began in 1941. The AK-47 is the most widely produced and widely used weapon of all time, but it was never used for the purpose envisaged by its creator, namely to help repel the Germans in WW2 (Forge 2007b). Indeed, there are parallels here with the atomic bomb. Szilard, Fermi and others were concerned about a German atomic bomb, and their efforts to interest the US government in the weapon were motivated by what they saw as the need to acquire the means to deter any use of such a weapon by the Germans. So just as they did not expect to see the atomic bomb used against Japanese cities, Kalashnikov did not expect to see ‘his gun’ used by child soldiers in Africa. This leads to


6. Weapons designers cannot foresee (all) the uses to which the products of their work will be put.


I believe that 6 is well-supported by many other examples of weapons research.[xiv] Moreover, the fact that new weapons research often builds on work done in the past should also be acknowledged. I have mentioned improvements to existing systems, and this obviously presupposes that those systems already exist. But there are also weapons which are truly innovations but which are ‘based on’ existing systems. A thermonuclear warhead combines both nuclear fusion and nuclear fission, the latter being the physical principle by which atomic bombs function. Without the Manhattan Project, and comparable work done in the Soviet Union, Britain, France and elsewhere, there could be no thermonuclear systems. I understand 6 to cover both the direct effects of weapons research, the uses to which the weapons in question are put, and the indirect effects, the uses to which new weapons are put whose design is based on the work in question.


Weapons designers are by no means alone in not knowing all the particular ways in which their work will be used, as I have acknowledged. But weapons design stands out from all other forms of design and technology – it is unique – in that it provides the means to harm in the sense explained. No other form of endeavour does this. It follows from 6 that weapons designers cannot know about all the particular ways in which the products of their work will be used: they cannot know about all the actual harms that are caused by the weapons, their primary function, nor whether these are for defence or aggression, and they cannot know if the weapons will be used for deterrence, coercion, threatening, or any other derivative purposes. A weapons researcher therefore cannot satisfy the following demand:


7. In order to provide justification for a morally wrong action, it must be possible, at least in principle, for an agent who commits such an action to know about all the harms that the action typically gives rise to.


Is it reasonable to impose such a requirement?


The idea behind 7 is that if an agent does something that is morally wrong in the sense understood here, then the only possible justification is that at least comparable harms are prevented – or that a great deal of ‘positive’ benefit ensues (not a view I subscribe to).[xv] The harms must therefore be knowable for the agent to decide to go ahead and act. 7 should not be understood to demand that an agent must know all the harmful consequences of any action that she undertakes before she is permitted to act, because it is possible for even the most mundane and simple act to give rise to dreadful unforeseeable consequences in which many people are harmed. If we were to judge any such act morally wrong and condemn the agent, then this would not only be unfair, it would lead to a kind of paralysis. 7 should not be understood this way because, in the first place, it applies only to actions which are already judged to be morally wrong, and where the issue now is with justification. Even with this qualification, it may seem to demand too much. Suppose the agent believes that by causing a little harm she will prevent a lot of future harm, and therefore performs a given act. But she is wrong, and it turns out that more harm is done than prevented. It is possible that she did all she could to estimate the consequences of her act, but that through no fault of her own, things did not turn out as expected. Again it seems unfair to condemn her. But 7 includes the qualification “…all the harms that the action typically gives rise to.”
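
In schematic terms (the notation here is mine, offered for illustration, and not Gert's): let h(A) be the harm that a rule-violating act A typically gives rise to, and p(A) the harm that A can be expected to prevent. An adequate justification of A must establish something like

p(A) ≥ h(A) (the prevention of at least ‘comparable’ harm),

and 7 adds that this comparison is only available to an agent for whom h(A) is knowable, at least in principle.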


The harms which weapons research typically gives rise to are the harms that weapons cause: the killing, the destruction and so forth. No weapons researcher should be surprised when her work is used in these ways, because this is what weapons primarily do. That something which is designed to kill and destroy is used for this end is, on the contrary, to be expected. What cannot be anticipated are the particular occasions on which weapons are used: who is killed, what is destroyed, when and why these acts are carried out. So what cannot be foreseen is whether the particular uses to which a weapon is put, and here I include those in which it is used to deter and coerce, on the whole prevent more harm than would otherwise have occurred. But it is this demand that I claim must be satisfied before weapons research can be justified. 7 does not, I think, apply to many kinds of action; in fact it may only apply to weapons research. But for the reasons I have given, I believe it does so apply, and if I am correct, then we have:


            8. Weapons research is not only morally wrong, it is not morally justifiable.


Conclusion


What happens in wars is morally wrong: people are killed and otherwise harmed, their property destroyed and their livelihoods taken away. It is therefore necessary to justify war if one is to remain a moral agent. The same is true of weapons research. Weapons research gives rise to harms because it seeks to design the means to harm, and this is the first step in the process that leads to killing and destruction on the battlefield. It is not enough by way of justification to give what I have called the standard justification and say that the research is done in the interest of defence, just as it is not enough when justifying war to say that it is done in self-defence. As in the case of war, it is necessary to show that the harms caused by the weapons in question (and by those that comprise the next generations of systems) are balanced by harms prevented, avoided and reduced. But this is not possible. Weapons research is therefore morally wrong and cannot be justified.


References


Arigo, J. (2000) The Ethics of Weapons Research: A Framework for Discourse between Insiders and Outsiders. Journal of Power and Ethics, 1, 303-327.


Bunge, M. (1977) Towards a Technoethics. The Monist, 60 (1), 96-107.


Forge, J. (2004) The Morality of Weapons Research. Science and Engineering Ethics, 10, 531-542.


Forge, J. (2007a) What are the Moral Limits of Weapons Research? Philosophy in the Contemporary World, 14, 79-88.


Forge, J. (2007b) No Consolation for Kalashnikov. Philosophy Now, 6-8.


Forge, J. (2008) The Responsible Scientist: A Philosophical Analysis. Pittsburgh: University of Pittsburgh Press.


Forge, J. (2009) Proportionality, Just War Theory and Weapons Innovation. Science and Engineering Ethics, 15, 25-38.


Forge, J. (2010) A Note on the Definition of Dual-Use. Science and Engineering Ethics, 16, 111-117.


Forge, J. (2011) The Morality of Weapons Research. Wiley-Blackwell International Encyclopaedia of Ethics.


Forge, J. (2012) Designed to Kill: The Case Against Weapons Research. Dordrecht: Springer.


Forge, J. (2014) On the Morality of Weapons Research (under review).


Gert, B. (2004) Common Morality. Oxford: Oxford University Press.


Gert, B. (2005) Morality: Its Nature and Justification. Revised Edition. Oxford: Oxford University Press.


Ihde, D. (2009) The Designer Fallacy and Technological Imagination. In P. Vermaas et al. (eds) Philosophy and Design (pp. 51-60). Berlin: Springer.


Luppicini, R. (2008) The Emerging Field of Technoethics. In R. Luppicini and R. Adell (eds) Handbook of Research on Technoethics. www.igi-global.com/emerging-field-technoethics


Marsden, E. (1969) Greek and Roman Artillery: Historical Development. Oxford: Oxford University Press.


Resnick, D. (2013) Is Weapons Research Immoral? Metascience, 23 (1), 105-107.


Rihill, T. (2007) The Catapult. Yardley, Penn: Westholme.


Rhodes, R. (1986) The Making of the Atomic Bomb. Harmondsworth: Penguin.


Sinnott-Armstrong, W. (2002) Gert Contra Consequentialism. In W. Sinnott-Armstrong and R. Audi (eds) Rationality, Rules and Ideals. Lanham: Rowman and Littlefield.



[i] When I was asked to write on “Weapons Research and Development” for the Wiley-Blackwell International Encyclopaedia of Ethics, I was hard pressed to find references to anything other than my own work; see Forge 2011. In this paper the references will be mainly to my own work – for obvious reasons, given the topic.

[ii] For a general account of the bomb and the decision to bomb Japan, see Rhodes 1986. For my account of the responsibility for that decision, see Forge 2008, Chapter 2.

[iii] This way of expressing the moral judgement implies that some actions that are morally wrong may nevertheless be permissible under certain special conditions. Most modern systems of morality allow for this.

[iv] One might also claim that composite bows and bronze swords were the products of weapons research, but that would be an inference based on the weapons themselves, not on any independent evidence about their genesis.

[v] Common Morality has been construed as negative rule consequentialism, though not by Gert himself; see Sinnott-Armstrong 2002. It appears to be a system of defeasible duties, but Gert did not like this description either!

[vi] I need to refer the reader to Gert for more on this. Gert 2004 is a short introduction to his system, and well worth reading.

[vii] For instance, in Forge 2014, Chapter 3, I look in more detail than I do in Forge 2012 at the way in which a consequentialist moral system can be used to support the case against weapons research. I should add that I do not think that it is necessary to show that this is possible.

[viii] To anticipate a possible objection: I do not claim, and am not committed to the view, that the wrongfulness of an action is always a function of what the agent intends, nor do I think that agents are responsible for all and only their intended actions – Forge 2008 is a sustained defence of a wide view of responsibility in which agents are responsible for (much) more than their intended actions. Premise 2 refers to the special situation in which agents are designers, the providers of certain means. Intention is important here, once we have identified what the means are for – this qualification is critically important.

[ix] The Wehrmacht, notorious for its aggression, was nonetheless skilled at conducting defensive missions and operations. It is well known that after Operation Citadel failed in 1943, the Wehrmacht was continuously on the defensive in the East, though it is less well known that it was engaged in defence along most of the front for much of August 1941. Germany was able to hold out until May 1945 because of its defensive ability.

[x] The Soviets, for instance, interpreted the US SDI, ‘Star Wars’, as a highly aggressive move, whereas President Reagan famously wanted an ultimate defensive weapon (Forge 2012: 98-103).

[xi] This journal’s referee points out that designs, which he/she refers to as data, can become obsolete because the weapons themselves become obsolete: for instance, cyber weapons can become obsolete because the systems they are intended to attack are no longer in use or because effective counter-measures have been found. This is certainly true, and it is not something I deny: designs may no longer have any use after a time and may thereafter never be realised. Nor is this just the case for cyber weapons; one could mention the torsion catapults for which there were designs as early as the fourth century BCE. My point is that designs are knowledge, and their ‘status’ is thus different from that of the things they realise; it is not that they remain current and in use.

[xii] For my views on the Manhattan Project, including the responsibility for dropping the bombs, and for further references, see Forge 2008, Chapters 2 and 3, and Forge 2012, Chapter 5.

[xiii] Philosophers have struggled with this question for many centuries, and I do not think that anyone has come up with a good answer. For my own view, see Forge 2012: 111-13.

[xiv] I note that the T-34 tank and later generations of Soviet tanks were used to impose Soviet hegemony on Eastern Europe after the end of WW2, something that is clearly not in accordance with Just War Theory.

[xv] 7 also accords with Gert’s two-stage method for evaluating attempted justifications of moral wrongdoing (see Gert 2005, Chapter 6), although, as I have indicated, the proposition has been formulated with weapons research in mind.