Mark Legg

Soldiers Protesting War: Another Reason Not to Use Killer Robots

The following is a short philosophy essay I wrote this semester for the Ethics of AI course. It is an example of applied philosophy, which means it's very specific, niche, and readily applicable to public policy or imminent real-world problems. If you're curious about what kind of philosophy I'm doing at the University of Edinburgh, here's the short essay that earned my highest mark (grade) so far.


I don't expect many to brave it, and it's not very inspiring, but it seems interesting enough to post. Especially if you're curious about what doing modern, analytic philosophy entails.





“Let anyone who reflects with sorrow upon these evils so great, so horrid, and so savage, confess that he is miserable. [Whoever permits] these things without sorrow in mind is certainly much more miserable since he thinks himself happy because he has lost human feeling.”

– Saint Augustine[1] 


In rare cases, a soldier might spare an enemy's life. Moved by empathy, conscience, or some other moral faculty, he may not pull the trigger in battle even if he is theoretically justified in doing so. In a similar fashion, I will argue that soldiers can resist unjust wars, even if they have no theoretical obligation to do so. Compare this to lethal autonomous weapons systems (LAWS), also called "killer robots." They do not possess moral faculties. One result is that LAWS will never spare an enemy who meets their criteria for killing: they always pull the trigger (Leveringhaus 2018; Sharkey 2010). Likewise, LAWS do not possess the ability to resist entering unjust wars. LAWS may follow some set of jus in bello restrictions, like only killing combatants, but they cannot make jus ad bellum judgments. Yet those who fight a war should be able to make moral judgments about that war. So, this provides a militarily virtuous reason to severely limit or ban LAWS.


To demonstrate this, I’ll first cover the relevant principles of just war theory and contrast LAWS with soldiers. Second, I’ll present the central argument that LAWS remove a desirable barrier to unjust wars because they cannot make jus ad bellum judgments, giving two versions of my conclusion. Finally, I’ll respond to an objection by clarifying and limiting the scope of the essay.  


Just war theory


Just war theory separates war into two ethical realms. Jus ad bellum refers to the proper justification for initiating war. Jus in bello refers to just action in combat, regardless of the war's rightness or wrongness (Robinson 2017; Walzer 1977).[2] A soldier who fights in an unjust war but follows the rules of just combat can rest assured that he will not face criminal charges after the war. He is not obligated to abstain from fighting in an unjust war, but he is obligated to fight justly.


However, morality transcends legal obligation.[3] A soldier can sometimes judge jus ad bellum differently from, and more correctly than, his country. For example, he can judge that a war fought for ethnic cleansing is wrong. While we may not hold a soldier criminally responsible for fighting in such an unjust war (as long as he follows jus in bello rules), we should hold him morally accountable, and he will need to answer to his conscience. So, even though soldiers are not legally obligated to make jus ad bellum judgments, in some cases they can and should.


LAWS, soldiers, and jus ad bellum


Some LAWS can make “human out of the loop” decisions to kill, meaning they require no human command or oversight to kill someone.[4] This paper will focus on these “human out of the loop” LAWS. They learn algorithmically during training and act autonomously in combat (Roff 2015). LAWS make risk assessments and distinguish between combatants and non-combatants. Programmers encode jus in bello principles, such as killing combatants but not non-combatants, as rewards and punishments, as the sketch below illustrates. LAWS do not, however, possess moral faculties: even when programmed to follow jus in bello principles, they do not make jus ad bellum judgments.
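
To make that claim concrete, here is a toy sketch of what encoding a jus in bello rule "as rewards and punishments" might look like in a reinforcement-learning-style training signal. It is purely illustrative and entirely my own construction; the function name and numeric values are invented, not drawn from any real weapons system.

```python
# Toy reward function: a hypothetical illustration of encoding the jus in
# bello rule "kill combatants, not non-combatants" as rewards and punishments.

def engagement_reward(target_is_combatant: bool, fired: bool) -> float:
    """Return the training reward for a single engagement decision."""
    if not fired:
        return 0.0      # holding fire is neutral
    if target_is_combatant:
        return 1.0      # engaging a combatant is rewarded
    return -100.0       # harming a non-combatant is heavily punished

# Example: the learner is punished for firing on a non-combatant.
print(engagement_reward(target_is_combatant=False, fired=True))  # -100.0
```

Notice what is absent: nothing in this objective asks whether the war itself is just. Jus ad bellum never enters the training signal, which is exactly the gap this essay is concerned with.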

I established above that soldiers can make jus ad bellum judgments. Of course, states train soldiers to unthinkingly obey a rigid chain of command. Because of the violent, visceral nature of war, it benefits an army to have soldiers trained to act in combat without rational or conscious reflection on the morality of that combat. However, soldiers can conscientiously reflect on a war's justness outside the heat of battle. So, LAWS cannot make jus ad bellum judgments, while soldiers can.


The argument


Here, the argument begins in earnest. A state's access to LAWS removes a barrier to entering unjust wars. Even if they are not legally obligated to, soldiers can help prevent or undermine unjust wars. For example, they may publicly resist the war before it begins, surrender quickly during battle, or politically protest when they return home. States wanting to wage a prima facie unjust, unconscionable war face a hindrance in the moral conscience of their own army. Will their soldiers fight for the cause? Will they fight vigorously or half-heartedly? LAWS are predictable and certain. The minds of soldiers are not.


Here's the argument put analytically. If human soldiers can make jus ad bellum judgments, then they can resist their state's unjust wars. Human soldiers can make jus ad bellum judgments, so states wanting to wage an unjust war may face resistance from their soldiers. LAWS cannot make jus ad bellum judgments and so provide no resistance to unjust wars. Therefore, states using LAWS will face less resistance to entering unjust wars. A formal sketch of this inference follows.
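
For readers who like their analytic philosophy fully explicit, the core inference can be checked mechanically. The following is a minimal propositional sketch in Lean; the formalization and the letter names (J, R, L) are my own, not the essay's.

```lean
-- J: human soldiers can make jus ad bellum judgments
-- R: human soldiers can resist their state's unjust wars
-- L: LAWS can resist their state's unjust wars
example (J R L : Prop)
    (p1 : J → R)   -- judging jus ad bellum enables resistance
    (p2 : J)       -- soldiers can make such judgments
    (p3 : ¬L)      -- LAWS cannot resist unjust wars
    : R ∧ ¬L :=    -- so: soldiers can resist, while LAWS cannot
  ⟨p1 p2, p3⟩
```

The further, comparative claim (that a state replacing soldiers with LAWS faces less resistance overall) is an empirical premise layered on top of this valid core, and it is precisely where the objection below presses.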

Consider that Region B of Nation Y wants to secede and start a civil war. Say this violates the jus ad bellum principles of a reasonable chance of success and legitimate authority. Region B's secessionist faction must ask itself: Will soldiers drawn from Region B fight for a cause that violates jus ad bellum principles? Or will they defect and fight for Nation Y instead? In contrast, with an army of LAWS, Region B may require much less support from human soldiers. They just need enough money.


Or suppose a state “goes to war” with a tribe of people with limited technology in order to take valuable resources from their ancestral land.[5] Say the mission requires the soldiers to kill the men and boys of the tribe, threaten the rest into moving off the land, and then hide and kill themselves to keep the operation secret. This seems like an unlikely strategic option, partly because the soldiers the state would need may find initiating the “war” immoral. However, if the state uses LAWS, it faces less difficulty. The LAWS can wipe out the men and boys of the village because they identify them as combatants. The LAWS then hide and self-destruct, keeping the operation a secret.


In both examples, replacing some or all soldiers with LAWS reduces resistance to initiating evil wars. So, we have this conclusion:


(1) If those who fight cannot make moral judgments about the wars they fight in, more unjust wars will be waged.


If we accept this kind of accountability as militarily virtuous (morally good even in the context of war), we also have this conclusion:


(2) Replacing soldiers with LAWS removes a virtuous form of accountability for states wanting to initiate war, especially accountability for the most unconscionable wars.


An objection


The strongest objection to my conclusion is that soldiers rarely make jus ad bellum judgments, or if they do, don't follow through with them, and even when they do follow through, their actions aren't typically enough to make a meaningful difference. Soldiers are already trained to obey orders uncritically. The idea of “a few good men” standing up to an evil state and preventing an unjust war may seem naïve and unfounded. In any case, it is probably not enough to override the positive reasons for using LAWS (Müller 2016). Trauma, suffering, and death are commonplace in war, and mitigating them with LAWS may far outweigh the rare possibility of preventing an unjust war or less just battles. To respond to this objection, I'll limit my argument's scope and conclusion.


Limiting my conclusion


The above objection claims my argument is contingently unsuccessful, attacking conclusion (1). That is, my argument is less cogent because it's unlikely to affect the outcomes of wars. I agree that it is hard to know whether replacing some set of soldiers with LAWS would ultimately change the number of unjust wars, especially since LAWS are usually used in conjunction with human soldiers. However, my argument still stands for two reasons.


First, conclusion (1) shows why we should refrain from developing more autonomous LAWS. The more soldiers LAWS replace, the stronger my argument becomes contingently, as the barrier to entering unjust wars gets lower and lower. With more autonomous LAWS, and more LAWS altogether, fewer soldiers will be needed to start a war, so fewer moral agents will be present to resist or object to the war's unjustness. In this way, conclusion (1) says we should slow this progression.


Second, I think my argument's intrinsic version, conclusion (2), is immune to this attack. It's intrinsic because it goes through regardless of whether the number of unjust wars would actually change. So, even if using LAWS probably won't change the number of unjust wars, we should still ban or at least severely limit them. Amoral autonomous killing machines seem unethical to use in war because they cannot refuse to enter the war. They have no moral skin in the game, and this seems wrong. I don't think my point constitutes an absolute conclusion; there could be other, overriding intrinsic reasons for using LAWS.[6] Instead, conclusion (2) adds weight to the chorus of voices condemning their use on grounds of military virtue (Roff 2015).


So, this paper puts forward a limited but meaningful contribution. One good reason for not using LAWS is that they do not make moral judgments about wars, even though they act autonomously. This removes an element of humanity important to war: the possibility of soldiers resisting evil wars waged by their state. If we turn war-making over to LAWS, we make wars less virtuous, and we risk removing an ethically significant barrier to unjust wars—the soldiers themselves.



Footnotes


[1] (Augustine 1994, 149)

[2] Walzer called jus ad bellum and jus in bello “logically independent” (1977, 21). We want to keep this distinction because a soldier would likely break more jus in bello rules if he knew he would face punishment regardless of his actions (since his enemies think his war is an unjust one anyway). So, this distinction gives soldiers motivation to follow jus in bello rules (Frowe 2015, 107).

[3] Moral judgments can both go above and beyond what’s required of the law or necessitate an action contrary to the law.

[4] See the Congressional Research Service report, “Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” https://crsreports.congress.gov/product/pdf/IF/IF11150

[5] This violates nearly every jus ad bellum restriction: just cause, proportionality, right intention, last resort, and the public declaration of war (Frowe 2015, 52).

[6] For example, soldiers bear trauma from killing even when it’s justified. Diminishing that trauma by using LAWS seems intrinsically good.


Bibliography


Augustine. 1994. Augustine: Political Writings. Translated by Michael W. Tkacz and Douglas Kries. Hackett Publishing.


Frowe, Helen. 2015. The Ethics of War and Peace: An Introduction. 2nd ed. London: Routledge. https://doi.org/10.4324/9781315671598.


Leveringhaus, Alex. 2018. “What’s so Bad About Killer Robots?” Journal of Applied Philosophy 35 (2): 341–58. https://doi.org/10.1111/japp.12200.


Müller, Vincent C. 2016. “Autonomous Killer Robots Are Probably Good News.” In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, edited by Ezio Di Nucci and Filippo Santoni de Sio, 67–81. Ashgate. https://philarchive.org/rec/MLLAKR.


Robinson, Paul. 2017. Just War in Comparative Perspective. Routledge.


Roff, Heather. 2015. “Lethal Autonomous Weapons and Jus Ad Bellum Proportionality.” Case Western Reserve Journal of International Law 47 (1): 37.


Sharkey, Noel. 2010. “Saying ‘No!’ To Lethal Autonomous Targeting.” Journal of Military Ethics 9 (4): 369–83. https://doi.org/10.1080/15027570.2010.537903.

Walzer, Michael. 1977. Just and Unjust Wars. Basic Books.
