DOSSIER: MILITARY TECHNOLOGY AND THE ARMED FORCES
By Alexander Leveringhaus
Abstract: The exponential expansion and advancement of wartime technology has the potential to wipe out ‘war’ as a meaningful category. Assuming that the creation of new wartime technologies continues to accelerate, it could soon be the case that there will no longer be wars, but rather mass killings, slaughters, or genocides. This is because the concept of ‘war’ entails that opposing sides either will, or are able to, fight back against one another to some recognizable degree. Indeed, this is one of the differences between war and wholesale killing, slaughter, or genocide. With the asymmetric proliferation of wartime technologies for killing and maiming, there may soon no longer be even the possibility of a fair, or somewhat fair, fight; there will only be scorched earth.
Abstract: As aerial weapons become more accurate and precise, they paradoxically expose civilians to greater harm. They make the use of military force feasible where previously it had not been. While these weapons are subject to legal review to certify that they are capable of being deployed in a discriminate manner, weapons review practice in the US and UK lends cursory approval to weapons that are as likely to harm civilians as enemy combatants. This article argues that a robust, contextualized review of a weapon’s effects on civilians and combatants is both legally required and in states’ strategic security interests.
Alessandro De Cesaris
Abstract: There is a wide-ranging debate concerning cyberwar and the new dangers of the Internet, but this debate too often focuses on practical issues, while the conceptual and, strictly speaking, “philosophical” dimension remains unquestioned. In this article, I try to show that a better understanding of what we mean when we speak about weapons, or at least of the new difficulties that digital technologies introduce in the field of military devices, can yield a better analysis of the risks and ethical issues connected to contemporary fighting. In particular, I argue that the so-called “digital turn” entails a blurring of the distinction between weapons and non-weapons, because in what I call our “hypermodern era” the criteria traditionally used to draw this distinction have become obsolete.
Abstract: In this article, I explore the (im)possibility of human control and question the presupposition that we can be adequately or meaningfully in moral control of AI-supported LAWS. Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.
Massimiliano L. Cappuccio, Jai C. Galliott & Eduardo B. Sandoval
Abstract: We spontaneously tend to project animacy and sensitivity onto inanimate objects, and we sometimes attribute distinctively human features like intelligence, goals, and reasons to certain artificial devices. This phenomenon is called “anthropomorphism” and has long been studied by researchers in human-robot interaction and social robotics. These studies are particularly important in light of recent developments in military technology, as autonomous systems controlled by AI are expected to play an ever greater role in the future of warfare. Anthropomorphic effects can play a critical role in tactical operations involving hybrid human-robot teams, where service members and autonomous agents need to coordinate quickly, relying almost exclusively on fast, cognitively parsimonious, natural forms of communication. These forms of communication depend importantly on anthropomorphism to allow human soldiers to read the behavior of machines in terms of goals and intentions. Understanding the cognitive mechanisms that underpin anthropomorphic attributions is hence potentially crucial to increasing the accuracy and efficacy of human-machine interaction in military operations. However, this question is largely philosophical, as numerous models compete within social cognition theory to explain behavior reading and mental-state attribution. This paper offers an initial exploration of these mechanisms from the perspective of philosophical psychology and cognitive philosophy, reviewing the theories in social cognition that are most promising for explaining anthropomorphism and predicting how it can enable and improve natural communication between soldiers and autonomous military technologies.
Abstract: Focusing on existing ‘autonomous’ weapons systems and their uses replaces speculation about future developments, and about what robots will or will not be able to do, with attention to the ways these weapons are changing and have already changed warfare. The aspects of these transformations that interest me in this paper are some of the political, organizational, and social consequences of the introduction and deployment of various automatic and autonomous weapons systems. Beyond the questions of responsibility and legality, I want to look at the ways in which these weapons change countries’ ability to project power, and at how they affect the composition of armed forces, the power relationships within them, and their relations with other major political actors.
Abstract: This paper critically examines the implications of technology for the ethics of intervention and vice versa, especially regarding (but not limited to) the concept of military humanitarian intervention (MHI). To do so, it uses two recent pro-interventionist proposals as lenses through which to analyse the relationship between interventionism and technology. These are A. Altman and C.H. Wellman’s argument for the assassination of tyrannical leaders, and C. Fabre’s case for foreign electoral subversion. Existing and emerging technologies, the paper contends, play an important role in realising these proposals. This illustrates the potential of technology to facilitate interventionist practices that transcend the traditional concept of MHI, with its reliance on kinetic force and large-scale military operations. The question, of course, is whether this is normatively desirable. Here, the paper takes a critical view. While there is no knockdown argument against either assassination or electoral subversion for humanitarian purposes, both approaches face similar challenges, most notably regarding public accountability, effectiveness, and appropriate regulatory frameworks. The paper concludes by making alternative suggestions for how technology can be utilised to improve the protection of human rights. Overall, the paper shows that an engagement with technology is fruitful and necessary for the ethics of intervention.
Abstract: Beginning with a brief outline of the ethical contradictions inherent in nuclear deterrence, this paper highlights the flaws of commonly acknowledged theories regarding the efficacy of nuclear threats. The paper concludes that a theory of “existential deterrence” is the only way to partially safeguard the rationality of nuclear deterrence. The backbone of this contention is a metaphysics of time according to which the actual and the potential coincide, and future events necessarily occur. Within that framework, nuclear deterrence appears to be an ethical abomination.
Jai Galliott & John Forge
Abstract: In this philosophical debate on the ethics of developing AI for Lethal Autonomous Weapons, Jai Galliott argues that a “blanket prohibition on ‘AI in weapons,’ or participation in the design and engineering of artificially intelligent weapons, would have unintended consequences due to its lack of nuance.” In contrast to Galliott, John Forge contends that “the only course of action for a moral person is not to engage in weapons research.”