PJCV 5/1 - Military Technology and the Armed Forces
Philosophical Journal of Conflict and Violence
ISSN 2559-9798
Editor-in-Chief: Andreas Wilmes
Guest Editor: Alexander Leveringhaus
Vol. V (Issue 1/2021, May)
Pages 1-148
DOI: 10.22618/TP.PJCV.20215.1
This issue is available in open access.
DOSSIER: MILITARY TECHNOLOGY AND THE ARMED FORCES
Foreword
By Alexander Leveringhaus
Not Even Close to a (Fair) Fight: Technology and the Future of War
Jennifer Kling
Abstract: The exponential expansion and advancement of wartime technology have the potential to wipe out ‘war’ as a meaningful category. Assuming that the creation of new wartime technologies continues to accelerate, it could soon be the case that there will no longer be wars, but rather mass killings, slaughters, or genocides. This is because the concept of ‘war’ entails that opposing sides either will, or are able to, fight back against one another to some recognizable degree. Indeed, this is one of the differences between war and wholesale killing, slaughter, or genocide. With the asymmetric proliferation of wartime technologies for killing and maiming, there may soon no longer be even the possibility of a fair, or somewhat fair, fight; there will only be scorched earth.
The Paradox of Precision and the Weapons Review Regime
Joshua Andresen
Abstract: As aerial weapons become more accurate and precise, they paradoxically expose civilians to greater harm. They make the use of military force feasible where previously it had not been. While these weapons are subject to legal review to certify that they are capable of being deployed in a discriminate manner, weapons review practice in the US and UK lends cursory approval to weapons that are as likely to harm civilians as enemy combatants. This article argues that a robust, contextualized review of weapons’ effects on civilians and combatants is both legally required and in states’ strategic security interests.
What is a Digital Weapon? Towards a Functional Approach to Hypermodern Warfare Media
Alessandro De Cesaris
Abstract: There is a wide debate concerning cyberwar and the new dangers of the Internet, but this debate too often focuses on practical issues, while the conceptual and strictly “philosophical” dimension remains unquestioned. In this article, I will try to show that a better understanding of what we mean when we speak about weapons, or at least of the new difficulties that digital technologies introduce in the field of military devices, can help us provide a better analysis of the risks and ethical issues connected to contemporary fighting. In particular, I will argue that the so-called “digital turn” entails a blurring of the distinction between weapons and non-weapons, because in what I will call our “hypermodern era” the criteria we have traditionally used to make this distinction have become obsolete.
Autonomous Weapons Systems, Artificial Intelligence, and the Problem of Meaningful Human Control
Elke Schwarz
Abstract: In this article, I explore the (im)possibility of human control and question the presupposition that we can exercise morally adequate or meaningful control over AI-supported LAWS. Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.
Mapping Meaning and Purpose in Human-Robot Teams: Anthropomorphic Agents in Military Operations
Massimiliano L. Cappuccio, Jai C. Galliott & Eduardo B. Sandoval
Abstract: We spontaneously tend to project animacy and sensitivity onto inanimate objects, and we sometimes attribute distinctively human features like intelligence, goals, and reasons to certain artificial devices. This phenomenon is called “anthropomorphism” and has long been studied by researchers in human-robot interaction and social robotics. These studies are particularly important from the perspective of recent developments in military technology, as autonomous systems controlled by AI are expected to play a greater and greater role in the future of warfare. Anthropomorphistic effects can play a critical role in tactical operations involving hybrid human-robot teams, where service members and autonomous agents need to coordinate quickly, relying almost exclusively on fast, cognitively parsimonious, natural forms of communication. These forms depend importantly on anthropomorphism, which allows human soldiers to read the behavior of machines in terms of goals and intentions. Understanding the cognitive mechanisms that underpin anthropomorphistic attributions is hence potentially crucial for increasing the accuracy and efficacy of human-machine interaction in military operations. However, this question is largely philosophical, as numerous models compete in the space of social cognition theory to explain behavior reading and mental-state attribution. This paper offers an initial exploration of these mechanisms from the perspective of philosophical psychology and cognitive philosophy, reviewing the theories in social cognition that are most promising for explaining anthropomorphism and for predicting how it can enable and improve natural communication between soldiers and autonomous military technologies.
Lethal Autonomous Weapons Systems: Organizational and Political Consequences
Paul Dumouchel
Abstract: Focusing on existing ‘autonomous’ weapons systems and their uses replaces speculations about future developments, and about what robots will or will not be able to do, with attention to the way these weapons are changing and have already changed warfare. The aspects of these transformations that interest me in this paper are some of the political, organizational, and social consequences of the introduction and deployment of various automatic and autonomous weapons systems. Beyond the questions of responsibility and legality, I want to look at the ways in which these weapons change countries’ ability to project power, how they affect the composition of armed forces and the power relationships within them, and how they alter the armed forces’ relations with other major political actors.
Beyond Military Humanitarian Intervention: From Assassination to Election Hacking?
Alexander Leveringhaus
Abstract: This paper critically examines the implications of technology for the ethics of intervention and vice versa, especially regarding (but not limited to) the concept of military humanitarian intervention (MHI). To do so, it uses two recent pro-interventionist proposals as lenses through which to analyse the relationship between interventionism and technology. These are A. Altman and C.H. Wellman’s argument for the assassination of tyrannical leaders, and C. Fabre’s case for foreign electoral subversion. Existing and emerging technologies, the paper contends, play an important role in realising these proposals. This illustrates the potential of technology to facilitate interventionist practices that transcend the traditional concept of MHI, with its reliance on kinetic force and large-scale military operations. The question, of course, is whether this is normatively desirable. Here, the paper takes a critical view. While there is no knockdown argument against either assassination or electoral subversion for humanitarian purposes, both approaches face similar challenges, most notably regarding public accountability, effectiveness, and appropriate regulatory frameworks. The paper concludes by making alternative suggestions for how technology can be utilised to improve the protection of human rights. Overall, the paper shows that an engagement with technology is fruitful and necessary for the ethics of intervention.
On the Rationality and Ethics of Nuclear Deterrence
Jean-Pierre Dupuy
Abstract: Beginning with a brief outline of the ethical contradictions inherent to nuclear deterrence, this paper highlights the flaws of commonly acknowledged theories regarding the efficiency of nuclear threats. The paper concludes that a theory of “existential deterrence” is the only way to somewhat safeguard the rationality of nuclear deterrence. The backbone of this contention is a metaphysics of time according to which the actual and the potential coincide, and future events necessarily occur. In that framework, nuclear deterrence appears to be an ethical abomination.
Debate on the Ethics of Developing AI for Lethal Autonomous Weapons
Jai Galliott & John Forge
Abstract: In this philosophical debate on the ethics of developing AI for Lethal Autonomous Weapons, Jai Galliott argues that a “blanket prohibition on ‘AI in weapons,’ or participation in the design and engineering of artificially intelligent weapons, would have unintended consequences due to its lack of nuance.” In contrast to Galliott, John Forge contends that “the only course of action for a moral person is not to engage in weapons research.”
Alex Leveringhaus, PhD (LSE), is Lecturer in Political Theory at the University of Surrey, UK, where he co-directs the Centre for International Intervention (cii). He is also an affiliate of the Surrey Centre for Law and Philosophy (SCLP), based in the Surrey School of Law. Working primarily within analytical political philosophy, he researches contemporary just war theory and the ethics of armed conflict. He is particularly interested in military intervention, as well as the ethical ramifications of (emerging) technologies for armed conflict. He is the author of Ethics and Autonomous Weapons (Palgrave, 2016).