
The Ethics of Military AI: Responsibility, Control and Moral Judgement

Participating journal: Ethics and Information Technology

CfP: Special Issue "The Ethics of Military AI: Responsibility, Control and Moral Judgement"

Deadline for submission of papers (4,000-6,000 words): 15 December 2024

Envisaged Online First publication: 15 February 2025

Deadline for short abstracts (max. 300 words): 15 October 2024, to be sent to Managing Editor Jonne Maas: j.j.c.maas@tudelft.nl

On 5 October 2023, the Secretary-General of the United Nations, António Guterres, and the President of the International Committee of the Red Cross, Mirjana Spoljaric Egger, called on political leaders to agree, before the end of 2026, on international rules for the development and use of autonomous weapon systems. They consider it "an urgent humanitarian priority" to achieve global clarity and consensus on how international law and principles of ethical acceptability apply to autonomous weapon systems and on how their deployment ought to be constrained.

There seems to be a growing consensus internationally that advanced military AI applications involved in the use of force must be and must remain under meaningful human control. In the words of the UN Secretary-General and the President of the ICRC: "We must act now to preserve human control over the use of force. Human control must be retained in life and death decisions. The autonomous targeting of humans by machines is a moral line that we must not cross. Machines with the power and discretion to take lives without human involvement should be prohibited by international law."

A growing number of high contracting parties echo this call. They have endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched at the military AI conference REAIM 2023 in The Hague, which serves as a point of departure for REAIM 2024 in Seoul and future REAIM conferences. The declaration states, among other things, that the military use of AI can and should be ethical and responsible, enhance international security, and comply with applicable international law. In particular, the use of AI in armed conflict must be in accord with States' obligations under international humanitarian law, including its fundamental principles. Military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control. A principled approach to the military use of AI should include careful consideration of risks and benefits, and should also minimize unintended bias and accidents. States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous functions and systems. These measures should be implemented at relevant stages throughout the life cycle of military AI capabilities.

In their 2023 call to action, Guterres and Spoljaric suggest in a pragmatic spirit that we could start by setting limits on where, when and for how long autonomous weapons are used, on the types of targets they strike and the scale of force used, and by ensuring effective human supervision and the possibility of timely intervention and deactivation. In a similar vein, NATO published a revised Responsible AI strategy in July 2024 that also calls attention to the importance of operationalising principles of responsible military AI in terms of standards, requirements and use cases. These more focused proposals and specific constraints will of course have to be assessed for their robustness and effectiveness, and will have to be justified in conversations and negotiations among parties who may hold different views on the morality and legality of lethal autonomous weapons, and also vis-à-vis the wider application of AI in the military domain beyond AI-assisted weapon systems. The tenuous lines between militarizing and weaponizing AI also need attention and call for globally agreed rules of the road and norms of behaviour.

Detailed conceptual analysis and empirical studies are therefore needed at this stage to shed light on whether the demands of just war principles, military ethics, the law on the use of force, international humanitarian law, and human rights law can be effectively and demonstrably respected and implemented, given the nature and state of play of AI technology at a given time. Our ethical and legal considerations, values, principles and norms require the world to be a certain way. Our valuing of what matters from a moral point of view is 'design consequential': our serious value commitments imply that we care about, and (pro)actively create and seize, opportunities to realize our values. For military AI this implies, in any case, that we need to ascertain whether the design of the relevant systems, in the broadest possible sense, can meet the relevant ethical and legal requirements. If we had to conclude that systems cannot meet these requirements, or that we cannot ascertain whether they do, this would have moral and practical implications for the use and deployment of the technology.

Key questions of concern require urgent answers and call for concrete global governance mechanisms:

- Is Meaningful Human Control a pipe dream when it comes to the AI we will have available on the battlefield in the coming years?

- Are there morally significant spaces of human agency in the loop, on the loop or related to the loop?

- Are explainability and transparency of machine learning systems feasible to the extent that relevant military agents are able to morally and legally justify their actions when advanced AI systems assist them?

- Will military agents be able to correctly identify the mechanisms through which AI systems form their intentions and arrive at their decisions?

- Are accountability and responsibility gaps unavoidable in command and control hierarchies, given the speed and scale of machine learning assisted targeting procedures?

- Can safety in operations be guaranteed given the complexity of acting and decision-making with AI based decision support systems in theatres of war?

- Are we pushing these systems and their human users to the brittle limits of their man-machine interaction capabilities?

- Which lessons have we learned from safety science, human factors research, and risk studies regarding complex adaptive systems?

AI is gaining in power every day and is finding its way onto battlefields around the world. Research, venture capital and investment in defence-related AI are at an all-time high and have given new meaning to the expression 'military-industrial complex', as exemplified by Palantir CEO Alex Karp's July 2023 New York Times op-ed "Our Oppenheimer Moment: The Creation of A.I. Weapons".

This special issue aims to bring together cutting-edge research papers in the applied moral philosophy of responsible AI in the military that offer evidence-based, novel, multidisciplinary and design-oriented answers to these and related questions. As the Secretary-General responsible for upholding Article 1 of the UN Charter rightly pointed out: "The autonomous targeting of humans by machines is a moral line that we must not cross." We expect that this special issue, with novel, design-oriented and multidisciplinary research papers on the core ethical issues of military AI, will help bring the debates at the UN, at GC REAIM and among the signatories of the Political Declaration to the much-needed next level.

Participating journal

Submit your manuscript to this collection through the participating journal.

Editors

  • Jeroen van den Hoven

Jeroen van den Hoven is University Professor and full professor of Ethics and Technology at Delft University of Technology and editor-in-chief of Ethics and Information Technology. He is currently scientific director of the Delft Design for Values Institute and was the founding scientific director of the 4TU.Centre for Ethics and Technology (2007-2013). In 2009 he won the World Technology Award for Ethics as well as the IFIP prize for ICT and Society for his work on ethics and ICT. He was founder and, until 2016, programme chair of the Dutch Research Council's programme on Responsible Innovation.

  • Jonne Maas

Jonne Maas is a PhD candidate at TU Delft involved in the HumanE AI project, supervised by Dr. Juan Duran and Prof. Dr. Jeroen van den Hoven from the TPM faculty and Prof. Dr. Catholijn Jonker from the EWI faculty. She obtained her BSc in physical and social sciences (major Artificial Intelligence) at the University of Amsterdam, after which she continued her studies in Philosophy of Technology at the University of Twente. During her studies she became especially interested in the ethical and political concerns surrounding AI-related technologies.
