Natural and Artificial Intelligence in Armed Conflict: Exploring Settled and Open Legal Questions with Dustin A. Lewis

Written by: Sara Loria

Edited by: Dimitra Pateraki

Supervised by: Riccardo Angelo Grassi

Introduction

Based in The Hague, the T.M.C. Asser Instituut is a distinguished organization founded in 1965 that conducts independent, policy-oriented research in international and European law. The centre promotes expertise through various initiatives, including two noteworthy lecture series: “Designing International Law and Ethics into Military Artificial Intelligence” (DILEMA) and the “Hague Initiative for Law and Armed Conflict” (HILAC). At the intersection of these two educational programs, on Thursday, July 11th, researcher Dustin A. Lewis delivered a DILEMA-HILAC lecture on ‘Natural and artificial intelligence in armed conflict’, exploring and analysing fundamental settled and open legal questions in this field.

Dustin A. Lewis is the current Research Director of the Harvard Law School Program on International Law and Armed Conflict (HLS PILAC), as well as an Associate Senior Researcher in the Governance of Artificial Intelligence Programme at the Stockholm International Peace Research Institute (SIPRI). In Thursday’s lecture, the researcher presented a wide range of potential applications of artificial intelligence (AI) in the military domain, exploring the legal relationship between natural and artificial intelligence in armed conflict. In a timely discussion, Mr. Lewis reflected on the need for regulations concerning the military use of AI, on States’ legal obligations, and on the importance of accountability.

Encouraging Sustained Conversation

Regarding the current state of the multilateral debate on the use of AI in armed conflict, Mr. Lewis argued that it would be beneficial to formulate properties and conditions of legality governing the development of such technologies in the military context. These properties and conditions may help frame and encourage sustained conversation among stakeholders. The legal attributes he recommended convey a core idea: the performance of obligations by a natural person cannot be satisfied solely by relying on algorithms and data-shaped information or recommendations. Mr. Lewis remarked that, under the existing legal framework on State responsibility, only natural persons, namely humans, are capable of performing obligations on behalf of legal subjects. He therefore proposed a theoretical framework in which the requirement that an obligation be performed by a natural person, or a group of people, entails the exercise of what he calls ‘cognitive agency’: an operation of mind or intelligence by one or more humans. Mr. Lewis noted that this notion is currently extra-legal in character, as it is not yet in widespread use.

Natural and Artificial Intelligence

Dustin A. Lewis’ aspiration is to reach a legal system of meaningful protection concerning war and AI. His framework opens up new opportunities to conceptualise connections and distinctions between exercises of natural and artificial intelligence. Given the multidimensional nature of the issue, Mr. Lewis stressed the importance of focusing first on how, when, where, and why natural or artificial intelligence is exercised, and on how that relates to specific legal requirements and prohibitions. He noted that there are currently no specific rules governing the use of natural or artificial intelligence in armed conflict. Therefore, at this stage of the debate, it would be useful to inquire, on a case-by-case basis, how the existing rules account for, or fail to account for, the exercise of cognitive agency.

The Multilateral Policy Debate

The speaker emphasised how concerns surrounding the use of autonomous weapons have dominated the multilateral policy debate on AI in armed conflict. Yet there is an extensive catalogue of other areas involving uses of AI in armed conflict, such as the development and implementation of military strategy or the administration of logistics and humanitarian services. Recently, a growing number of States have been relying on AI-enabled tools and methods for certain aspects of those exercises of intelligence. As Mr. Lewis highlighted, the concern is that the deadlock experienced in the multilateral dialogue on regulating autonomous weapons may be replicated in the new discourse around these broader uses of AI in the military context.

Conclusion

The normative impasse regarding the use of AI in armed conflict stems from differing perspectives among States. Some States believe that the existing law suffices and tend to emphasize the importance of monitoring any advancements (Lewis et al., 2016). Other States advocate for the development of new regulations, while some have banned such technologies outright, as in the case of Belgium and lethal autonomous weapons systems (LAWS) (Amies, 2023). Consequently, the future normative framework regulating the use of AI in armed conflict is likely to comprise either relatively restrictive or relatively permissive norms. From Mr. Lewis’ perspective, one way to pursue legal stability and universality in this framework is to define criteria and conditions of legality that States may be inclined to endorse.

Bibliography

Amies, Nick. (2023, January 15). Belgium upholds decision to ban ‘killer robots’. The Brussels Times. https://www.brusselstimes.com/350980/belgium-upholds-decision-to-ban-killer-robots.

Lewis, D. A., Blum, G. & Modirzadeh, N. K. (2016, August). War-Algorithm Accountability. Harvard Law School Program on International Law and Armed Conflict. https://pilac.law.harvard.edu/aws.

T.M.C. Asser Instituut. (2024). About the Asser Institute. Asser Institute Centre for International and European Law. https://www.asser.nl/about-the-asser-institute/.

T.M.C. Asser Instituut. (2024). DILEMA Lecture Series. Asser Institute Centre for International and European Law. https://www.asser.nl/dilema/events/dilema-lecture-series/.

United Nations General Assembly. (2023, December 22). Resolution adopted by the General Assembly on 22 December 2023. A/RES/78/241.

Verdiesen, I., Santoni de Sio, F. & Dignum, V. (2020, August 1). Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight. Minds & Machines, 31, 137–163. https://link.springer.com/article/10.1007/s11023-020-09532-9.