
Artificial Intelligence in the Military

27 January 2021

Advances in data, computer processing power, and machine learning have enabled the rapid development of Artificial Intelligence (AI) over the last two decades [1]. Consequently, AI technologies are becoming ubiquitous: biometric authentication, mobile mapping and navigation systems, natural language processing, and targeted online marketing are just a few of the many ways this technology has been incorporated into daily life. It is little wonder, then, that AI also offers great military promise.

Although military AI technology is still in its infancy, it is not unreasonable to assume that it will shape the future of warfare. According to Work and Brimley (2014), AI is a “potentially disruptive technology that may create sharp discontinuities in the conduct of warfare” and “produce dramatic improvements in military effectiveness and combat potential”. Allen and Husain (2017), however, strike a more revolutionary note, asserting that military applications of AI will cause “a seismic shift on the field of battle” and “fundamentally transform the way war is waged”. Russian President Vladimir Putin echoed this idea in a 2017 speech; according to him, “Artificial intelligence is the future of not only Russia but of all mankind” and “whoever becomes the leader in this sphere will become the ruler of the world” (Gigova, 2017).

The race to become the global leader in AI has already begun (Dutton, 2018). Several countries, including the United States, China, and Russia, have released plans and strategies to promote the development and use of AI technology, including its military applications (Scharre, 2019). Intelligent systems and platforms that are faster and more efficient than human operators are no longer far-fetched, even though today’s AI technology remains a far cry from that depicted in science fiction. Nonetheless, as the technology moves quickly from research labs to the physical world, stakeholders, military personnel, and policymakers must better understand the potential benefits, as well as the ethical and legal implications, of the militarisation of AI.

For the military, AI has broad potential that extends well beyond autonomous weapon systems. The sheer volume of data collected today is more than any group of humans could analyse, and AI applications offer a means to tackle this information overload, improving both the speed and quality of military decision-making (Morgan et al., 2020). Certainly, there are situations in which choosing the best course of action more rapidly is beneficial. Yet it is imperative not to overstate the value of accelerating the decision-making process, since doing so might introduce new risks or exacerbate existing ones. Additionally, these technologies allow for improvements in operational accuracy and precision, reducing the chances of human error.

AI-enabled systems and platforms could also mitigate manpower shortages. There is currently an assortment of unmet needs and demands for which there are simply not enough military personnel (Morgan et al., 2020). For example, the growing number of cameras used in surveillance has created a clear need for professional video and imagery analysis. It is well settled that automated systems excel at image recognition and object detection, surpassing human ability in most cases (Cummings, 2017). If such systems were deployed, analysts would be freed to engage in activities that require the creativity of the human mind. Beyond imagery analysis, AI technologies could ease other manpower constraints, allowing armed forces to maintain or even expand warfighting capabilities without increasing their workforce. Moreover, replacing manual workers with automated systems would benefit the military by reducing labour costs while increasing productivity and optimising processes (Sisson, 2019).

AI has already made remarkable and essential improvements to Intelligence, Surveillance, and Reconnaissance (ISR) systems. It is now possible to use unmanned aerial vehicles and sensors to capture information in environments that are inaccessible or hazardous to humans, increasing the quality and accuracy of the intelligence available to decision-makers. Considered a tool for “dull, dirty, and dangerous” tasks, AI applications offer the opportunity to avoid putting lives at risk. As the technology develops, unmanned vehicles and autonomous weapon systems will allow militaries to operate in remote locations or anti-access/area-denial (A2/AD) environments that would be lethal to human personnel.

It is not realistic to assume that the introduction of AI into the military is risk-free; quite the opposite, the phenomenon brings a range of implications and concerns that must be addressed by both governments and armed forces. One complicating factor is that AI systems and platforms are increasingly sophisticated, making them less transparent to their human users. It may therefore become harder to know whether a system is operating as expected or intended, which is potentially dangerous.

Perhaps a more pressing issue is the use of AI military technologies, especially autonomous weapon systems, in armed conflicts. Under Article 36 of Additional Protocol I to the Geneva Conventions, states must assess whether these new technologies comply with International Humanitarian Law (IHL) or any other rule of international law [2]. The question that needs to be addressed, therefore, is whether AI systems are capable of complying with the current international legal framework. Even though such systems “do not inherently violate the longstanding and accepted rules of warfare”, they do call IHL rules into question (Anderson et al., 2014). It helps to remember that these rules create legal obligations for human combatants, which “cannot be transferred to a machine, computer program or weapon system” (Davison, 2017). As stated by Christof Heyns (2014), the former UN Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, “a machine, bloodless and without morality or mortality, cannot fathom the significance of the killing or maiming of a human being”. The result is a legal accountability gap, potentially allowing states to violate the rules of armed conflict via autonomous weapons without anyone being held responsible.

As with other emerging technologies, AI is a double-edged sword. Mitigating the risks of its militarisation will require a range of governance approaches, including transparency and confidence-building measures, as well as collaboration between governments, armed forces, and the private sector. Additionally, although AI systems are likely to change warfare dramatically, humans will continue to play a pivotal role: the question is less man versus machine than how best to combine the two. After all, human cognition and emotions should “be viewed as central to restraint in armed conflicts rather than as irrational influences and obstacles to reason” (Human Rights Watch, 2012). From a military standpoint, therefore, the way forward seems to be optimally combining human and machine intelligence, the so-called ‘centaur approach’.

[1] One threshold issue in discussing this matter is the definition of AI itself, a question that runs through all articles on the subject. Although there is no common definition, AI has generally been described as “the capability of computer systems to perform tasks that normally require human intelligence” (Defense Science Board, 2016). This description is, however, oversimplified and generates an apparent puzzle: what exactly constitutes intelligent behaviour? This InfoFlash therefore uses the definition provided by the European Defence Agency (2020), which considers AI to be “the capability of algorithms to select optimal or quasi-optimal choices to achieve specific goals”.

[2] IHL, also referred to as the law of armed conflict, is a set of rules that seek to regulate the conduct of war and minimise harm to civilians.


Written by Leandro PEREIRA MENDES, Legal Researcher at Finabel – European Army Interoperability Centre

Sources

Allen, J. and Husain, A. (2017). On Hyperwar. U.S. Naval Institute, Proceedings, July. Available at: https://www.usni.org/magazines/proceedings/2017/july/hyperwar.

Anderson, K., Reisner, D. and Waxman, M. (2014). Adapting the Law of Armed Conflict to Autonomous Weapon Systems. International Legal Studies vol. 90, issue 1. Available at: https://digital-commons.usnwc.edu/ils/vol90/iss1/3/.

Bundeswehr. (2019). Artificial Intelligence in Land Forces: A position paper developed by the German Army Concepts and Capabilities Development Centre. Army Concepts and Capabilities Development Centre. Available at: https://www.bundeswehr.de/resourse/blob/156026/3f03afe6a20c35d07b0ff56aa8d04878/download-positionpapier-englische-version-data.pdf.

China Arms Control and Disarmament Association. (2019). Artificial Intelligence and Its Military Implications. In: The Militarization of Artificial Intelligence, United Nations, New York, NY. Available at: https://www.un.org/disarmament/the-militarization-of-artificial-intelligence/.

Congressional Research Service. (2020). Artificial Intelligence and National Security. Congressional Research Service, November. Available at: https://fas.org/sgp/crs/natsec/R45178.pdf.

Cummings, M. L. (2017). Artificial Intelligence and the Future of Warfare. Chatham House, The Royal Institute of International Affairs, January. Available at: https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf.

Davison, N. (2017). A legal perspective: Autonomous weapon systems under international humanitarian law. In UNODA Occasional Papers No. 30, November. Available at: https://www.un-ilibrary.org/content/books/9789213628942c005.

Defense Science Board. (2016). Summer Study on Autonomy. Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington, D.C. Available at: https://www.hsdl.org/?view&did=794641.

Dutton, T. (2018). An Overview of National AI Strategies. Medium. Available at: https://www.medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd.

European Defence Agency. (2020). Enhancing Interoperability: train together, deploy together. European Defence Matters issue 19. Available at: https://eda.europa.eu/docs/default-source/eda-magazine/edm19_web.pdf.

Morgan, F. E. et al. (2020). Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation. Available at: https://www.rand.org/pubs/research_reports/RR3139-1.html.

Gigova, R. (2017). Who Vladimir Putin Thinks Will Rule the World. CNN, September. Available at: https://edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html.

Heyns, C. (2014). Autonomous Weapons Systems and Human Rights Law. Presentation Made at the Informal Expert Meeting Organized by the State Parties to the Convention on Certain Conventional Weapons, Geneva, Switzerland. Available at: https://www.unog.ch/80256EDD006B8954/(httpAssets)/DDB079530E4FFDDBC1257CF3003FFE4D/$file/Heyns_LAWS_otherlegal_2014.pdf.

Human Rights Watch and International Human Rights Clinic. (2012). Losing Humanity: The Case Against Killer Robots. Available at: https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf.

Scharre, P. (2019). Military Applications of Artificial Intelligence: Potential Risks to International Peace and Security. In: The Militarization of Artificial Intelligence, United Nations, New York, NY. Available at: https://www.un.org/disarmament/the-militarization-of-artificial-intelligence/.

Sisson, M. (2019). Multistakeholder Perspective on the Potential Benefits, Risks, and Governance Options for Military Application of Artificial Intelligence. In: The Militarization of Artificial Intelligence, United Nations, New York, NY. Available at: https://www.un.org/disarmament/the-militarization-of-artificial-intelligence/.

Work, R. and Brimley, S. (2014). 20YY: Preparing for War in the Robotic Age. Center for a New American Security, January. Available at: https://www.cnas.org/publications/reports/20yy-preparing-for-war-in-the-robotic-age.