
Artificial Intelligence and Legal Policy Framework in the Western Countries

25 February 2021

Artificial Intelligence (AI) continues to advance as the underlying technology evolves. In the military sector, AI has potential applications in virtually every task that requires human cognition. As the European Parliament states, “it should only be used as a last resort and be deemed lawful only if subject to human control”. From a legal perspective, the next step is the establishment of a legal framework for AI built on standardisation, governance arrangements, and legislation. Regulation must nevertheless proceed carefully: timing and calibration are decisive for any successful regulatory change.

Edward Christie cautions that regulating too fast or too much risks obstructing innovation, while regulating too little risks a fall in investment (Christie, 2021). He puts forward recommendations for an AI policy framework coordinated among western nations, the EU, NATO, and the OECD. The first is to facilitate the sharing and transfer of technologies and data between partners, which would be particularly useful in furthering cooperation. The second concerns the financial resources for AI development, since it requires a considerable increase in public expenditure on scientific and technological research. Finally, he proposes that western nations adopt the strategic objective of collectively remaining ahead of any potential rivals.

In the military field, regulation means establishing policies and frameworks that govern the development and use of AI. This includes concepts, doctrines, and standards that would ensure interoperability and the principle of responsible use. This principle refers to, but is not limited to, compliance with international law and international humanitarian law (IHL).

The most important international process in this regard is the consultation conducted under the United Nations by the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS), established within the framework of the Convention on Certain Conventional Weapons (CCW). The GGE has the mandate to examine emerging technologies in the area of LAWS. In a nutshell, the GGE LAWS discusses: (i) the promotion of common concepts; (ii) the characteristics of LAWS in the context of the CCW and the challenges they pose to compliance with IHL; (iii) the human element in the use of lethal force; (iv) potential military implications; and (v) options for addressing humanitarian and international security challenges.

The European Parliament recently voted on a report titled Guidelines for military and non-military use of Artificial Intelligence, which seeks to boost innovation, ethical standards, and trust in technology. The report calls for an EU legal framework on AI with definitions and ethical principles covering its military use, paving the way for the EU to become a world leader in AI development, and urges the EU and its member states to ensure that AI and related technologies are human-centric. Members of the European Parliament (MEPs) stressed the need to respect human rights in EU defence activities and insisted that AI systems entail responsibility and accountability for their use. With regard to “killer robots”, MEPs recalled the IHL principles of proportionality and necessity, noting that such weapons must always remain subject to human control and judgment, since “AI systems could accelerate the pace of combat to a point in which machine actions surpass the rate of human decision making, potentially resulting in a loss of human control in warfare”. Finally, the report envisages a leading role for the EU in creating and promoting a global framework governing the military use of AI within the United Nations (UN) and the wider international community.

The regulation of AI will be a major challenge in the coming years, and it will foreseeably play an ever more important role as the technology progresses. The international community is working towards adequate regulation through the CCW process, while the EU has already set out its principles in this area: a solid legal framework grounded in ethical principles based on human rights, which permits military applications as long as “the AI systems are subject to meaningful human control, allowing humans to correct or disable them in case of unforeseen behaviour.”


Written by Christian DI MENNA, Legal Researcher at Finabel – European Army Interoperability Centre

Sources

Congressional Research Service (2020), “Artificial Intelligence and National Security”. Available at: https://fas.org/sgp/crs/natsec/R45178.pdf

Christie, Edward Hunter (2021), “Artificial Intelligence and Western Defence Policy: a conceptual note”. Wilfried Martens Centre.

European Parliament (2021), “Guidelines for military and non-military use of Artificial Intelligence”. Available at: https://www.europarl.europa.eu/news/en/press-room/20210114IPR95627/guidelines-for-military-and-non-military-use-of-artificial-intelligence

European Parliament (2020), “Parliament leads the way on the first set of EU rules for Artificial Intelligence”. Available at: https://www.europarl.europa.eu/news/en/press-room/20201016IPR89544/parliament-leads-the-way-on-first-set-of-eu-rules-for-artificial-intelligence

Geneva Internet Platform, “GGE on lethal autonomous weapons systems”. Available at: https://dig.watch/process/gge-laws