20 December 2019
As peer and near-peer competitors continue to invest in capabilities that offset US legacy systems and processes, the Department of Defense’s (DOD) latest artificial intelligence (AI) strategy reflects an urgency for the US military to achieve dominance and overmatch in the cognitive and virtual domains. In other words, there is a heightened sense within the Pentagon and among policymakers that, because US forces will be expected to fight in increasingly high-pressure environments, the supremacy of legacy systems in the traditional physical domains – air, land, and sea – will be temporary at best. As a result, the operator’s “OODA” loop cycle (Observe, Orient, Decide, Act) must be compressed in the short term to maximise RDA (Recognise, Decide, Act) potential.
In recent years, machine learning and AI have attracted enormous attention as the DOD accelerates the development of next-generation technologies and concepts to ensure continued competitive advantage under its Third Offset Strategy. The goal is to enable faster access to information and sense-making of data at machine speed, both of which are essential for the US military to retain its overmatch. Such capabilities also allow ground personnel to reduce their reliance on air platforms for intelligence, surveillance, and reconnaissance (ISR) when operating in high-pressure environments that could put those assets at risk. This point was emphasised by Air Force Colonel Matthew D. Atkins, chief of the intelligence capabilities and requirements division at US Special Operations Command (SOCOM). According to Atkins, “Eighty per cent of [US] portfolio is geared toward the air… We’ll be putting our efforts into ways to expand ground-based and maritime-based ISR. That’s not only to gain dominance in those domains but also to buy down the dependency on airborne, which is the most costly.”
Among the components of the US military experimenting with AI to increase the efficiency, precision, and mobility of their operations and ISR capabilities, SOCOM has a long tradition of innovation. With a limited budget of $14 billion and plans to achieve “Small Unit Dominance”, SOCOM has embraced intelligent, networked sensor technologies that come in smaller packages.
In recent months, researchers at SOCOM demonstrated how AI and sensors could be combined to enhance the performance of Special Operations Forces (SOF). For instance, the SOFWERX research facility tested a customised glove that enables hands-free, gesture-based manipulation of a virtual tactical air control map. Additionally, prototypes capable of capturing biometric signals to enhance ISR were showcased at the Genius Machines event on December 10. One of them was a “physiological analysis tool” designed to allow operators to analyse the sentiments of the local population they come into contact with, thereby improving the decision-making process. In one simulation featuring an operator in a village where the Islamic State (IS) had been recruiting, the sensors indicated whether village elders perceived the operator favourably, regardless of what they were saying.
The momentum behind SOCOM’s venture into machine learning and AI, as these experiments highlight, fits with its plans to translate the concept of the “Hyper-Enabled Operator” (HEO) into reality. The HEO concept, born out of the Tactical Assault Light Operator Suit (TALOS) programme after the prototype was deemed unsuitable for close combat, places a new emphasis on data and information technology to create new cognitive abilities for the individual SOF operator. In addition to improving social network visualisation and sentiment analysis, HEO capabilities have tremendous potential to improve SOF’s specially designed toolkits for “sensitive site exploitation.”
One of these toolkits includes capabilities that offer a higher probability of identifying and characterising high-value targets based on captured biometric data. In October, SOCOM issued a Request for Information to identify qualified industry partners and capabilities for next-generation multi-modal biometrics capable of collecting, and storing on-board, a person’s physiological traits, such as fingerprints and even voice. Existing SOCOM biometric technologies fielded for “sensitive site exploitation” were used by SOF during the raid on IS leader Abu Bakr al-Baghdadi to identify and confirm his presence in the compound.
Notwithstanding these developments, there are still considerable challenges ahead for using AI to enhance the effectiveness of special operations. Among these is achieving absolute precision and narrowing the margin of error. As Atkins aptly notes, “When we’re trying to personally ID an individual, there is zero margin of error… When we’re trying to count kids that might be in a compound before we go assault it, that’s something that you have to default back to.” Second, today’s investments could be rendered obsolete within a short time span by rapidly changing battlefield scenarios, creating enormous pressure on defence technology industries to keep pace and design products that help the US military achieve and maintain its cognitive overmatch. Third, there are ethical concerns surrounding the use of AI technologies and systems. Although the DOD released draft ethical guidelines in October, there are concerns about the extent to which they would be applied and integrated into the actual decision-making cycle for building and buying AI.
While the US defence community is actively steering efforts to harness capabilities in the virtual and cognitive domains, the application of AI for military purposes in Europe is still in its embryonic stage. Some member states have recently begun stimulating the development of AI solutions to address gaps in their own national defence. In April 2019, Florence Parly, France’s Minister of the Armed Forces, unveiled a new AI strategy that advocates the responsible and controlled use of AI in the defence sector. However, the debate has not reached the same level of fervour at the regional level. The EU currently lacks the necessary foundations for AI to address capability gaps in the union, especially a solid conceptual framework to underline the benefits such technologies bring to the armed forces. Moreover, integrating AI solutions into the complex web of legacy systems will be an extremely tall order for member states.

A starting point could be for the EU to share data across military services, missions, and domains. As Heiko Borchert and Christian Brandlhuber suggest, this could stimulate efforts to use data to test AI in simulations that mirror real-life battlefield scenarios. In turn, AI developers would be able to follow up on the capability gaps identified by the military community. Given that not all member states have sufficient resources to invest in the cognitive capabilities of their elite units and ground personnel, it is more realistic for the EU to take a collaborative approach to the military application of AI, so that a less-capable member state could still contribute to the collective EU defence project within its level of ambition and capacity.
Written by Chonlawit Sirikupt, European Defence Researcher at Finabel – European Army Interoperability Centre
Balestrieri, Steve. “SPEC OPS Forensics: Everything You Need to Know About Sensitive Site Exploitation (SSE).” SOFREP. November 9, 2019. https://sofrep.com/news/spec-ops-forensics-everything-you-need-to-know-about-sensitive-site-exploitation-sse/ (accessed December 19, 2019).
Borchert, Heiko, and Christian Brandlhuber. “Jump-starting Europe’s work on military artificial intelligence.” Defense News. September 8, 2019. https://www.defensenews.com/opinion/2019/09/09/jump-starting-europes-work-on-military-artificial-intelligence/ (accessed December 19, 2019).
White Jr., Samuel R. “The Urgency of the Third Offset.” In Closer Than You Think: The Implications of the Third Offset Strategy for the U.S. Army, by Samuel R. White Jr., 15-28. Carlisle Barracks: The Strategic Studies Institute, 2017.
Freedberg Jr., Sydney J. “Artificial Intelligence: Will Special Operators Lead The Way?” Breaking Defense. February 13, 2019. https://www.cnas.org/press/in-the-news/artificial-intelligence-will-special-operators-lead-the-way (accessed December 17, 2019).
Kimery, Anthony. “USSOCOM set to test, evaluate commercial next generation biometrics.” Biometric Update. October 28, 2019. https://www.biometricupdate.com/201910/ussocom-set-to-test-evaluate-commercial-next-generation-biometrics (accessed December 18, 2019).
MacCalman, Alex, Jeff Grubb, Joe Register, and Mike McGuire. “The Hyper-Enabled Operator.” Small Wars Journal. June 7, 2019. https://smallwarsjournal.com/jrnl/art/hyper-enabled-operator (accessed December 17, 2019).
Machi, Vivienne. “SOCOM Seeks Lighter, More Flexible Technologies for Small Unit Dominance.” National Defense. May 16, 2017. https://www.nationaldefensemagazine.org/articles/2017/5/16/socom-seeks-lighter-more-flexible-technologies-for-small-unit-dominance (accessed December 17, 2019).
Pellerin, Cheryl. “Deputy Secretary: Third Offset Strategy Bolsters America’s Military Deterrence.” US Department of Defense. October 31, 2016. https://www.defense.gov/Explore/News/Article/Article/991434/deputy-secretary-third-offset-strategy-bolsters-americas-military-deterrence/ (accessed December 17, 2019).
Permanent Representation of France to NATO. “The Ministry of Armed Forces presents its new strategy for artificial intelligence.” Permanent Representation of France to NATO. April 5, 2019. https://otan.delegfrance.org/The-Ministry-of-Armed-Forces-presents-its-new-strategy-for-artificial (accessed December 18, 2019).
Tucker, Patrick. “How Special Operators Are Taking Artificial Intelligence To War.” Defense One. May 28, 2015. https://www.defenseone.com/technology/2015/05/how-special-operators-are-taking-artificial-intelligence-war/113872/ (accessed December 17, 2019).
——. “The Pentagon’s AI Ethics Draft Is Actually Pretty Good.” Defense One. October 31, 2019. https://www.defenseone.com/technology/2019/10/pentagons-ai-ethics-draft-actually-pretty-good/161005/ (accessed December 18, 2019).