Military Technology 06/2020

Artificial intelligence (AI) is one of the fastest-moving technology thrusts across the US DoD and other military organizations – with good reason. In short, AI refers to the ability of machines to perform tasks that normally require human intelligence. The Pentagon and other military departments are increasingly looking to AI-powered systems to perform an extensive range of missions routinely covered in MilTech, from weapons employment through predictive maintenance to ISR and automation.

Until recently, US military-industry teams advanced AI's technology baseline within the very broad tenets of the US Constitution, Title 10 of the US Code, and the Law of War. US DoD doctrine on the topic was a Department directive updated on 8 May 2017. The document, in essence, said a human being must have veto power over any action an autonomous system might take in combat. The directive failed to touch on other uses for AI – for instance, decision support and predictive analytics – which are accelerating at firms such as Microsoft and Amazon, and elsewhere in the global environment. Until recently, AI's evolution contrasted with the more rigorous ethical and legal foundations governing the decision-making calculus for other common defence activities. A short list of these myriad actions includes the imperative to use precision munitions to minimize civilian casualties in contested urban environments, and the obligation to seek best value for the military customer and taxpayer in acquisition programmes.

On cue, this February, DoD announced it had officially adopted a series of ethical principles for the use of AI. Significantly, these new guidelines were developed in consultation with leading AI experts in commercial industry, government, academia and the American public. A Pentagon document noted, "The adoption of AI ethical principles aligns with the DoD AI strategy objective directing the US military lead in AI ethics and the lawful use of AI systems." This action was none too soon.
These Pentagon pronouncements compel the military to use technology more responsibly and to protect civil liberties, privacy and American values – including the rule of law. Lt Cmdr Arlo Abrahamson, a public affairs officer at DoD's Joint Artificial Intelligence Center, added some context to the US military's sharpening focus on AI ethics and principles: "AI technology will change much about the battlefield of the future, but nothing will change America's steadfast record of honorable military service or our military's commitment to lawful and ethical behavior. Our focus on AI follows from our long history of making investments to preserve our most precious asset, our people, and to limit danger to civilians and collateral damage to civilian infrastructure. All of the AI systems that we field will have compliance with the law of war as a key priority from the first moment of requirements setting through each step of rigorous testing and continuous evaluation."

As noted earlier, AI is not the exclusive purview of the US military. A background document from the UK MoD stated that AI supports a number of diverse activities in its services. In one instance, that department is in the demonstration phase for a variety of autonomous systems across all domains, in areas such as maritime mine countermeasures, logistics resupply and hazardous scene assessment. The MoD also presented a broader business case for investing in AI, pointing out that another key priority is improving business performance across the enterprise. Accordingly, the department reported an active programme of business process automation, looking at areas such as human resources, finance, medical and IT operations, and was said to be starting projects to address error and fraud in both its finance and inventory areas.
To be clear, much like its counterparts across the Atlantic, the British MoD is not only following, but also leading, efforts to conform with ethics and the rule of law as it elevates its AI competence. Indeed, the background document emphasized that the MoD is committed to developing and employing its AI solutions in an ethical manner and within international norms. Citing a specific example, it noted the MoD already has a clear position regarding Lethal Autonomous Weapons Systems: the UK does not possess fully autonomous weapon systems and has no intention of developing them. Further, the MoD was said to be at the forefront of discussions on this topic, particularly at the UN Convention on Certain Conventional Weapons in Geneva.

The US DoD is among the military departments around the globe identifying more suitable use cases for AI across the scope of their organizations and accelerating the adoption of this enabling technology. So, let us efficiently and effectively unleash this capability within operators' requirements. But at the same time, this may be a propitious moment to tap lightly on the brakes and permit rapidly evolving AI programmes to integrate and conform with current and emerging ethics and laws.

Marty Kauchak

Gently Tap the Brakes on AI – Letter from America

After a 23-year career in the US Navy, from which he retired as a Captain, Marty Kauchak regularly covers a wide range of topics for Mönch and is MT's North American Bureau Chief.

RkJQdWJsaXNoZXIy MTM5Mjg=