Artificial Intelligence and Military Operations

September 17, 2021

Artificial intelligence (AI) is affecting almost every aspect of our lives, including national security. Although I normally write about AI in a business context, we should keep in mind that AI is being used in other ways as well. Editors from the Bulletin of the Atomic Scientists note, “Advances in artificial intelligence, deep-learning, and robotics are enabling new military capabilities that will have a disruptive impact on military strategies. The effects of these capabilities will be felt across the spectrum of military requirements — from intelligence, surveillance, and reconnaissance to offense/defense balances and even on to nuclear weapons systems themselves.”[1] Sam Tangredi (@tangredi_j), the Leidos Chair of Future Warfare Studies and Director of the Institute for Future Warfare Studies at the U.S. Naval War College, and George Galdorisi (@GeorgeGaldorisi), Director of Strategic Assessments and Technical Futures at the Naval Information Warfare Center Pacific, add, “In the public perception of military affairs, the term AI almost universally conjures up images of ‘killer robots’ running amok and deliberately attacking non-combatants, possibly destroying the entirety of the human race.”[2]

The use of AI in military operations raises myriad questions about how it should be used and what the limits of that use should be. Tangredi and Galdorisi believe the fear stoked by science fiction movies has resulted in some knee-jerk reactions. They write, “This fear exists to the extent that certain scientists and scholars have conflated artificial intelligence with unmanned military systems (these so-called killer robots). A coalition of non-governmental organizations has established a ‘campaign to ban killer robots,’ which they define as an effort to ‘ban fully autonomous weapons and thereby retain meaningful human control over the use of force.’ This coalition defines autonomous weapons as weapons that ‘would be able to select and engage targets without human intervention.’” While most people would probably agree that having “meaningful human control over the use of force” sounds reasonable, Tangredi and Galdorisi note that the above definition of autonomous weapons is “so broad that it could conceivably cover such existing weapons as heat-seeking missiles (originating in the 1950s) or stationary naval mines (the first recorded, verifiably successful mine attack was in 1855).”

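The definitional stakes are easier to see in a sketch. The short Python example below is purely illustrative (every name, class, and function in it is hypothetical, not drawn from any real weapons system): the difference between a “fully autonomous” weapon and one under “meaningful human control” comes down to whether a human decision sits between target selection and engagement.

```python
# Purely illustrative sketch of the definitional distinction discussed above.
# All names are hypothetical; no real system or API is depicted.

from dataclasses import dataclass

@dataclass
class Track:
    """A sensor track an automated system has flagged as a candidate target."""
    track_id: str
    classification: str   # e.g., "unknown" or "hostile"
    confidence: float     # classifier confidence, 0.0 to 1.0

def select_candidates(tracks: list[Track]) -> list[Track]:
    # Automated target *selection*; both architectures can share this step.
    return [t for t in tracks if t.classification == "hostile" and t.confidence > 0.9]

def engage(track: Track) -> None:
    print(f"engaging {track.track_id}")  # stand-in for weapon release

def fully_autonomous(tracks: list[Track]) -> None:
    # "Select and engage targets without human intervention":
    # no human decision sits between selection and engagement.
    for t in select_candidates(tracks):
        engage(t)

def human_in_the_loop(tracks: list[Track]) -> None:
    # "Meaningful human control": a human must affirmatively authorize
    # each engagement, and the default on any ambiguity is to hold fire.
    for t in select_candidates(tracks):
        answer = input(f"Authorize engagement of {t.track_id}? [y/N] ")
        if answer.strip().lower() == "y":
            engage(t)
```

Note that under the coalition’s broad definition quoted above, even the simple threshold rule in the first loop would count as an autonomous weapon, which is precisely the breadth Tangredi and Galdorisi are flagging.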

The larger point is this: The AI genie is out of the bottle and it’s not going back in. As a result, Tangredi and Galdorisi suggest a more thoughtful approach to AI in the military is required, one grounded in a better public understanding of how AI actually functions in military operations. They explain, “This lack of understanding is the result of the fact that there are very few open, public sources — articles, books, published studies — that discuss in any detail the specific, functional applications of AI to the discrete elements that constitute preparation for, deterrence of, and conduct of military operations.”

Ethical Use of AI in Military Operations

Thomas Creely, Director of the Ethics & Emerging Military Technology Graduate Program at the U.S. Naval War College, observes, “Artificial intelligence crosses biotechnology, neurotechnology, information technology, robotics, and other emerging technologies. … Ethics of technology, and AI in particular, has come to the forefront of our national security strategy, due primarily to concerns with China as a near-peer competitor and the risks AI poses to Americans.”[3] He adds, “Looking forward to emerging ethical challenges requires looking into future ethical issues for solutions.” In a year-long study concerning the ethical use of AI in military operations, analysts from the Rand Corporation reached the following conclusions:[4]

1. A steady increase in the integration of AI in military systems is likely. “The various forms of AI have serious ramifications for warfighting applications. AI will present new ethical questions in war, and deliberate attention can potentially mitigate the most-extreme risks. Despite ongoing United Nations discussions, an international ban or other regulation on military AI is not likely in the near term.”

2. The United States faces significant international competition in military AI. “Both China and Russia are pursuing militarized AI technologies. The potential proliferation of military AI to other state and nonstate actors is another area of concern.”

3. The development of military AI presents a range of risks that need to be addressed. “Ethical risks are important from a humanitarian standpoint. Operational risks arise from questions about the reliability, fragility, and security of AI systems. Strategic risks include the possibility that AI will increase the likelihood of war, escalate ongoing conflicts, and proliferate to malicious actors.”

4. The U.S. public generally supports continued investment in military AI. “Support depends in part on whether the adversary is using autonomous weapons, the system is necessary for self-defense, and other contextual factors. Although perceptions of ethical risks can vary according to the threat landscape, there is broad consensus regarding the need for human accountability. The locus of responsibility should rest with commanders. Human involvement needs to take place across the entire life cycle of each system, including its development and regulation.”

The analysts offered six specific recommendations concerning the use of artificial intelligence in military operations:

• Organize, train, and equip forces to prevail in a world in which military systems empowered by AI are prominent in all domains.

• Understand how to address the ethical concerns expressed by technologists, the private sector, and the American public.

• Conduct public outreach to inform stakeholders of the U.S. military’s commitment to mitigating ethical risks associated with AI to avoid a public backlash and any resulting policy limitations for Title 10 action.

• Follow discussions of the Group of Governmental Experts involved in the UN Convention on Certain Conventional Weapons and track the evolving positions held by stakeholders in the international community.

• Seek greater technical cooperation and policy alignment with allies and partners regarding the development and employment of military AI.

• Explore confidence-building and risk-reduction measures with China, Russia, and other states attempting to develop military AI.

About the same time Rand analysts were beginning their study, Peter Asaro (@PeterAsaro), an associate professor of media studies at the New School, was encouraging an international effort to regulate the use of artificial intelligence in autonomous weapons. “If machines that autonomously target and kill humans are fielded by one country,” he wrote, “it could be quickly followed by others, resulting in destabilizing global arms races. And that’s only a small part of the problem.”[5] He focused on autonomous weapons because he believes trying to address AI in a broader military context is a Sisyphean task. He explained, “[An ‘AI arms race’ can] mean very different and even incompatible things, ranging from economic competition, to automated cyberwarfare, to embedding AI in weapons.” He concludes, “The kind of regulation sought by civil society groups in regard to autonomous weapons — killer robots, if you will — is largely without precedent. Rather than specify a particular type of munition or weapon with a particular effect or mode of action, what is needed is a regulation of the manner in which weapons are used so as to ensure that advancements in technology do not fundamentally undermine international humanitarian law itself.”

Concluding Thoughts

Rand analysts conclude, “The world may be on the verge of a significant change in the character of war, or it may not. … In any case, AI [is] here to stay.” They note, “U.S. leaders will be confronted with tensions between competing demands: the imperative to prepare U.S. forces to fight and prevail against adversaries with military AI capabilities versus the need to manage the strategic risks and potential costs of arms races; the need to develop military AI with enough capability to defeat enemy systems versus the need to harness these capabilities to protect noncombatants; the need to grant AI-empowered weapons enough autonomy to protect U.S. forces and penetrate enemy defenses versus the need to manage risks that these systems could get out of control and escalate crises or conflicts to potentially catastrophic levels. How successful the United States is in maintaining military leadership in an increasingly dangerous world, while also preserving its fundamental identity as a responsible and ethical world leader, will depend on how adroitly U.S. leaders manage these tensions.” There are many considerations and concerns that must be addressed in the days, weeks, months, and years ahead. There is some comfort in knowing that schools like the U.S. Naval War College and think tanks like the Rand Corporation are actively pursuing research and seeking to educate future warriors about the ethical use of artificial intelligence in military operations.

Footnotes
[1] Editors, “Military Applications of AI,” Bulletin of the Atomic Scientists.
[2] Sam Tangredi and George Galdorisi, “Understanding Artificial Intelligence and Its Military Applications,” The Bridge, Summer 2021.
[3] Thomas Creely, “Ethics of Artificial Intelligence: A National Security Imperative,” The Bridge, Summer 2021 (printed edition).
[4] Forrest E. Morgan, Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman, “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World,” Rand Corporation, March 2020.
[5] Peter Asaro, “Why the world needs to regulate autonomous weapons, and soon,” Bulletin of the Atomic Scientists, 27 April 2018.
