Opening a Conversation About Lethal Autonomous Weapons Systems (LAWS)
Artificial intelligence, machine learning, robotics, and related emerging technologies are reshaping how policymakers and strategists think about national security, strategic stability, and conflict mitigation. These technologies offer the prospect of dramatically faster and more accurate analysis and decision-making, more secure communications, more resilient networks, more precise and cost-effective application of resources, and a more comprehensive understanding of the environment in which an individual, company, or government operates. Mastering their development and deployment would significantly enhance a country’s military, economic, and diplomatic capabilities, which is why many countries are racing to deepen their understanding of AI and to expand research and development in AI-related applications.
While some of these technologies are thought to be years away from widespread practical application – for example, quantum technology or biotechnology for human enhancement – others, like AI and machine learning, are already used in almost every facet of life, including national security. One of the most significant new technologies shaping U.S. national security and the U.S. Department of Defense’s assessment of current and future threats is AI-based autonomous decision-making and its potential use in lethal autonomous weapon systems (LAWS), defined as weapons designed to independently select and engage targets without the need for human control.[1] LAWS present a thought-provoking array of problem sets, because the technology is advancing far faster than U.S. policymaking and international diplomatic negotiations can keep up. Several dozen countries and more than 100 non-governmental organizations are demanding that the international community agree, under UN auspices, to ban LAWS because of ethical concerns over algorithms making lethal decisions autonomously. Meanwhile, U.S. state competitors like China and Russia, not to mention potential non-state adversaries, are already developing and exporting potentially autonomous weapon systems. The incoming Biden Administration will soon confront important decision points: the U.S. government’s R&D, deployment, and potential use of LAWS in combat; the implications of U.S. competitors and adversaries doing the same; and whether this category of weapons and technologies should face more rigorous scrutiny and oversight under an international treaty or agreement.
As a social enterprise deeply committed to mitigating conflict and enhancing stability and sustainability around the globe, Motive International recognizes the urgency of helping policymakers and strategists better understand both the opportunities and the threats that LAWS pose to U.S. national security and to regional and global conflict mitigation. Motive is launching a new initiative to help U.S. policymakers, strategists, and other stakeholders better understand the technical, policy, operational, diplomatic, and ethical dimensions of LAWS; identify gaps in understanding; and frame the policy and operational choices decision-makers will need to weigh in the months and years ahead to ensure these technologies promote, rather than undermine, global peace and stability.
As we develop this initiative, Motive is pleased to recommend to our community of interest the following key reports and resources on LAWS and adjacent technologies, which are shaping our thinking on this important topic: