
The duty to take precautions in hostilities, and the disobeying of orders: Should robots refuse?

Pollard, Mike; Grimal, Francis


Abstract

This Article not only questions whether an embodied artificial intelligence ("EAI") could give an order to a human combatant but, more controversially, examines whether it should also refuse one. A future EAI may be capable of refusing to follow an order where, for example, the order appeared manifestly unlawful, was otherwise in breach of International Humanitarian Law ("IHL") or national Rules of Engagement ("ROE"), or even appeared immoral or unethical. Such an argument has traction in the strategic realm in terms of "system of systems": the premise that more advanced technology can potentially help overcome Clausewitzian "friction," or the "fog of war." An aircraft's anti-stall mechanism, which takes over and corrects human error, is seen as nothing less than "positive."

As part of opening this much-needed discussion, the Authors examine the legal parameters and, by way of a solution, provide a framework for overriding and disobeying orders. Central to this discussion are state-specific ROE within the concept of the "duty to take precautions." At present, the guidelines governing a human combatant's right to disobey orders are contained within such doctrine but vary widely. For example, in the United States, a soldier may disobey an order only when the act in question is clearly unlawful. In direct contrast, Germany's "state practice" requires orders to be compatible with the much wider concept of human dignity, and to be of "use for service."

By way of a solution, the Authors propose a test referred to as "robot rules of engagement" ("RROE"), with specific regard to the disobeying of orders. These RROE ensure, via a multi-stage verification process, that an EAI can discount human "traits" and minimize errors that lead to breaches of IHL. In the broader sense, the Authors question whether warfare should remain an utterly human preserve, in which human error is an unintended but unfortunate consequence, or whether the duty to take all feasible precautions in attack in fact requires a human commander to utilize available AI systems to routinely question human decision-making and, where applicable, prevent mistakes. In short, the Article examines whether human error can be corrected and overridden for the better, rather than for the worse.

Journal Article Type: Article
Acceptance Date: Sep 26, 2020
Publication Date: Mar 5, 2021
Deposit Date: Apr 13, 2023
Journal: Fordham International Law Journal
Peer Reviewed: Yes
Volume: 44
Issue: 3
Pages: 671-734
Keywords: International Law; International Humanitarian Law; Artificial Intelligence; Refusal of Orders; Combatants' Duty
Public URL: https://uwe-repository.worktribe.com/output/10623078
Publisher URL: https://ir.lawnet.fordham.edu/ilj/vol44/iss3/3/