
Governing AI safety through independent audits

Falco, Gregory; Shneiderman, Ben; Badger, Julia; Carrier, Ryan; Dahbura, Anton; Danks, David; Eling, Martin; Goodloe, Alwyn; Gupta, Jerry; Hart, Christopher; Jirotka, Marina; Johnson, Henric; LaPointe, Cara; Llorens, Ashley J.; Mackworth, Alan K.; Maple, Carsten; Pálsson, Sigurður Emil; Pasquale, Frank; Winfield, Alan; Yeong, Zee Kin

Highly automated systems are becoming omnipresent. They range in function from self-driving vehicles to advanced medical diagnostics and afford many benefits. However, there are assurance challenges that have become increasingly visible in high-profile crashes and incidents. Governance of such systems is critical to garner widespread public trust. Governance principles have been previously proposed offering aspirational guidance to automated system developers; however, their implementation is often impractical given the excessive costs and processes required to enact and then enforce the principles. This Perspective, authored by an international and multidisciplinary team across government organizations, industry and academia, proposes a mechanism to drive widespread assurance of highly automated systems: independent audit. As proposed, independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge.


Falco, G., Shneiderman, B., Badger, J., Carrier, R., Dahbura, A., Danks, D., …Yeong, Z. K. (2021). Governing AI safety through independent audits. Nature Machine Intelligence, 3(7), 566-571.

Journal Article Type: Article
Acceptance Date: Jun 8, 2021
Online Publication Date: Jul 20, 2021
Publication Date: Jul 20, 2021
Deposit Date: Jul 21, 2021
Publicly Available Date: Jan 21, 2022
Journal: Nature Machine Intelligence
Print ISSN: 2522-5839
Electronic ISSN: 2522-5839
Publisher: Nature Research
Peer Reviewed: Yes
Volume: 3
Issue: 7
Pages: 566-571

