Governing AI safety through independent audits
Falco, Gregory; Shneiderman, Ben; Badger, Julia; Carrier, Ryan; Dahbura, Anton; Danks, David; Eling, Martin; Goodloe, Alwyn; Gupta, Jerry; Hart, Christopher; Jirotka, Marina; Johnson, Henric; LaPointe, Cara; Llorens, Ashley J.; Mackworth, Alan K.; Maple, Carsten; Pálsson, Sigurður Emil; Pasquale, Frank; Winfield, Alan; Yeong, Zee Kin
Highly automated systems are becoming omnipresent. They range in function from self-driving vehicles to advanced medical diagnostics and afford many benefits. However, there are assurance challenges that have become increasingly visible in high-profile crashes and incidents. Governance of such systems is critical to garner widespread public trust. Governance principles have been previously proposed offering aspirational guidance to automated system developers; however, their implementation is often impractical given the excessive costs and processes required to enact and then enforce the principles. This Perspective, authored by an international and multidisciplinary team across government organizations, industry and academia, proposes a mechanism to drive widespread assurance of highly automated systems: independent audit. As proposed, independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge.
Falco, G., Shneiderman, B., Badger, J., Carrier, R., Dahbura, A., Danks, D., …Yeong, Z. K. (2021). Governing AI safety through independent audits. Nature Machine Intelligence, 3(7), 566-571. https://doi.org/10.1038/s42256-021-00370-7
Journal Article Type: Article
Acceptance Date: Jun 8, 2021
Online Publication Date: Jul 20, 2021
Deposit Date: Jul 21, 2021
Publicly Available Date: Jan 21, 2022
Journal: Nature Machine Intelligence
Peer Reviewed: Peer Reviewed
This file is under embargo until Jan 21, 2022 due to copyright reasons.
Contact Alan.Winfield@uwe.ac.uk to request a copy for personal use.