Examining profiles for robotic risk assessment: Does a robot's approach to risk affect user trust?

Bridgwater, Tom; Giuliani, Manuel; van Maris, Anouk; Baker, Greg; Winfield, Alan; Pipe, Tony

Authors

Tom Bridgwater (Tom.Bridgwater@uwe.ac.uk), Research Associate, Robot Behaviour Risk Assessment
Manuel Giuliani (Manuel.Giuliani@uwe.ac.uk), Co-Director, Bristol Robotics Laboratory
Anouk van Maris (Anouk.Vanmaris@uwe.ac.uk), Research Fellow, Responsible Robotics
Greg Baker
Alan Winfield (Alan.Winfield@uwe.ac.uk), Professor in Robotics
Tony Pipe (Anthony.Pipe@uwe.ac.uk), Professor
Abstract
© 2020 Association for Computing Machinery. As autonomous robots move towards ubiquity, the need for robots to make trustworthy decisions under risk becomes increasingly significant, both to aid acceptance and to fully utilise their autonomous capabilities. We propose that incorporating a human approach to risk assessment into a robot's decision-making process will increase user trust. This work investigates four robotic approaches to risk and, through a user study, explores the levels of trust placed in each. These approaches are: risk averse, risk seeking, risk neutral, and a human approach to risk. Risk is artificially stimulated through performance-based compensation, in line with previous studies. The study was conducted in a virtual nuclear environment created using the Unity game engine. Forty participants were asked to complete a robot supervision task, in which they observed a robot making risk-based decisions and were able to question the robot, question it further, and ultimately accept or alter its decision. It is shown that a risk-seeking robot is trusted significantly less than a risk-averse robot, a risk-neutral robot, or a robot utilising a human approach to risk. No significant difference was found between the levels of trust placed in the risk-averse, risk-neutral, and human approaches to risk. It is also found that the degree to which participants question a robot's decisions does not form an accurate measure of trust. The results suggest that when designing a robot that must make risk-based decisions during teleoperation in a hazardous environment, an engineer should avoid a risk-seeking robot. However, that same engineer may choose whichever of the remaining risk profiles best suits the implementation, in the knowledge that trust in their system is unlikely to be significantly affected.
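To make the four risk profiles concrete, the sketch below contrasts how each approach might rank a safe option against a gamble. This is a minimal illustration, not the decision model used in the paper: the worst-case/best-case rules for the averse and seeking profiles, the prospect-theory value function standing in for the "human approach to risk" (probability weighting omitted for brevity), and the example payoffs are all assumptions made here for clarity.

```python
# Illustrative sketch only: one way to contrast the four risk profiles the
# paper compares. The prospect-theory parameters follow Tversky & Kahneman's
# commonly cited estimates; the paper's actual decision model may differ.

def expected_value(outcomes):
    """Risk-neutral valuation: probability-weighted sum of outcomes."""
    return sum(p * x for p, x in outcomes)

def prospect_value(outcomes, alpha=0.88, beta=0.88, lam=2.25):
    """'Human' valuation: gains are valued concavely, losses are weighted
    more heavily (loss aversion), as in prospect theory."""
    total = 0.0
    for p, x in outcomes:
        v = x ** alpha if x >= 0 else -lam * ((-x) ** beta)
        total += p * v
    return total

def choose(option_a, option_b, profile):
    """Pick between two options, each a list of (probability, outcome) pairs."""
    if profile == "risk_neutral":
        score = expected_value
    elif profile == "human":
        score = prospect_value
    elif profile == "risk_averse":
        # Guard against the downside: score by the worst-case outcome.
        score = lambda o: min(x for _, x in o)
    elif profile == "risk_seeking":
        # Chase the upside: score by the best-case outcome.
        score = lambda o: max(x for _, x in o)
    else:
        raise ValueError(profile)
    return "A" if score(option_a) >= score(option_b) else "B"

# Hypothetical example: a certain reward vs. a gamble with a higher expected value.
safe = [(1.0, 40)]                  # certain +40
gamble = [(0.5, 100), (0.5, -10)]   # expected value +45, but can lose
for profile in ["risk_averse", "risk_neutral", "risk_seeking", "human"]:
    print(profile, "->", choose(safe, gamble, profile))
```

Under these assumptions, the risk-averse and "human" profiles prefer the certain reward, while the risk-neutral and risk-seeking profiles take the gamble, which is the kind of behavioural contrast the study exposes to participants.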
| Field | Value |
| --- | --- |
| Presentation Conference Type | Conference Paper (published) |
| Conference Name | ACM/IEEE International Conference on Human-Robot Interaction |
| Start Date | Mar 24, 2020 |
| Acceptance Date | Mar 24, 2020 |
| Online Publication Date | Mar 31, 2020 |
| Publication Date | Mar 31, 2020 |
| Deposit Date | Apr 8, 2020 |
| Pages | 23-31 |
| Series Title | HRI ’20 |
| Book Title | Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction |
| ISBN | 9781450367462 |
| DOI | https://doi.org/10.1145/3319502.3374804 |
| Keywords | hri, prospect theory, decision making, performance, trust, risk, nuclear |
| Public URL | https://uwe-repository.worktribe.com/output/5866121 |
| Publisher URL | https://doi.org/10.1145/3319502.3374804 |