Examining profiles for robotic risk assessment: Does a robot's approach to risk affect user trust?

Bridgwater, Tom; Giuliani, Manuel; van Maris, Anouk; Baker, Greg; Winfield, Alan; Pipe, Tony

Authors

Tom Bridgwater Tom.Bridgwater@uwe.ac.uk
Research Associate - Robot Behaviour Risk Assessment

Greg Baker



Abstract

© 2020 Association for Computing Machinery.

As autonomous robots move towards ubiquity, the need for robots to make trustworthy decisions under risk becomes increasingly significant, both to aid acceptance and to fully utilise their autonomous capabilities. We propose that incorporating a human approach to risk assessment into a robot's decision-making process will increase user trust. This work investigates four robotic approaches to risk and, through a user study, explores the levels of trust placed in each. These approaches are: risk averse, risk seeking, risk neutral, and a human approach to risk. Risk is artificially stimulated through performance-based compensation, in line with previous studies. The study was conducted in a virtual nuclear environment created using the Unity game engine. Forty participants were asked to complete a robot supervision task, in which they observed a robot making risk-based decisions and were able to question the robot, question it further, and ultimately accept or alter its decision. It is shown that a risk-seeking robot is significantly less trusted than a risk-averse robot, a risk-neutral robot, and a robot utilising a human approach to risk. No significant difference was found between the levels of trust placed in the risk-averse, risk-neutral, and human-approach-to-risk robots. It is also found that the extent to which participants question a robot's decisions does not form an accurate measure of trust. The results suggest that when designing a robot that must make risk-based decisions during teleoperation in a hazardous environment, an engineer should avoid a risk-seeking profile. However, that same engineer may choose whichever of the remaining risk profiles best suits the implementation, in the knowledge that trust in their system is unlikely to be significantly affected.
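The four risk profiles named in the abstract correspond to standard decision-theoretic treatments of choice under risk (the keywords point to prospect theory for the human approach). As a rough illustration only, and not the authors' implementation, the Python sketch below shows one conventional way the profiles could rank the same gamble: linear utility for risk neutral, concave utility for risk averse, convex utility for risk seeking, and a Tversky-Kahneman value function with probability weighting for the human approach. The functions, parameter values, payoffs, and the score helper are all assumptions made for illustration.

# Illustrative sketch only: the paper's implementation is not published here, so the
# utility/value functions, parameter values, payoffs, and helpers below are assumptions
# used to show how four standard risk profiles could rank the same gamble.
import math

def u_neutral(x):
    # Risk neutral: utility is linear in the outcome (pure expected value).
    return x

def u_averse(x, a=0.01):
    # Risk averse: concave (exponential) utility discounts large payoffs.
    return 1 - math.exp(-a * x)

def u_seeking(x, a=0.01):
    # Risk seeking: convex utility rewards large payoffs disproportionately.
    return math.exp(a * x) - 1

def v_prospect(x, alpha=0.88, beta=0.88, lam=2.25):
    # Prospect-theory value function: concave for gains, convex and steeper
    # (loss aversion) for losses, with Tversky & Kahneman (1992) parameters.
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def w_prospect(p, gamma=0.61):
    # Probability weighting: overweights small and underweights large probabilities.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def score(option, utility, weight=lambda p: p):
    # Score a gamble given as a list of (probability, outcome) pairs.
    return sum(weight(p) * utility(x) for p, x in option)

# Hypothetical choice: a guaranteed payoff vs. a risky shortcut (compensation units).
safe = [(1.0, 40)]
risky = [(0.5, 110), (0.5, -20)]

profiles = {
    "risk neutral": lambda o: score(o, u_neutral),
    "risk averse": lambda o: score(o, u_averse),
    "risk seeking": lambda o: score(o, u_seeking),
    "human (prospect theory)": lambda o: score(o, v_prospect, w_prospect),
}

for name, evaluate in profiles.items():
    pick = "risky" if evaluate(risky) > evaluate(safe) else "safe"
    print(f"{name:24s} -> {pick}")

# With these numbers the risk-neutral and risk-seeking profiles take the risky
# option, while the risk-averse and prospect-theory profiles prefer the safe one.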

Citation

Bridgwater, T., Giuliani, M., van Maris, A., Baker, G., Winfield, A., & Pipe, T. (2020). Examining profiles for robotic risk assessment: Does a robot's approach to risk affect user trust? In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 23-31). https://doi.org/10.1145/3319502.3374804

Conference Name ACM/IEEE International Conference on Human-Robot Interaction
Conference Location Cambridge, UK
Start Date Mar 24, 2020
Acceptance Date Mar 24, 2020
Online Publication Date Mar 31, 2020
Publication Date Mar 31, 2020
Deposit Date Apr 8, 2020
Pages 23-31
Series Title HRI ’20
Book Title Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
ISBN 9781450367462
DOI https://doi.org/10.1145/3319502.3374804
Keywords hri, prospect theory, decision making, performance, trust, risk, nuclear
Public URL https://uwe-repository.worktribe.com/output/5866121
Publisher URL https://doi.org/10.1145/3319502.3374804