To err is robot: How humans assess and act toward an erroneous social robot

Mirnig, Nicole; Stollnberger, Gerald; Miksch, Markus; Stadler, Susanne; Giuliani, Manuel; Tscheligi, Manfred


Authors

Nicole Mirnig

Gerald Stollnberger

Markus Miksch

Susanne Stadler

Manuel Giuliani Manuel.Giuliani@uwe.ac.uk
Co-Director, Bristol Robotics Laboratory

Manfred Tscheligi



Abstract

© 2017 Mirnig, Stollnberger, Miksch, Stadler, Giuliani and Tscheligi. We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. Our aim was to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors, in which we detected typical error situations and the resulting social signals of our participants during social human-robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the user study reported herein we purposefully elicited robot errors to further explore the social signals of the human interaction partners following a robot error. Our participants interacted with a human-like NAO robot that behaved either faultily or error-free. First, the robot asked the participants a set of predefined questions; then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence, and we interviewed them about their opinion of the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot, as well as the answers they gave the robot. Our results show that participants liked the faulty robot significantly more than the robot that interacted flawlessly. We found no significant differences in people's ratings of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results, showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior. In contrast to existing research, we assess dimensions of user experience that have not been considered so far, and we analyze the reactions users express when a robot makes a mistake. Our results show that decoding a human's social signals can help the robot understand that an error has occurred and subsequently react accordingly.

Journal Article Type Article
Acceptance Date May 10, 2017
Publication Date May 1, 2017
Deposit Date May 31, 2017
Publicly Available Date May 31, 2017
Journal Frontiers Robotics AI
Electronic ISSN 2296-9144
Publisher Frontiers Media
Peer Reviewed Peer Reviewed
Volume 4
Issue May
Pages 21
DOI https://doi.org/10.3389/frobt.2017.00021
Keywords social human–robot interaction, robot errors, user experience, social signals, likeability, faulty robots, error situations, Pratfall Effect
Public URL https://uwe-repository.worktribe.com/output/886930
Publisher URL http://dx.doi.org/10.3389/frobt.2017.00021
Additional Information This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission.
Contract Date May 31, 2017
