
What's up? - Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant

Chance, Greg; Caleb-Solly, Praminda; Jevtic, Aleksandar; Dogramadzi, Sanja



Authors

Greg Chance

Praminda Caleb-Solly

Aleksandar Jevtic

Sanja Dogramadzi



Abstract

© 2017 IEEE. Robots that can assist in activities of daily living (ADL), such as dressing, need to be capable of intuitive and safe interaction. Vision systems are often used to provide information on the position and movement of the robot and user. However, in a dressing context, technical complexity, occlusion and concerns over user privacy push research to investigate other approaches for human-robot interaction (HRI). We analysed verbal, proprioceptive and force feedback from 18 participants during a human-human dressing experiment in which users received dressing assistance from a researcher mimicking robot behaviour. This paper investigates the occurrence of deictic speech in an assisted-dressing task and how any ambiguity could be resolved to ensure safe and reliable HRI. We focus on one of the most frequently occurring deictic words, 'up', which was captured over 300 times during the experiments and is used as an example of an ambiguous command. We attempt to resolve the ambiguity of these commands through predictive models. These models were used to predict end-effector choice and the direction in which the garment should move. The model for predicting end-effector choice achieved 70.4% accuracy based on the user's head orientation. For predicting garment direction, the model used the angle of the user's arm and achieved 87.8% accuracy. We also found that additional categories, such as the starting position of the user's arms and end-effector height, may improve the accuracy of a predictive model. We present suggestions on how these inputs may be obtained through non-visual means, for example through haptic perception of end-effector position, proximity sensors and acoustic source localisation.
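The record does not specify the model family or feature encoding used for the predictive models, so the following is only a minimal sketch of the approach described in the abstract: one classifier mapping the user's head orientation to an end-effector choice, and another mapping the user's arm angle to a garment direction. The choice of logistic regression, the feature ranges and the synthetic training data are all illustrative assumptions, not the authors' implementation, and the printed accuracies are not the figures reported in the paper.

# Illustrative sketch only: model family, features and data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: head yaw angle (degrees) recorded when the user says "up",
# labelled with the end effector the command referred to (0 = left, 1 = right).
head_yaw = rng.normal(loc=[-20.0, 20.0], scale=15.0, size=(100, 2)).T.reshape(-1, 1)
end_effector = np.repeat([0, 1], 100)

clf = LogisticRegression()
print("End-effector accuracy (synthetic data):",
      cross_val_score(clf, head_yaw, end_effector, cv=5).mean())

# Hypothetical data: arm angle (degrees from horizontal) when the user says "up",
# labelled with the intended garment direction (0 = along the arm, 1 = upward).
arm_angle = rng.normal(loc=[10.0, 70.0], scale=20.0, size=(100, 2)).T.reshape(-1, 1)
garment_direction = np.repeat([0, 1], 100)

print("Garment-direction accuracy (synthetic data):",
      cross_val_score(clf, arm_angle, garment_direction, cv=5).mean())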

Citation

Chance, G., Caleb-Solly, P., Jevtic, A., & Dogramadzi, S. (2017). What's up? - Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 284-291). https://doi.org/10.1109/ROMAN.2017.8172315

Conference Name RO-MAN 2017 - 26th IEEE International Symposium on Robot and Human Interactive Communication
Acceptance Date May 30, 2017
Publication Date Dec 8, 2017
Deposit Date Jan 9, 2018
Publicly Available Date Feb 28, 2018
Volume 2017-January
Pages 284-291
Book Title 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
DOI https://doi.org/10.1109/ROMAN.2017.8172315
Keywords assistive robotics, predictive models, robot sensing systems
Public URL https://uwe-repository.worktribe.com/output/901471
Publisher URL http://dx.doi.org/10.1109/ROMAN.2017.8172315
Additional Information: (c) 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Title of Conference or Conference Proceedings: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
