Predicting user confidence during visual decision making
Smith, Jim; Legg, Phil; Matovis, Milos; Kinsey, Kris

Authors
Jim Smith James.Smith@uwe.ac.uk
Professor in Interactive Artificial Intelligence
Professor Phil Legg Phil.Legg@uwe.ac.uk
Professor in Cyber Security
Milos Matovis
Kris Kinsey Kris.Kinsey@uwe.ac.uk
Senior Lecturer in Psychology
Abstract
© 2018 ACM. People are not infallible, consistent “oracles”: their confidence in decision-making may vary significantly between tasks and over time. We have previously reported the benefits of using an interface and algorithms that explicitly captured and exploited users’ confidence: error rates were reduced by up to 50% for an industrial multi-class learning problem, and the number of interactions required in a design-optimisation context was reduced by 33%. Having access to users’ confidence judgements could significantly benefit intelligent interactive systems in industry, in areas such as intelligent tutoring systems, and in health care. There are many reasons for wanting to capture information about confidence implicitly. Some are ergonomic, but others are more “social”—such as wishing to understand (and possibly take account of) users’ cognitive state without interrupting them. We investigate the hypothesis that users’ confidence can be accurately predicted from measurements of their behaviour. Eye-tracking systems were used to capture users’ gaze patterns as they undertook a series of visual decision tasks, after each of which they reported their confidence on a 5-point Likert scale. Subsequently, predictive models were built using “conventional” machine learning approaches for numerical summary features derived from users’ behaviour. We also investigate the extent to which the deep learning paradigm can reduce the need to design features specific to each application by creating “gaze maps”—visual representations of the trajectories and durations of users’ gaze fixations—and then training deep convolutional networks on these images. Treating the prediction of user confidence as a two-class problem (confident/not confident), we attained classification accuracy of 88% for the scenario of new users on known tasks, and 87% for known users on new tasks.
Considering confidence as an ordinal variable, we produced regression models with a mean absolute error of ≈0.7 in both cases. Capturing just a simple subset of non-task-specific numerical features gave slightly worse, but still quite high, accuracy (e.g., MAE ≈ 1.0). Results obtained with gaze maps and convolutional networks are competitive, despite not having access to longer-term information about users and tasks, which was vital for the “summary” feature sets. This suggests that the gaze-map-based approach forms a viable, transferable alternative to handcrafting features for each different application. These results provide significant evidence to confirm our hypothesis, and offer a way of substantially improving many interactive artificial intelligence applications via the addition of cheap, non-intrusive hardware and computationally cheap prediction algorithms.
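The “gaze map” representation described in the abstract can be sketched in a few lines: fixations (screen position plus dwell time) are rasterised into an intensity image, with trajectory lines joining consecutive fixations. The following is an illustrative reconstruction, not the paper's actual rendering code; the image size, line weighting, and normalisation are assumptions.

```python
import numpy as np

def gaze_map(fixations, size=64):
    """Rasterise fixations, given as (x, y, duration) with x, y in [0, 1],
    into a size-by-size intensity image: duration-weighted dots joined by
    faint straight trajectory lines."""
    img = np.zeros((size, size), dtype=float)
    pts = [(int(round(y * (size - 1))), int(round(x * (size - 1))), d)
           for x, y, d in fixations]
    # Trajectory: interpolate straight lines between consecutive fixations.
    for (r0, c0, _), (r1, c1, _) in zip(pts, pts[1:]):
        n = max(abs(r1 - r0), abs(c1 - c0), 1)
        for t in np.linspace(0.0, 1.0, n + 1):
            img[int(round(r0 + t * (r1 - r0))),
                int(round(c0 + t * (c1 - c0)))] += 0.1
    # Fixation dots: brightness proportional to dwell time.
    for r, c, d in pts:
        img[r, c] += d
    return img / img.max()  # scale to [0, 1] as input to a CNN
```

Images produced this way can then be fed to any standard image classifier, which is what allows the approach to transfer across applications without handcrafted features.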
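For the “summary feature” route, the abstract describes conventional machine learning on numerical behavioural features, evaluated both as binary classification and as ordinal regression scored by mean absolute error. A minimal stand-in is sketched below, using a nearest-centroid classifier over hypothetical features (e.g. fixation count, mean dwell time) and the MAE metric; the models and feature sets actually used in the paper differ.

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean of behavioural summary features
    (feature choice here is hypothetical)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def mae(pred, true):
    """Mean absolute error, the metric quoted for ordinal confidence."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))
```

With Likert labels 1–5, the same pipeline supports both views: binarise labels (confident/not confident) for classification accuracy, or regress on the raw ordinal values and report MAE.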
Journal Article Type | Article |
---|---|
Acceptance Date | Feb 1, 2018 |
Online Publication Date | Jun 1, 2018 |
Publication Date | Jun 1, 2018 |
Deposit Date | Jan 26, 2018 |
Publicly Available Date | Mar 6, 2018 |
Journal | ACM Transactions on Interactive Intelligent Systems |
Print ISSN | 2160-6455 |
Electronic ISSN | 2160-6463 |
Publisher | Association for Computing Machinery (ACM) |
Peer Reviewed | Not Peer Reviewed |
Volume | 8 |
Issue | 2 |
Article Number | 10 |
DOI | https://doi.org/10.1145/3185524 |
Keywords | human-centred machine learning, confidence |
Public URL | https://uwe-repository.worktribe.com/output/867441 |
Publisher URL | https://doi.org/10.1145/3185524 |
Additional Information | © ACM, 2018. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Interactive Intelligent Systems, 8 (2), June 2018 |
Contract Date | Jan 26, 2018 |
Files
Predicting User's Confidence During Visual Decision Making- repository version.pdf
(1.8 MB)
PDF