Visual analytics for collaborative human-machine confidence in human-centric active learning tasks

Legg, Phil; Smith, Jim; Downing, Alexander

Authors

Jim Smith James.Smith@uwe.ac.uk
Professor in Interactive Artificial Intelligence

Alexander Downing



Abstract

Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, which can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction or query a human ‘oracle’ when it is not confident in its prediction. Of course, just as the machine may lack confidence, so may the human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question: how can uncertainty be expressed and accounted for in a human-machine collaboration? In short, how can we facilitate a mutually transparent collaboration between two uncertain actors, a person and a machine, that leads to an improved outcome?
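
To make this querying behaviour concrete, the following is a minimal sketch, not taken from the paper, in which a classifier defers to a human ‘oracle’ whenever its predictive confidence falls below a threshold; the dataset, model, and threshold value are illustrative assumptions.

# Minimal sketch (assumptions, not the paper's code): the machine keeps its own
# prediction when confident, and queries the human 'oracle' otherwise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_seed, y_seed = X[:20], y[:20]          # small labelled seed set
X_stream, y_stream = X[20:], y[20:]      # unlabelled samples observed over time

model = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)
THRESHOLD = 0.75                         # assumed machine-confidence threshold

X_lab, y_lab = list(X_seed), list(y_seed)
for x, oracle_label in zip(X_stream, y_stream):
    confidence = model.predict_proba(x.reshape(1, -1)).max()
    if confidence < THRESHOLD:
        # Machine is unsure: query the 'oracle' (simulated here by the true label)
        X_lab.append(x)
        y_lab.append(oracle_label)
        model.fit(np.array(X_lab), np.array(y_lab))
    # Otherwise the machine's own prediction is used without querying the human.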

In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding, and trust in the learning process through human-machine collaboration. The notion of confidence is fundamental to the tool: both parties can report their level of confidence during active learning tasks, and these reports are used to inform learning. Human confidence in labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report the confidence of its current predictions to the human, furthering trust and transparency between the collaborating parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided due to low confidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different strategies for selecting samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in model robustness, achieving high accuracy and low user correction with only a limited number of sample selections.
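
As a purely illustrative sketch of how reported human confidence could inform learning (an assumption, not the authors' implementation), the human's per-label confidence can be passed as a sample weight when retraining, so that labels given under low confidence or fatigue carry less influence.

# Minimal sketch (assumed data and weighting scheme): human-reported confidence
# in [0, 1] is used as a per-sample weight when fitting the classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)                      # toy labels provided by the human
human_confidence = rng.uniform(0.3, 1.0, size=200) # hypothetical reported confidences

# Low-confidence (possibly erroneous) labels contribute less to the fit,
# which helps keep the classifier robust to mistaken labels.
model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=human_confidence)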

Citation

Legg, P., Smith, J., & Downing, A. (2019). Visual analytics for collaborative human-machine confidence in human-centric active learning tasks. Human-Centric Computing and Information Sciences, 9, Article 5. https://doi.org/10.1186/s13673-019-0167-8

Journal Article Type: Article
Acceptance Date: Jan 28, 2019
Online Publication Date: Feb 14, 2019
Publication Date: Feb 14, 2019
Deposit Date: Jan 30, 2019
Publicly Available Date: Feb 14, 2019
Journal: Human-centric Computing and Information Sciences
Electronic ISSN: 2192-1962
Publisher: SpringerOpen
Peer Reviewed: Yes
Volume: 9
Article Number: 5
DOI: https://doi.org/10.1186/s13673-019-0167-8
Keywords: visual knowledge discovery, data clustering, active machine learning, human-machine collaboration
Public URL: https://uwe-repository.worktribe.com/output/854795
Publisher URL: https://doi.org/10.1186/s13673-019-0167-8
