Visual analytics for collaborative human-machine confidence in human-centric active learning tasks

Legg, Phil; Smith, Jim; Downing, Alexander



Dr Phil Legg
Associate Professor in Cyber Security

Jim Smith
Professor in Interactive Artificial Intelligence

Alexander Downing


Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, which can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when it is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually transparent collaboration between two uncertain actors - a person and a machine - that leads to an improved outcome?
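The query-the-oracle loop described above can be sketched as uncertainty-based sampling: the machine predicts when its class probability is high and defers to the human otherwise. This is a minimal illustration, not the authors' tool; the data, the classifier choice, and the confidence threshold are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small labelled seed set for the initial weak classifier (illustrative data).
X_seed = rng.normal([[-2.0, 0.0]] * 20 + [[2.0, 0.0]] * 20, 1.0)
y_seed = np.array([0] * 20 + [1] * 20)
clf = LogisticRegression().fit(X_seed, y_seed)

# Stream of unlabelled samples: predict when confident, otherwise query the oracle.
X_stream = rng.normal(0.0, 2.0, size=(200, 2))
machine_conf = clf.predict_proba(X_stream).max(axis=1)  # machine confidence per sample

CONF_THRESHOLD = 0.75  # illustrative threshold, not taken from the paper
predicted = machine_conf >= CONF_THRESHOLD
queried_for_oracle = np.where(~predicted)[0]  # indices deferred to the human oracle
```

Samples near the decision boundary have low maximum class probability and so end up in `queried_for_oracle`; confident samples are labelled automatically.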

In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding, and trust of the learning process through human-machine collaboration. Confidence is fundamental to the tool: both parties can report their level of confidence during active learning tasks, and these reports are used to inform learning. Human confidence in labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report the confidence of its current predictions to the human, furthering trust and transparency between the collaborative parties. In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided, whether due to low confidence or fatigue. Reported confidences can also better inform human-machine sample selection. Our experimentation compares the impact of different strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in model robustness, achieving high accuracy and low user correction with only limited data sample selections.
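One way the machine can account for human confidence in labels, as described above, is to weight queried samples by the oracle's self-reported confidence when retraining. The sketch below uses scikit-learn's `sample_weight` for this; the stand-in oracle, the data, and the confidence range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Labelled seed set and a pool of samples previously queried from the oracle
# (all data here is synthetic and illustrative).
X_seed = rng.normal([[-2.0, 0.0]] * 20 + [[2.0, 0.0]] * 20, 1.0)
y_seed = np.array([0] * 20 + [1] * 20)
X_queried = rng.normal(0.0, 2.0, size=(30, 2))

# Stand-in oracle: returns a label plus a self-reported confidence in [0, 1].
oracle_labels = (X_queried[:, 0] > 0).astype(int)
oracle_conf = rng.uniform(0.4, 1.0, size=len(X_queried))

# Retrain with human confidence as a per-sample weight: unconfident labels
# contribute less to the updated classifier, seed labels keep full weight.
X_all = np.vstack([X_seed, X_queried])
y_all = np.concatenate([y_seed, oracle_labels])
weights = np.concatenate([np.ones(len(y_seed)), oracle_conf])
clf = LogisticRegression().fit(X_all, y_all, sample_weight=weights)
```

Down-weighting low-confidence labels is one simple route to the robustness against incorrect labels that the abstract reports; a label given with low confidence moves the decision boundary less than one given with full confidence.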


Legg, P., Smith, J., & Downing, A. (2019). Visual analytics for collaborative human-machine confidence in human-centric active learning tasks. Human-centric Computing and Information Sciences, 9, Article 5.

Journal Article Type Article
Acceptance Date Jan 28, 2019
Online Publication Date Feb 14, 2019
Publication Date Dec 1, 2019
Journal Human-centric Computing and Information Sciences
Electronic ISSN 2192-1962
Publisher SpringerOpen
Peer Reviewed Peer Reviewed
Volume 9
Article Number 5
Keywords visual knowledge discovery, data clustering, active machine learning, human-machine collaboration

