
Visualising state space representations of LSTM networks

Smith, Emmanuel M.; Smith, Jim; Legg, Phil; Francis, Simon

Authors

Emmanuel M. Smith

Jim Smith James.Smith@uwe.ac.uk
Professor in Interactive Artificial Intelligence

Phil Legg

Simon Francis



Abstract

Long Short-Term Memory (LSTM) networks have proven to be one of the most effective models for making predictions on sequence-based tasks. These models work by capturing, remembering, and forgetting information relevant to their future predictions. The non-linear complexity of the mechanisms involved in this process means we currently lack tools for achieving interpretability. Ideally, we want these models to provide an explanation of why they make a particular prediction, given a specific input. Researchers have explored interpreting LSTMs in specific contexts such as natural language processing or classification, but have placed minimal focus on approaches that generalise across different applications. To address this, we demonstrate a method that enables the interpretation and comparison of LSTM states during time-series predictions. We show that by reducing the dimensionality of network states, one can scalably visualise patterns and explain model behaviours.
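
As a rough sketch of the approach described in the abstract, the Python snippet below records an LSTM's hidden state at every time step of a toy input sequence and projects those high-dimensional states to two dimensions so the state-space trajectory can be plotted. The use of PyTorch, PCA, and a sine-wave input are illustrative assumptions for this sketch, not the authors' actual pipeline or data.

# Minimal sketch: collect hidden states from an LSTM over a sequence and
# project them to 2D for visualisation. All sizes and data are illustrative.
import math

import torch
import torch.nn as nn
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

torch.manual_seed(0)

lstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)

# Toy input: a sine wave fed to the network one value per time step.
t = torch.linspace(0, 8 * math.pi, steps=200)
x = torch.sin(t).reshape(1, -1, 1)              # shape: (batch, time, features)

with torch.no_grad():
    outputs, _ = lstm(x)                        # shape: (1, time, hidden_size)
states = outputs.squeeze(0).numpy()             # one hidden-state vector per step

# Reduce the 64-dimensional state trajectory to two principal components.
projected = PCA(n_components=2).fit_transform(states)

plt.plot(projected[:, 0], projected[:, 1], marker=".", linewidth=0.5)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("2D projection of an LSTM hidden-state trajectory")
plt.show()

Colouring each projected point by time step or by the corresponding input value then makes it possible to relate regions of the projected state space to model behaviour, which is the kind of pattern inspection the abstract refers to.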

Citation

Smith, E. M., Smith, J., Legg, P., & Francis, S. (2018). Visualising state space representations of LSTM networks. Presented at the Workshop on Visualization for AI Explainability, Berlin, Germany.

Presentation Conference Type: Other
Conference Name: Workshop on Visualization for AI Explainability
Conference Location: Berlin, Germany
Acceptance Date: Aug 2, 2018
Deposit Date: Sep 19, 2018
Publicly Available Date: Sep 19, 2018
Peer Reviewed: Not Peer Reviewed
Public URL: https://uwe-repository.worktribe.com/output/863245
Additional Information: Title of Conference or Conference Proceedings: Workshop on Visualization for AI Explainability