Multimodal Representation Learning for Place Recognition Using Deep Hebbian Predictive Coding

Pearson, Martin J.; Dora, Shirin; Struckmeier, Oliver; Knowles, Thomas C.; Mitchinson, Ben; Tiwari, Kshitij; Kyrki, Ville; Bohte, Sander; Pennartz, Cyriel M.A.


Abstract

Recognising familiar places is a competence required in many engineering applications that interact with the real world, such as robot navigation. Combining information from different sensory sources promotes the robustness and accuracy of place recognition. However, mismatches in data registration, dimensionality, and timing between modalities remain challenging problems in multisensory place recognition. Spurious data generated by sensor drop-out in multisensory environments is particularly problematic and often resolved through ad hoc and brittle solutions. An effective approach to these problems is demonstrated by animals as they gracefully move through the world. Therefore, we take a neuro-ethological approach by adopting self-supervised representation learning based on a neuroscientific model of visual cortex known as predictive coding. We demonstrate how this parsimonious network algorithm, which is trained using a local learning rule, can be extended to combine visual and tactile sensory cues from a biomimetic robot as it naturally explores a visually aliased environment. The place recognition performance obtained using joint latent representations generated by the network is significantly better than that achieved with contemporary representation learning techniques. Further, we see evidence of improved robustness of place recognition in the face of unimodal sensor drop-out. The proposed multimodal deep predictive coding algorithm is also linearly extensible to accommodate more than two sensory modalities, thereby providing an intriguing example of the value of neuro-biologically plausible representation learning for multimodal navigation.
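To illustrate the idea described in the abstract, the sketch below shows how a joint latent representation might be inferred from two modalities by minimising per-modality prediction errors, with weights updated by a local, Hebbian-style rule. This is a minimal single-layer sketch, not the authors' deep implementation: the variable names, linear generative weights, dimensionalities, and learning rates are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of multimodal predictive coding with local (Hebbian-like) updates.
# All names, shapes, and learning rates are illustrative assumptions, not the
# published network; the paper describes a deep, hierarchical version.

rng = np.random.default_rng(0)

D_VIS, D_TAC, D_LAT = 64, 32, 16             # assumed input/latent dimensions
W_vis = rng.normal(0, 0.1, (D_VIS, D_LAT))   # generative weights: latent -> visual
W_tac = rng.normal(0, 0.1, (D_TAC, D_LAT))   # generative weights: latent -> tactile

def infer_joint_latent(x_vis, x_tac, n_steps=50, lr_z=0.05):
    """Infer a shared latent by iteratively reducing the prediction errors
    of both modalities (gradient of the squared error w.r.t. the latent)."""
    z = np.zeros(D_LAT)
    for _ in range(n_steps):
        e_vis = x_vis - W_vis @ z            # visual prediction error
        e_tac = x_tac - W_tac @ z            # tactile prediction error
        z += lr_z * (W_vis.T @ e_vis + W_tac.T @ e_tac)
    return z, e_vis, e_tac

def hebbian_update(x_vis, x_tac, lr_w=0.01):
    """Local weight update: each synapse changes with the product of its
    post-synaptic prediction error and pre-synaptic latent activity."""
    global W_vis, W_tac
    z, e_vis, e_tac = infer_joint_latent(x_vis, x_tac)
    W_vis += lr_w * np.outer(e_vis, z)
    W_tac += lr_w * np.outer(e_tac, z)
    return z  # joint latent, usable as a place descriptor
```

Because each modality contributes its own error term to the shared latent, a further modality would simply add one more weight matrix and error term, which is one way to read the abstract's claim of linear extensibility.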

Citation

Pearson, M. J., Dora, S., Struckmeier, O., Knowles, T. C., Mitchinson, B., Tiwari, K., …Pennartz, C. M. (2021). Multimodal Representation Learning for Place Recognition Using Deep Hebbian Predictive Coding. Frontiers in Robotics and AI, 8, Article 732023. https://doi.org/10.3389/frobt.2021.732023

Journal Article Type: Article
Acceptance Date: Nov 19, 2021
Online Publication Date: Dec 13, 2021
Publication Date: Dec 13, 2021
Deposit Date: Dec 13, 2021
Publicly Available Date: Mar 29, 2024
Journal: Frontiers in Robotics and AI
Electronic ISSN: 2296-9144
Publisher: Frontiers Media
Peer Reviewed: Yes
Volume: 8
Article Number: 732023
DOI: https://doi.org/10.3389/frobt.2021.732023
Keywords: Artificial Intelligence; Computer Science Applications
Public URL: https://uwe-repository.worktribe.com/output/8260284
