Journal of Data Science,
Statistics, and Visualisation

Permutation-based visualisation of input and encoded space in autoencoders

Authors

  • Alan Inglis Maynooth University
  • Andrew Parnell Maynooth University

DOI:

https://doi.org/10.52933/jdssv.v5i2.118

Keywords:

Model visualisation, autoencoders, neural networks, variable importance

Abstract

Autoencoders, known for their capability to compress and reconstruct data, play a pivotal role in unsupervised learning tasks. Deciphering the importance of input features and encoded dimensions can help open up the black-box nature of autoencoders. This paper introduces a permutation importance method tailored for evaluating the importance of raw pixel values and encoded dimensions in image data processed by autoencoders. We apply permutation importance in two stages: first, on the original image data to assess the impact of each pixel on the encoded representations; and second, on the encoded space to determine the importance of each dimension in the reconstruction process for different image classes. Our approach reveals how variations in input feature importance affect the encoded representations, shedding light on the encoder's focus and potential biases. Experimental results on benchmark image datasets and on a larger case study concerning audio samples demonstrate the efficacy of our method, providing a novel perspective on evaluating feature importance in unsupervised learning scenarios and offering greater interpretability of the inner workings of autoencoders.
Our approach is implemented in the R package aim (Autoencoder Importance Mapping).
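The two-stage idea in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (in Python rather than R, and not the aim package's actual API): a toy linear encoder/decoder stands in for a trained autoencoder, stage 1 permutes each input feature and measures the mean shift in the encoded representations, and stage 2 permutes each encoded dimension and measures the mean increase in reconstruction error. All names and the toy weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder": input feature 1 dominates encoded dimension 0,
# and encoded dimension 1 is deliberately unused (all-zero weights).
W = np.array([[0.1, 0.0],
              [2.0, 0.0],
              [0.0, 0.0]])

def encode(X):
    return X @ W

def decode(Z):
    # Least-squares decoder via the pseudo-inverse of the encoder weights.
    return Z @ np.linalg.pinv(W)

def input_importance(X, rng):
    """Stage 1: permute each input feature across samples and measure the
    mean shift it causes in the encoded representations."""
    Z = encode(X)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = np.mean(np.linalg.norm(encode(Xp) - Z, axis=1))
    return scores

def encoded_importance(X, rng):
    """Stage 2: permute each encoded dimension and measure the mean
    increase in reconstruction error."""
    Z = encode(X)
    base = np.mean(np.linalg.norm(decode(Z) - X, axis=1))
    scores = np.empty(Z.shape[1])
    for k in range(Z.shape[1]):
        Zp = Z.copy()
        Zp[:, k] = rng.permutation(Zp[:, k])
        scores[k] = np.mean(np.linalg.norm(decode(Zp) - X, axis=1)) - base
    return scores

X = rng.normal(size=(500, 3))
s1 = input_importance(X, rng)    # feature 1 should dominate; feature 2 ~ 0
s2 = encoded_importance(X, rng)  # unused encoded dimension 1 scores ~ 0
```

In practice the importance scores would be computed per image class and mapped back onto the pixel grid for visualisation; the toy example only shows the permutation mechanics.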

Published

2025-04-07

How to Cite

Inglis, A., & Parnell, A. (2025). Permutation-based visualisation of input and encoded space in autoencoders. Journal of Data Science, Statistics, and Visualisation, 5(2). https://doi.org/10.52933/jdssv.v5i2.118