
Model reduction through progressive latent space pruning in deep active inference

Abstract
Although still not fully understood, sleep is known to play an important role in learning and in pruning synaptic connections. From the active inference perspective, this can be cast as learning parameters of a generative model and Bayesian model reduction, respectively. In this article, we show how to reduce dimensionality of the latent space of such a generative model, and hence model complexity, in deep active inference during training through a similar process. While deep active inference uses deep neural networks for state space construction, an issue remains in that the dimensionality of the latent space must be specified beforehand. We investigate two methods that are able to prune the latent space of deep active inference models. The first approach functions similar to sleep and performs model reduction post hoc. The second approach is a novel method which is more similar to reflection, operates during training and displays "aha" moments when the model is able to reduce latent space dimensionality. We show for two well-known simulated environments that model performance is retained in the first approach and only diminishes slightly in the second approach. We also show that reconstructions from a real world example are indistinguishable before and after reduction. We conclude that the most important difference constitutes a trade-off between training time and model performance in terms of accuracy and the ability to generalize, via minimization of model complexity.
Keywords
SLEEP, DIMENSIONALITY, active inference, free energy, deep learning, model reduction, generative modeling
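
The abstract above describes pruning latent dimensions of a deep generative model, either post hoc ("sleep") or during training ("reflection"). As a rough, hypothetical sketch (not the authors' implementation), the snippet below assumes a diagonal Gaussian posterior q(z|x) and a standard-normal prior p(z), and flags dimensions whose average per-dimension KL divergence is negligible as candidates for pruning; the threshold and toy data are illustrative assumptions.

# Hypothetical sketch of post-hoc latent space pruning (not the paper's code).
# Assumption: posterior q(z|x) = N(mu, diag(sigma^2)), prior p(z) = N(0, I).
import torch

def per_dim_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL(q || p) per latent dimension, averaged over the batch."""
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)  # shape (batch, dim)
    return kl.mean(dim=0)                                  # shape (dim,)

def prune_mask(mu: torch.Tensor, logvar: torch.Tensor, threshold: float = 1e-2) -> torch.Tensor:
    """Boolean mask of dimensions to keep: those whose average KL exceeds the threshold."""
    return per_dim_kl(mu, logvar) > threshold

if __name__ == "__main__":
    torch.manual_seed(0)
    batch = 256
    # Toy posterior statistics: dimensions 0-3 are informative, 4-7 collapse to the prior.
    mu = torch.cat([torch.randn(batch, 4), 0.01 * torch.randn(batch, 4)], dim=1)
    logvar = torch.cat([torch.full((batch, 4), -1.0), torch.zeros(batch, 4)], dim=1)

    keep = prune_mask(mu, logvar)
    print("per-dimension KL:", [round(v, 3) for v in per_dim_kl(mu, logvar).tolist()])
    print("dimensions kept:", keep.nonzero(as_tuple=True)[0].tolist())

In a full deep active inference model, the resulting mask would be used to drop the corresponding rows and columns of the encoder, decoder, and transition networks, which is where the reduction in latent dimensionality, and hence model complexity, would come from.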

Downloads

  • 8068.pdf — full text (Published version) | open access | PDF | 2.67 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Wauthier, Samuel, et al. “Model Reduction through Progressive Latent Space Pruning in Deep Active Inference.” FRONTIERS IN NEUROROBOTICS, vol. 16, 2022, doi:10.3389/fnbot.2022.795846.
APA
Wauthier, S., De Boom, C., Catal, O., Verbelen, T., & Dhoedt, B. (2022). Model reduction through progressive latent space pruning in deep active inference. FRONTIERS IN NEUROROBOTICS, 16. https://doi.org/10.3389/fnbot.2022.795846
Chicago author-date
Wauthier, Samuel, Cedric De Boom, Ozan Catal, Tim Verbelen, and Bart Dhoedt. 2022. “Model Reduction through Progressive Latent Space Pruning in Deep Active Inference.” FRONTIERS IN NEUROROBOTICS 16. https://doi.org/10.3389/fnbot.2022.795846.
Chicago author-date (all authors)
Wauthier, Samuel, Cedric De Boom, Ozan Catal, Tim Verbelen, and Bart Dhoedt. 2022. “Model Reduction through Progressive Latent Space Pruning in Deep Active Inference.” FRONTIERS IN NEUROROBOTICS 16. doi:10.3389/fnbot.2022.795846.
Vancouver
1.
Wauthier S, De Boom C, Catal O, Verbelen T, Dhoedt B. Model reduction through progressive latent space pruning in deep active inference. FRONTIERS IN NEUROROBOTICS. 2022;16.
IEEE
[1]
S. Wauthier, C. De Boom, O. Catal, T. Verbelen, and B. Dhoedt, “Model reduction through progressive latent space pruning in deep active inference,” FRONTIERS IN NEUROROBOTICS, vol. 16, 2022.
@article{8754597,
  abstract     = {{Although still not fully understood, sleep is known to play an important role in learning and in pruning synaptic connections. From the active inference perspective, this can be cast as learning parameters of a generative model and Bayesian model reduction, respectively. In this article, we show how to reduce dimensionality of the latent space of such a generative model, and hence model complexity, in deep active inference during training through a similar process. While deep active inference uses deep neural networks for state space construction, an issue remains in that the dimensionality of the latent space must be specified beforehand. We investigate two methods that are able to prune the latent space of deep active inference models. The first approach functions similar to sleep and performs model reduction post hoc. The second approach is a novel method which is more similar to reflection, operates during training and displays "aha" moments when the model is able to reduce latent space dimensionality. We show for two well-known simulated environments that model performance is retained in the first approach and only diminishes slightly in the second approach. We also show that reconstructions from a real world example are indistinguishable before and after reduction. We conclude that the most important difference constitutes a trade-off between training time and model performance in terms of accuracy and the ability to generalize, via minimization of model complexity.}},
  articleno    = {{795846}},
  author       = {{Wauthier, Samuel and De Boom, Cedric and Catal, Ozan and Verbelen, Tim and Dhoedt, Bart}},
  issn         = {{1662-5218}},
  journal      = {{FRONTIERS IN NEUROROBOTICS}},
  keywords     = {{SLEEP,DIMENSIONALITY,active inference,free energy,deep learning,model reduction,generative modeling}},
  language     = {{eng}},
  pages        = {{16}},
  title        = {{Model reduction through progressive latent space pruning in deep active inference}},
  url          = {{https://doi.org/10.3389/fnbot.2022.795846}},
  volume       = {{16}},
  year         = {{2022}},
}
