
Markov information bottleneck to improve information flow in stochastic neural networks

Nguyen, Thanh Tang and Choi, Jaesik 2019, Markov information bottleneck to improve information flow in stochastic neural networks, Entropy, vol. 21, no. 10, doi: 10.3390/e21100976.


Title Markov information bottleneck to improve information flow in stochastic neural networks
Author(s) Nguyen, Thanh Tang
Choi, Jaesik
Journal name Entropy
Volume number 21
Issue number 10
Article ID 976
Total pages 21
Publisher MDPI
Place of publication Basel, Switzerland
Publication date 2019-10-01
ISSN 1099-4300
Keyword(s) Science & Technology
Physical Sciences
Physics, Multidisciplinary
Physics
information bottleneck
stochastic neural networks
variational inference
machine learning
Summary While rate distortion theory compresses data under a distortion constraint, the information bottleneck (IB) generalizes rate distortion theory to learning problems by replacing the distortion constraint with a constraint on relevant information. In this work, we further extend IB to multiple Markov bottlenecks (i.e., latent variables that form a Markov chain), namely the Markov information bottleneck (MIB), which fits the context of stochastic neural networks (SNNs) better than the original IB does. We show that the Markov bottlenecks cannot all simultaneously achieve their information optimality in a non-collapsed MIB, and we therefore devise an optimality compromise. With MIB, we take the novel perspective that each layer of an SNN is a bottleneck whose learning goal is to encode relevant information from the data in a compressed form. The inference from a hidden layer to the output layer is then interpreted as a variational approximation to that layer's decoding of relevant information in the MIB. As a consequence of this perspective, the maximum likelihood estimate (MLE) principle in the context of SNNs becomes a special case of the variational MIB. We show that, compared to MLE, the variational MIB encourages better information flow in SNNs both in principle and in practice, and empirically improves performance in classification, adversarial robustness, and multi-modal learning on MNIST. (A brief sketch of the underlying IB objective is given after the record fields below.)
Language eng
DOI 10.3390/e21100976
Indigenous content off
Field of Research 01 Mathematical Sciences
02 Physical Sciences
HERDC Research category C1 Refereed article in a scholarly journal
Copyright notice ©2019, The Authors
Free to Read? Yes
Persistent URL http://hdl.handle.net/10536/DRO/DU:30131346
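
Background sketch (not part of the record itself; conventional IB notation is assumed). The single-bottleneck IB objective that MIB generalizes is commonly written as the Lagrangian

    \min_{p(z|x)} \; I(X; Z) - \beta \, I(Z; Y),

where Z is the bottleneck variable, I(\cdot\,;\cdot) denotes mutual information, and \beta > 0 trades compression of the input X against retention of information relevant to the target Y. Per the summary above, MIB replaces the single Z with a Markov chain X \to Z_1 \to \cdots \to Z_L, one bottleneck per SNN layer. The variational treatment bounds the intractable relevance terms with a decoder likelihood, in the spirit of the standard bound

    I(Z; Y) \;\ge\; \mathbb{E}_{p(y,z)}[\log q(y \mid z)] + H(Y),

so maximizing the decoder likelihood alone, with the compression terms dropped, reduces to MLE, consistent with the summary's claim that MLE is a special case of the variational MIB.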

Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.

Created: Wed, 06 Nov 2019, 11:00:09 EST
