Learning to remember more with less memorization

Le, Hung, Tran, Truyen and Venkatesh, Svetha 2019, Learning to remember more with less memorization, in ICLR 2019: Proceedings of the 7th International Conference on Learning Representations, ICLR, [New Orleans, Louisiana].

Title Learning to remember more with less memorization
Author(s) Le, Hung
Tran, Truyen (ORCID iD: orcid.org/0000-0001-6531-8907)
Venkatesh, Svetha (ORCID iD: orcid.org/0000-0001-8675-6631)
Conference name Learning Representations. International Conference (7th : 2019 : New Orleans, Louisiana)
Conference location New Orleans, Louisiana
Conference dates 2019/05/06 - 2019/05/09
Title of proceedings ICLR 2019: Proceedings of the 7th International Conference on Learning Representations
Publication date 2019
Total pages 20
Publisher ICLR
Place of publication [New Orleans, Louisiana]
Summary © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. Memory-augmented neural networks, consisting of a neural controller and an external memory, have shown potential in long-term sequential learning. Current RAM-like memory models access memory at every timestep and therefore do not effectively leverage the short-term memory held in the controller. We hypothesize that this writing scheme is suboptimal in memory utilization and introduces redundant computation. To validate our hypothesis, we derive a theoretical bound on the amount of information stored in a RAM-like system and formulate an optimization problem that maximizes the bound. The proposed solution, dubbed Uniform Writing, is proved optimal under the assumption of equal timestep contributions. To relax this assumption, we introduce modifications to the original solution, resulting in a method termed Cached Uniform Writing, which balances maximizing memorization against forgetting via overwriting mechanisms. Through an extensive set of experiments, we empirically demonstrate the advantages of our solutions over other recurrent architectures, achieving state-of-the-art results on a variety of sequential modeling tasks. (An illustrative sketch of the uniform writing schedule appears below the record fields.)
Language eng
Indigenous content off
HERDC Research category E1 Full written paper - refereed
Copyright notice ©2019, 7th International Conference on Learning Representations, ICLR 2019
Persistent URL http://hdl.handle.net/10536/DRO/DU:30129580
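
The writing scheme summarized above can be pictured with a minimal sketch. This is not the authors' implementation: the class name UniformWritingMemory, the slot list, and the write rule are illustrative assumptions, meant only to show memory writes occurring at evenly spaced timesteps while short-term information stays in the controller between writes.

import math

class UniformWritingMemory:
    """Illustrative memory that is written to only at uniform intervals."""

    def __init__(self, seq_len, num_slots):
        self.slots = [None] * num_slots
        # One write every `interval` steps, so the whole sequence fits into
        # `num_slots` writes (the uniform writing schedule).
        self.interval = math.ceil(seq_len / num_slots)
        self.write_ptr = 0

    def maybe_write(self, t, controller_state):
        # Most timesteps are skipped; between writes, short-term information
        # is carried only by the controller's own state.
        if (t + 1) % self.interval == 0 and self.write_ptr < len(self.slots):
            self.slots[self.write_ptr] = controller_state
            self.write_ptr += 1

For example, with a sequence of length 12 and 4 memory slots, the controller state is stored only at timesteps 3, 6, 9 and 12 rather than at all 12 timesteps; the Cached Uniform Writing variant described in the summary further relaxes the equal-contribution assumption via overwriting mechanisms.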

