File(s) under permanent embargo
Faster training of very deep networks via p-norm gates
conference contribution
posted on 2016-12-08, 00:00, authored by Truyen Tran, Trang Thi Minh Pham, Quoc-Dinh Phung, Svetha Venkatesh

A major contributing factor to the recent advances in deep neural networks is structural units that let sensory information and gradients propagate easily. Gating is one such structure that acts as a flow control. Gates are employed in many recent state-of-the-art recurrent models such as LSTM and GRU, and in feedforward models such as Residual Nets and Highway Networks. This enables learning in very deep networks with hundreds of layers and helps achieve record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP (e.g., machine translation with GRU). However, there is limited work analysing the role of gating in the learning process. In this paper, we propose a flexible p-norm gating scheme that allows user-controllable flow and, as a consequence, improves the learning speed. The scheme subsumes existing gating schemes, including those in GRU, Highway Networks and Residual Nets, as special cases. Experiments on large sequence and vector datasets demonstrate that the proposed gating scheme improves the learning speed significantly without extra overhead.
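The abstract does not spell out the gating formula, but one common reading of "p-norm gating" in this setting is that the carry and transform gates are constrained so that their p-th powers sum to one, rather than the gates themselves; at p = 1 this reduces to the usual convex combination used in GRU and Highway Networks, while larger p lets more total signal through. The sketch below illustrates that assumption only; the layer shape, weight names (W_h, W_t, b_h, b_t) and the highway-style formulation are hypothetical and not taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_norm_highway_layer(x, W_h, b_h, W_t, b_t, p=2.0):
    """One highway-style layer with a p-norm gate (illustrative sketch).

    Standard highway/GRU gating combines the transformed and carried
    signals with weights (t, 1 - t). Here the pair is instead assumed to
    satisfy t**p + carry**p = 1, so p = 1 recovers the usual convex
    combination and larger p allows more flow through the layer.
    """
    h = np.tanh(x @ W_h + b_h)          # candidate (transform) activation
    t = sigmoid(x @ W_t + b_t)          # transform gate in (0, 1)
    carry = (1.0 - t**p) ** (1.0 / p)   # carry gate on the p-norm sphere
    return t * h + carry * x            # gated output

# Toy usage: one layer on random data (dimensions are arbitrary).
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))
W_h = rng.normal(scale=0.1, size=(d, d))
W_t = rng.normal(scale=0.1, size=(d, d))
b_h, b_t = np.zeros(d), np.zeros(d)
y = p_norm_highway_layer(x, W_h, b_h, W_t, b_t, p=2.0)
print(y.shape)  # (4, 8)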
History
Event: Pattern Recognition. International Conference (23rd : 2016 : Cancun, Mexico)
Pagination: 3542-3547
Publisher: IEEE
Location: Cancun, Mexico
Place of publication: Piscataway, N.J.
Publisher DOI:
Start date: 2016-12-04
End date: 2016-12-08
ISBN-13: 9781509048472
Language: eng
Publication classification: E Conference publication; E1 Full written paper - refereed
Copyright notice: 2016, IEEE
Title of proceedings: ICPR 2016: Proceedings of the 23rd International Conference on Pattern Recognition