Capsule Networks (CapsNets) were recently introduced to overcome some of the shortcomings of traditional Convolutional Neural Networks (CNNs). CapsNets replace the scalar-output neurons of CNNs with vector-output capsules to retain spatial relationships among features. In this paper, we propose a CapsNet architecture that employs individual video frames for human action recognition without explicitly extracting motion information. We also propose weight pooling, which reduces computational complexity and improves classification accuracy by selectively removing some of the extracted features. We show how the capsules of the proposed architecture can encode temporal information by using the spatial features extracted from several video frames. Compared with a traditional CNN of the same complexity, the proposed CapsNet improves action recognition performance by 12.11% and 22.29% on the KTH and UCF-sports datasets, respectively.
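The abstract does not give the paper's layer equations, but the vector-output idea it builds on is the standard capsule "squash" nonlinearity of Sabour et al. (2017), which maps a capsule's pre-activation vector to a vector of length in [0, 1) while preserving its orientation, so that the length can be read as an activation probability. A minimal NumPy sketch of that nonlinearity follows; the function name, shapes, and epsilon are illustrative choices, not taken from the paper:

    import numpy as np

    def squash(s, axis=-1, eps=1e-8):
        # Squash nonlinearity (Sabour et al., 2017):
        #   v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
        # Short vectors shrink toward zero; long vectors saturate
        # just below unit length. Orientation is preserved.
        sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
        return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

    # Example: a batch of 3 capsules, each an 8-dimensional pose vector.
    poses = np.random.randn(3, 8)
    out = squash(poses)
    print(np.linalg.norm(out, axis=-1))  # lengths in [0, 1): capsule activations

The proposed weight pooling step is specific to this paper and its details are not recoverable from the abstract, so it is not sketched here.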
Pagination
3867-3871
Location
Brighton, England
Start date
2019-05-12
End date
2019-05-17
ISSN
1520-6149
ISBN-13
978-1-4799-8131-1
Language
English
Publication classification
E1 Full written paper - refereed
Copyright notice
2019, IEEE
Editor/Contributor(s)
[Unknown]
Title of proceedings
ICASSP 2019: Proceedings of the 44th IEEE International Conference on Acoustics, Speech and Signal Processing