Recently, deep learning methodologies have become popular for analysing multimodal physiological signals via hierarchical architectures for human emotion recognition. Most state-of-the-art approaches to human emotion recognition use deep learning for emotion classification; however, deep learning is most effective for deep feature extraction. Therefore, in this research we applied an unsupervised deep belief network (DBN) for deep-level feature extraction from fused observations of Electro-Dermal Activity (EDA), Photoplethysmogram (PPG) and Zygomaticus Electromyography (zEMG) sensor signals. Afterwards, the DBN-produced features are combined with statistical features of EDA, PPG and zEMG to prepare a feature-fusion vector.
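The following is a minimal sketch of this feature-extraction and fusion step, assuming a stacked-RBM approximation of the DBN (scikit-learn provides no full DBN implementation) and an illustrative choice of statistics; the layer sizes, training parameters and placeholder data are assumptions, not the paper's settings.

```python
# Sketch: DBN-style deep features via greedily trained stacked RBMs,
# fused with simple per-window statistics. Hyperparameters are assumed.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def extract_dbn_features(fused_signals, layer_sizes=(128, 64)):
    """Greedy layer-wise unsupervised training of stacked RBMs."""
    dbn = Pipeline(
        [("scale", MinMaxScaler())]  # RBMs expect inputs in [0, 1]
        + [(f"rbm{i}", BernoulliRBM(n_components=n, learning_rate=0.05,
                                    n_iter=20, random_state=0))
           for i, n in enumerate(layer_sizes)]
    )
    return dbn.fit_transform(fused_signals)

def statistical_features(windows):
    """Per-window statistics (illustrative; the paper's exact set may differ)."""
    return np.hstack([windows.mean(axis=1, keepdims=True),
                      windows.std(axis=1, keepdims=True),
                      windows.min(axis=1, keepdims=True),
                      windows.max(axis=1, keepdims=True)])

# fused_signals: (n_samples, n_features) matrix of aligned EDA/PPG/zEMG windows
fused_signals = np.random.rand(200, 300)              # placeholder data
deep_feats = extract_dbn_features(fused_signals)
stat_feats = statistical_features(fused_signals)
fusion_vector = np.hstack([deep_feats, stat_feats])   # feature-fusion vector
```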
The prepared feature-fusion vector is then used to classify five basic emotions, namely Happy, Relaxed, Disgust, Sad and Neutral. As the emotion classes are not linearly separable in the feature-fusion space, a Fine Gaussian Support Vector Machine (FGSVM) with a radial basis function kernel is used for non-linear classification of human emotions.
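A sketch of the classification step follows, assuming scikit-learn's SVC as a stand-in for the MATLAB-style "Fine Gaussian SVM" preset, whose kernel scale is commonly sqrt(P)/4 for P features (equivalently gamma = 16/P); the placeholder data, C value and cross-validation setup are assumptions.

```python
# Sketch: non-linear multiclass classification with an RBF-kernel SVM
# configured like a "fine Gaussian" preset (small kernel scale).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 160)            # placeholder feature-fusion vectors
y = np.random.randint(0, 5, size=200)   # five emotion classes (0..4)

n_features = X.shape[1]
gamma = 16.0 / n_features               # kernel scale sqrt(P)/4 -> gamma = 16/P

fgsvm = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", gamma=gamma, C=1.0),  # RBF kernel; one-vs-one multiclass
)
scores = cross_val_score(fgsvm, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```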
Our experiments on a public multimodal physiological signal dataset show that the DBN- and FGSVM-based model significantly improves emotion recognition accuracy compared to existing state-of-the-art emotion classification techniques.