Openly accessible

Acoustic features extraction for emotion recognition

Rong, J., Chen, Yi-Ping Phoebe, Chowdhury, Morshed and Li, Gang 2007, Acoustic features extraction for emotion recognition, in 6th IEEE/ACIS International Conference on Computer and Information Science : (ICIS 2007) in conjunction with 1st IEEE/ACIS International Workshop on e-Activity (IWEA 2007) : proceedings : 11-13 July, 2007, Melbourne, Australia, IEEE Xplore, Piscataway, N.J., pp. 419-424.

Attached Files
Name Description MIMEType Size Downloads
li-acousticfeatures-2007.pdf Published version application/pdf 311.22KB 433

Title Acoustic features extraction for emotion recognition
Author(s) Rong, J.
Chen, Yi-Ping Phoebe
Chowdhury, Morshed
Li, Gang
Conference name International Conference on Computer and Information Science (6th : 2007 : Melbourne, Australia)
Conference location Melbourne, Australia
Conference dates 11-13 July 2007
Title of proceedings 6th IEEE/ACIS International Conference on Computer and Information Science : (ICIS 2007) in conjunction with 1st IEEE/ACIS International Workshop on e-Activity (IWEA 2007) : proceedings : 11-13 July, 2007, Melbourne, Australia
Editor(s) Lee, Roger
Chowdhury, Morshed
Ray, Sid
Lee, Thuy
Publication date 2007
Conference series International Conference on Computer and Information Science
Start page 419
End page 424
Publisher IEEE Xplore
Place of publication Piscataway, N.J.
Keyword(s) feature extraction
machine learning
ensemble learning
twice learning
random forest
decision tree
Summary In the last decade, spoken language processing has advanced significantly; emotion recognition, however, has lagged behind, typically achieving only 50% to 60% accuracy. This is partly because most researchers in the field have focused on synthesising emotional speech rather than on automating the recognition of human emotion. Many research groups have concentrated on improving the performance of the classifier used for emotion recognition, while little work has been done on data pre-processing, such as extracting and selecting a specific set of acoustic features rather than using every feature at hand. Working with a well-selected set of acoustic features does not delay the task; on the contrary, it saves considerable time and resources by removing irrelevant information and reducing high-dimensional computation. In this paper, we develop an automatic feature selector based on the RF2TREE algorithm and the traditional C4.5 algorithm. RF2TREE helps address the shortage of training examples: an ensemble-learning step enlarges the original data set by building a bagged random forest that generates many virtual examples, and the enlarged data set is then used to train a single decision tree, which selects the most effective features for representing the speech signals in emotion recognition. The output of the selector is the specific set of acoustic features produced by RF2TREE and the single decision tree.
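The twice-learning procedure described in the summary can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the data is synthetic, the feature count and tree depth are arbitrary, and scikit-learn's CART tree (with the entropy criterion) stands in for C4.5.

```python
# Hedged sketch of the RF2TREE-style twice-learning idea: a bagged random
# forest enlarges a small data set with "virtual" labelled examples, then a
# single decision tree trained on the enlarged set exposes which features
# it actually splits on -- those form the selected feature subset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Small synthetic stand-in for acoustic feature vectors (e.g. pitch,
# energy, formant statistics); sizes and indices are illustrative only.
n_samples, n_features = 40, 10
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # only features 0 and 3 are informative

# Step 1: bag a random forest on the scarce original data.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Step 2: generate virtual examples by sampling new feature vectors and
# labelling them with the forest's predictions, enlarging the data set.
X_virtual = rng.normal(size=(500, n_features))
y_virtual = forest.predict(X_virtual)
X_big = np.vstack([X, X_virtual])
y_big = np.concatenate([y, y_virtual])

# Step 3: distil the ensemble into one interpretable tree (CART with the
# entropy criterion, as an approximation of C4.5).
tree = DecisionTreeClassifier(
    criterion="entropy", max_depth=3, random_state=0
).fit(X_big, y_big)

# Internal nodes store the split feature; leaves are marked with -2,
# so the non-negative entries are the features the tree relies on.
selected = sorted(set(f for f in tree.tree_.feature if f >= 0))
print("selected feature indices:", selected)
```

In this sketch the selected indices are simply the features appearing in the distilled tree's split nodes; the paper's selector outputs the analogous set of acoustic features.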
Notes This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
ISBN 0769528414
9780769528410
Language eng
Field of Research 080107 Natural Language Processing
HERDC Research category E1 Full written paper - refereed
ERA Research output type E Conference publication
Copyright notice ©This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Persistent URL http://hdl.handle.net/10536/DRO/DU:30008042

Document type: Conference Paper
Collections: School of Engineering and Information Technology
Open Access Collection
Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.

Access Statistics: 546 Abstract Views, 433 File Downloads
Created: Mon, 29 Sep 2008, 09:03:51 EST