Distributed audio network for speech enhancement in challenging noise backgrounds
conference contribution
Posted on 2009-01-01 by T. Kuhnapfel, T. Tan, Svetha Venkatesh, B. Igel

Abstract: This paper presents a new approach to speech enhancement based on a distributed microphone network. Each microphone simultaneously classifies its input as either one of the noise types or as speech. To enhance the speech signal, a modified spectral subtraction approach is used that exploits the sound information of the entire network to update the noise model even during speech. This improves the reduction of ambient noise, especially for non-stationary noise types such as street or beach noise. Experiments demonstrate the effectiveness of the proposed system.
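The abstract describes a modified spectral subtraction scheme whose noise model can keep updating during speech because non-speech channels elsewhere in the network still observe the ambient noise. As a rough illustration only, the sketch below shows conventional magnitude spectral subtraction with a recursive noise update; the function names, the over-subtraction factor `alpha`, the spectral floor `beta`, and the smoothing factor `lam` are all assumptions for illustration, not the paper's actual algorithm or parameters.

```python
import numpy as np

def spectral_subtraction(noisy_spec, noise_mag, alpha=2.0, beta=0.01):
    """Basic magnitude spectral subtraction on one STFT frame.

    noisy_spec : complex spectrum of the noisy frame
    noise_mag  : current noise magnitude estimate (same shape)
    alpha      : over-subtraction factor (assumed value)
    beta       : spectral floor to limit musical noise (assumed value)
    """
    mag = np.abs(noisy_spec)
    phase = np.angle(noisy_spec)
    clean_mag = mag - alpha * noise_mag
    # Clamp to a fraction of the noisy magnitude instead of zero,
    # which reduces the "musical noise" artifacts of hard clipping.
    clean_mag = np.maximum(clean_mag, beta * mag)
    return clean_mag * np.exp(1j * phase)

def update_noise(noise_mag, frame_mag, is_speech, lam=0.9):
    """Recursive noise-magnitude update for one channel.

    In a single-microphone setup the estimate must be frozen while
    speech is present; the paper's contribution is that other
    (non-speech) microphones in the network can keep the shared
    noise model current even then.
    """
    if is_speech:
        return noise_mag  # single-mic fallback: freeze during speech
    return lam * noise_mag + (1.0 - lam) * frame_mag
```

In a distributed setting one would call `update_noise` with frames from whichever microphones currently classify their input as noise, so the shared estimate tracks non-stationary backgrounds (street, beach) without waiting for speech pauses.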
History
Event: IEEE International Conference on Advanced Video and Signal Based Surveillance (6th : 2009 : Genoa, Italy)
Pagination: 308 - 313
Publisher: IEEE
Location: Genoa, Italy
Place of publication: Washington, D. C.
Start date: 2009-09-02
End date: 2009-09-04
ISBN-13: 9781424447558
ISBN-10: 1424447550
Language: eng
Notes: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Publication classification: E1.1 Full written paper - refereed
Copyright notice: 2009, IEEE
Title of proceedings: AVSS 2009 : Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance