Extending from a limited domain to a new domain is crucial for Natural Language Generation in dialogue, especially when there is sufficient annotated data in the source domain but little labeled data in the target domain. This paper studies the performance and domain adaptation of two different Neural Network Language Generators for Spoken Dialogue Systems: a gating-based Recurrent Neural Network Generator and an extension of an Attentional Encoder-Decoder Generator. We found, in a model fine-tuning scenario, that by separating slot and value parameterizations the attention-based generators, in comparison to the gating-based generators, not only prevent semantic repetition in the generated outputs and achieve better performance across all domains, but also adapt faster to a new, unseen domain by leveraging existing data. The empirical results show that the attention-based generator can adapt to an open domain when only a limited amount of target-domain data is available.
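The "separating slot and value parameterizations" mentioned in the abstract is commonly realized in neural NLG via delexicalization: concrete slot values in training utterances are replaced by slot placeholders, so the generator parameterizes slot types independently of their surface values, which are copied back in afterwards. The following Python sketch illustrates this general technique only; it is not taken from the paper, and the function names and dialogue-act format are hypothetical.

```python
# Minimal sketch of slot delexicalization (illustrative, not from the paper).
# Slot values are swapped for SLOT_<NAME> placeholders before training, and
# filled back in ("relexicalized") after generation.

import re

def delexicalize(utterance: str, slot_values: dict) -> str:
    """Replace concrete slot values with SLOT_<NAME> placeholders."""
    out = utterance
    for slot, value in slot_values.items():
        out = re.sub(re.escape(value), f"SLOT_{slot.upper()}", out,
                     flags=re.IGNORECASE)
    return out

def relexicalize(template: str, slot_values: dict) -> str:
    """Fill the placeholders back with the requested slot values."""
    out = template
    for slot, value in slot_values.items():
        out = out.replace(f"SLOT_{slot.upper()}", value)
    return out

if __name__ == "__main__":
    da = {"name": "Golden Wok", "food": "Chinese"}   # hypothetical dialogue act
    raw = "Golden Wok serves Chinese food."
    template = delexicalize(raw, da)
    print(template)                  # SLOT_NAME serves SLOT_FOOD food.
    print(relexicalize(template, da))  # Golden Wok serves Chinese food.
```

Because the generator only ever sees the placeholder tokens, the same learned templates can be reused in a new domain whose slots overlap, which is one reason such parameter separation can speed up adaptation with limited target-domain data.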
Pagination
19-24
Location
Hanoi, Vietnam
Start date
2017-11-24
End date
2017-11-25
ISBN-13
978-1-5386-3210-9
Language
eng
Publication classification
E1.1 Full written paper - refereed
Copyright notice
2017, IEEE
Editor/Contributor(s)
Vinh LS, Hoang TA, Hai DT
Title of proceedings
NICS 2017 : Proceedings of the 2017 4th NAFOSTED Conference on Information and Computer Science
Event
National Foundation for Science and Technology Development. Conference (4th : 2017 : Hanoi, Vietnam)
Publisher
Institute of Electrical and Electronics Engineers
Place of publication
Piscataway, N.J.
Series
National Foundation for Science and Technology Development Conference