Deakin University

Natural language generation for spoken dialogue system using RNN encoder-decoder networks

Conference contribution
Posted on 2017-01-01, authored by Van Khanh Tran and Le-Minh Nguyen
Natural language generation (NLG) is a critical component of a spoken dialogue system. This paper presents a Recurrent Neural Network (RNN) based Encoder-Decoder architecture in which an LSTM-based decoder is introduced to select and aggregate the semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained on both sentence planning and surface realization to produce natural language sentences. The model was extensively evaluated on four different NLG datasets. The experimental results show that the proposed generators not only consistently outperform previous methods across all the NLG domains but also demonstrate an ability to generalize to a new, unseen domain and to learn from multi-domain datasets.
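The abstract describes the architecture only at a high level; the following is a minimal, hypothetical sketch of such an RNN encoder-decoder, in which an attention mechanism aggregates the encoded input elements at every step of an LSTM decoder. It is written in PyTorch with illustrative dimensions and is not the authors' implementation; the class name, layer sizes, and shared embedding table are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class AttnEncoderDecoder(nn.Module):
        """Hypothetical sketch: encode the input semantic elements, attend over
        them at each decoding step, and generate the utterance with an LSTM."""

        def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
            super().__init__()
            # Shared embedding table for input tokens and output words (assumption).
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Encoder over the input semantic elements (e.g. dialogue-act slot/value tokens).
            self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            # Decoder consumes the previous word embedding plus an attention context.
            self.decoder = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
            self.attn_score = nn.Linear(hidden_dim * 2, 1)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, src_ids, tgt_ids):
            enc_out, state = self.encoder(self.embed(src_ids))        # enc_out: (B, S, H)
            dec_emb = self.embed(tgt_ids)                              # (B, T, E)
            logits = []
            for t in range(dec_emb.size(1)):
                # Attention: score each encoder position against the current decoder state.
                query = state[0][-1].unsqueeze(1).expand(-1, enc_out.size(1), -1)
                scores = self.attn_score(torch.cat([enc_out, query], dim=-1))
                weights = torch.softmax(scores, dim=1)                 # (B, S, 1)
                context = (weights * enc_out).sum(dim=1, keepdim=True) # aggregated elements
                # One decoding step: previous word embedding plus attention context.
                step_in = torch.cat([dec_emb[:, t:t + 1], context], dim=-1)
                dec_out, state = self.decoder(step_in, state)
                logits.append(self.out(dec_out))
            return torch.cat(logits, dim=1)                            # (B, T, vocab_size)

    # Usage sketch (teacher forcing during training):
    #   model = AttnEncoderDecoder(vocab_size=1000)
    #   logits = model(src_ids, tgt_ids)

Recomputing the attention context at every decoder step is what lets a single model handle both sentence planning (which semantic elements to mention when) and surface realization (the wording itself) in one jointly trained network, as the abstract describes.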

History

Pagination

442-451

Location

Vancouver, B.C.

Open access

  • Yes

Start date

2017-08-03

End date

2017-08-04

ISBN-13

978-1-945626-54-8

Language

eng

Publication classification

E1.1 Full written paper - refereed

Copyright notice

2017, Association for Computational Linguistics

Editor/Contributor(s)

[Unknown]

Title of proceedings

CoNLL 2017 : Proceedings of the 21st Conference on Computational Natural Language Learning

Event

Association for Computational Linguistics. Conference (21st : 2017 : Vancouver, B.C.)

Publisher

Association for Computational Linguistics

Place of publication

Stroudsburg, Pa.

Series

Association for Computational Linguistics. Conference
