Deakin University

Neural-based natural language generation in dialogue using RNN encoder-decoder with semantic aggregation

Conference contribution
Posted on 2017-01-01, authored by Van Khanh Tran, Le-Minh Nguyen, Satoshi Tojo
Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder, an extension of a Recurrent Neural Network-based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention mechanism computed over the encoded input information, while the Refiner is a further attention or gating mechanism stacked on top of the attentive Aligner to select and aggregate the semantic elements. The proposed model can be trained jointly on both text planning and text realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, and the results show that the proposed generator consistently outperforms previous methods on all of them.
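The abstract describes the Semantic Aggregator as an Aligner (conventional attention over the encoder outputs) followed by a Refiner (a second attention or gate stacked on the Aligner's output). The following is a minimal NumPy sketch of that two-stage idea, using the gating variant of the Refiner; all dimensions, weight matrices (`W_a`, `W_g`), and the bilinear scoring form are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy dimensions: 4 encoded semantic elements (e.g. slot-value vectors)
# of size 8, and a decoder hidden state of size 8.
num_slots, d = 4, 8
encoded = rng.normal(size=(num_slots, d))   # encoder outputs
dec_state = rng.normal(size=d)              # current decoder state

# Aligner: conventional attention over the encoded input information.
W_a = rng.normal(size=(d, d))               # illustrative score weights
scores = encoded @ (W_a @ dec_state)        # one score per semantic element
alpha = softmax(scores)                     # attention weights, sum to 1
aligned = alpha @ encoded                   # attentive summary vector

# Refiner (gating variant): an element-wise gate stacked on the Aligner
# output to further select which semantic features reach the decoder.
W_g = rng.normal(size=(d, d))               # illustrative gate weights
gate = sigmoid(W_g @ dec_state)             # values in (0, 1)
refined = gate * aligned                    # aggregated semantic vector

print(refined.shape)
```

In the attention variant of the Refiner, the gate would instead be another attention distribution over the Aligner's output; the gating form shown here is the simpler of the two mechanisms the abstract mentions.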

History

Pagination

231-240

Location

Saarbrücken, Germany

Open access

  • Yes

Start date

2017-08-15

End date

2017-08-17

ISBN-13

978-1-945626-82-1

Language

eng

Publication classification

E1.1 Full written paper - refereed

Copyright notice

2017, Association for Computational Linguistics

Editor/Contributor(s)

[Unknown]

Title of proceedings

SIGDIAL 2017 : Proceedings of the 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Event

Association for Computational Linguistics. Conference (18th : 2017 : Saarbrücken, Germany)

Publisher

Association for Computational Linguistics

Place of publication

Stroudsburg, Pa.

Series

Association for Computational Linguistics Conference
