Extending a generator from a limited domain to a new domain is crucial for Natural Language Generation (NLG) in dialogue systems, especially when annotated data are plentiful in the source domain but scarce in the target domain. This paper studies the performance and domain adaptation of two different neural network language generators for Spoken Dialogue Systems: a gating-based Recurrent Neural Network generator and an extension of an Attentional Encoder-Decoder generator. In a model fine-tuning scenario, we find that by separating slot and value parameterizations, the attention-based generators not only prevent semantic repetition in the generated outputs and achieve better performance across all domains than the gating-based generators, but also adapt faster to a new, unseen domain by leveraging existing data. The empirical results show that the attention-based generator can adapt to an open domain when only a limited amount of target-domain data is available.
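As a rough illustration of the slot/value separation mentioned above, the sketch below delexicalizes a dialogue act and its surface realization by replacing concrete values with slot placeholders, so the generator conditions on domain-independent slot tokens rather than on domain-specific values; this is the intuition behind why such a generator can be fine-tuned on a new domain with little data. The dialogue-act format, slot names, and helper functions are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed data format): separating slots from values via
# delexicalization, so the generator learns where slot tokens go rather
# than memorizing domain-specific surface values.

def delexicalize(utterance: str, slots: dict) -> str:
    """Replace concrete slot values with slot placeholders such as SLOT_NAME."""
    for slot, value in slots.items():
        utterance = utterance.replace(value, f"SLOT_{slot.upper()}")
    return utterance


def relexicalize(template: str, slots: dict) -> str:
    """Fill slot placeholders back in with values from a dialogue act."""
    for slot, value in slots.items():
        template = template.replace(f"SLOT_{slot.upper()}", value)
    return template


if __name__ == "__main__":
    # Hypothetical restaurant-domain example.
    act = {"name": "Seven Days", "food": "Chinese"}
    ref = "Seven Days serves Chinese food."

    template = delexicalize(ref, act)
    print(template)  # -> "SLOT_NAME serves SLOT_FOOD food."

    # The same delexicalized template can be reused for unseen values,
    # which is what makes adaptation with limited target-domain data feasible.
    print(relexicalize(template, {"name": "Golden Wok", "food": "Thai"}))
```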