The task of natural language generation for spoken dialogue systems is to convert dialogue acts (consisting of speech acts, such as "inform" or "request", and a list of domain-specific attributes and their values) into fluent and relevant natural language sentences.
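To make the input representation concrete, the following sketch (illustrative only, not code from this work; the restaurant-domain attribute names and values are invented) shows a dialogue act as a speech act plus attribute-value pairs, linearized into the flat token sequence that seq2seq generators typically consume:

```python
from dataclasses import dataclass, field

@dataclass
class DialogueAct:
    """A speech act plus domain-specific attribute-value slots."""
    speech_act: str                             # e.g. "inform", "request"
    slots: dict = field(default_factory=dict)   # attribute -> value

    def flatten(self):
        """Linearize the act into a flat token sequence,
        the usual input format for a seq2seq generator."""
        tokens = [self.speech_act]
        for attr, val in self.slots.items():
            tokens += [attr, str(val)]
        return tokens

da = DialogueAct("inform", {"name": "Golden Dragon",
                            "food": "Chinese",
                            "price_range": "cheap"})
print(da.flatten())
# -> ['inform', 'name', 'Golden Dragon', 'food', 'Chinese',
#     'price_range', 'cheap']
# A generator might realize this act as:
# "Golden Dragon is a cheap Chinese restaurant."
```
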
We present three of our recent experiments with applying sequence-to-sequence (seq2seq) neural network models to this problem:
First, we compare direct sentence generation with two-step generation via deep syntax trees and show that it is possible to train seq2seq generators from very little data.
Second, we enhance the seq2seq model so that it takes previous dialogue context into account and produces contextually appropriate responses.
Finally, we evaluate several simple extensions to the model designed for generating morphologically rich languages, such as Czech.