Machine translation (MT) is perhaps the flagship discipline of computational linguistics. Despite the
abundance of available data and the great speed-ups it brings to professional translators, reliable
fully automatic translation is still an elusive goal.
In 2016, the field of MT saw an unprecedented paradigm shift, moving from "classic"
statistical methods, which decompose the input sentence into a set of translation units, to neural
machine translation (NMT, Zhang and Zong (2015)), which represents words and whole sentences in high-
dimensional continuous spaces. That year, NMT was successfully trained on large corpora for the first
time. A precise understanding of what NMT models actually learn from the data is, however, still
missing.
At the same time, very promising results have been achieved by multilingual neural translation, i.e.
translation from any of a number of source languages into a single target language, or even into several
target languages. The benefits of having a single system for several language pairs are appealing for
practical reasons (maintenance and computing costs) as well as linguistic ones (thanks to knowledge
shared among the languages, less training data should suffice). One common way of building such a
single system is sketched below.
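As an illustration only (this technique is not claimed by the proposal itself), a widely used way to obtain a single multilingual system is to pool the training data of all language pairs and prefix each source sentence with a token naming the desired target language, in the style of Johnson et al. (2017). The following Python sketch shows this data preparation; the corpus, the tag_example helper, and the <2xx> token format are hypothetical.

```python
# A minimal sketch of multilingual NMT data preparation: every source
# sentence is prefixed with a token naming the desired target language,
# so one encoder-decoder can serve several language pairs at once.
# The corpus and token format below are illustrative assumptions.

def tag_example(src_sentence: str, tgt_lang: str) -> str:
    """Prepend a target-language token, e.g. '<2cs>' for Czech."""
    return f"<2{tgt_lang}> {src_sentence}"

# Hypothetical mixed-direction corpus: (source, target, target language code).
corpus = [
    ("Hello world", "Hallo Welt", "de"),
    ("Hello world", "Ahoj světe", "cs"),
    ("Guten Morgen", "Good morning", "en"),
]

# All pairs are pooled into one training set; the model learns to route
# translation by the language token alone, sharing all other parameters
# across the language pairs.
training_pairs = [(tag_example(src, lang), tgt) for src, tgt, lang in corpus]

for src, tgt in training_pairs:
    print(src, "->", tgt)
```

The appeal of this setup is exactly the one named above: a single set of parameters is maintained and trained for all pairs, and low-resource pairs can benefit from the knowledge shared with better-resourced ones.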
The project has two main aims: to figure out which NMT architectures are best suited for multi-
lingual translation, and to learn more about the properties and behaviour of deep neural networks in
multilingual NMT.