MT Marathon 2016 Keynote Talks
Monday through Friday, MT Marathon includes keynote talks.
For slides and, later, also the video recordings, see the programme.
Confirmed Speakers
Holger Schwenk (Facebook AI Research)
Neural Networks in MT: Past, Present and Future
Research in machine translation started more than 60 years ago with a rule-based approach for translating from Russian into English, in the context of the Cold War. More than 40 years later, statistical machine translation emerged and was the dominant approach for over a decade. Neural networks and their fundamental learning algorithms have been known for more than 30 years, but only recently have "deep architectures" shown impressive results in many areas, in particular computer vision and speech recognition.
In this talk, I will present how neural networks have been used in machine translation over time: from rescoring n-best lists with neural networks to fully neural machine translation systems that outperform complex phrase-based systems.
Adrià de Gispert (SDL Research and Cambridge University)
Directed MT Research for Commercial Settings
Successfully deploying MT systems in commercial settings poses challenging problems not usually encountered in academic research. Customer and use-case requirements need to be considered along with general translation quality. When training and optimizing MT systems, factors such as decoding speed, memory and disk footprint, usability, robustness, the ability to train on relevant data, and training time are key to success. In this talk I will present recent work done at SDL Research to bring MT to users, and discuss other aspects of doing research in industry.
Orhan Firat (Middle East Technical University), Kyunghyun Cho (New York University)
Future Directions in Neural Machine Translation
Deep (recurrent) neural networks have been shown to successfully learn complex mappings between arbitrary-length input and output sequences, within the effective framework of encoder-decoder networks. We will explore recent advances in, and future directions for, the application of these sequence-to-sequence models to neural machine translation. How far can we extend the existing approaches? Can we remove the assumptions about input and output structures? Can we handle multiple input and output sequences within the same model? Are we bound to sentence-level one-to-one mappings only, or are there ways to do many-to-one or many-to-many mappings? What if we don't have any parallel text between pairs of sequences? How can we make use of larger-context (document-level) information? And finally, where do we stand in terms of our ultimate goal, artificial general intelligence?
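The encoder-decoder framework mentioned above can be sketched in a few lines: an encoder RNN compresses a variable-length input sequence into a fixed-size vector, and a decoder RNN, initialized from that vector, emits output tokens one at a time. The sketch below is a minimal, forward-pass-only illustration with randomly initialized weights (all sizes, token IDs, and names are illustrative assumptions, not from the talk); real systems would be trained end-to-end and typically add attention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: vocabulary, embedding width, hidden-state width.
VOCAB, EMB, HID = 12, 8, 16
BOS, EOS = 0, 1  # assumed special tokens

E = rng.normal(scale=0.1, size=(VOCAB, EMB))           # token embeddings
W_enc = rng.normal(scale=0.1, size=(EMB + HID, HID))   # encoder RNN weights
W_dec = rng.normal(scale=0.1, size=(EMB + HID, HID))   # decoder RNN weights
W_out = rng.normal(scale=0.1, size=(HID, VOCAB))       # output projection

def rnn_step(W, x, h):
    # One vanilla (tanh) RNN step on the concatenated input and state.
    return np.tanh(np.concatenate([x, h]) @ W)

def encode(src):
    # Fold a variable-length token sequence into one fixed-size vector.
    h = np.zeros(HID)
    for tok in src:
        h = rnn_step(W_enc, E[tok], h)
    return h

def decode(h, max_len=10):
    # Greedily emit tokens, conditioned on the encoder's summary vector.
    out, tok = [], BOS
    for _ in range(max_len):
        h = rnn_step(W_dec, E[tok], h)
        tok = int(np.argmax(h @ W_out))
        if tok == EOS:
            break
        out.append(tok)
    return out

translation = decode(encode([3, 5, 7, 2]))
```

With untrained weights the output tokens are arbitrary; the point is the data flow, i.e. that any input length maps through a single fixed-size state to any output length.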
Susanne Weber (BBC News Labs)
Presentation on the SUMMA project and the BBC's ALTO video translation tool
International broadcasters who reach out to their audiences in multiple languages are increasingly turning towards language technology to assist with their work. Machine translation is one of the technologies being trialled in innovative workflows and content creation tools. I will present two projects to illustrate the use of machine translation in the broadcast environment. First, SUMMA (an EU H2020 collaboration), which aims to build a monitoring platform across 9 languages with the help of machine translation. Second, ALTO (developed by the BBC), an innovative video translation tool that uses computer-assisted translation and speech synthesis to create spoken content.
Philipp Koehn (Johns Hopkins University)
Recent Trends in Computer-Assisted Translation
The talk presents an overview of work in computer-aided translation, whose goal is to help human translators be more productive. Topics include: various types of assistance, including the use of neural methods for interactive translation prediction; user studies and cognitive models; and the open-source CASMACAT workbench.