MT Marathon 2022 Talks

From Monday through Saturday, MT Marathon includes keynote talks.

For slides, and later also the video recordings, see the programme.

Confirmed Speakers

Nick Bogoychev (University of Edinburgh)

Efficient Machine Translation

Neural networks are notorious for their high computational intensity and energy usage. However, recent advances in the field have made it possible to reduce their computational load to the point where machine translation systems can be run on a mobile phone.

In this talk, we will take a kaleidoscopic view of neural network optimisation, focusing on neural machine translation as a case study. We will cover model improvements, improvements specific to neural machine translation, and software improvements on both the GPU and the CPU. Combining all of these improvements, we manage to decrease inference time by a factor of ~600 with only a tiny drop in BLEU.
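
To make one of the CPU-side techniques in this space concrete, here is a minimal sketch of post-training dynamic quantisation in PyTorch. The abstract does not name the specific optimisations covered, so this is only an illustrative example of the kind of change involved, not the speaker's actual pipeline; the toy model below stands in for a trained translation model.

```python
import torch

# Stand-in model: a toy feed-forward stack, not a real translation model.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 512),
)

# Replace the float32 Linear weights with int8 weights; activations are
# quantised dynamically at inference time on the CPU.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
y = quantized(x)  # runs with int8 weight matrices on the CPU
```

Weight quantisation of this kind trades a small amount of numerical precision for substantially faster and smaller matrix multiplications, which is the general shape of the speed/quality trade-off the abstract describes.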

Ryan Cotterell (ETH Zürich)

TBA (decoding)

TBA

Markus Freitag (Google)

A journey of MT research - Why it is crucial to work on evaluation

I will walk you through some of our research projects from the last three years. I will start with a project that tried to improve the naturalness of our machine translation output, and with how both human and automatic evaluation disagreed with our own impression of the quality of the more natural-looking output. Based on that project, we started to revolutionize both human and automatic evaluation. We spent some time defining a new human evaluation methodology grounded in error annotation, and ultimately replaced beam search decoding with Minimum Bayes Risk decoding based on neural automatic metrics. I will showcase how important the updated evaluation methodologies and the insights from these studies were for yielding significant improvements in translation quality.
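
For readers unfamiliar with Minimum Bayes Risk (MBR) decoding, here is a minimal sketch of the idea: instead of returning the single highest-probability hypothesis as beam search does, sample a pool of candidates and pick the one with the highest expected utility against the others under a (typically neural) metric. The `sample_translations` and `neural_metric` names in the usage comment are placeholders, not references to an actual API.

```python
from typing import Callable, List

def mbr_decode(candidates: List[str],
               utility: Callable[[str, str], float]) -> str:
    """Return the candidate with the highest average utility when scored
    against every other candidate, i.e. a Monte Carlo estimate of expected
    utility under the model's output distribution."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        # Expected utility of `hyp`, using the other samples as pseudo-references.
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = hyp, score
    return best

# Usage sketch (placeholders for model sampling and a learned metric
# such as COMET or BLEURT):
# candidates = sample_translations(src, n=64)
# best = mbr_decode(candidates, neural_metric)
```

The key design choice is that the utility function, not the model's own probabilities, decides the winner, which is why pairing MBR with a neural metric ties decoding directly to the improved evaluation methodology.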

Věra Kloudová (Charles University)

What could (or maybe should) MT researchers know about translation (theory)?

Translation Studies (TS) and MT research have many common goals, the most important among them being the study of translation between two or more languages, be it human or machine translation. However, exchange between the two disciplines has been rather rare so far. In recent years, TS has been making use of technologies, advanced quantitative research methodologies, and MT outputs, e.g., for post-editing research. Similarly, MT could benefit from what has been explored and what has played an important role in TS in the past decades. The talk will focus on several key phenomena and basic approaches and theories that TS works with: 1. the concept of functional translation theory, e.g., how different types of texts differ and how their translation for different purposes varies; 2. typical features of translational language as a linguistic variant (what is "translationese"?); 3. potential sources of translation difficulties: what (not only) MT researchers should be aware of; and 4. translation quality, evaluation, and criticism in TS.

Tom Kocmi (Microsoft)

TBA (evaluation)

TBA

Ricardo Rei (Unbabel)

TBA (quality estimation)

TBA

Holger Schwenk (Facebook AI Research)

Massively Multilingual Text and Speech Mining

Felix Stahlberg (Google)

Tackling Intrinsic Uncertainty with SCONES

In many natural language processing (NLP) tasks the same input can have multiple possible outputs. In machine translation (MT), for example, a source sentence may have multiple acceptable translations. We describe how this kind of ambiguity (also known as intrinsic uncertainty) shapes the distributions learned by neural sequence models, and how it impacts various aspects of search such as the inductive biases in beam search and the complexity of exact search. We show that well-known pathologies such as a high number of beam search errors, the inadequacy of the mode, and the drop in system performance with large beam sizes (the beam search curse) apply to tasks with a high level of ambiguity such as MT, but not to less uncertain tasks such as grammatical error correction. The second part of the talk discusses SCONES, an approach that mitigates the negative effects of intrinsic uncertainty on sequence models by framing MT as a multi-label classification problem. We demonstrate that SCONES can be tuned to improve either the translation quality or the runtime of traditional softmax-based models, or to fix model pathologies like the beam search curse that are connected with intrinsic uncertainty.
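
To make the multi-label framing concrete, here is a rough sketch of the idea: instead of a softmax that forces all vocabulary items to compete for one unit of probability mass, each vocabulary item gets an independent sigmoid "is this token acceptable here?" classifier trained with binary cross-entropy. This only approximates the loss in the SCONES paper; in particular, `alpha` is an assumed hyperparameter balancing positive and negative labels, not necessarily the paper's exact weighting.

```python
import torch
import torch.nn.functional as F

def scones_style_loss(logits: torch.Tensor, gold: torch.Tensor,
                      alpha: float = 1.0) -> torch.Tensor:
    """logits: (batch, vocab) unnormalised scores; gold: (batch,) token ids.

    Multi-label view: the gold token is a positive example, every other
    vocabulary item an independent negative example, so probability mass
    for several acceptable tokens no longer has to compete as in softmax.
    """
    batch = torch.arange(logits.size(0))
    targets = torch.zeros_like(logits)
    targets[batch, gold] = 1.0
    # Per-entry binary cross-entropy over the whole vocabulary.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pos = bce[batch, gold]            # loss on the gold (positive) label
    neg = bce.sum(dim=-1) - pos       # summed loss on all negative labels
    return (pos + alpha * neg).mean()
```

Because the sigmoid scores are not forced to sum to one, several acceptable continuations can all receive high scores, which is exactly the property that addresses the intrinsic-uncertainty pathologies discussed in the first part of the talk.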