Institute of Formal and Applied Linguistics

at Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic


Year 2016
Type in proceedings without ISBN
Status published
Language English
Author(s) Bojar, Ondřej; Federmann, Christian; Haddow, Barry; Koehn, Philipp; Post, Matt; Specia, Lucia
Title Ten Years of WMT Evaluation Campaigns: Lessons Learnt
Czech title Deset let vyhodnocovacích kampaní WMT: Získané zkušenosti
Proceedings 2016: Portorož, Slovenia: LREC 2016 workshop: Translation Evaluation: From Fragmented Tools and Data Sets to an Integrated Ecosystem
Pages range 27-34
How published online
Supported by 2015-2018 H2020-ICT-2014-1-645452 (QT21: Quality Translation 21); 2015-2017 H2020-ICT-2014-1-645357 (CRACKER); 2012-2016 PRVOUK P46 (Informatika)
Czech abstract (translated) The article summarizes ten years of shared-task competitions in machine translation and in the evaluation of its quality: WMT.
English abstract The WMT evaluation campaign (http://www.statmt.org/wmt16) has been run annually since 2006. It is a collection of shared tasks related to machine translation, in which researchers compare their techniques against those of others in the field. The longest-running task in the campaign is the translation task, where participants translate a common test set with their MT systems. In addition to the translation task, we have also included shared tasks on evaluation: both on automatic metrics (since 2008), which compare the reference to the MT system output, and on quality estimation (since 2012), where system output is evaluated without a reference. An important component of WMT has always been the manual evaluation, wherein human annotators produce the official ranking of the systems in each translation task. This reflects the belief of the WMT organizers that human judgement should be the ultimate arbiter of MT quality. Over the years, we have experimented with different methods of improving the reliability, efficiency and discriminatory power of these judgements. In this paper we report on our experiences in running this evaluation campaign, the current state of the art in MT evaluation (both human and automatic), and our plans for future editions of WMT.
Specialization linguistics ("jazykověda")
Confidentiality default – not confidential
Open access no
Editor(s)* Ondřej Bojar; Aljoscha Burchardt; Christian Dugast; Marcello Federico; Josef Genabith; Barry Haddow; Jan Hajič; Kim Harris; Philipp Koehn; Matteo Negri; Martin Popel; Georg Rehm; Lucia Specia; Marco Turchi; Hans Uszkoreit
Address* Portorož, Slovenia
Month* May
Publisher* LREC
Institution* http://www.cracking-the-language-barrier.eu/

