Neural Monkey is a universal toolkit for training neural models for sequence-to-sequence tasks. The system has been successfully tested on machine translation, multimodal machine translation, and automatic post-editing. It can also be used for many other tasks, including image captioning, part-of-speech tagging, and sequence classification.

Neural Monkey's primary goal is to allow fast prototyping and easy extension, which makes it a toolkit of choice for researchers who want to implement or modify recently published techniques.

Neural Monkey is written in Python 3 and built on the TensorFlow library. It supports training on GPUs with minimal effort.

If you want to start using Neural Monkey, clone it from its GitHub page. There are a number of tutorials and plenty of other useful information in the package README and in the documentation.
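Getting started can be sketched as follows. This is a minimal example assuming the repository lives under the `ufal` organization on GitHub and that Python 3 with `pip` is already installed; check the README for the authoritative installation steps:

```shell
# Clone the repository (assumed URL; verify against the project's GitHub page)
git clone https://github.com/ufal/neuralmonkey
cd neuralmonkey

# Install the Python dependencies listed by the project
pip3 install --user -r requirements.txt
```

After installation, experiments are typically driven by configuration files, as described in the documentation.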


References

Neural Monkey was used in the following publications:

  • Libovický, J., Helcl, J., Tlustý, M., Bojar, O., & Pecina, P. (2016, August). CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation Tasks. In Proceedings of the First Conference on Machine Translation (p. 646).
  • Avramidis, E., Macketanz, V., Burchardt, A., Helcl, J., & Uszkoreit, H. (2016, October). Deeper Machine Translation and Evaluation for German. In 2nd Deep Machine Translation Workshop (p. 29).
  • Bojar, O., Sudarikov, R., Kocmi, T., Helcl, J., & Cífka, O. (2016). UFAL Submissions to the IWSLT 2016 MT Track. In Proceedings of the Ninth International Workshop on Spoken Language Translation (IWSLT), Seattle, WA.

How to cite

The paper will hopefully be available soon. If you make use of Neural Monkey in the meantime, please contact us and ask how to cite it.