In the talk, I will summarize the major paradigm shift that happened in machine translation (MT) in 2015-2017, leading to neural MT (NMT). Morphologically rich languages no longer seem to pose a distinct challenge, and the errors made by the current best MT systems are getting closer to errors made by humans. I will present the current best approaches, their shortcomings, and research topics relevant for the near future.
On a general note, the relation between linguistics and engineering in MT has been disrupted by NMT and, to some extent, "we will again have to show the importance of linguistics" for MT. Conversely, I am trying to find out what NMT can offer to linguistics. I would like to argue that NMT provides a great object of study, a 'dissectable language learner'.