Extracting syntactic trees from NMT encoder self-attentions. In our presentation, we analyse the self-attentions in the encoder of a Neural Machine Translation system based on the Transformer architecture (Vaswani et al., 2017). The attentions are analysed with respect to dependency and constituency tree structures. The syntactic trees are inferred solely from the attention mechanism and then compared with trees produced by supervised parsers. We show how the attention mechanisms relate to sentence structures as described by different linguistic theories of syntax.
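To make the idea concrete, the following is a minimal sketch of one plausible extraction scheme, not necessarily the method used in the presentation: treat the attention weights of a single encoder head as soft arc scores between word positions, symmetrize them, and extract an undirected maximum spanning tree. The function name `attention_to_tree` and the array `attn` are hypothetical placeholders introduced here for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def attention_to_tree(attn: np.ndarray) -> list[tuple[int, int]]:
    """Induce an undirected tree over sentence positions from a
    self-attention matrix `attn` of shape (seq_len, seq_len)."""
    # Symmetrize so that an edge score does not depend on direction.
    scores = attn + attn.T
    np.fill_diagonal(scores, 0.0)  # disallow self-loops
    # scipy computes a *minimum* spanning tree, so negate the scores
    # to obtain the maximum-scoring tree instead.
    mst = minimum_spanning_tree(-scores).toarray()
    rows, cols = np.nonzero(mst)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: 4 tokens, each row a softmax-like attention distribution.
rng = np.random.default_rng(0)
attn = rng.random((4, 4))
attn /= attn.sum(axis=1, keepdims=True)  # rows sum to 1
print(attention_to_tree(attn))  # e.g. [(0, 2), (1, 3), (2, 3)]
```

A tree induced this way can then be scored against the output of a supervised dependency parser, for instance by unlabeled undirected attachment accuracy, to quantify how much syntax a given attention head encodes.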