Monday, 30 November, 2020 - 14:00
Room: 

Hidden in the Layers: Interpretation of Neural Networks for Natural Language Processing

David Mareček
Jindřich Libovický
Tomáš Musil
Rudolf Rosa
Tomasz Limisiewicz (ÚFAL MFF UK)
In recent years, deep neural networks have come to dominate the field of Natural Language Processing (NLP). End-to-end-trained models perform tasks more skillfully than ever before and develop their own language representations. However, they act as black boxes that are very hard to interpret. This calls for an inspection of the extent to which established linguistic conceptualizations are consistent with what the models actually learn. Do neural networks use morphology and syntax the way people do when they talk about language? Or do they develop representations of their own?
In our talk, we will half-open the neural black box and analyze the internal representations of input sentences with respect to their morphological, syntactic, and semantic properties. We will focus on word embeddings as well as on contextual embeddings and self-attention in Transformer models (BERT, NMT). We will present both supervised and unsupervised analysis approaches.
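
To give a concrete sense of the raw material such analyses work with, here is a minimal Python sketch (our illustration, not code from the talk) that extracts per-layer contextual embeddings and self-attention maps from a pretrained BERT via the HuggingFace transformers library; the model name bert-base-cased and the probing suggestions in the comments are assumptions for the example:

```python
# Minimal sketch: extract per-layer hidden states and self-attention
# maps from a pretrained BERT (illustrative, not the speakers' code).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained(
    "bert-base-cased",
    output_hidden_states=True,  # return embeddings from every layer
    output_attentions=True,     # return self-attention weights per head
)
model.eval()

sentence = "Do neural networks use morphology and syntax the way people do?"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (num_layers + 1) tensors, [batch, tokens, hidden_dim]
# attentions:    tuple of num_layers tensors, [batch, heads, tokens, tokens]
print(len(outputs.hidden_states), outputs.hidden_states[0].shape)
print(len(outputs.attentions), outputs.attentions[0].shape)

# These tensors are the usual starting point for interpretation work:
# e.g., train a probing classifier on hidden states to predict
# part-of-speech tags, or inspect attention heads for syntactic
# patterns such as head-dependent relations.
```

A supervised analysis would fit a probe (e.g., a linear classifier) on these hidden states against gold linguistic annotation; an unsupervised one would instead cluster the representations or inspect attention patterns directly.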

***The talk will be streamed via Zoom. For details on how to join the Zoom meeting, please write to sevcikova et ufal.mff.cuni.cz***