Project Manager (ÚFAL): 
Provider: 
Grant id: 338521
Duration: 2021 - 2023

The latent representations of neural networks trained on large corpora for language modeling or machine translation (also known as language embeddings) have been shown to encode various linguistic features. The vectors computed by models such as ELMo [1] or BERT [2] are now routinely used as input to neural models solving specific downstream language tasks and often achieve state-of-the-art results.
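
As a concrete illustration of this reuse pattern, the following minimal sketch extracts contextual token vectors from a pre-trained encoder and passes them to a placeholder downstream classifier. The use of the HuggingFace transformers library and the "bert-base-multilingual-cased" checkpoint is an assumed tooling choice for the example, not one prescribed by the project.

    # Sketch: contextual embeddings as input features for a downstream model.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

    sentence = "Representations of neural language models encode linguistic features."
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        outputs = encoder(**inputs)

    # One contextual vector per sub-word token from the last layer; these
    # vectors serve as input features for a task-specific model.
    token_embeddings = outputs.last_hidden_state.squeeze(0)  # (num_tokens, 768)

    # A trivial stand-in for a downstream head (e.g. tagging with 17 UPOS tags);
    # the label set is a placeholder for illustration only.
    downstream_head = torch.nn.Linear(token_embeddings.size(-1), 17)
    logits = downstream_head(token_embeddings)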

Nevertheless, these methods of computing reusable representations have drawbacks. The models act as black boxes: only the input and the output have an obvious linguistic interpretation, while the processing performed inside remains opaque. A related problem is flawed generalization: once a model has been trained on a certain data distribution, it is hard to reuse the learned information when the distribution (or the task itself) changes.

The project tackles the issues of limited explainability and poor generalization by developing new ways of a) providing insight into pre-trained representations; b) transforming the representations so that the information encoded within them is more accessible to human interpreters and other neural models; c) improving the embeddings by providing an additional linguistic signal during training. Our analysis will use representations from models trained on many languages, which lie in a shared cross-lingual space.
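
To illustrate direction (a), the sketch below shows a generic probing classifier of the kind commonly used to gain insight into pre-trained representations: a simple linear model is trained on frozen embeddings to predict a linguistic label, and its accuracy indicates how accessibly that feature is encoded. The random data, label set, and the use of scikit-learn are placeholders for illustration, not the project's actual experimental setup.

    # Generic probing-classifier sketch on frozen embeddings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Placeholder data: in practice, token_vectors would be contextual
    # embeddings (e.g. extracted as in the previous snippet) and pos_labels
    # the corresponding gold part-of-speech tags.
    rng = np.random.default_rng(0)
    token_vectors = rng.normal(size=(1000, 768))   # frozen embeddings
    pos_labels = rng.integers(0, 17, size=1000)    # gold tag ids

    X_train, X_test, y_train, y_test = train_test_split(
        token_vectors, pos_labels, test_size=0.2, random_state=0
    )

    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)

    # High accuracy relative to a majority-class baseline suggests the probed
    # feature is linearly recoverable from the representations.
    print("probing accuracy:", probe.score(X_test, y_test))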


[1] Deep contextualized word representations, Peters et al., 2018
[2] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Devlin et al., 2019