Large Language Models (LLMs) have become indispensable across diverse domains of contemporary language processing, from improving chatbot interactions and machine translation to code generation and the creation of imaginative content. The question of whether LLMs truly comprehend language has been posed repeatedly, often eliciting a resolute negative response. Nevertheless, recent literature offers more nuanced perspectives on the notion of understanding in LLMs. In this talk, we will explore these alternative viewpoints, illustrating how different concepts of understanding and meaning can reshape our perception of how LLMs function. Finally, we will show how this philosophical exploration informs our empirical research into the interpretation of vector language representations in LLMs.
*** The talk will be delivered in person (MFF UK, Malostranské nám. 25, 4th floor, room S1) and will be streamed via Zoom. For details on how to join the Zoom meeting, please write to sevcikova et ufal.mff.cuni.cz ***