The written language bias refers to the fact that written and spoken language are quite distinct, though intricately intertwined, modalities of verbal behaviour. Imposing the description of one of these systems onto the other should therefore be avoided, for the sake of adequacy. Since spoken language in everyday conversation is inherently interactional and multimodal, i.e. it draws on other semiotic signals, such as prosody and various non-verbal means (gesture, facial expressions, body movements, etc.), to achieve communicative success, a fully-fledged study of this kind of language use has to rely on multimodal corpora, which belong among specialized small-size corpora that are, sadly, both time-consuming and costly to build. In my talk, after presenting some evidence in support of the multimodal nature of spontaneous interactions, I will introduce the design of a multimodal corpus of Czech that is being developed at the Faculty of Arts of Charles University. Drawing on these Czech data, I will then argue that non-verbal information, namely co-speech gestures, tends to be systematically distributed across speakers and communicative events, helping speakers convey their message and facilitating comprehension. Specifically, I will show how co-speech gestures contribute to the multimodal marking of aspectual contours of events, information structure, as well as temporal relations.
***The talk will be delivered in person (MFF UK, Malostranské nám. 25, 4th floor, room S1) and will be streamed via Zoom. For details on how to join the Zoom meeting, please write to sevcikova et ufal.mff.cuni.cz***