14:00
Tomáš Nekvinda: Multi-domain dialogue systems
Abstract:
Task-oriented dialogue systems are typically handcrafted or trained from data for a small set of domains. Non-task-oriented chit-chat systems are trained to respond to open-domain utterances, but they have limited understanding and their responses are largely uncontrolled. My dissertation topic focuses on exploring novel and efficient ways of building multi-domain dialogue models that can jointly serve task-oriented and non-task-oriented dialogues. This may include, for instance, improving task detection, improving open-domain understanding, or adding chit-chat capabilities to end-to-end task-oriented systems. I will talk about the motivation, the main obstacles, recent research in this field, and my initial experiments.
14:25
Mateusz Krubiński: Multimodal Summarization
Abstract:
The goal of Automatic Summarization is to produce a concise summary of a given document. The output should contain the most relevant information from the original content, while also being short and easy to process.
In recent years, there has been growing interest in multimodal approaches, which combine several sources of information, e.g., videos, texts, and images. The aim is to produce a multimodal summary consisting of, for example, a short textual summary and a cover photo. The area still lacks established benchmark datasets and a single metric for measuring the effectiveness of such systems.
In this talk, I will present several variants of this challenge, point out some important obstacles, and briefly mention recent advances in other Vision+Language tasks, such as Video Captioning or Visual Question Answering, that may be useful for Multimodal Summarization.
--------------------------------------
***The talks will be streamed via Zoom. For details on how to join the Zoom meeting, please write to sevcikova et ufal.mff.cuni.cz***