NG-NLG

Next-Generation Natural Language Generation

This project aims to overcome the major hurdles that prevent current state-of-the-art models for natural language generation (NLG) from being deployed in the real world. While deep learning and neural networks have brought considerable progress to many areas of natural language processing, neural approaches to NLG remain confined to experimental use, and production NLG systems are still handcrafted. The reason is that despite the very natural and fluent outputs of recent neural systems, neural NLG still has major drawbacks:

  1. the systems' behavior is opaque and hard to control (their internal representation is implicit), which leads to incorrect or even harmful outputs,
  2. the models require a lot of training data and processing power, do not generalize well, and are mostly English-only.

On the other hand, handcrafted models are safe, transparent, and fast, but they produce less fluent outputs and are expensive to adapt to new languages and domains (topics). As a result, the usefulness of NLG models in general is limited. In addition, current methods for automatic evaluation of NLG outputs are unreliable, which hampers system development.

The main aims of this project, directly addressing the above drawbacks, are:

  1. Develop new approaches for NLG that combine neural approaches with explicit symbolic semantic representations, thus allowing greater control over the outputs and explicit logical inferences over the data.
  2. Introduce approaches to model compression and adaptation to make models easily portable across domains and languages.
  3. Develop reliable neural-symbolic approaches for evaluation of NLG systems.

We will test our approaches on multiple NLG applications – data-to-text generation (e.g., weather or sports reports), summarization, and dialogue response generation. For example, our approach will make it possible to deploy a new data reporting system for a given domain based on a few dozen example input-output pairs, compared to the thousands needed by current methods.

Publications

2024

  • Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, Ondrej Dusek. Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs, in: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). [Anthology]
  • Mateusz Lango, Patrícia Schmidtová, Simone Balloccu, Ondřej Dušek. ReproHum #0043-4: Evaluating Summarization Models: Investigating the Impact of Education and Language Proficiency on Reproducibility, in: The 4th Workshop on Human Evaluation of NLP Systems (HumEval’24). [coming soon]

2023

  • Zdeněk Kasner, Ioannis Konstas, Ondrej Dusek. Mind the Labels: Describing Relations in Knowledge Graphs With Pretrained Models, in: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. [Anthology]
  • Zixiu Wu, Simone Balloccu, Ehud Reiter, Rim Helaoui, Diego Reforgiato Recupero, Daniele Riboni. Are Experts Needed? On Human Evaluation of Counselling Reflection Generation, in: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). [Anthology]
  • Mateusz Lango, Ondrej Dusek. Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation, in: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. [Anthology]
  • Zdeněk Kasner, Ekaterina Garanina, Ondrej Platek, Ondrej Dusek. TabGenie: A Toolkit for Table-to-Text Generation, in: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). [Anthology]
  • Vojtěch Hudeček, Ondrej Dusek. Are Large Language Models All You Need for Task-Oriented Dialogue?, in: Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue. [Anthology]
  • Sourabrata Mukherjee, Ondrej Dusek. Leveraging Low-resource Parallel Data for Text Style Transfer, in: Proceedings of the 16th International Natural Language Generation Conference. [Anthology]
  • Saad Obaid ul Islam, Iza Škrjanec, Ondrej Dusek, Vera Demberg. Tackling Hallucinations in Neural Chart Summarization, in: Proceedings of the 16th International Natural Language Generation Conference. [Anthology]
  • Mateusz Woźny, Mateusz Lango. Generating clickbait spoilers with an ensemble of large language models, in: Proceedings of the 16th International Natural Language Generation Conference. [Anthology]
  • Jakub Raczyński, Mateusz Lango, Jerzy Stefanowski. The Problem of Coherence in Natural Language Explanations of Recommendations, in: Proceedings of the 26th European Conference on Artificial Intelligence (ECAI 2023). [Paper text]
  • Emiel van Miltenburg, Miruna Clinciu, Ondřej Dušek, Dimitra Gkatzia, Stephanie Inglis, Leo Leppänen, Saad Mahamood, Stephanie Schoch, Craig Thomson, Luou Wen. Barriers and enabling factors for error analysis in NLG research, in: Northern European Journal of Language Technology. [Paper text]
  • Ondřej Plátek, Ondrej Dusek. MooseNet: A Trainable Metric for Synthesized Speech with a PLDA Module, in: 12th ISCA Speech Synthesis Workshop (SSW2023). [Paper text]
  • Ondrej Platek, Mateusz Lango, Ondrej Dusek. With a Little Help from the Authors: Reproducing Human Evaluation of an MT Error Detector, in: Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems. [Anthology]
  • Sourabrata Mukherjee, Akanksha Bansal, Pritha Majumdar, Atul Kr. Ojha, Ondřej Dušek. Low-Resource Text Style Transfer for Bangla: Data & Models, in: Proceedings of the First Workshop on Bangla Language Processing (BLP-2023). [Anthology]
  • Sourabrata Mukherjee, Vojtěch Hudeček, Ondřej Dušek. Polite Chatbot: A Text Style Transfer Application, in: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop. [Anthology]
  • Nalin Kumar, Saad Obaid Ul Islam, Ondrej Dusek. Better Translation + Split and Generate for Multilingual RDF-to-Text (WebNLG 2023), in: Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023). [Anthology]
  • Ondřej Plátek, Vojtech Hudecek, Patricia Schmidtova, Mateusz Lango, Ondrej Dusek. Three Ways of Using Large Language Models to Evaluate Chat, in: Proceedings of The Eleventh Dialog System Technology Challenge. [Anthology]
  • František Trebuňa, Ondrej Dusek. VisuaLLM: Easy Web-based Visualization for Neural Language Generation, in: Proceedings of the 16th International Natural Language Generation Conference: System Demonstrations. [Anthology]
  • Patricia Schmidtova. Semantic Accuracy in Natural Language Generation: A Thesis Proposal, in: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop). [Anthology]

2022

  • Zdeněk Kasner, Ondřej Dušek. Neural Pipeline for Zero-Shot Data-to-Text Generation, in: ACL. [Anthology] [Github] [Poster]
  • Tomáš Nekvinda, Ondřej Dušek. AARGH! End-to-end Retrieval-Generation for Task-Oriented Dialog, in: SIGdial. [arXiv] [video] [Github]
  • Sourabrata Mukherjee, Zdeněk Kasner, Ondřej Dušek. Balancing the Style-Content Trade-Off in Sentiment Transfer Using Polarity-Aware Denoising, in: Text, Speech and Dialogue. [SpringerLink]
  • Rudali Huidrom, Ondřej Dušek, Zdeněk Kasner, Thiago Castro Ferreira, Anya Belz. Two Reproductions of a Human-Assessed Comparative Evaluation of a Semantic Error Detection System, in: INLG GenChal. [Anthology]
  • Vojtěch Hudeček, Ondřej Dušek. Learning Interpretable Latent Dialogue Actions With Less Supervision, in: AACL-IJCNLP. [arXiv] [Github]

Media

  • Ondřej Dušek: Problémy dnešních generátorů jazyka (The Problems of Today's Language Generators). Vesmír 101, Sep 2022 [URL]
  • Natural language generation research wins ERC grant. Forum Charles University Magazine. Jan 2022 [URL]