Monday, 29 March, 2021 - 14:00

Disentangling 20 years of confusion: quo vadis, human evaluation?

Human assessment remains the most trusted form of evaluation in natural language generation, but there is huge variation in both what is assessed and how it is assessed. We recently surveyed 20 years of publications in the NLG community to better understand this variation, and we conclude that the community needs to work together to develop clear standards for human evaluations.


***The talk will be streamed via Zoom. For details on how to join the Zoom meeting, please write to sevcikova et***