This is the new course for the '23/24 Fall semester. You can find slides from last year on the archived old page.
This course presents advanced problems and current state-of-the-art in the field of dialogue systems, voice assistants, and conversational systems (chatbots). After a brief introduction into the topic, the course will focus mainly on the application of machine learning – especially deep learning/neural networks – in the individual components of the traditional dialogue system architecture as well as in end-to-end approaches (joining multiple components together).
This course is a follow-up to the course NPFL123 Dialogue Systems, but can be taken independently – important basics will be repeated. All required deep learning concepts will be explained, but only briefly, so some machine learning background is recommended.
The course will be taught in English, but we're happy to explain in Czech, too.
Lectures and labs take place in the room S10 (Malá Strana, 1st floor).
In practice, we'll start with the lectures at 9:50 and continue with the labs afterwards on even weeks. The labs will likely be shorter than 45 minutes, as they mainly consist of homework assignments.
In addition, we plan to stream both lectures and lab instruction over Zoom and make the recordings available on Youtube (under a private link, on request). We'll do our best to provide a useful experience, just note that the quality might not be ideal.
If you can't access Zoom, email us or text us on Slack.
There's also a Slack workspace you can use to discuss assignments and get news about the course. Please contact us by email if you want to join and haven't got an invite yet.
To pass this course, you will need to take an exam and do lab homeworks, which will amount to training an end-to-end neural dialogue system. See more details here. Note that the assignments will be the most challenging part of the course, and will take some time to complete.
PDFs with lecture slides will appear here shortly before each lecture (more details on each lecture are on a separate tab). You can also check out last year's lecture slides.
1. Introduction Slides Questions
2. Data & Evaluation Slides Dataset Exploration Questions
3. Neural Nets Basics Slides Questions
4. Training Neural Nets Slides MultiWOZ 2.2 Loader Questions
5. Natural Language Understanding Slides Questions
6. Dialogue Management (1) Slides Finetuning GPT-2 on MultiWOZ Questions
7. Dialogue Management (2) Slides Questions
8. Language Generation Slides MultiWOZ 2.2 DB + State Questions
9. End-to-end Models Slides Questions
A list of recommended literature is on a separate tab.
10 October Slides Dataset Exploration Questions
24 October Slides MultiWOZ 2.2 Loader Questions
7 November Slides Finetuning GPT-2 on MultiWOZ Questions
21 November Slides MultiWOZ 2.2 DB + State Questions
There will be 6 homework assignments + 2 bonuses, each for a maximum of 10 points. Please see details on grading and deadlines on a separate tab.
Assignments should be submitted via Git – see instructions on a separate tab.
All deadlines are 23:59:59 CET/CEST.
Note: If you don't have a faculty Gitlab account yet, please create one as soon as possible (see the instructions). Don't wait until the deadline! It takes 5 minutes, and if you don't do it, you won't have any way of submitting.
1. Dataset Exploration
Presented: 10 October, Deadline: 27 October
Your task is to select one dialogue dataset, download and explore it.
Here you can use the dataset description/paper that came out with the data. The papers are linked from the dataset webpages or from here. If you can't find a paper, ask us and we'll try to help.
Here you should use your own programming skills. If your dataset has a train/dev/test split, use the training set. If there's no clear separation between a user and a system (e.g. human-human chitchat data, or NLU-only data), provide just the overall numbers.
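For illustration, here is a minimal sketch of the kind of statistics script you might write. The load_dialogues() helper and the assumed data format (a list of dialogues, each a list of utterance strings) are hypothetical -- adapt them to whatever your chosen dataset actually uses.

# minimal sketch of hw1/analysis.py -- load_dialogues() and the data format are placeholders
from collections import Counter

def load_dialogues():
    # TODO: replace with real loading code for your dataset (training set only)
    return [["Hi, I need a cheap hotel.", "Sure, which part of town?"]]

dialogues = load_dialogues()
turn_counts = [len(d) for d in dialogues]
tokens = [tok.lower() for d in dialogues for utt in d for tok in utt.split()]
vocab = Counter(tokens)

print(f"dialogues: {len(dialogues)}")
print(f"avg. turns per dialogue: {sum(turn_counts) / len(dialogues):.2f}")
print(f"avg. words per turn: {len(tokens) / sum(turn_counts):.2f}")
print(f"vocabulary size: {len(vocab)}")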
Files to submit:
hw1/README.md -- your report
hw1/analysis.py or hw1/analysis.ipynb -- your code
See the submission instructions here (clone your Gitlab repo and add a new merge request).
2. MultiWOZ 2.2 Loader
Presented: 24 October, Deadline: 10 November
Your task is to create a component that will load the task-oriented MultiWOZ 2.2 dataset and process the data so it is prepared for model training. The component will consist of two Python classes -- one to hold the data, and one to prepare the training batches.
In later assignments, you will train a GPT-2 based model (similar to SOLOIST) using the data provided by this loader. Note that this means that the next assignments depend on this one.
We prepared some code templates for you in diallama/mw_loader.py
to guide your implementation. You should not need to modify the code already present in the templates. If you feel you need to, you can do so, but please comment on your code changes in the MR. Please do not modify the file hw2/test.py
under any circumstances (contact us if you really think you need to).
The bits that are waiting for your implementation are highlighted with # TODO: in diallama/mw_loader.py.
Note that to use the provided code, you'll need to install the dependencies provided in the requirements.txt
. They can be installed easily via pip install -r requirements.txt
.
MultiWOZ 2.2 is a task-oriented conversational dataset labeled with dialogue acts. It contains around 10k conversations between a user and a Cambridge town info centre (the system). The dialogues cover several domains: restaurants, hotels, trains, taxis, tourist attractions, hospital, and police. You can find more details in the dataset repository.
You can write your own dataset loader from the original format (see the dataset) but we recommend using the Huggingface Datasets library version.
This is what the data looks like when loaded via Huggingface Datasets: each entry in the dataset represents one dialogue. The information we are interested in is contained in the field turns, which is a dictionary with the following important keys:
speaker: the role associated with the speaker, either 0 (user) or 1 (system).
utterance: the string representation of the dialogue utterance.
dialogue_acts: a structured parse of the utterance into dialogue acts (only present in system utterances). It contains slot names and the corresponding span_info (the location of the slot value in the utterance, which will come in handy later).
frames: present only in user utterances; a structured representation of the user's belief state.
Each of these keys is mapped to a list with labels for the corresponding turns, i.e. turns['speaker'][0] contains information for the speaker of the first turn and turns['speaker'][-1] for the last one.
The dataset contains the train, validation and test splits. Please respect them!
Note that MultiWOZ also contains a database (and you need database queries for your system to work correctly), but we'll address that later.
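For a first look at the data, something like the following sketch should work. The dataset id "multi_woz_v22" and the exact field layout depend on your version of the datasets library, so double-check against your installation.

from datasets import load_dataset

data = load_dataset("multi_woz_v22")      # splits: train / validation / test
dialog = data["train"][0]                 # one dialogue
turns = dialog["turns"]

for speaker, utterance in zip(turns["speaker"], turns["utterance"]):
    role = "user" if speaker == 0 else "system"
    print(f"{role}: {utterance}")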
You need to implement the following properties for the Dataset class. Each example corresponds to one system response and should have the following structure:
{
'context': list[str], # list of utterances preceding the current utterance
'utterance': str, # the string with the current response
'delex_utterance': str, # the string with the current response which is delexicalized, i.e. slot values are
# replaced by corresponding slot names in the text.
}
A dialogue of n turns will yield n // 2 examples, each with progressively longer context (starting from a context of length 1, up to n-1 turns of context). We are modelling only system responses!
The context is truncated to the k last utterances, where k is a parameter of the class.
Use dialogue_acts and its fields span_start and span_end to localize the parts suitable for delexicalization. Replace those parts with the corresponding slot names from act_slot_name enclosed in brackets, e.g., [name] or [pricerange].
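A rough sketch of that delexicalization step, assuming the span_info layout described above (parallel lists of act_slot_name / span_start / span_end) -- verify the exact field names against your copy of the data:

def delexicalize(utterance, span_info):
    spans = sorted(
        zip(span_info["act_slot_name"], span_info["span_start"], span_info["span_end"]),
        key=lambda span: span[1],
        reverse=True,  # replace from the right so earlier offsets stay valid
    )
    for slot_name, start, end in spans:
        utterance = utterance[:start] + f"[{slot_name}]" + utterance[end:]
    return utterance

# e.g. "It is in the cheap price range." -> "It is in the [pricerange] price range."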
The DataLoader class, as per the template, can do the following:
It can yield a batch of examples (a simple list with examples of your Dataset) of a batch size given in the constructor.
You need to additionally implement the following property into the DataLoader's batch handling (see _sort_examples_to_buckets_f):
Group examples of similar length (context + utterance) inside the same batch.
Machine learning models usually work with numbers and matrices. That is why we also need to convert strings in our batches to integer IDs. Therefore, inside your data loader class, you'll also need to implement a collate function (collate_fn) that has the following properties:
It is able to work with batches coming from your data loader (lists of examples).
It uses GPT2Tokenizer to split all strings into tokens (subwords) and assign them IDs.
It converts the whole batch to a single dictionary (output) of the following structure:
output = {
'context': list[list[int]], # tokenized context (list of subword ids from all preceding dialogue turns,
# system turns prepended with a special `<|system|>` token and user turns with `<|user|>`)
# for all batch examples
'utterance': list[list[int]], # tokenized utterances (list of subword ids from the current dialogue turn)
# for all batch examples
'delex_utterance': list[list[int]], # tokenized and delexicalized utterances (list of subword ids
# from the current dialogue turn) for all batch examples
}
where {k : output[k][i] for k in output} should correspond to the i-th example of the original input batch.
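A simplified sketch of what collate_fn might do (the real version belongs in diallama/mw_loader.py). It also registers the special speaker tokens mentioned in the note below, and it assumes that context turns simply alternate and end with a user turn:

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained(
    "gpt2", additional_special_tokens=["<|system|>", "<|user|>"]
)

def collate_fn(batch):
    output = {"context": [], "utterance": [], "delex_utterance": []}
    for example in batch:
        context_ids = []
        # walk the context backwards: the last turn is the user's, then turns alternate
        for i, turn in enumerate(reversed(example["context"])):
            speaker = "<|user|>" if i % 2 == 0 else "<|system|>"
            context_ids = tokenizer.encode(speaker + " " + turn) + context_ids
        output["context"].append(context_ids)
        output["utterance"].append(tokenizer.encode(example["utterance"]))
        output["delex_utterance"].append(tokenizer.encode(example["delex_utterance"]))
    return output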
Don't forget to add the special tokens (<|system|>, <|user|>) into the tokenizer (check out the additional_special_tokens argument of the tokenizer)!

Files to submit:
Your implementation in diallama/mw_loader.py.
The output of hw2/test.py run on your data (the test set is used by default), as hw2/results_test.txt. Have a look at what the script is doing; that'll help you with your implementation.

3. Finetuning GPT-2 on MultiWOZ
Presented: 7 November, Deadline: 8 December (extended!)
In this assignment, you will be fine-tuning the GPT-2 language model on the MultiWOZ dataset that you prepared. We'll ignore state tracking and database for now, that will come later on. For now, it suffices that the model will give you some reasonable answer that comes from the correct domain, it doesn't necessarily have to be true :-).
Before you start any implementation, make sure you update from upstream! We swapped hw3 & hw4 compared to last year, so things won't make sense otherwise!
You'll need to add a few more steps to your data loader:
You will work with diallama/mw_loader.py and modify the collate() method in the following way:
Concatenate the context and the (delexicalized) utterance into input_ids, using <|endoftext|> tokens as a delimiter and as the last token.
Build context_mask and utterance_mask with 1 for context/utterance tokens only (see the example below).
Build attention_mask with 0 for padding and 1 for any valid tokens.
Loader outputs (collated) from HW2 looked like this:
<|ENDOFTEXT|> = 3320
<|USER|> = 3321
<|SYSTEM|> = 3322
contexts = [[3321, 1, 2, 3322, 3, 4, 5, 6, 3321, 7, 8, 9], [3321, 10, 11]]
delex_utterances = [[12, 13 , 14], [15, 16, 17, 18]]
What we need is to make them look like this:
input_ids = [
    [3321, 1, 2, 3322, 3, 4, 5, 6, 3321, 7, 8, 9, 3320, 12, 13, 14, 3320],
    [3321, 10, 11, 3320, 15, 16, 17, 18, 3320, 0, 0, 0, 0, 0, 0, 0, 0],
]  # concatenation and padding
context_mask = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]
utterance_mask = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
]
attention_mask = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
]
Notice 3320 as the <|endoftext|> token and the zero padding in input_ids. Check the positions of 1 and 0 for all masks with respect to input_ids.
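The following sketch builds exactly these structures from the HW2-style token id lists; the helper name and the use of 0 as the padding id are just illustrative assumptions.

def build_inputs(contexts, delex_utterances, eos_id, pad_id=0):
    input_ids, context_mask, utterance_mask, attention_mask = [], [], [], []
    for ctx, utt in zip(contexts, delex_utterances):
        ids = ctx + [eos_id] + utt + [eos_id]
        input_ids.append(ids)
        context_mask.append([1] * (len(ctx) + 1) + [0] * (len(utt) + 1))
        utterance_mask.append([0] * (len(ctx) + 1) + [1] * (len(utt) + 1))
    max_len = max(len(ids) for ids in input_ids)
    for i, ids in enumerate(input_ids):
        pad = max_len - len(ids)
        attention_mask.append([1] * len(ids) + [0] * pad)
        input_ids[i] = ids + [pad_id] * pad
        context_mask[i] += [0] * pad
        utterance_mask[i] += [0] * pad
    return input_ids, context_mask, utterance_mask, attention_mask

# build_inputs(contexts, delex_utterances, eos_id=3320) reproduces the example above.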
For the model training, we have prepared the script hw3/train.py
that uses the class Trainer
from trainer.py
.
Your task will be to fill the TODO
s there to implement the training loop and validation step.
You will also need to create an optimizer and scheduler.
Load the pre-trained GPT-2 model from the Huggingface Transformers library. More precisely, instantiate the GPT2LMHeadModel class and load the weights from the pretrained model (see .from_pretrained(...)). Use the smallest version of the model ('gpt2').
Fine-tune the model on the response generation task. It means that your objective is to minimize negative log-likelihood (NLL) of the training data with respect to your model. Among a couple other things, you need to use the model's forward()
method (by simply calling model()
as usual in PyTorch/HF) and feed in the proper parameters.
Feed the whole input_ids
tensors into the model, including the context.
Only train the model to generate the response, not the context, by setting the model's target labels
properly (see this note in the docs). Make use of the utterance_mask
(or context_mask
) to produce the correct labels
input.
Don't forget to use the attention_mask
, so you avoid performing attention over padding.
Feel free to experiment with the optimizer/scheduler and training parameters. A good choice might be the ones preset by Huggingface (AdamW, Linear schedule with warmup).
Use the largest batch size you can (the largest where your GPU doesn't run out of memory). It might actually be very small (1-4).
Monitor the training and validation loss and use it to determine the hyperparameters (number of training epochs, learning rate, learning rate schedule, ...).
First start debugging with very small data, just a few batches (test if the model learns something by checking outputs on the training data).
Fix your random seeds so your results are repeatable, and you can tell if you actually changed something (this must be done separately for Python, NumPy, and PyTorch/TensorFlow!).
Note: You may see a lot of use of the default Huggingface Trainer class elsewhere. We're not doing that here; we're building our own training loop, for two reasons: (1) we need the “feed context + only train to generate responses” functionality, which wouldn't be straightforward there, and (2) we want you to see how it's done at the low level.
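To make the low-level loop concrete, here is a minimal sketch of a single training step, assuming the batch fields produced by your collate function. The real loop belongs in diallama/trainer.py, and the hyperparameters here are placeholders.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, get_linear_schedule_with_warmup

tokenizer = GPT2Tokenizer.from_pretrained(
    "gpt2", additional_special_tokens=["<|system|>", "<|user|>"]
)
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))  # account for the added special tokens
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=10000)

def training_step(batch):
    input_ids = batch["input_ids"]            # Tensor[bs, maxlen]
    attention_mask = batch["attention_mask"]  # 1 for valid tokens, 0 for padding
    utterance_mask = batch["utterance_mask"]  # 1 for response tokens only
    # positions labeled -100 are ignored by the LM loss, so only the response is trained
    labels = input_ids.masked_fill(utterance_mask == 0, -100)
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
    return outputs.loss.item()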
Note: Training on the CPU is usually slow, so you'll likely want to use a GPU. You can use Google Colab, which provides GPUs for free for a limited time span. You can also get an account on our in-house AIC student computing cluster (Ondrej will get your accounts created and distribute passwords soon). Before you work on AIC, make sure you read instructions. You can prepare and debug your setup even without a GPU, then only run on the full data once you have access to a GPU.
Note: If you like experimenting, you can replace the GPT-2 model with a similar model trained on conversational data only, e.g., DialoGPT
. You can find and browse all pre-trained Huggingface models here.
Huggingface provides several options for decoding the outputs of your model. Go through the tutorial and choose a decoding method of your liking (you can go with greedy as the base option). Use it to generate utterances for the first 100 contexts available in the test set.
We have prepared the class GenerationWrapper
, which you will need to complete to implement generation from the model.
Optional -- bonus points: Implement batch decoding as well. This is completely optional, if you are interested in the implementation, let us know.
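A minimal greedy decoding sketch, reusing the model and tokenizer from the training sketch above (the course code wraps this logic in GenerationWrapper; the example context string here is made up):

import torch

model.eval()
context = tokenizer.encode("<|user|> i am looking for a cheap restaurant in the north")
context_ids = torch.tensor([context + [tokenizer.eos_token_id]])
with torch.no_grad():
    generated = model.generate(
        context_ids,
        max_new_tokens=60,
        do_sample=False,                     # greedy; try beam search or sampling too
        pad_token_id=tokenizer.eos_token_id,
    )
response = tokenizer.decode(generated[0, context_ids.shape[1]:], skip_special_tokens=True)
print(response)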
Besides the training and validation loss, we want you to report the following measures on the test set:
Token accuracy (use argmax on the predicted raw logits and compare the result with the ground-truth token ids).
Perplexity.

Files to submit:
Your updated data loader (diallama/mw_loader.py).
Your training code (diallama/trainer.py, hw3/train.py).
A text file (hw3/multiwoz_outputs.txt) containing the first 100 generated validation set responses, each on a separate line.
A text file (hw3/multiwoz_scores.txt) containing your token accuracy and perplexity on the whole validation set.

4. MultiWOZ 2.2 DB + State
Presented: 21 November, Deadline: 15 December
This assignment is a continuation of HW2 and depends on it. Your task will be to extend your previously created data loader with belief state and database information.
This will allow us to train a full end-to-end dialogue model with two-step decoding (similar to SOLOIST) using the data provided by the loader you develop here. The model will first produce the current belief state based on dialogue history. The belief state will be used to query the DB, and using DB results, the system response will be generated. Note that this will be done in HW5 & 6, so these depend on HW4.
The implementation includes changes to the MultiWOZDatabase
class (database search handling), the Dataset
class (including database results and the belief state),
and the DataLoader
class (also including database results and the belief state).
The MultiWOZ dataset is task-oriented, and the database is an important part of it. The database stores entities that are available for each domain, along with their attributes. You will use the database results when modelling the conversations, and therefore you need to implement the database query API. However, some domains are specific and their database queries need to be handled in a special way. Also, the MultiWOZ dataset has a few rather annoying quirks. Therefore, we provide for you a partially implemented database class, which already handles things that would be too annoying to deal with.
You still need to implement some things, though:
Normalization of time values into a unified format, e.g. 3pm -> 15:00, noon -> 12:00, three forty five -> 15:45, etc.
The database query itself (in diallama/database.py). The bits that are waiting for your implementation are highlighted with # TODO: in the code.
Note that to use the provided code, you need to install fuzzywuzzy. It is listed in the requirements.txt file, so if you followed the installation instructions, you probably have it already. We recommend using it for partial matches, e.g., it allows you to match "London" to "London King's Cross" and similar situations.
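For illustration, fuzzy partial matching and a very rough (hypothetical) time normalization helper might look like this; the real logic belongs in diallama/database.py and needs to handle many more cases:

import re
from fuzzywuzzy import fuzz

print(fuzz.partial_ratio("london", "london kings cross"))  # high score -> treat as a match

def time_to_24h(text):
    # hypothetical sketch: normalize a few common formats to HH:MM
    text = text.strip().lower()
    if text == "noon":
        return "12:00"
    match = re.match(r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)?$", text)
    if not match:
        return text  # leave anything we don't recognize untouched
    hour, minute, suffix = int(match.group(1)), match.group(2) or "00", match.group(3)
    if suffix == "pm" and hour < 12:
        hour += 12
    return f"{hour:02d}:{minute}"

print(time_to_24h("3pm"))   # 15:00
print(time_to_24h("noon"))  # 12:00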
This is an extension of the class from HW2. You'll need to implement code in the same spots as for HW2, just add some more stuff.
You need to add belief_state and database_results fields, so each example will look like this:
{
'context': list[str], # list of utterances preceding the current utterance
'utterance': str, # the string with the current response
'delex_utterance': str, # the string with the current response which is delexicalized, i.e. slot values are
# replaced by corresponding slot names in the text.
'belief_state': dict[str, dict[str, str]], # belief state dictionary, for each domain a separate belief state dictionary,
# choose a single slot value if more than one option is available
'database_results': dict[str, int] # dictionary containing the number of matching results per domain
}
belief_state is a dictionary that maps domains to their corresponding belief states (slot-value pairs), i.e.
{ 'restaurant': {'pricerange': 'cheap', 'area': 'north', ...}, 'hotel': {'parking': 'yes', ...}, ... }
Look into the frames fields of user utterances in the dataset to build the belief state.
database_results represents the counts of database entities matching the current belief state for each domain, e.g.
{ 'restaurant': 101, 'hotel': 42, ... }
You need to distinguish between the cases where 0 entities match and where the domain was not mentioned in the belief state and thus was not queried at all! Don't mention the domain in the results in the latter case.

Again, you just need to extend your previously implemented class, so all the previous features (yielding batches, grouping similar lengths, shuffling...) still apply.
And again, you'll need to implement code in the same spots as for HW2, just add a little more.
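A rough sketch of building the belief state from a user turn's frames field. The nesting used here ("service", "state", "slots_values") reflects the Huggingface version of MultiWOZ 2.2, so verify it against your data; the slot-name splitting is an assumption as well.

def update_belief_state(belief_state, frame):
    for domain, state in zip(frame["service"], frame["state"]):
        slots = state["slots_values"]
        for name, values in zip(slots["slots_values_name"], slots["slots_values_list"]):
            slot = name.split("-", 1)[-1]  # e.g. "restaurant-pricerange" -> "pricerange"
            # choose a single value if more than one option is available
            belief_state.setdefault(domain, {})[slot] = values[0]
    return belief_state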
Here you need to extend your collate function:
The output of the function should now look like this -- note the new belief_state
and database_results
fields:
output = {
# From HW2
'context': list[list[int]], # tokenized context for all batch examples
'utterance': list[list[int]], # tokenized utterances (current dialogue turn) for all batch examples
'delex_utterance': list[list[int]], # tokenized and delexicalized utterances for all batch examples
# From HW3
'input_ids': Tensor[bs, maxlen], # concatenated ids for context and utterance (sep. by special tokens)
'attention_mask': Tensor[bs, maxlen], # mask, 1 for valid input, 0 for padding
'context_mask': Tensor[bs, maxlen], # mask, 1 for context tokens, 0 for others
'utterance_mask': Tensor[bs, maxlen], # mask, 1 for utterance tokens, 0 for others
# New -- to be added
'belief_state': list[list[int]], # belief state dictionary serialized into a string representation and prepended with
# the `<|belief|>` special token and tokenized (list of subword ids
# from the current dialogue turn) for all batch examples
'database_results': list[list[int]], # database result counts serialized into string prepended with the `<|database|>`
# special token and tokenized (list of subword ids from the current dialogue turn)
# for all batch examples
}
where {key : output[key][i] for key in output} should correspond to the i-th example of the original input batch.
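One possible way to serialize the belief state and database results into the flat string format shown in the example below (the exact format is up to you, as long as you use it consistently):

def serialize(belief_state, database_results):
    bs_parts = []
    for domain, slots in belief_state.items():
        slot_str = " , ".join(f"{slot} : {value}" for slot, value in slots.items())
        bs_parts.append(f"{domain} {{ {slot_str} }}")
    db_str = " , ".join(f"{domain} {count}" for domain, count in database_results.items())
    return "<|belief|> { " + " ".join(bs_parts) + " } <|database|> { " + db_str + " }"

# serialize({'restaurant': {'area': 'center', 'pricerange': 'cheap'}, 'attraction': {'area': 'south'}},
#           {'restaurant': 45, 'attraction': 23}) reproduces the example below.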
Don't forget to add the new special tokens (<|belief|>, <|database|>) into the tokenizer (check out the additional_special_tokens argument of the tokenizer)! An example of the serialized belief state and database results:
<|belief|> { restaurant { area : center , pricerange : cheap } attraction { area : south } } <|database|> { restaurant 45 , attraction 23 }

Files to submit:
Your implementation in diallama/mw_loader.py.
Your implementation in diallama/database.py.
The output of hw4/test.py run on your data (the test set is used by default), as hw4/results_test.txt. Have a look at what the script is doing.

All homework assignments will be submitted using a Git repository on MFF GitLab.
We provide an easy recipe to set up your repository below:
Check your remote setup with git remote show origin. You should see these two lines:
* remote origin
Fetch URL: git@gitlab.mff.cuni.cz:teaching/NPFL099/2023/your_username.git
Push URL: git@gitlab.mff.cuni.cz:teaching/NPFL099/2023/your_username.git
Add the base repository as a second remote called upstream:
git remote add upstream https://gitlab.mff.cuni.cz/teaching/NPFL099/base.git
For each assignment, create a new branch off master:
git checkout master
git checkout -b hwX
Solve the assignment :)
Add new files (if applicable) and commit your changes:
git add hwX/solution.py
git commit -am "commit message"
git push origin hwX
Create a Merge request in the web interface. Make sure you create the merge request into the master branch in your own forked repository (not into the upstream).
Merge requests -> New merge request
You'll probably need to update from the upstream base repository every once in a while (most probably before you start implementing each assignment). We'll let you know when we make changes to the base repo.
To upgrade from upstream, do the following:
git checkout master
git fetch upstream master
git merge upstream/master master
You can run some basic sanity checks for homework assignments -- they are included in your repository
(make sure to upgrade from upstream first).
Note that the tests require stuff from requirements.txt
to be installed in your Python environment.
The tests run in the current directory and assume you have the correct branches set up.
For instance, to check hw1
, run:
./run_tests.py hw1
By default, this will just check your local files. If you want to check whether you have
your branches set up correctly, use the --check-git
parameter.
Note that this will run git checkout hw1
and git pull
, so be sure to save any
local changes beforehand!
Always update from upstream before running tests, we're adding checks for new assignments as we go. Some may only be available at the last minute, we're sorry for that!
This is just a short primer for the AIC wiki – better read that one, too. But definitely read at least this text before you start working with AIC.
Use the command
ssh LOGIN@aic.ufal.mff.cuni.cz
where LOGIN is your SIS username.
When you log on to AIC, you're at the cluster head node. Do not compute here – this is just for launching computation jobs, copying files and such. All of your computation jobs will run on one of the CPU/GPU nodes. (You can run a terminal multiplexer, e.g. screen or tmux, on the head node.)
There are two ways to compute on the cluster: batch jobs submitted with sbatch, and interactive jobs started with srun. You should use a batch script for running longer computations; the interactive shell is useful for debugging.
Use the sbatch
command to submit your jobs (i.e. shell scripts) into a queue. For running a python command, simply create a shell script that has one line – your command with all the parameters
you need.
You can either specify the parameters in the script or on the command line.
Here are two equivalent ways of specifying a GPU job with 2 CPU cores, 1 GPU and 16G system RAM (all GPUs have 11G memory):
job_script.sh
:#!/bin/bash
#SBATCH -J hello_world # name of job
#SBATCH -p gpu # name of partition or queue (if not specified default partition is used)
#SBATCH --cpus-per-task=2 # number of cores/threads per task (default 1)
#SBATCH --gpus=1 # number of GPUs to request (default 0)
#SBATCH --mem=16G # request 16 gigabytes memory (per node, default depends on node)
# here start the actual commands
sleep 5
echo "Hello I am running on cluster!"
sbatch job_script.sh
job_script.sh
:#!/bin/bash
sleep 5
echo "Hello I am running on cluster!"
sbatch -J hello_world -p gpu -c2 -G1 --mem 16G job_script.sh
Have a look at the AIC wiki or man sbatch
for all the command-line parameters.
(Note: long / short flags can be used interchangeably for both approaches.)
You can get an interactive console using srun
.
The following command will run bash
with the same resources as in the previous example:
srun -J hello_world -p gpu -c2 -G1 --mem=16G --pty bash
Don't forget to exit the console after use – you're blocking the GPU and whatever else you reserve as long as the console is open!
Use sinfo to list the available queues.
Use squeue --me or squeue -u LOGIN (where LOGIN is your username) to check your jobs.
Use squeue to see every job currently running on the cluster.
Use scancel JOB_ID to cancel a job.
You can access your files on the cluster via sftp://LOGIN@aic.ufal.mff.cuni.cz.
The exam will have 10 questions from the pool below. Each question counts for 10 points. We reserve the right to make slight alterations or use variants of the same questions. Note that all of them are covered by the lectures, and they cover most of the lecture content. In general, none of them requires you to memorize formulas, but you should know the main ideas and principles. See the Grading tab for details on grading.
To pass this course, you will need to take a written exam and complete the homework assignments (see the minimum point requirements below).
In case the pandemic gets worse by the exam period, there will be a remote alternative for the exam (an essay with a discussion).
The final grade for the course will be a combination of your exam score and your homework assignment score, weighted 3:1 (i.e. the exam accounts for 75% of the grade, the assignments for 25%).
Grading:
In any case, you need >50% of points from the test and 40+ points (i.e. 66%) from the homeworks to pass. If you get less than the minimum from either, even if you get more than 60% overall, you will not pass.
You should be able to pass the course just by following the lectures, but here are some hints on further reading. There's nothing ideal on the topic as this is a very active research area, but some of these should give you a broader overview.
Recommended, though slightly outdated:
Recommended, but might be a bit too brief:
Further reading: