The objective of this course is to provide a comprehensive introduction to deep reinforcement learning, a powerful paradigm that combines reinforcement learning with deep neural networks. This approach has demonstrated superhuman capabilities in diverse domains, including mastering complex games like Go and chess, optimizing real-world systems like datacenter cooling, improving chip design, automatically discovering superior algorithms and neural network architectures, and advancing robotics and large language models.
The course focuses both on the theory, spanning from fundamental concepts to recent advancements, and on practical implementations in Python and PyTorch (students implement and train agents controlling robots, mastering video games, and planning in complex board games). Basic programming and deep learning skills are expected (for example from the Deep Learning course).
Students work either individually or in small teams on weekly assignments, including competition tasks, where the goal is to obtain the highest performance in the class.
Optionally, you can obtain a micro-credential after passing the course.
SIS code: NPFL139
Semester: summer
E-credits: 8
Examination: 3/4 C+Ex
Guarantor: Milan Straka
All lectures and practicals will be recorded and available on this website.
1. Introduction to Reinforcement Learning – Slides, PDF Slides, Lecture recording, MonteCarlo Practicals recording, Questions, assignments: bandits, monte_carlo
Unless otherwise stated, teaching materials for this course are available under CC BY-SA 4.0.
A micro-credential (aka micro-certificate) is a digital certificate attesting that you have gained knowledge and skills in a specific area. It should be internationally recognized and verifiable using an online EU-wide verification system.
A micro-credential can be obtained both by the university students and external participants.
If you are not a university student, you can apply to the Reinforcement Learning micro-credential course here and then attend the course alongside the university students. Upon successfully passing the course, a micro-credential is issued.
The price of the course is 5 000 Kč. If you require a tax receipt, please inform Magdaléna Kokešová within three business days after the payment.
The lectures run for 14 weeks from Feb 17 to May 22, with the examination period continuing until the end of September. Please note that the organization of the course and the setup instructions will be described at the first lecture; if you have already applied, you do not need to do anything else until that time.
If you have passed the course (in academic year 2025/26 or later) as a part of your study plan, you can obtain a micro-credential by paying only an administrative fee of 300 Kč; if you passed the course but it is not in your study plan, the administrative fee is 500 Kč. Detailed instructions on how to obtain the micro-credential will be sent to the course participants during the examination period.
The lecture content, including references to study materials.
The main study material is Reinforcement Learning: An Introduction, Second Edition, by Richard S. Sutton and Andrew G. Barto (referred to as RLB). It is available online and also as a hardcopy.
References to study materials cover all theory required at the exam, and sometimes even more – the references in italics cover topics not required for the exam.
Feb 17 – Slides, PDF Slides, Lecture recording, MonteCarlo Practicals recording, Questions, assignments: bandits, monte_carlo
To pass the practicals, you need to obtain at least 80 points, excluding the bonus points. Note that all surplus points (both bonus and non-bonus) will be transferred to the exam. In total, assignments for at least 120 points (not including the bonus points) will be available, and if you solve all the assignments (any non-zero amount of points counts as solved), you automatically pass the exam with grade 1.
The tasks are evaluated automatically using the ReCodEx Code Examiner.
The evaluation is performed using Python 3.11, Gymnasium, and PyTorch. You should install the same versions of these packages yourself.
Solving assignments in teams (of size at most 3) is encouraged, but everyone has to participate (it is forbidden not to work on an assignment and then submit a solution created by other team members). All members of the team must submit in ReCodEx individually, but can have exactly the same sources/models/results. Each such solution must explicitly list all members of the team to allow plagiarism detection using this template.
Cheating is strictly prohibited and any student found cheating will be punished. The punishment can involve failing the whole course, or, in grave cases, being expelled from the faculty. While discussing assignments with any classmate is fine, each team must complete the assignments themselves, without using code they did not write (unless explicitly allowed). Of course, inside a team you are allowed to share code and submit identical solutions. Note that all students involved in cheating will be punished, so if you share your source code with a friend, both you and your friend will be punished. That also means that you should never publish your solutions.
Relying blindly on AI during learning seems to have a negative¹ effect² on skill acquisition. Therefore, you are not allowed to directly copy the assignment descriptions to GenAI and you are not allowed to directly use or copy-paste source code generated by GenAI. However, discussing your manually written code with GenAI is fine.
bandits
Deadline: Mar 04, 22:00 | 3 points
Implement the ε-greedy strategy for solving multi-armed bandits.
Start with the bandits.py template, which defines the MultiArmedBandits environment with the following three methods:
- reset(): reset the environment
- step(action) → reward: perform the chosen action in the environment, obtaining a reward
- greedy(epsilon): return True with probability 1-epsilon
Your goal is to implement the following solution variants:
- alpha=0: perform ε-greedy search, updating the estimates using averaging.
- alpha≠0: perform ε-greedy search, updating the estimates using a fixed learning rate alpha.
Note that the initial estimates should be set to a given value, and epsilon can be zero, in which case purely greedy actions are used.
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 bandits.py --alpha=0 --epsilon=0.1 --initial=0
1.39 0.08
python3 bandits.py --alpha=0 --epsilon=0 --initial=1
1.48 0.22
python3 bandits.py --alpha=0.15 --epsilon=0.1 --initial=0
1.37 0.09
python3 bandits.py --alpha=0.15 --epsilon=0 --initial=1
1.52 0.04
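For orientation, here is a minimal Python sketch of the ε-greedy loop described above. It is not the actual bandits.py template: it assumes the environment exposes the number of arms as env.bandits (an assumption), and it reuses the --alpha, --epsilon, and --initial arguments from the example commands.
import numpy as np

def run_episode(env, args, steps=1000):
    # Value estimates and visit counts for every arm; env.bandits is assumed to hold the number of arms.
    estimates = np.full(env.bandits, args.initial, dtype=np.float64)
    counts = np.zeros(env.bandits, dtype=np.int64)

    env.reset()
    rewards = []
    for _ in range(steps):
        if env.greedy(args.epsilon):
            action = int(np.argmax(estimates))       # exploit the best current estimate
        else:
            action = np.random.randint(env.bandits)  # explore a uniformly random arm
        reward = env.step(action)
        rewards.append(reward)

        counts[action] += 1
        # alpha=0 means averaging (step size 1/n); otherwise use the fixed learning rate alpha.
        step_size = 1 / counts[action] if args.alpha == 0 else args.alpha
        estimates[action] += step_size * (reward - estimates[action])
    return np.mean(rewards)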
monte_carlo
Deadline: Mar 04, 22:00 | 4 points
Solve the discretized CartPole-v1 environment
from the Gymnasium library using the Monte Carlo
reinforcement learning algorithm. The gymnasium environments have the following
methods and properties:
- observation_space: the description of environment observations
- action_space: the description of environment actions
- reset() → new_state, info: starts a new episode, returning the new state and additional environment-specific information
- step(action) → new_state, reward, terminated, truncated, info: perform the chosen action in the environment, returning the new state, obtained reward, boolean flags indicating a terminal state and episode truncation, and additional environment-specific information
We additionally extend the gymnasium environment by:
- episode: number of the current episode (zero-based)
- reset(start_evaluation=False) → new_state, info: if start_evaluation is True, an evaluation is started
Once you finish training (which you indicate by passing start_evaluation=True to reset), your goal is to reach an average return of 490 during 100 evaluation episodes. Note that the environment prints your 100-episode average return every 10 episodes even during training.
Start with the monte_carlo.py template, which parses several useful parameters, creates the environment and illustrates the overall usage.
During evaluation in ReCodEx, three different random seeds will be employed, and you need to reach the required return on all of them. Time limit for each test is 5 minutes.
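For orientation, a minimal Python sketch of tabular every-visit Monte Carlo control with an ε-greedy policy follows. It assumes the wrapped environment discretizes observations into a Discrete observation space (so that env.observation_space.n exists); the argument names args.episodes, args.epsilon, and args.gamma are illustrative and need not match the monte_carlo.py template.
import numpy as np

def train(env, args):
    # Tabular action-value estimates and visit counts for every (state, action) pair.
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    C = np.zeros((env.observation_space.n, env.action_space.n))

    for _ in range(args.episodes):
        # Generate a single episode using the current ε-greedy policy.
        trajectory, done = [], False
        state, _ = env.reset()
        while not done:
            if np.random.uniform() < args.epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            trajectory.append((state, action, reward))
            done = terminated or truncated
            state = next_state

        # Every-visit Monte Carlo update: average the observed returns for each (state, action).
        G = 0
        for state, action, reward in reversed(trajectory):
            G = args.gamma * G + reward
            C[state, action] += 1
            Q[state, action] += (G - Q[state, action]) / C[state, action]
    return Q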
When submitting a competition solution to ReCodEx, you should submit a trained agent and a Python source capable of running it.
Furthermore, please also include the Python source and hyperparameters you used to train the submitted model. Be careful, though, that there must still be exactly one Python source file with a line starting with def main(.
Do not forget about the maximum allowed model size and time and memory limits.
Before the deadline, ReCodEx prints the exact performance of your agent, but only if it is worse than the baseline.
If you surpass the baseline, the assignment is marked as solved in ReCodEx and you immediately get regular points for the assignment. However, ReCodEx does not print the reached performance.
After the first deadline, the latest submission of every user surpassing the required baseline participates in a competition. Additional bonus points are then awarded according to the ordering of the performance of the participating submissions.
After the competition results announcement, ReCodEx starts to show the exact performance for all the already submitted solutions and also for the solutions submitted later.
What Python version to use
The recommended Python version is 3.11. This version is used by ReCodEx to evaluate your solutions. Supported Python versions are 3.11–3.13 (some dependencies do not yet provide wheels for Python 3.14).
You can find out the version of your Python installation using python3 --version.
Installing to central user packages repository
You can install all required packages to central user packages repository using
python3 -m pip install --user --no-cache-dir --extra-index-url=https://download.pytorch.org/whl/cu128 npfl139.
On Linux and Windows, the above command installs the CUDA 12.8 PyTorch build, but you can change cu128 to:
- cpu to get the CPU-only (smaller) version,
- cu124 to get the CUDA 12.4 build,
- rocm7.1 to get the AMD ROCm 7.1 build (Linux only).
On macOS, the --extra-index-url has no effect and Metal support is installed in any case.
To update the npfl139 package later, use python3 -m pip install --user --upgrade npfl139.
Installing to a virtual environment
Python supports virtual environments, which are directories containing
independent sets of installed packages. You can create a virtual environment
by running python3 -m venv VENV_DIR followed by
VENV_DIR/bin/pip install --no-cache-dir --extra-index-url=https://download.pytorch.org/whl/cu128 npfl139.
(or VENV_DIR/Scripts/pip on Windows).
Again, apart from the CUDA 12.8 build, you can change cu128 on Linux and Windows to:
- cpu to get the CPU-only (smaller) version,
- cu124 to get the CUDA 12.4 build,
- rocm7.1 to get the AMD ROCm 7.1 build (Linux only).
To update the npfl139 package later, use VENV_DIR/bin/pip install --upgrade npfl139.
Windows installation
On Windows, it can happen that python3 is not in PATH, while py command
is – in that case you can use py -m venv VENV_DIR, which uses the newest
Python available, or for example py -3.11 -m venv VENV_DIR, which uses
Python version 3.11.
If MuJoCo environments fail during construction, make sure the path of the Python site packages contains no non-ASCII characters. If it does, you can create a new virtual environment in a suitable directory to circumvent the problem.
If you encounter a problem creating the logs in the args.logdir directory,
a possible cause is that the path is longer than 260 characters, which is
the default maximum length of a complete path on Windows. However, you can
increase this limit on Windows 10, version 1607 or later, by following
the instructions.
macOS installation
Run the Install Certificates.command script after installing Python; see https://docs.python.org/3/using/mac.html#installation-steps.
GPU support on Linux and Windows
PyTorch supports NVIDIA and AMD GPUs out of the box; you just need to select the appropriate --extra-index-url when installing the packages.
If you encounter problems loading CUDA or cuDNN libraries, make sure your
LD_LIBRARY_PATH does not contain paths to older CUDA/cuDNN libraries.
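To verify that the GPU build is actually being used, you can query PyTorch directly (a simple illustrative check, not part of the course tooling):
import torch

# Prints the installed PyTorch version and whether a CUDA/ROCm device is visible.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))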
Is it possible to keep the solutions in a Git repository?
Definitely. Keeping the solutions in a branch of your repository, where you merge them with the course repository, is probably a good idea. However, please keep the cloned repository with your solutions private.
On GitHub, do not create a public fork containing your solutions.
If you keep your solutions in a GitHub repository, please do not create a clone of the repository by using the Fork button; this way, the cloned repository would be public.
Of course, if you want to create a pull request, GitHub requires a public fork and you need to create it, just do not store your solutions in it (so you might end up with two repositories, a public fork for pull requests and a private repo for your own solutions).
How to clone the course repository?
To clone the course repository, run
git clone https://github.com/ufal/npfl139
This creates the repository in the npfl139 subdirectory; if you want a different
name, add it as an additional parameter.
To update the repository, run git pull inside the repository directory.
How to merge the course repository updates into a private repository with additional changes?
It is possible to have a private repository that combines your solutions and the updates from the course repository. To do that, start by cloning your empty private repository, and then run the following commands in it:
git remote add course_repo https://github.com/ufal/npfl139
git fetch course_repo
git checkout --no-track course_repo/master
This creates a new remote course_repo and a clone of the master branch from it; however, git pull and git push in this branch will operate on the repository you cloned originally.
To update your branch with the changes from the course repository, run
git fetch course_repo
git merge course_repo/master
while in your branch (the command git pull --no-rebase course_repo master
has the same effect). Of course, it might be necessary to resolve conflicts
if both you and the course repository modified the same lines in the same files.
What files can be submitted to ReCodEx?
You can submit multiple files of any type to ReCodEx. There is a limit of 20 files per submission, with a total size of 20MB.
What file does ReCodEx execute and what arguments does it use?
Exactly one file with py suffix must contain a line starting with def main(.
Such a file is imported by ReCodEx and the main method is executed
(during the import, __name__ == "__recodex__").
The file must also export an argument parser called parser. ReCodEx uses its
arguments and default values, but it overwrites some of the arguments
depending on the test being executed; the template always indicates which
arguments are set by ReCodEx and which are left intact.
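As an illustration, a submitted file typically follows a skeleton like the one below. This is only a sketch: the actual assignment templates already provide this structure, and the shown --seed argument is a placeholder.
import argparse

# ReCodEx reads the defaults from this parser and overrides some arguments per test.
parser = argparse.ArgumentParser()
parser.add_argument("--seed", default=None, type=int, help="Random seed.")

def main(args: argparse.Namespace):
    # Training or evaluation logic goes here; ReCodEx imports this file and calls main.
    ...

if __name__ == "__main__":
    # Only runs locally; in ReCodEx the file is imported with __name__ == "__recodex__".
    main(parser.parse_args())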
What are the time and memory limits?
The memory limit during evaluation is 1.5GB. The time limit varies, but it should be at least 10 seconds and at least twice the running time of my solution.
Do agents need to be trained directly in ReCodEx?
No, you can pre-train your agent locally (unless specified otherwise in the task description).
To pass the practicals, you need to obtain at least 80 points, excluding the bonus points. Note that all surplus points (both bonus and non-bonus) will be transferred to the exam. In total, assignments for at least 120 points (not including the bonus points) will be available, and if you solve all the assignments (any non-zero amount of points counts as solved), you automatically pass the exam with grade 1.
To pass the exam, you need to obtain at least 60, 75, or 90 points out of the 100-point exam to receive grade 3, 2, or 1, respectively. The exam consists of questions worth 100 points in total, drawn from the list below (the questions are randomly generated, but in such a way that there is at least one question from every lecture but the last). In addition, you can get surplus points from the practicals and at most 10 points for community work (i.e., fixing slides or reporting issues) – but only the points you already have at the time of the exam count. You can take the exam without passing the practicals first.
Lecture 1 Questions
Derive how to incrementally update a running average (how to compute an average of N numbers using the average of the first N-1 numbers); a sample derivation is sketched after this list. [5]
Describe multi-armed bandits and write down the ε-greedy algorithm for solving them. [5]
Define a Markov Decision Process, including the definition of a return. [5]
Describe how a partially observable Markov decision process extends a Markov decision process and how the agent is altered. [5]
Define a value function, such that all expectations are over simple random variables (actions, states, rewards), not trajectories. [5]
Define an action-value function, such that all expectations are over simple random variables (actions, states, rewards), not trajectories. [5]
Express a value function using an action-value function, and express an action-value function using a value function. [5]
Define the optimal value function, the optimal action-value function, and the optimal policy. [5]
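For reference, a sketch of the derivation asked for in the first question, writing $Q_n$ for the average of the first $n$ values $x_1, \dots, x_n$:
$$Q_n = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{n}\Big(x_n + (n-1)\,Q_{n-1}\Big) = Q_{n-1} + \frac{1}{n}\big(x_n - Q_{n-1}\big).$$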
