In recent years, reinforcement learning has been combined with deep neural networks, giving rise to game-playing agents with superhuman performance (for example in Go, chess, and StarCraft II, trained solely by self-play), datacenter cooling algorithms that are 50% more efficient than trained human operators, and faster algorithms for sorting and matrix multiplication. The goal of the course is to introduce reinforcement learning employing deep neural networks, focusing both on the theory and on practical implementations.
Python programming skills and basic PyTorch/TensorFlow skills are required (the latter can be obtained in the Deep Learning course). No previous knowledge of reinforcement learning is necessary.
SIS code: NPFL139
Semester: summer
E-credits: 8
Examination: 3/4 C+Ex
Guarantor: Milan Straka
All lectures and practicals will be recorded and available on this website.
1. Introduction to Reinforcement Learning Slides PDF Slides Lecture Practicals Questions bandits monte_carlo
2. Value and Policy Iteration, Monte Carlo, Temporal Difference Slides PDF Slides Lecture TD, Q-learning Practicals Questions policy_iteration policy_iteration_exact policy_iteration_mc_estarts policy_iteration_mc_egreedy q_learning
3. Off-Policy Methods, N-step, Function Approximation Slides PDF Slides Lecture TreeBackup Practicals Questions importance_sampling td_algorithms q_learning_tiles lunar_lander
Unless otherwise stated, teaching materials for this course are available under CC BY-SA 4.0.
The lecture content, including references to study materials.
The main study material is Reinforcement Learning: An Introduction, Second Edition, by Richard S. Sutton and Andrew G. Barto (referred to as RLB). It is available online and also as a hardcopy.
References to study materials cover all theory required at the exam, and sometimes even more – the references in italics cover topics not required for the exam.
Feb 19 Slides PDF Slides Lecture Practicals Questions bandits monte_carlo
Feb 26 Slides PDF Slides Lecture TD, Q-learning Practicals Questions policy_iteration policy_iteration_exact policy_iteration_mc_estarts policy_iteration_mc_egreedy q_learning
Mar 5 Slides PDF Slides Lecture TreeBackup Practicals Questions importance_sampling td_algorithms q_learning_tiles lunar_lander
To pass the practicals, you need to obtain at least 80 points, excluding the bonus points. Note that all surplus points (both bonus and non-bonus) will be transferred to the exam. In total, assignments for at least 120 points (not including the bonus points) will be available, and if you solve all the assignments (any non-zero amount of points counts as solved), you automatically pass the exam with grade 1.
The tasks are evaluated automatically using the ReCodEx Code Examiner.
The evaluation is performed using Python 3.11, Gymnasium 1.0.0, and PyTorch 2.6.0. You should install the exact versions of these packages yourselves.
Solving assignments in teams (of size at most 3) is encouraged, but everyone has to participate (it is forbidden not to work on an assignment and then submit a solution created by other team members). All members of the team must submit in ReCodEx individually, but can have exactly the same sources/models/results. Each such solution must explicitly list all members of the team to allow plagiarism detection using this template.
Cheating is strictly prohibited and any student found cheating will be punished. The punishment can involve failing the whole course, or, in grave cases, being expelled from the faculty. While discussing assignments with any classmate is fine, each team must complete the assignments themselves, without using code they did not write (unless explicitly allowed). Of course, inside a team you are allowed to share code and submit identical solutions. Note that all students involved in cheating will be punished, so if you share your source code with a friend, both you and your friend will be punished. That also means that you should never publish your solutions.
Deadline: Mar 05, 22:00 3 points
Implement the ε-greedy strategy for solving multi-armed bandits.
Start with the bandits.py template, which defines the MultiArmedBandits environment with the following three methods:
- reset(): reset the environment
- step(action) → reward: perform the chosen action in the environment, obtaining a reward
- greedy(epsilon): return True with probability 1-epsilon

Your goal is to implement the following solution variants:
- alpha=0: perform ε-greedy search, updating the estimates using averaging
- alpha≠0: perform ε-greedy search, updating the estimates using a fixed learning rate alpha

Note that the initial estimates should be set to a given value and epsilon can be zero, in which case purely greedy actions are used.
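The following minimal sketch shows one possible shape of the ε-greedy loop; it is not the reference solution, the function and argument names are illustrative, and the template's greedy(epsilon) helper can replace the explicit random draw:

import numpy as np

def epsilon_greedy_bandit(env, n_actions, steps=1000, epsilon=0.1, alpha=0.0, initial=0.0):
    # Action-value estimates and per-action counts.
    Q = np.full(n_actions, initial, dtype=np.float64)
    N = np.zeros(n_actions, dtype=np.int64)

    env.reset()
    for _ in range(steps):
        # With probability epsilon explore uniformly, otherwise act greedily.
        if np.random.uniform() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q))

        reward = env.step(action)
        N[action] += 1
        # alpha == 0: averaging; alpha != 0: fixed learning rate.
        step_size = 1 / N[action] if alpha == 0 else alpha
        Q[action] += step_size * (reward - Q[action])
    return Q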
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 bandits.py --alpha=0 --epsilon=0.1 --initial=0
1.39 0.08
python3 bandits.py --alpha=0 --epsilon=0 --initial=1
1.48 0.22
python3 bandits.py --alpha=0.15 --epsilon=0.1 --initial=0
1.37 0.09
python3 bandits.py --alpha=0.15 --epsilon=0 --initial=1
1.52 0.04
Deadline: Mar 05, 22:00 4 points
Solve the discretized CartPole-v1 environment from the Gymnasium library using the Monte Carlo reinforcement learning algorithm. The gymnasium environments have the following methods and properties:
- observation_space: the description of environment observations
- action_space: the description of environment actions
- reset() → new_state, info: starts a new episode, returning the new state and additional environment-specific information
- step(action) → new_state, reward, terminated, truncated, info: perform the chosen action in the environment, returning the new state, obtained reward, boolean flags indicating a terminal state and episode truncation, and additional environment-specific information

We additionally extend the gymnasium environment by:
- episode: number of the current episode (zero-based)
- reset(start_evaluation=False) → new_state, info: if start_evaluation is True, an evaluation is started

Once you finish training (which you indicate by passing start_evaluation=True to reset), your goal is to reach an average return of 490 during 100 evaluation episodes. Note that the environment prints your 100-episode average return every 10 episodes even during training.
Start with the monte_carlo.py template, which parses several useful parameters, creates the environment and illustrates the overall usage.
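For orientation, here is a rough sketch of every-visit Monte Carlo control with an ε-greedy policy; it is not the reference solution, it assumes the discretized environment returns integer states, and the hyperparameters and names are illustrative only:

import numpy as np

def train_monte_carlo(env, n_states, n_actions, episodes=5000, epsilon=0.1, gamma=1.0):
    Q = np.zeros((n_states, n_actions))
    C = np.zeros((n_states, n_actions))  # number of returns averaged so far

    for _ in range(episodes):
        # Generate one episode using the current epsilon-greedy policy.
        trajectory, done = [], False
        state = env.reset()[0]
        while not done:
            if np.random.uniform() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            trajectory.append((state, action, reward))
            done, state = terminated or truncated, next_state

        # Average the observed returns into Q, going backwards through the episode.
        G = 0.0
        for state, action, reward in reversed(trajectory):
            G = reward + gamma * G
            C[state, action] += 1
            Q[state, action] += (G - Q[state, action]) / C[state, action]
    return Q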
During evaluation in ReCodEx, three different random seeds will be employed, and you need to reach the required return on all of them. Time limit for each test is 5 minutes.
Deadline: Mar 12, 22:00 2 points
Consider the following gridworld:
Start with policy_iteration.py, which implements the gridworld mechanics by providing the following methods:
- GridWorld.states: return the number of states (11)
- GridWorld.actions: return the number of actions (4)
- GridWorld.action_labels: return a list with labels of the actions (["↑", "→", "↓", "←"])
- GridWorld.step(state, action): return the possible outcomes of performing the action in a given state, as a list of triples containing
  - probability: probability of the outcome
  - reward: reward of the outcome
  - new_state: new state of the outcome

Implement the policy iteration algorithm, with --steps steps of policy evaluation/policy improvement. During policy evaluation, use the current value function and perform --iterations applications of the Bellman equation. Perform the policy evaluation asynchronously (i.e., update the value function in-place for states 0, 1, …). Assume the initial policy is “go North” and the initial value function is zero.
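A minimal sketch of this loop follows; whether GridWorld.states and the other members are attributes or methods follows the template, and the action_value helper is my own:

def policy_iteration(gamma, steps, iterations):
    # Initial policy "go North" (the first action) and a zero value function.
    policy = [0] * GridWorld.states
    value = [0.0] * GridWorld.states

    def action_value(state, action):
        # Expected one-step return of the action under the current value function.
        return sum(probability * (reward + gamma * value[new_state])
                   for probability, reward, new_state in GridWorld.step(state, action))

    for _ in range(steps):
        # Policy evaluation: --iterations asynchronous (in-place) Bellman updates.
        for _ in range(iterations):
            for state in range(GridWorld.states):
                value[state] = action_value(state, policy[state])
        # Policy improvement: act greedily with respect to the current value function.
        for state in range(GridWorld.states):
            policy[state] = max(range(GridWorld.actions),
                                key=lambda action: action_value(state, action))
    return policy, value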
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 policy_iteration.py --gamma=0.95 --iterations=1 --steps=1
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ -10.00← -10.95↑
0.00↑ 0.00← -7.50← -88.93←
python3 policy_iteration.py --gamma=0.95 --iterations=1 --steps=2
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ -8.31← -11.83←
0.00↑ 0.00← -1.50← -20.61←
python3 policy_iteration.py --gamma=0.95 --iterations=1 --steps=3
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ -6.46← -6.77←
0.00↑ 0.00← -0.76← -13.08↓
python3 policy_iteration.py --gamma=0.95 --iterations=1 --steps=10
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ -1.04← -0.83←
0.00↑ 0.00← -0.11→ -0.34↓
python3 policy_iteration.py --gamma=0.95 --iterations=10 --steps=10
11.93↓ 11.19← 10.47← 6.71↑
12.83↓ 10.30← 10.12←
13.70→ 14.73→ 15.72→ 16.40↓
python3 policy_iteration.py --gamma=1 --iterations=1 --steps=100
74.73↓ 74.50← 74.09← 65.95↑
75.89↓ 72.63← 72.72←
77.02→ 78.18→ 79.31→ 80.16↓
Deadline: Mar 12, 22:00 2 points
Starting with policy_iteration_exact.py,
extend the policy_iteration
assignment to perform policy evaluation
exactly by solving a system of linear equations. Note that you need to
use 64-bit floats because lower precision results in unacceptable error.
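Exact policy evaluation amounts to solving the linear system (I − γP_π)v = r_π for the current policy π. A hedged NumPy sketch, reusing the GridWorld access pattern from the policy_iteration sketch above (names are illustrative):

import numpy as np

def evaluate_policy_exactly(policy, gamma):
    # Build the transition matrix P and expected reward vector r of the policy.
    n = GridWorld.states
    P = np.zeros((n, n), dtype=np.float64)
    r = np.zeros(n, dtype=np.float64)
    for state in range(n):
        for probability, reward, new_state in GridWorld.step(state, policy[state]):
            P[state, new_state] += probability
            r[state] += probability * reward

    # Solve (I - gamma * P) v = r using 64-bit floats.
    return np.linalg.solve(np.eye(n) - gamma * P, r)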
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 policy_iteration_exact.py --gamma=0.95 --steps=1
-0.00↑ -0.00↑ -0.00↑ -0.00↑
-0.00↑ -12.35← -12.35↑
-0.85← -8.10← -19.62← -100.71←
python3 policy_iteration_exact.py --gamma=0.95 --steps=2
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ 0.00← -11.05←
-0.00↑ -0.00↑ -0.00← -12.10↓
python3 policy_iteration_exact.py --gamma=0.95 --steps=3
-0.00↑ 0.00↑ 0.00↑ 0.00↑
-0.00↑ -0.00← 0.69←
-0.00↑ -0.00↑ -0.00→ 6.21↓
python3 policy_iteration_exact.py --gamma=0.95 --steps=4
-0.00↑ 0.00↑ 0.00↓ 0.00↑
-0.00↓ 5.91← 6.11←
0.65→ 6.17→ 14.93→ 15.99↓
python3 policy_iteration_exact.py --gamma=0.95 --steps=5
2.83↓ 4.32→ 8.09↓ 5.30↑
12.92↓ 9.44← 9.35←
13.77→ 14.78→ 15.76→ 16.53↓
python3 policy_iteration_exact.py --gamma=0.95 --steps=6
11.75↓ 8.15← 8.69↓ 5.69↑
12.97↓ 9.70← 9.59←
13.82→ 14.84→ 15.82→ 16.57↓
python3 policy_iteration_exact.py --gamma=0.95 --steps=7
12.12↓ 11.37← 9.19← 6.02↑
13.01↓ 9.92← 9.79←
13.87→ 14.89→ 15.87→ 16.60↓
python3 policy_iteration_exact.py --gamma=0.95 --steps=8
12.24↓ 11.49← 10.76← 7.05↑
13.14↓ 10.60← 10.42←
14.01→ 15.04→ 16.03→ 16.71↓
python3 policy_iteration_exact.py --gamma=0.9999 --steps=5
7385.23↓ 7392.62→ 7407.40↓ 7400.00↑
7421.37↓ 7411.10← 7413.16↓
7422.30→ 7423.34→ 7424.27→ 7425.84↓
Deadline: Mar 12, 22:00 2 points
Starting with policy_iteration_mc_estarts.py,
extend the policy_iteration
assignment to perform policy evaluation
by using Monte Carlo estimation with exploring starts. Specifically,
we update the action-value function by running a
simulation with a given number of steps and using the observed return
as its estimate.
The estimation can now be performed model-free (without access to the full MDP dynamics); therefore, GridWorld.step returns a randomly sampled result instead of a full distribution.
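One possible shape of such an evaluation sweep is sketched below; it is not the reference solution, the sample_step adapter is hypothetical (the exact return format of the sampled GridWorld.step follows the template), and how episodes are started and counted also follows the template:

def sample_step(state, action):
    # Hypothetical adapter: unwrap the sampled GridWorld.step result into
    # (reward, new_state); the exact return format follows the template.
    _, reward, new_state = GridWorld.step(state, action)
    return reward, new_state

def mc_exploring_starts_evaluation(policy, Q, counts, gamma, mc_length):
    # One sweep of Monte Carlo estimation with exploring starts: every
    # (state, action) pair starts one simulated episode of mc_length steps,
    # and the observed discounted return is averaged into Q.
    for start_state in range(GridWorld.states):
        for start_action in range(GridWorld.actions):
            state, action, rewards = start_state, start_action, []
            for _ in range(mc_length):
                reward, state = sample_step(state, action)
                rewards.append(reward)
                action = policy[state]

            G = 0.0
            for reward in reversed(rewards):
                G = reward + gamma * G
            counts[start_state, start_action] += 1
            Q[start_state, start_action] += (G - Q[start_state, start_action]) / counts[start_state, start_action]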
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 policy_iteration_mc_estarts.py --gamma=0.95 --seed=42 --mc_length=100 --steps=1
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ 0.00↑ 0.00↑
0.00↑ 0.00→ 0.00↑ 0.00↓
python3 policy_iteration_mc_estarts.py --gamma=0.95 --seed=42 --mc_length=100 --steps=10
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ 0.00↑ -19.50↑
0.27↓ 0.48← 2.21↓ 8.52↓
python3 policy_iteration_mc_estarts.py --gamma=0.95 --seed=42 --mc_length=100 --steps=50
0.09↓ 0.32↓ 0.22← 0.15↑
0.18↑ -2.43← -5.12↓
0.18↓ 1.80↓ 3.90↓ 9.14↓
python3 policy_iteration_mc_estarts.py --gamma=0.95 --seed=42 --mc_length=100 --steps=100
3.09↓ 2.42← 2.39← 1.17↑
3.74↓ 1.66← 0.18←
3.92→ 5.28→ 7.16→ 11.07↓
python3 policy_iteration_mc_estarts.py --gamma=0.95 --seed=42 --mc_length=100 --steps=200
7.71↓ 6.76← 6.66← 3.92↑
8.27↓ 6.17← 5.31←
8.88→ 10.12→ 11.36→ 13.92↓
Deadline: Mar 12, 22:00 2 points
Starting with policy_iteration_mc_egreedy.py,
extend the policy_iteration_mc_estarts
assignment to perform policy
evaluation by using ε-greedy Monte Carlo estimation. Specifically,
we update the action-value function by running a
simulation with a given number of steps and using the observed return
as its estimate.
For the sake of replicability, use the provided GridWorld.epsilon_greedy(epsilon, greedy_action) method, which returns a random action with probability epsilon and otherwise returns the given greedy_action.
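The evaluation sweep then differs from the exploring-starts sketch above only in how actions are chosen; a hedged variant (again not the reference solution, reusing the hypothetical sample_step adapter, with episode starts following the template):

def mc_epsilon_greedy_evaluation(policy, Q, counts, gamma, epsilon, mc_length):
    # Every action of the simulation is chosen epsilon-greedily via the
    # provided GridWorld.epsilon_greedy helper (for replicability).
    for start_state in range(GridWorld.states):
        state = start_state
        action = GridWorld.epsilon_greedy(epsilon, policy[state])
        start_action, rewards = action, []
        for _ in range(mc_length):
            reward, state = sample_step(state, action)
            rewards.append(reward)
            action = GridWorld.epsilon_greedy(epsilon, policy[state])

        # Average the observed discounted return into Q(start_state, start_action).
        G = 0.0
        for reward in reversed(rewards):
            G = reward + gamma * G
        counts[start_state, start_action] += 1
        Q[start_state, start_action] += (G - Q[start_state, start_action]) / counts[start_state, start_action]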
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 policy_iteration_mc_egreedy.py --gamma=0.95 --seed=42 --mc_length=100 --steps=1
0.00↑ 0.00↑ 0.00↑ 0.00↑
0.00↑ 0.00→ 0.00→
0.00↑ 0.00↑ 0.00→ 0.00→
python3 policy_iteration_mc_egreedy.py --gamma=0.95 --seed=42 --mc_length=100 --steps=10
-1.20↓ -1.43← 0.00← -6.00↑
0.78→ -20.26↓ 0.00←
0.09← 0.00↓ -9.80↓ 10.37↓
python3 policy_iteration_mc_egreedy.py --gamma=0.95 --seed=42 --mc_length=100 --steps=50
-0.16↓ -0.19← 0.56← -6.30↑
0.13→ -6.99↓ -3.51↓
0.01← 0.00← 3.18↓ 7.57↓
python3 policy_iteration_mc_egreedy.py --gamma=0.95 --seed=42 --mc_length=100 --steps=100
-0.07↓ -0.09← 0.28← -4.66↑
0.06→ -5.04↓ -8.32↓
0.00← 0.00← 1.70↓ 4.38↓
python3 policy_iteration_mc_egreedy.py --gamma=0.95 --seed=42 --mc_length=100 --steps=200
-0.04↓ -0.04← -0.76← -4.15↑
0.03→ -8.02↓ -5.96↓
0.00← 0.00← 2.53↓ 4.36↓
python3 policy_iteration_mc_egreedy.py --gamma=0.95 --seed=42 --mc_length=100 --steps=500
-0.02↓ -0.02← -0.65← -3.52↑
0.01→ -11.34↓ -8.07↓
0.00← 0.00← 3.15↓ 3.99↓
Deadline: Mar 12, 22:00 4 points
Solve the discretized MountainCar-v0 environment from the Gymnasium library using the Q-learning reinforcement learning algorithm. Note that this task still does not require PyTorch.
The environment methods and properties are described in the monte_carlo assignment. Once you finish training (which you indicate by passing start_evaluation=True to reset), your goal is to reach an average return of -150 during 100 evaluation episodes.
You can start with the q_learning.py template, which parses several useful parameters, creates the environment and illustrates the overall usage. Note that setting the hyperparameters of Q-learning is a bit tricky – I usually start with a larger value of ε (like 0.2 or even 0.5) and then gradually decrease it to almost zero.
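For orientation, a rough sketch of tabular Q-learning with a linearly decaying ε follows; it is not the reference solution, the discretized environment is assumed to return integer states, and all hyperparameters and names are illustrative:

import numpy as np

def train_q_learning(env, n_states, n_actions, episodes=10000,
                     alpha=0.1, gamma=1.0, epsilon=0.5, final_epsilon=0.01):
    Q = np.zeros((n_states, n_actions))
    for episode in range(episodes):
        eps = np.interp(episode, [0, episodes - 1], [epsilon, final_epsilon])
        state, done = env.reset()[0], False
        while not done:
            if np.random.uniform() < eps:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated

            # One-step Q-learning update with the greedy bootstrap target.
            target = reward + (0.0 if terminated else gamma * np.max(Q[next_state]))
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q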
During evaluation in ReCodEx, three different random seeds will be employed, and you need to reach the required return on all of them. The time limit for each test is 5 minutes.
Deadline: Mar 19, 22:00 2 points
Using the FrozenLake-v1 environment, implement Monte Carlo weighted importance sampling to estimate the state value function of a target policy, which uniformly chooses either action 1 (down) or action 2 (right), utilizing a behaviour policy, which uniformly chooses among all four actions.
Start with the importance_sampling.py template, which creates the environment and generates episodes according to the behaviour policy.
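A hedged sketch of the weighted importance sampling estimate is below; it is not the reference solution, it assumes γ = 1 and the 16 states of FrozenLake, and the episode format is illustrative:

import numpy as np

def weighted_importance_sampling(episodes, n_states=16):
    # episodes: list of episodes generated by the behaviour policy (uniform over
    # all four actions), each a list of (state, action, reward) triples.
    # The target policy is uniform over actions 1 (down) and 2 (right).
    weighted_returns = np.zeros(n_states)
    weights = np.zeros(n_states)

    for episode in episodes:
        G, W = 0.0, 1.0
        # Go backwards, maintaining the return G and the importance ratio W
        # of the remainder of the episode.
        for state, action, reward in reversed(episode):
            G += reward
            W *= (0.5 if action in (1, 2) else 0.0) / 0.25
            if W == 0:
                break  # earlier time steps would also get zero weight
            weighted_returns[state] += W * G
            weights[state] += W

    # Weighted importance sampling estimate of V (zero for unvisited states).
    return np.divide(weighted_returns, weights,
                     out=np.zeros(n_states), where=weights > 0)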
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 importance_sampling.py --episodes=200
0.00 0.00 0.24 0.32
0.00 0.00 0.40 0.00
0.00 0.00 0.20 0.00
0.00 0.00 0.22 0.00
python3 importance_sampling.py --episodes=5000
0.03 0.00 0.01 0.03
0.04 0.00 0.09 0.00
0.10 0.24 0.23 0.00
0.00 0.44 0.49 0.00
python3 importance_sampling.py --episodes=50000
0.03 0.02 0.05 0.01
0.13 0.00 0.07 0.00
0.21 0.33 0.36 0.00
0.00 0.35 0.76 0.00
Deadline: Mar 19, 22:00 4 points
Starting with the td_algorithms.py template, implement all of the following n-step TD method variants: n-step Sarsa, n-step Expected Sarsa, and n-step Tree Backup, each both on-policy and off-policy (the latter selected by the --off_policy option).
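As a point of reference, here is a hedged sketch of the on-policy n-step Sarsa update for a single time step τ, following the RLB indexing (rewards[t] stands for R_t; the names are illustrative, not the template's):

def n_step_sarsa_update(Q, states, actions, rewards, tau, T, n, gamma, alpha):
    # n-step return G = R_{tau+1} + gamma R_{tau+2} + ... + gamma^{n-1} R_{tau+n}
    #                   + gamma^n Q(S_{tau+n}, A_{tau+n}),
    # where the bootstrap term is dropped if the episode ended before tau + n.
    G = 0.0
    for t in range(tau + 1, min(tau + n, T) + 1):
        G += gamma ** (t - tau - 1) * rewards[t]
    if tau + n < T:
        G += gamma ** n * Q[states[tau + n], actions[tau + n]]
    Q[states[tau], actions[tau]] += alpha * (G - Q[states[tau], actions[tau]])

The Expected Sarsa variant replaces the bootstrap with an expectation over the target policy, the off-policy Sarsa and Expected Sarsa variants weight the update by importance sampling ratios, and Tree Backup mixes in expectations at every step and needs no importance sampling.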
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 td_algorithms.py --episodes=10 --mode=sarsa --n=1
Episode 10, mean 100-episode return -652.70 +-37.77
python3 td_algorithms.py --episodes=10 --mode=sarsa --n=1 --off_policy
Episode 10, mean 100-episode return -632.90 +-126.41
python3 td_algorithms.py --episodes=10 --mode=sarsa --n=4
Episode 10, mean 100-episode return -715.70 +-156.56
python3 td_algorithms.py --episodes=10 --mode=sarsa --n=4 --off_policy
Episode 10, mean 100-episode return -649.10 +-171.73
python3 td_algorithms.py --episodes=10 --mode=expected_sarsa --n=1
Episode 10, mean 100-episode return -641.90 +-122.11
python3 td_algorithms.py --episodes=10 --mode=expected_sarsa --n=1 --off_policy
Episode 10, mean 100-episode return -633.80 +-63.61
python3 td_algorithms.py --episodes=10 --mode=expected_sarsa --n=4
Episode 10, mean 100-episode return -713.90 +-107.05
python3 td_algorithms.py --episodes=10 --mode=expected_sarsa --n=4 --off_policy
Episode 10, mean 100-episode return -648.20 +-107.08
python3 td_algorithms.py --episodes=10 --mode=tree_backup --n=1
Episode 10, mean 100-episode return -641.90 +-122.11
python3 td_algorithms.py --episodes=10 --mode=tree_backup --n=1 --off_policy
Episode 10, mean 100-episode return -633.80 +-63.61
python3 td_algorithms.py --episodes=10 --mode=tree_backup --n=4
Episode 10, mean 100-episode return -663.50 +-111.78
python3 td_algorithms.py --episodes=10 --mode=tree_backup --n=4 --off_policy
Episode 10, mean 100-episode return -708.50 +-125.63
Note that your results may be slightly different, depending on your CPU type and whether you use a GPU.
python3 td_algorithms.py --mode=sarsa --n=1
Episode 200, mean 100-episode return -235.23 +-92.94
Episode 400, mean 100-episode return -133.18 +-98.63
Episode 600, mean 100-episode return -74.19 +-70.39
Episode 800, mean 100-episode return -41.84 +-54.53
Episode 1000, mean 100-episode return -31.96 +-52.14
python3 td_algorithms.py --mode=sarsa --n=1 --off_policy
Episode 200, mean 100-episode return -227.81 +-91.62
Episode 400, mean 100-episode return -131.29 +-90.07
Episode 600, mean 100-episode return -65.35 +-64.78
Episode 800, mean 100-episode return -34.65 +-44.93
Episode 1000, mean 100-episode return -8.70 +-25.74
python3 td_algorithms.py --mode=sarsa --n=4
Episode 200, mean 100-episode return -277.55 +-146.18
Episode 400, mean 100-episode return -87.11 +-152.12
Episode 600, mean 100-episode return -6.95 +-23.28
Episode 800, mean 100-episode return -1.88 +-19.21
Episode 1000, mean 100-episode return 0.97 +-11.76
python3 td_algorithms.py --mode=sarsa --n=4 --off_policy
Episode 200, mean 100-episode return -339.11 +-144.40
Episode 400, mean 100-episode return -172.44 +-176.79
Episode 600, mean 100-episode return -36.23 +-100.93
Episode 800, mean 100-episode return -22.43 +-81.29
Episode 1000, mean 100-episode return -3.95 +-17.78
python3 td_algorithms.py --mode=expected_sarsa --n=1
Episode 200, mean 100-episode return -223.35 +-102.16
Episode 400, mean 100-episode return -143.82 +-96.71
Episode 600, mean 100-episode return -79.92 +-68.88
Episode 800, mean 100-episode return -38.53 +-47.12
Episode 1000, mean 100-episode return -17.41 +-31.26
python3 td_algorithms.py --mode=expected_sarsa --n=1 --off_policy
Episode 200, mean 100-episode return -231.91 +-87.72
Episode 400, mean 100-episode return -136.19 +-94.16
Episode 600, mean 100-episode return -79.65 +-70.75
Episode 800, mean 100-episode return -35.42 +-44.91
Episode 1000, mean 100-episode return -11.79 +-23.46
python3 td_algorithms.py --mode=expected_sarsa --n=4
Episode 200, mean 100-episode return -263.10 +-161.97
Episode 400, mean 100-episode return -102.52 +-162.03
Episode 600, mean 100-episode return -7.13 +-24.53
Episode 800, mean 100-episode return -1.69 +-12.21
Episode 1000, mean 100-episode return -1.53 +-11.04
python3 td_algorithms.py --mode=expected_sarsa --n=4 --off_policy
Episode 200, mean 100-episode return -376.56 +-116.08
Episode 400, mean 100-episode return -292.35 +-166.14
Episode 600, mean 100-episode return -173.83 +-194.11
Episode 800, mean 100-episode return -89.57 +-153.70
Episode 1000, mean 100-episode return -54.60 +-127.73
python3 td_algorithms.py --mode=tree_backup --n=1
Episode 200, mean 100-episode return -223.35 +-102.16
Episode 400, mean 100-episode return -143.82 +-96.71
Episode 600, mean 100-episode return -79.92 +-68.88
Episode 800, mean 100-episode return -38.53 +-47.12
Episode 1000, mean 100-episode return -17.41 +-31.26
python3 td_algorithms.py --mode=tree_backup --n=1 --off_policy
Episode 200, mean 100-episode return -231.91 +-87.72
Episode 400, mean 100-episode return -136.19 +-94.16
Episode 600, mean 100-episode return -79.65 +-70.75
Episode 800, mean 100-episode return -35.42 +-44.91
Episode 1000, mean 100-episode return -11.79 +-23.46
python3 td_algorithms.py --mode=tree_backup --n=4
Episode 200, mean 100-episode return -270.51 +-134.35
Episode 400, mean 100-episode return -64.27 +-109.50
Episode 600, mean 100-episode return -1.80 +-13.34
Episode 800, mean 100-episode return -0.22 +-13.14
Episode 1000, mean 100-episode return 0.60 +-9.37
python3 td_algorithms.py --mode=tree_backup --n=4 --off_policy
Episode 200, mean 100-episode return -248.56 +-147.74
Episode 400, mean 100-episode return -68.60 +-126.13
Episode 600, mean 100-episode return -6.25 +-32.23
Episode 800, mean 100-episode return -0.53 +-11.82
Episode 1000, mean 100-episode return 2.33 +-8.35
Deadline: Mar 19, 22:00 3 points
Improve the q_learning task performance on the MountainCar-v0 environment using linear function approximation with tile coding. Your goal is to reach an average reward of -110 during 100 evaluation episodes.
The environment methods are described in the q_learning assignment, with the following changes:
- state returned by the env.step method is a list containing weight indices of the current state (i.e., the feature vector of the state consists of zeros and ones, and only the indices of the ones are returned). The action-value function is therefore approximated as a sum of the weights whose indices are returned by env.step.
- env.observation_space.nvec returns a list, where the i-th element is the number of weights used by the first i elements of state. Notably, env.observation_space.nvec[-1] is the total number of weights.

You can start with the q_learning_tiles.py template, which parses several useful parameters and creates the environment. Implementing Q-learning is enough to pass the assignment, even if both N-step Sarsa and Tree Backup converge a little faster. The default number of tiles in the tile encoding (i.e., the size of the list with weight indices) is args.tiles=8, but you can use any number you want (the assignment is solvable with 8).
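To make the approximation concrete, here is a hedged sketch of the semi-gradient Q-learning update with tile coding; W is assumed to be a zero-initialized weight matrix of shape (env.observation_space.nvec[-1], number of actions), the names are illustrative, and dividing alpha by the number of tiles is a common convention rather than a requirement:

import numpy as np

def q_value(W, state, action):
    # `state` is the list of active weight (tile) indices returned by env.step;
    # the approximate action value is the sum of the corresponding weights.
    return W[state, action].sum()

def q_learning_tiles_update(W, state, action, reward, next_state, done,
                            alpha, gamma, n_actions, tiles):
    # Semi-gradient Q-learning: the gradient with respect to every active
    # weight is 1, so each active weight moves by the scaled TD error.
    bootstrap = 0.0 if done else gamma * max(q_value(W, next_state, a) for a in range(n_actions))
    td_error = reward + bootstrap - q_value(W, state, action)
    W[state, action] += alpha / tiles * td_error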
During evaluation in ReCodEx, three different random seeds will be employed, and you need to reach the required return on all of them. The time limit for each test is 5 minutes.
Deadline: Mar 19, 22:00 5 points + 5 bonus
Solve the LunarLander-v3 environment from the Gymnasium library. Note that this task does not require PyTorch.
The environment methods and properties are described in the monte_carlo assignment, but include one additional method:
- expert_trajectory(seed=None) → trajectory: generates one expert trajectory, where trajectory is a list of triples (state, action, reward), and the action and reward are None when reaching the terminal state. If a seed is given, the expert trajectory random generator is reset before generating the trajectory.

You cannot change the implementation of this method or use its internals in any other way than just calling expert_trajectory(). Furthermore, you can use this method only during training, not during evaluation.
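A small, hypothetical usage example (env is the wrapped environment created by the template, and the method is only called, as required):

# During training: inspect one expert trajectory and compute its return.
# The terminal triple has None for the action and reward, so it is skipped.
trajectory = env.expert_trajectory(seed=42)
expert_return = sum(reward for state, action, reward in trajectory if reward is not None)
print(f"Expert trajectory: {len(trajectory)} states, return {expert_return:.2f}")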
To pass the task, you need to reach an average return of 0 during 1000 evaluation episodes. During evaluation in ReCodEx, three different random seeds will be employed, and you need to reach the required return on all of them. Time limit for each test is 15 minutes.
The task is additionally a competition, and at most 5 points will be awarded according to the relative ordering of your solutions.
You can start with the lunar_lander.py template, which parses several useful parameters, creates the environment and illustrates the overall usage.
In the competition, you should consider the meaning of the environment states to be unknown, so you cannot use the knowledge about how they are created. But you can learn any such information from the data.
When submitting a competition solution to ReCodEx, you should submit a trained agent and a Python source capable of running it.
Furthermore, please also include the Python source and hyperparameters
you used to train the submitted model. But be careful that there still must be
exactly one Python source with a line starting with def main(
.
Do not forget about the maximum allowed model size and time and memory limits.
Before the deadline, ReCodEx prints the exact performance of your agent, but only if it is worse than the baseline.
If you surpass the baseline, the assignment is marked as solved in ReCodEx and you immediately get regular points for the assignment. However, ReCodEx does not print the reached performance.
After the competition deadline, the latest submission of every user surpassing the required baseline participates in a competition. Additional bonus points are then awarded according to the ordering of the performance of the participating submissions.
After the competition results announcement, ReCodEx starts to show the exact performance for all the already submitted solutions and also for the solutions submitted later.
What Python version to use
The recommended Python version is 3.11. This version is used by ReCodEx to evaluate your solutions. The minimum required version is Python 3.10.
You can find out the version of your Python installation using python3 --version.
Installing to central user packages repository
You can install all required packages to the central user packages repository using
python3 -m pip install --user --no-cache-dir --extra-index-url=https://download.pytorch.org/whl/cu118 npfl139
On Linux and Windows, the above command installs the CUDA 11.8 PyTorch build, but you can change cu118 to:
- cpu to get the CPU-only (smaller) version,
- cu124 to get the CUDA 12.4 build,
- rocm6.2.4 to get the AMD ROCm 6.2.4 build (Linux only).

On macOS, the --extra-index-url has no effect and the Metal support is installed in any case.
To update the npfl139 package later, use python3 -m pip install --user --upgrade npfl139.
Installing to a virtual environment
Python supports virtual environments, which are directories containing independent sets of installed packages. You can create a virtual environment by running python3 -m venv VENV_DIR, followed by
VENV_DIR/bin/pip install --no-cache-dir --extra-index-url=https://download.pytorch.org/whl/cu118 npfl139
(or VENV_DIR/Scripts/pip on Windows).
Again, apart from the CUDA 11.8 build, you can change cu118 on Linux and Windows to:
- cpu to get the CPU-only (smaller) version,
- cu124 to get the CUDA 12.4 build,
- rocm6.2.4 to get the AMD ROCm 6.2.4 build (Linux only).

To update the npfl139 package later, use VENV_DIR/bin/pip install --upgrade npfl139.
Windows installation
On Windows, it can happen that python3 is not in PATH, while the py command is – in that case you can use py -m venv VENV_DIR, which uses the newest Python available, or for example py -3.11 -m venv VENV_DIR, which uses Python version 3.11.
If you encounter a problem creating the logs in the args.logdir
directory,
a possible cause is that the path is longer than 260 characters, which is
the default maximum length of a complete path on Windows. However, you can
increase this limit on Windows 10, version 1607 or later, by following
the instructions.
MacOS installation
Run the Install Certificates.command script, which should be executed after the Python installation; see https://docs.python.org/3/using/mac.html#installation-steps.
GPU support on Linux and Windows
PyTorch supports NVIDIA and AMD GPUs out of the box; you just need to select the appropriate --extra-index-url when installing the packages.
If you encounter problems loading CUDA or cuDNN libraries, make sure your
LD_LIBRARY_PATH
does not contain paths to older CUDA/cuDNN libraries.
How to apply for a MetaCentrum account?
After reading the Terms and conditions, you can apply for an account here.
After your account is created, please make sure that the directories containing your solutions are always private.
How to activate Python 3.10 on MetaCentrum?
On MetaCentrum, the newest currently available Python is 3.10, which you need to activate in every session by running the following command:
module add python/python-3.10.4-intel-19.0.4-sc7snnf
How to install the required virtual environment on MetaCentrum?
To create a virtual environment, you first need to decide where it will reside. Either you can find a permanent storage, where you have large-enough quota, or you can use scratch storage for a submitted job.
TL;DR:
Run an interactive CPU job, asking for 16GB scratch space:
qsub -l select=1:ncpus=1:mem=8gb:scratch_local=16gb -I
In the job, use the allocated scratch space as the temporary directory:
export TMPDIR=$SCRATCHDIR
You should clear the scratch space before you exit using the clean_scratch
command. You can instruct the shell to call it automatically by running:
trap 'clean_scratch' TERM EXIT
Finally, create the virtual environment and install PyTorch in it:
module add python/python-3.10.4-intel-19.0.4-sc7snnf
python3 -m venv CHOSEN_VENV_DIR
CHOSEN_VENV_DIR/bin/pip install --no-cache-dir --upgrade pip setuptools
CHOSEN_VENV_DIR/bin/pip install --no-cache-dir --extra-index-url=https://download.pytorch.org/whl/cu118 npfl139
How to run a GPU computation on MetaCentrum?
First, read the official MetaCentrum documentation: Basic terms, Run simple job, GPU computing, GPU clusters.
TL;DR: To run an interactive GPU job with 1 CPU, 1 GPU, 8GB RAM, and 16GB scratch space, run:
qsub -q gpu -l select=1:ncpus=1:ngpus=1:mem=8gb:scratch_local=16gb -I
To run a script in a non-interactive way, replace the -I
option with the script to be executed.
If you want to run a CPU-only computation, remove the -q gpu
and ngpus=1:
from the above commands.
How to install required packages on AIC?
Python 3.11.7 is available at /opt/python/3.11.7/bin/python3, so you should start by creating a virtual environment using
/opt/python/3.11.7/bin/python3 -m venv VENV_DIR
and then install the required packages into it using
VENV_DIR/bin/pip install --no-cache-dir --extra-index-url=https://download.pytorch.org/whl/cu118 npfl139
How to run a GPU computation on AIC?
First, read the official AIC documentation: Submitting CPU Jobs, Submitting GPU Jobs.
TL;DR: To run an interactive GPU job with 1 CPU, 1 GPU, and 16GB RAM, run:
srun -p gpu -c1 -G1 --mem=16G --pty bash
To run a shell script requiring a GPU in a non-interactive way, use
sbatch -p gpu -c1 -G1 --mem=16G SCRIPT_PATH
If you want to run a CPU-only computation, remove the -p gpu
and -G1
from the above commands.
Is it possible to keep the solutions in a Git repository?
Definitely. Keeping the solutions in a branch of your repository, where you merge them with the course repository, is probably a good idea. However, please keep the cloned repository with your solutions private.
On GitHub, do not create a public fork with your solutions
If you keep your solutions in a GitHub repository, please do not create a clone of the repository by using the Fork button – this way, the cloned repository would be public.
Of course, if you just want to create a pull request, GitHub requires a public fork and that is fine – just do not store your solutions in it.
How to clone the course repository?
To clone the course repository, run
git clone https://github.com/ufal/npfl139
This creates the repository in the npfl139
subdirectory; if you want a different
name, add it as the last parameter.
To update the repository, run git pull
inside the repository directory.
How to keep the course repository as a branch in your repository?
If you want to store the course repository just in a local branch of your existing repository, you can run the following command while in it:
git remote add course_repo https://github.com/ufal/npfl139
git fetch course_repo
git checkout --track course_repo/master -b BRANCH_NAME
This creates a branch BRANCH_NAME
, and when you run git pull
in that
branch, it will be updated to the current state of the course repository.
How to merge the course repository updates with your modified branch?
If you want to store your solutions in your branch and gradually update this branch to track the changes in the course repository, you should start by
git remote add course_repo https://github.com/ufal/npfl139
git fetch course_repo
git checkout --no-track course_repo/master -b BRANCH_NAME
which creates a branch BRANCH_NAME
with the current state of the
course repository. However, unlike in the previous case, git pull
and git push
in this branch will not operate on the course repository.
Therefore, you can then commit to this branch and push it to your own
repository.
To update your branch with the changes from the course repository, run
git fetch course_repo
git merge course_repo/master
while in your branch. Of course, it might be necessary to resolve conflicts if both you and I modified the same lines in the templates.
What files can be submitted to ReCodEx?
You can submit multiple files of any type to ReCodEx. There is a limit of 20 files per submission, with a total size of 20MB.
What file does ReCodEx execute and what arguments does it use?
Exactly one file with the py suffix must contain a line starting with def main(. Such a file is imported by ReCodEx, and its main method is executed (during the import, __name__ == "__recodex__").
The file must also export an argument parser called parser
. ReCodEx uses its
arguments and default values, but it overwrites some of the arguments
depending on the test being executed – the template should always indicate which
arguments are set by ReCodEx and which are left intact.
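A minimal sketch of a file with the expected structure follows; the argument and the main signature are illustrative, since the actual templates define their own parsers and may pass additional objects to main:

#!/usr/bin/env python3
import argparse

# ReCodEx reads this parser's arguments and default values, overriding some
# of them (the template marks which ones) for each test.
parser = argparse.ArgumentParser()
parser.add_argument("--seed", default=None, type=int, help="Random seed.")

def main(args: argparse.Namespace):
    # Training or evaluation code goes here.
    ...

if __name__ == "__main__":
    # Locally the script runs main itself; in ReCodEx the file is imported
    # (with __name__ == "__recodex__") and main is called by the evaluator.
    main(parser.parse_args())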
What are the time and memory limits?
The memory limit during evaluation is 1.5GB. The time limit varies, but it should be at least 10 seconds and at least twice the running time of my solution.
Do agents need to be trained directly in ReCodEx?
No, you can pre-train your agent locally (unless specified otherwise in the task description).
To pass the practicals, you need to obtain at least 80 points, excluding the bonus points. Note that all surplus points (both bonus and non-bonus) will be transferred to the exam. In total, assignments for at least 120 points (not including the bonus points) will be available, and if you solve all the assignments (any non-zero amount of points counts as solved), you automatically pass the exam with grade 1.
To pass the exam, you need to obtain at least 60, 75, or 90 points out of the 100-point exam to receive a grade 3, 2, or 1, respectively. The exam consists of questions worth 100 points in total, drawn from the list below (the questions are generated randomly, but in such a way that there is at least one question from every lecture except the last one). In addition, you can get surplus points from the practicals and at most 10 points for community work (i.e., fixing slides or reporting issues) – but only the points you already have at the time of the exam count. You can take the exam without passing the practicals first.
Lecture 1 Questions
Derive how to incrementally update a running average (how to compute an average of n numbers using the average of the first n−1 numbers). [5]
Describe multi-armed bandits and write down the ε-greedy algorithm for solving them. [5]
Define a Markov Decision Process, including the definition of a return. [5]
Describe how a partially observable Markov decision process extends the Markov decision process and how the agent is altered. [5]
Define a value function, such that all expectations are over simple random variables (actions, states, rewards), not trajectories. [5]
Define an action-value function, such that all expectations are over simple random variables (actions, states, rewards), not trajectories. [5]
Express a value function using an action-value function, and express an action-value function using a value function. [5]
Define optimal value function and optimal action-value function. Then define optimal policy in such a way that its existence is guaranteed. [5]
Lecture 2 Questions
Write down the Bellman optimality equation. [5]
Define the Bellman backup operator. [5]
Write down the value iteration algorithm. [5]
Define the supremum norm and prove that the Bellman backup operator is a contraction with respect to this norm. [10]
Formulate and prove the policy improvement theorem. [10]
Write down the policy iteration algorithm. [10]
Write down the tabular Monte-Carlo on-policy every-visit ε-soft algorithm. [5]
Write down the Sarsa algorithm. [5]
Write down the Q-learning algorithm. [5]
Lecture 3 Questions
Elaborate on how importance sampling can estimate expectations with respect to the target policy π based on samples from the behaviour policy b. [5]
Show how to estimate returns in the off-policy case, both with (a) ordinary importance sampling and (b) weighted importance sampling. [10]
Write down the Expected Sarsa algorithm and show how to obtain Q-learning from it. [10]
Write down the Double Q-learning algorithm. [10]
Show the bootstrapped estimate of the n-step return. [5]
Write down the update in on-policy n-step Sarsa (assuming you already have the previous steps, actions and rewards). [5]
Write down the update in off-policy n-step Sarsa with importance sampling (assuming you already have the previous steps, actions and rewards). [10]
Write down the update of the n-step Tree-backup algorithm (assuming you already have the previous steps, actions and rewards). [10]
Assuming function approximation, define Mean squared value error. [5]
Write down the gradient Monte-Carlo on-policy every-visit ε-soft algorithm. [10]