SIS code: 
C + Ex

NPFL124 – Natural Language Processing


NPFL124 provides students with knowledge and hands-on experience related to basic (mostly statistical) methods in the field of Natural Language Processing. The students will be acquainted with fundamental components such as corpora and language models, as well as with complex end-user applications such as Machine Translation.

The course consists of six two-week thematic blocks taught by five lecturers:

Scheduled time and location

  • lectures on Wednesdays at 10:40 in S4, every week starting from the first week of the summer semester
  • plus practicals on Wednesdays at 14:00 in SW2, only in odd weeks of the semester (exact dates marked in the list of lectures and practicals)

Starting requirements


1. Introduction Intro to NLP Questions

2. Language modeling. Language Models Questions

3. Morphological analysis Morphology Questions

4. Syntactic analysis Syntax Questions

5. Overview of Language Data Resources Data resources Questions Assignment on diacritics

6. Evaluation measures in NLP Evaluation Questions

7. Information retrieval IR Assignment on IR

8. Information retrieval, cont. IR cont. Questions

9. Introduction to Deep Learning in NLP Deep learning intro

10. Deep learning applications in NLP DL in applications Questions

11. Machine translation MT intro+Word Alignment+PBMT Word Alignment by Philipp Koehn Recording of the Lecture Assignment on Word Alignment

12. Machine translation, cont. Main Slides: Neural MT Extra Slides: Transformer Recording of the Lecture Questions

13. ChatGPT (morning); Final written test (afternoon) LLMs


Unless otherwise stated, teaching materials for this course are available under CC BY-SA 4.0.

1. Introduction

 February 15, 2023, 10:40 in S4 (lecture)  February 15, 2023, 14:00 in SW2 (practicals) Intro to NLP Questions

Lecturer: Pavel Pecina


  • Motivation for NLP.
  • Basic notions from probability and information theory.

2. Language modeling.

 February 22, 2023, 10:40 in S4 (lecture) Language Models Questions

Lecturer: Pavel Pecina


  • Language models.
  • The noisy channel model.
  • Markov models.
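
To make the maximum-likelihood estimation of an n-gram model concrete, here is a minimal illustrative sketch (my own toy example, not part of the lecture materials); the toy corpus and the <s>/</s> boundary symbols are made up:

  from collections import Counter

  def trigram_mle(tokens):
      """Maximum-likelihood estimate: P(w3 | w1, w2) = c(w1, w2, w3) / c(w1, w2)."""
      trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
      bigrams = Counter(zip(tokens, tokens[1:]))
      return {(w1, w2, w3): c / bigrams[(w1, w2)]
              for (w1, w2, w3), c in trigrams.items()}

  tokens = "<s> <s> the cat sat on the mat </s>".split()
  probs = trigram_mle(tokens)
  print(probs[("<s>", "<s>", "the")])   # -> 1.0 (the only trigram starting with <s> <s>)

Any trigram unseen in the training data gets probability zero under this estimate, which is exactly why the smoothing methods discussed in the lecture are needed.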

3. Morphological analysis

 March 1, 2023, 10:40 in S4 (lecture)  March 1, 2023, 14:00 in SW2 (practicals) Morphology Questions

Lecturer: Daniel Zeman


  • Morphological tags, parts of speech, morphological categories.
  • Finite-state morphology.
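
As an illustration of the finite-state idea, the following toy transducer (my own example, not taken from the slides) maps a surface string to a lexical string; it accepts only the forms "cat" and "cats" and outputs the lemma plus a number tag, with '#' used as an explicit end-of-word symbol:

  # Transitions of a toy deterministic transducer:
  # (state, surface character) -> (lexical output, next state)
  transitions = {
      (0, 'c'): ('c', 1), (1, 'a'): ('a', 2), (2, 't'): ('t', 3),
      (3, '#'): ('+Sg', 4),                      # "cat"  -> "cat+Sg"
      (3, 's'): ('+Pl', 5), (5, '#'): ('', 4),   # "cats" -> "cat+Pl"
  }
  FINAL_STATES = {4}

  def analyze(surface):
      state, output = 0, []
      for ch in surface + '#':
          if (state, ch) not in transitions:
              return None                        # the transducer rejects the string
          out, state = transitions[(state, ch)]
          output.append(out)
      return "".join(output) if state in FINAL_STATES else None

  print(analyze("cats"))   # -> cat+Pl
  print(analyze("cat"))    # -> cat+Sg
  print(analyze("dog"))    # -> None (not covered by this toy lexicon)

A real morphological analyzer composes a lexicon transducer with rule transducers that handle phonological and orthographical alternations; this sketch only shows the basic traversal algorithm.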


4. Syntactic analysis

 March 8, 2023, 10:40 in S4 (lecture) Syntax Questions

Lecturer: Daniel Zeman


  • Dependency vs. phrase-based model.
  • Dependency parsing.

5. Overview of Language Data Resources

 March 15, 2023, 10:40 in S4 (lecture)  March 15, 2023, 14:00 in SW2 (practicals) Data resources Questions Assignment on diacritics

Lecturer: Zdeněk Žabokrtský

Lecture topics:

  • Types of language data resources.
  • Annotation principles.


6. Evaluation measures in NLP

 March 22, 2023, 10:40 in S4 (lecture) Evaluation Questions

Lecturer: Zdeněk Žabokrtský


  • Purposes of evaluation.
  • Evaluation best practices, estimating upper and lower bounds.
  • Task-specific measures.

7. Information retrieval

 March 29, 2023, 10:40 in S4 (lecture)  March 29, 2023, 14:00 in SW2 (practicals) IR Assignment on IR

Lecturer: Pavel Pecina


  • Intro to IR.
  • Boolean model.
  • Inverted index.

8. Information retrieval, cont.

 April 5, 2023, 10:40 in S4 (lecture) IR cont. Questions

Lecturer: Pavel Pecina


  • Probabilistic models for Information Retrieval.

9. Introduction to Deep Learning in NLP

 April 12, 2023, 10:40 in S4 (lecture)  April 12, 2023, 14:00 in SW2 (practicals)

Deep learning intro

Lecturer: Jindřich Libovický


  • Neural network basics
  • Word embeddings, sequence-processing architectures
  • Pre-trained models: Word2Vec, BERT

The exercise is available in a Google Colab Sheet.
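
As a tiny self-contained illustration of the word-embedding idea (the three-dimensional vectors below are made up for the example; real Word2Vec or BERT embeddings are learned from data and have hundreds of dimensions), nearest neighbours of a word can be found by cosine similarity:

  import numpy as np

  # Toy "embeddings"; the values are invented for illustration only.
  emb = {
      "king":  np.array([0.90, 0.80, 0.10]),
      "queen": np.array([0.85, 0.75, 0.20]),
      "apple": np.array([0.10, 0.20, 0.90]),
  }

  def cosine(u, v):
      return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

  def most_similar(word, topn=2):
      sims = [(cosine(emb[word], vec), other)
              for other, vec in emb.items() if other != word]
      return sorted(sims, reverse=True)[:topn]

  print(most_similar("king"))   # "queen" comes out far closer than "apple"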

10. Deep learning applications in NLP

 April 19, 2023, 10:40 in S4 (lecture) DL in applications Questions

Lecturer: Jindřich Libovický


  • Named entity recognition
  • Answer span selection
  • Unsupervised dictionary induction
  • Combining language and vision

The recording of this lecture failed and therefore will not be available in SIS. Most of the exam questions were covered in the previous lecture on deep learning. Sorry for the inconvenience.

11. Machine translation

 April 26, 2023, 10:40 in S4 (lecture)  April 26, 2023, 14:00 in SW2 (practicals)

MT intro+Word Alignment+PBMT Word Alignment by Philipp Koehn Recording of the Lecture Assignment on Word Alignment

Lecturer: Ondřej Bojar


  • Introduction to MT.
  • MT evaluation.
  • Alignment.
  • Phrase-Based MT.

12. Machine translation, cont.

 May 3, 2023, 10:40 in S4 (lecture) Main Slides: Neural MT Extra Slides: Transformer Recording of the Lecture Questions


  • Fundamental problems of PBMT.
  • Neural machine translation (NMT).
    • Brief summary of NNs.
    • Sequence-to-sequence, with attention.
    • Transformer, self-attention.
    • Linguistic features in NMT.

13. ChatGPT (morning); Final written test (afternoon)

LLMs  May 17, 2023, 10:40 in S4; 14:00 in SW2 (test)

  • The first opportunity to take the final written exam test ("předtermín").
  • Additional exam dates can be offered in the exam period if needed.

1. Diacritics restoration

2. Boolean Retrieval

3. Word alignment -- IBM Model 1

1. Diacritics restoration

 Deadline: 31st March 2023, 23:59

  • Implement a program that reads Czech text with diacritics removed from STDIN and prints the same text with diacritics restored to STDOUT.
  • A possible solution: build a Czech corpus of your own (e.g. by downloading a few e-books or news or Wikipedia or ...) that contains at least 100k tokens (words and punctuation marks). Create a modified copy of the corpus in which all Czech diacritics are removed. Extract a mapping from words without diacritics to words with diacritics. For out-of-vocabulary words, use a letter-trigram language model. A minimal sketch of this approach is given after this list.
  • Evaluate the accuracy of the restoration as the percentage of correct non-whitespace characters in the output.
  • Evaluation datasets - 2 randomly chosen articles from
  • You can use any programming language as long as it can be compiled/executed on Linux without too much tweaking (esp. without purchasing any license). Recommended choice: Python 3.
  • You can use the devtest data as many times as you need, but you should use the etest data for evaluation only once.
  • Ideally, organize the execution of the whole experiment into a Makefile that (after typing make all) downloads your training data, as well as the development and evaluation test sets from the links above, trains the model, applies it to the development data, and evaluates the accuracy.
  • Submission:
    • Please zip the whole directory and send it by email to by the given deadline.
    • Alternatively, you can submit the directory with your solution using the faculty GitLab server. Detailed information on creating and using your GitLab repository is available within the course NPFL125. For our course, adapt the instructions in the obvious way:
      • Your project name should be "NPFL124"; the identifier should be "npfl124".
      • Access to your repository should be given to Zdeněk Žabokrtský.
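
A minimal sketch of the dictionary-based approach described above (assuming a hypothetical plain-text training corpus train.txt; tokenization is simplified to whitespace splitting and the letter-trigram fallback for out-of-vocabulary words is omitted):

  import sys
  import unicodedata
  from collections import Counter, defaultdict

  def strip_diacritics(text):
      # Decompose characters and drop the combining marks, e.g. "háček" -> "hacek".
      return "".join(ch for ch in unicodedata.normalize("NFD", text)
                     if not unicodedata.combining(ch))

  # Training: map each diacritics-stripped word to its most frequent diacritized form.
  counts = defaultdict(Counter)
  with open("train.txt", encoding="utf-8") as f:   # hypothetical corpus file name
      for line in f:
          for word in line.split():
              counts[strip_diacritics(word)][word] += 1
  best = {plain: forms.most_common(1)[0][0] for plain, forms in counts.items()}

  # Application: read stripped text from STDIN, write restored text to STDOUT.
  for line in sys.stdin:
      print(" ".join(best.get(word, word) for word in line.split()))

A full solution should preserve the original whitespace and punctuation so that the per-character accuracy can be computed against the reference, and should back off to a letter-trigram model for words not seen in the training corpus.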

2. Boolean Retrieval

 Deadline: 19th April 2023, 23:59

  • Implement the inverted index with a hash (dictionary) used for the dictionary part of the index; a minimal sketch is given after this list.
  • Implement algorithms for postings intersection and union.
  • Index the provided document collection.
  • Write a query parser for AND, OR, and NOT.
  • Process the provided set of Boolean queries and submit the results.
  • Write a short report on your work.
  • Submit all the files in a zip archive by email to by the given deadline.
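
A minimal sketch of the required data structures (the three toy documents are made up; the query parser and NOT handling are left out), showing a hash-based dictionary with sorted postings lists and the linear-merge intersection:

  from collections import defaultdict

  def build_index(docs):
      """docs: dict mapping doc id -> text. Returns term -> sorted postings list."""
      index = defaultdict(set)
      for doc_id, text in docs.items():
          for term in text.lower().split():
              index[term].add(doc_id)
      return {term: sorted(ids) for term, ids in index.items()}

  def intersect(p1, p2):
      """Merge two sorted postings lists (AND)."""
      i = j = 0
      out = []
      while i < len(p1) and j < len(p2):
          if p1[i] == p2[j]:
              out.append(p1[i]); i += 1; j += 1
          elif p1[i] < p2[j]:
              i += 1
          else:
              j += 1
      return out

  def union(p1, p2):
      """Merge two sorted postings lists (OR)."""
      return sorted(set(p1) | set(p2))

  docs = {1: "brutus killed caesar", 2: "caesar was ambitious", 3: "brutus was honourable"}
  index = build_index(docs)
  print(intersect(index["brutus"], index["caesar"]))   # -> [1]
  print(union(index["brutus"], index["caesar"]))       # -> [1, 2, 3]

NOT can be implemented as the complement against the full list of document ids, and the query parser then only needs to combine these three postings operations.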

3. Word alignment -- IBM Model 1

 Deadline: 10th May 2023, 23:59

The goal of the homework is to implement IBM Model 1 for word alignment (a model that considers only lexical values of words, i.e. the words as they are written, not their position etc.). For simplicity, we are going to evaluate the output only visually, by looking at the resulting alignments and dictionary.

  1. Implement IBM Model 1 as shown in the pseudocode in the slides; a minimal sketch is given after this list.
  2. Download manual word alignments: czenali.gz (2501 lines)
    • The data originally come from: Czech-English manual word alignments.
    • I concatenated all files *.wa from merged_data/.
    • I stripped SGML and converted to four tab-delimited columns: English, Czech, sure alignments, possible alignments.
    • IMPORTANT: The alignments provided are only for reference; your script must not look at them. They are there so that you can contrast your outputs with the manual alignments.
    • For illustration, you may want to print the manual alignments in plaintext using
      zcat czenali.gz | ./ --indexed-from-one | less
  3. Run your implementation (use only the top N sentence pairs and only a few iterations so that your non-optimized code finishes quickly).
  4. Print automatic translation dictionary:
    • For each Czech token, print the top three English-Czech pairs according to P(English|Czech).
  5. To submit your homework, send by e-mail to Ondřej Bojar the following:
    • Your IBM Model 1 implementation.
    • The settings you used: how many sentences, how many iterations, what type of pre-processing, what threshold.
    • Your best translation dictionary, i.e. the top three pairs for each word.
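
A minimal sketch of the EM loop for IBM Model 1 (following the standard pseudocode; the sentence pairs below are invented toy data, a <NULL> token is added to the Czech side, and t is initialized uniformly):

  from collections import defaultdict

  def train_ibm1(sentence_pairs, iterations=5):
      """EM training of t[(e, f)], an estimate of P(English word e | Czech word f)."""
      t = defaultdict(lambda: 1.0)          # uniform start: the first E-step spreads counts evenly
      for _ in range(iterations):
          count = defaultdict(float)        # expected counts c(e, f)
          total = defaultdict(float)        # expected counts c(f)
          for eng, ces in sentence_pairs:
              ces = ["<NULL>"] + ces        # allow English words to stay unaligned
              for e in eng:
                  norm = sum(t[(e, f)] for f in ces)
                  for f in ces:
                      delta = t[(e, f)] / norm
                      count[(e, f)] += delta
                      total[f] += delta
          # M-step: renormalize the expected counts into probabilities.
          t = defaultdict(float, {(e, f): count[(e, f)] / total[f] for (e, f) in count})
      return t

  pairs = [("the house".split(), "ten dům".split()),
           ("the book".split(), "ta kniha".split())]
  t = train_ibm1(pairs)
  for czech in ["ten", "dům", "ta", "kniha"]:
      top3 = sorted(((p, e) for (e, f), p in t.items() if f == czech), reverse=True)[:3]
      print(czech, top3)

The real homework runs this on the top N sentence pairs of the downloaded data; the choice of pre-processing (lowercasing, tokenization) and of a probability threshold for printing the dictionary is up to you.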

Pool of possible exam questions

All variants of the final written exam tests will be assembled exclusively from questions selected from the following list:

(warning: the question list might be subject to occasional changes during the semester; the final version will be announced here no later than three weeks before the first exam date.)

  • Basic notions from probability and information theory.
    1. What are the three basic properties of a probability function? (1 point)
    2. When do we say that two events are (statistically) independent? (1 point)
    3. Show how Bayes' Theorem can be derived. (1 point)
    4. Explain Chain Rule. (1 point)
    5. Explain the notion of Entropy (formula expected too). (1 point)
    6. Explain Kullback-Leibler distance (formula expected too). (1 point)
    7. Explain Mutual Information (formula expected too). (1 point)
  • Language models. The noisy channel model.
    1. Explain the notion of The Noisy Channel. (1 point)
    2. Explain the notion of the n-gram language model. (1 point)
    3. Describe how Maximum Likelihood estimate of a trigram language model is computed. (2 points)
    4. Why do we need smoothing (in language modelling)? (1 point)
    5. Give at least two examples of smoothing methods. (2 points)
  • Morphological analysis.
    1. What is a morphological tag? List at least five features that are often encoded in morphological tag sets. (1 point)
    2. List the open and closed part-of-speech classes and explain the difference between open and closed classes. (1 point)
    3. Explain the difference between a finite-state automaton and a finite-state transducer. Describe the algorithm of using a finite-state transducer to transform a surface string to a lexical string (pseudocode or source code in your favorite programming language). (2 points)
    4. Give an example of a phonological or an orthographical change caused by morphological inflection (any natural language). Describe the rule that would take care of the change during analysis or generation. It is not required that you draw a transducer, although drawing a transducer is one of the possible ways of describing the rule. (1 point)
    5. Give an example of a long-distance dependency in morphology (any natural language). How would you handle it in a morphological analyzer? (1 point)
  • Syntactic analysis.
    1. Describe dependency trees, constituent trees, differences between them and phenomena that must be addressed when converting between them. (2 points)
    2. Give an example of a sentence (in any natural language) that has at least two plausible, semantically different syntactic analyses (readings). Draw the corresponding dependency trees and explain the difference in meaning. Are there other additional readings that are less probable but still grammatically acceptable? (2 points)
    3. What is coordination? Why is it difficult in dependency parsing? How would you capture coordination in a dependency structure? What are the advantages and disadvantages of your solution? (1 point)
    4. What is ellipsis? Why is it difficult in parsing? Give examples of different kinds of ellipsis (any natural language). (1 point)
  • Information retrieval.
    1. Explain the difference between information need and query. (1 point)
    2. What is an inverted index and what are the optimal data structures for it? (1 point)
    3. What is a stopword and what is it useful for? (1 point)
    4. Explain the bag-of-words principle. (1 point)
    5. What are the main advantage and disadvantage of the Boolean model? (1 point)
    6. Explain the role of the two components in the TF-IDF weighting scheme. (1 point)
    7. Explain length normalization in the vector space model. What is it useful for? (1 point)
  • Language data resources.
    1. Explain what a corpus is. (1 point)
    2. Explain what annotation is (in the context of language resources). What types of annotation do you know? (2 points)
    3. What are the reasons for variability even in basic types of annotation, such as the annotation of morphological categories (parts of speech, etc.)? (1 point)
    4. Explain what a treebank is. Why are trees used? (2 points)
    5. Explain what a parallel corpus is. What kind of alignments can we distinguish? (2 points)
    6. What is a sentiment-annotated corpus? How can it be used? (1 point)
    7. What is a coreference-annotated corpus? (1 point)
    8. Explain how WordNet is structured. (1 point)
    9. Explain the difference between derivation and inflection. (1 point)
  • Evaluation measures in NLP.
    1. Give at least two examples of situations in which measuring a percentage accuracy is not adequate. (1 point)
    2. Explain precision and recall. (1 point)
    3. What is F-measure and what is it useful for? (1 point)
    4. What is k-fold cross-validation? (1 point)
    5. Explain BLEU (the exact formula not needed, just the main principles). (1 point)
    6. Explain the purpose of brevity penalty in BLEU. (1 point)
    7. What is Labeled Attachment Score (in parsing)? (1 point)
    8. What is Word Error Rate (in speech recognition)? (1 point)
    9. What is inter-annotator agreement? How can it be measured? (1 point)
    10. What is Cohen's kappa? (1 point)
  • Deep learning for NLP.
    1. Describe the two methods for training the Word2Vec model. (1 point)
    2. Explain the difference between Word2Vec and FastText embeddings. (1 point)
    3. Explain convolutional networks for sequence processing. (1 point)
    4. What are residual connections in neural networks? Why do we use them? (1 point)
    5. Explain layer normalization and its effect on the training process. (1 point, 2 points with formula)
    6. Explain the vanishing gradient problem in recurrent neural networks; name architectures that deal with the issue. (1 point)
    7. Describe the LSTM networks. (1 point)
    8. Use formulas to express the loss function for training sequence labeling. (1 point)
    9. Sketch the structure of the Transformer model. (2 points)
    10. Why do we use positional encodings in the Transformer model? (1 point)
    11. Explain the training procedure of the BERT model. (2 points)
  • Machine translation fundamentals.
    1. Why is MT difficult from a linguistic point of view? Provide examples and explanation for at least three different phenomena. (2 points)
    2. Why is MT difficult from a computational point of view? (1 point)
    3. Briefly describe at least three methods of manual MT evaluation. (1-2 points)
    4. Describe BLEU. 1 point for the core properties explained, 1 point for the (commented) formula.
    5. Describe IBM Model 1 for word alignment, highlighting the EM structure of the algorithm. (1 point)
    6. Explain, using equations, the relation between the noisy channel model and the log-linear model for classical statistical MT. (2 points)
    7. Describe the loop of weight optimization for the log-linear model as used in phrase-based MT. (1 point)
  • Neural machine translation.
    1. Describe the critical limitation of PBMT that NMT solves. Provide example training data and example input where PBMT is very likely to introduce an error. (1 point)
    2. Use formulas to highlight the similarity of NMT and LMs. (1 point)
    3. Describe how words are fed to current NMT architectures and explain why this is beneficial over a 1-hot representation. (1 point)
    4. Sketch the structure of an encoder-decoder architecture for neural MT; remember to describe the components in the picture. (2 points)
    5. What is the difference in RNN decoder application at training time vs. at runtime? (1 point)
    6. What problem does attention in NMT address? Provide the key idea of the method. (1 point)
    7. What problem/task do both RNN and self-attention resolve and what is the main benefit of self-attention over RNN? (1 point)
    8. What are the three roles each state at a Transformer encoder layer takes in self-attention? (1 point)
    9. What are the three uses of self-attention in the Transformer model? (1 point)
    10. Provide an example of NMT improvement that was assumed to come from additional linguistic information but occurred also for a simpler reason. (1 point)
    11. Summarize and compare the strategy of "classical statistical MT" vs. the strategy of neural approaches to MT. (1 point)
Homework assignments

    • There will be 3 homework assignments.
    • For each assignment, you will get points, up to a given maximum (the maximum is specified with each assignment).
    • All assignments will have a fixed deadline (usually in two weeks).
    • If you submit the assignment after the deadline, you will get:
      • up to 50% of the maximum points if it is less than 2 weeks after the deadline;
      • 0 points if it is more than 2 weeks after the deadline.
    • Once we check the submitted assignments, you will see the points you got and the comments from us in:
    • To be allowed to take the test (which is required to pass the course), you need to get at least 50% of the total points from the assignments.

Exam test


Your grade is based on the average of your performance in the exam test and in the homework assignments, weighted 1:1.

    • ≥ 90%: grade 1 (excellent)
    • ≥ 70%: grade 2 (very good)
    • ≥ 50%: grade 3 (good)
    • < 50%: grade 4 (fail)

    For example, if you get 600 out of 1000 points for homework assignments (60%) and 36 out of 40 points for the test (90%), your total performance is 75% and you get a 2.

No cheating

    • Cheating is strictly prohibited and any student found cheating will be punished. The punishment can involve failing the whole course, or, in grave cases, being expelled from the faculty.
    • Discussing homework assignments with your classmates is OK. Sharing code is not OK (unless explicitly allowed); by default, you must complete the assignments yourself.
    • All students involved in cheating will be punished. E.g. if you share your assignment with a friend, both you and your friend will be punished.