NPFL068 – Statistical Methods in Natural Language Processing II

About

SIS code: NPFL068
Semester: summer
E-credits: 5
Examination: 2/2 C+Ex
Lecturer: Jan Hajič, hajic@ufal.mff.cuni.cz
Lecturer: Jindrich Helcl, helcl@ufal.mff.cuni.cz

Timespace Coordinates

  • Lectures: Monday, 10:40-12:10, S4 (replaced by video lectures from SIS)
  • Practicals: Tuesdays, 15:40-17:10, S1
  • Exam (Slot 1, for all): Monday, May 27, 10:40, S4

News

  • The quizzes are available in Moodle.

Lectures

1. Introduction, Tagging, Tagsets, and Morphology Slides

2. Tagging: An Overview Slides

3. Transformation-Based Tagging Slides Assignment 1: Tagging

4. Maximum Entropy Tagging Slides

5. Feature-Based Tagging Slides

6. Parsing: Introduction Slides

7. Shift-Reduce Parsing in Detail Slides

8. Treebanks, Treebanking and Evaluation Slides

9. Probabilistic CFG: Introduction, Using Probabilistic CFG Slides

11. Statistical Parsing Slides


Prerequisites & Relation to Other Courses

Students should have substantial programming experience in C, C++, Java, and/or Perl, and should preferably have taken Data Structures (NTIN060), Unix (NSWI095), and Intro to Probability (NMAI059) or their equivalents, even though all the probability theory needed will be re-explained. Knowledge of, or willingness to learn, the basics of Perl as you go (and on your own) is also important. One of the benefits of the course is that it is given in English; this should enable you to read the current NLP literature more smoothly, since it is almost exclusively in English. Czech terminology will be explained for those interested.

The material covered in this course is selected in such a way that, on completing it, you should be able to understand papers in the field of Natural Language Processing, and it should also make your life easier when taking more advanced courses, whether at UFAL MFF UK or elsewhere.

No background in NLP is necessary.

Passing Requirements

To pass the course, students need to complete one homework assignment and pass a written test. See Grading for more details.

Archive

The original web pages for this course are also still active at http://www.cs.jhu.edu/~hajic/courses/cs465/syllabus.html.

License

Unless otherwise stated, teaching materials for this course are available under CC BY-SA 4.0.

Assignment 1: Tagging

Deadline: May 31, 23:59 (100 points)

Requirements

For all parts of this homework, work either alone or in a group of at most two people (in that case an identical grade will be assigned to both of you, so please make sure you understand what your colleague is doing, and that s/he is doing it right!). On top of the results/requirements specific to each part of the homework, turn in all of your code, commented in such a way that what, how, and why you did it can be determined from the comments alone, plus a discussion of the results (in a plain text or HTML file). Technically, follow the usual pattern (see the Syllabus). For this whole homework, use the data found in

TEXTEN2.ptg and TEXTCZ2.ptg

In the following, "the data" refers to both English and Czech.

Split the data in the following way: use the last 40,000 words for testing (data S); from the remaining data, use the last 20,000 words for smoothing (data H, if any); call the rest "data T" (training).
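
A minimal Python sketch of this split; the one-token-per-line reading of the .ptg files and the encoding are assumptions of this sketch:

    # Hypothetical loader: assumes one word/tag token per line of the .ptg file.
    def split_data(path, encoding="iso-8859-2"):  # encoding is an assumption
        with open(path, encoding=encoding) as f:
            tokens = [line.rstrip("\n") for line in f if line.strip()]
        S = tokens[-40000:]        # last 40,000 words: test data
        H = tokens[-60000:-40000]  # preceding 20,000 words: smoothing data
        T = tokens[:-60000]        # everything else: training data
        return T, H, S

    T, H, S = split_data("TEXTEN2.ptg")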

1. Brill's Tagger & Tagger Evaluation

Download Eric Brill's supervised tagger from UFAL's course assignment space. Install it (i.e., uncompress (gunzip), untar, and make). You might need to make some changes in its makefile, of course (it's an OLD program, in this fast-changing world...).

Alternatively, you can use the NLTK implementation in Python (pip install --user -U nltk).

After installation, get the data, train the tagger on as much data from T as time allows (the package includes extensive documentation on how to train it on new data), and evaluate it on data S. Tabulate the results.
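
If you take the NLTK route, a sketch along the following lines should work; the fntbl37 template set and the rule limit are illustrative choices, and train_sents/test_sents are assumed to be lists of tagged sentences built from T and S:

    from nltk.tag import UnigramTagger
    from nltk.tag.brill import fntbl37
    from nltk.tag.brill_trainer import BrillTaggerTrainer

    # Assumption: train_sents and test_sents are lists of [(word, tag), ...]
    # sentences built from data T and S, respectively.
    baseline = UnigramTagger(train_sents)              # initial-state tagger for TBL
    trainer = BrillTaggerTrainer(baseline, fntbl37())  # standard 37-template set
    tagger = trainer.train(train_sents, max_rules=200)
    print("accuracy on S:", tagger.accuracy(test_sents))  # .evaluate() in older NLTK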

Do cross-validation of the results: split the data into S', [H',] T' such that S' is the first 40,000 words, H' is the next 20,000 words (for smoothing, if needed), and T' is all the remaining data. Train Eric Brill's tagger on T' (again, using as much data as time allows) and evaluate it on S'. Again, tabulate the results.

Do three more splits of your data (using the same formula: 40k test / 20k smoothing / the rest for training), making them as different from each other as possible, and obtain three more sets of results. Compute the mean (average) accuracy and the standard deviation of the accuracy. Tabulate all results.
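
The summary statistics need nothing beyond Python's standard library; for example (the five accuracy values below are placeholders for your measured results):

    from statistics import mean, stdev

    # Placeholders: replace with the five accuracies you actually measured.
    accuracies = [0.912, 0.907, 0.915, 0.909, 0.911]
    print(f"mean accuracy:      {mean(accuracies):.4f}")
    print(f"standard deviation: {stdev(accuracies):.4f}")  # sample std. deviation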

2. Unsupervised Learning: HMM Tagging

Use the datasets T, H, and S. Estimate the parameters of an HMM tagger using supervised learning on the T data (a trigram tag model, with lower-order models for smoothing). Smooth both the trigram tag model and the lexical model in the same way as in Homework No. 1 (use data H). Evaluate your tagger on S, using the Viterbi algorithm.
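
As a starting point, here is a minimal Viterbi sketch, simplified to a bigram tag model for brevity (the trigram model the assignment asks for amounts to using tag pairs as states); logp_trans, logp_emit, and the "<s>" boundary tag stand in for your own smoothed estimates and conventions:

    def viterbi(words, tagset, logp_trans, logp_emit):
        # logp_trans(prev_tag, tag) and logp_emit(tag, word) are assumed to be
        # smoothed log-probabilities estimated from data T and H; "<s>" marks
        # the sentence boundary.
        best = {"<s>": 0.0}  # best log-prob of any path ending in each tag
        back = []            # backpointers, one dict per word position
        for w in words:
            new_best, new_back = {}, {}
            for t in tagset:
                # choose the best predecessor tag for t
                p, prev = max((best[u] + logp_trans(u, t), u) for u in best)
                new_best[t] = p + logp_emit(t, w)
                new_back[t] = prev
            best, back = new_best, back + [new_back]
        # read off the best tag sequence via the backpointers
        tag = max(best, key=best.get)
        path = [tag]
        for bp in reversed(back[1:]):
            tag = bp[tag]
            path.append(tag)
        return list(reversed(path))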

Now use only the first 10,000 words of T to estimate the initial (raw) parameters of the HMM tagging model. Strip the tags off the remaining data T, and use the Baum-Welch algorithm to improve on the initial parameters. Smooth as usual. Evaluate your unsupervised HMM tagger and compare the results to those of the supervised HMM tagger.
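
A compact sketch of one Baum-Welch (forward-backward) iteration, again for a bigram HMM over a single word sequence, and without the rescaling (or log-space arithmetic) you will need on data of realistic length; all names and the dict-based parameter layout are illustrative:

    from collections import defaultdict

    def baum_welch_step(words, tagset, p_init, p_trans, p_emit):
        # p_init[t], p_trans[u][t], p_emit[t][w] are plain probability dicts;
        # unseen words get a tiny floor probability here for simplicity.
        def e(t, w): return p_emit[t].get(w, 1e-10)
        n = len(words)
        # forward probabilities alpha[i][t] = P(words[0..i], tag_i = t)
        alpha = [{t: p_init[t] * e(t, words[0]) for t in tagset}]
        for i in range(1, n):
            alpha.append({t: sum(alpha[i-1][u] * p_trans[u][t] for u in tagset)
                             * e(t, words[i]) for t in tagset})
        # backward probabilities beta[i][t] = P(words[i+1..n-1] | tag_i = t)
        beta = [dict() for _ in range(n)]
        beta[n-1] = {t: 1.0 for t in tagset}
        for i in range(n - 2, -1, -1):
            beta[i] = {t: sum(p_trans[t][v] * e(v, words[i+1]) * beta[i+1][v]
                              for v in tagset) for t in tagset}
        Z = sum(alpha[n-1][t] for t in tagset)  # likelihood of the sequence
        # expected transition counts -> re-estimated transition probabilities
        cnt = defaultdict(lambda: defaultdict(float))
        for i in range(n - 1):
            for u in tagset:
                for t in tagset:
                    cnt[u][t] += (alpha[i][u] * p_trans[u][t]
                                  * e(t, words[i+1]) * beta[i+1][t]) / Z
        return {u: {t: cnt[u][t] / sum(cnt[u].values()) for t in tagset}
                for u in tagset}  # emission and initial probs: analogous updates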

Tabulate and compare the results of the HMM tagger vs. Brill's tagger.


Turning in the Assignments

  • Create a separate directory assign for your submission. Create a main web page called index.html or index.htm in that directory. Create as many other web pages as necessary. Put all the other necessary files (.ps and .pdf files, pictures, source code, ...) into the same directory and make relative links to them from your main or other linked web pages. If you use some "content creation" tools related to MSFT software, please make sure the references use the correct case (matching uppercase/lowercase).

  • Pack everything into a single .tgz file:
    cd assign
    tar -czvf ~/FirstName.LastName.assignX.tgz ./*
    e.g. for Jan Novák turning in Assignment 1:
    tar -czvf ~/Jan.Novak.assign1.tgz ./*

  • Send the resulting file by e-mail (as an attachment) to
    hajic@ufal.mff.cuni.cz
    with the following subject line:
    Subject: FirstName.LastName NPFL068 Assignment X
    e.g. for Jan Novák
    Subject: Jan.Novak NPFL068 Assignment X

Unix lab accounts

For MFF UK students, please see http://www.ms.mff.cuni.cz/labs/unix. For others, please visit http://www.ms.mff.cuni.cz/students/externisti.html.

Grades

  • The grading table is available in SIS.

  • The final grade (or pass/fail for PhD students) will be determined by both the final exam and your assignment results in a 1:1 ratio.

Exam

  • The exam is written (not oral), with about 6 major questions and some subquestions. You will have 90 minutes to write down the answers.

Plagiarism

No plagiarism will be tolerated. The assignments are to be worked on on your own; please respect this. If the instructor determines that there are similarities too substantial to be plausibly due to chance, he will call the two (or more) students in to explain them and possibly to take an immediate test (or assignment, at the discretion of the instructor, not to exceed four hours of work) to determine the students' abilities related to the offending work. All cases of confirmed plagiarism will be reported to the Student Office.

Lateness

  • For each day your submission is late, 5 points will be subtracted from the points awarded to the solution or the relevant part of it, up to a maximum of 50 points per homework.

  • Submissions received less than 4 weeks before the closing date of the term will not be graded and will be awarded 0 points.

Required Reading

Foundations of Statistical Natural Language Processing

Manning, C. D. and H. Schütze. MIT Press. 1999. ISBN 0-262-13360-1.

Eight copies of this book are available at the CS library for borrowing. Please be considerate to other students and do not keep the book(s) longer than absolutely necessary.


Recommended & Reference Readings

Speech and Language Processing

Jurafsky, D. and J. H. Martin. Prentice-Hall. 2000. ISBN 0-13-095069-6.

Three copies of Jurafsky's book are available at UFAL's library.

Programming Perl

Wall, L., Christiansen, T. and R. L. Schwartz. O'Reilly. 1996. ISBN 0-596-00027-8.

Natural Language Understanding

Allen, J. Benjamin/Cummings Publishing Company. 1994. ISBN 0-8053-0334-0.

Elements of Information Theory

Cover, T. M. and J. A. Thomas. Wiley. 1991. ISBN 0-471-06259-6.

Statistical Language Learning

Charniak, E. MIT Press. 1996. ISBN 0-262-53141-0.

Statistical Methods for Speech Recognition

Jelinek, F. MIT Press. 1998. ISBN 0-262-10066-5.

Four copies of Jelinek's book are available at UFAL's library, but they are primarily reserved for those taking Nino Peterek's and/or Filip Jurcicek's courses.


Proceedings of major conferences (related to Natural Language Processing)

Some of the Proceedings are available at UFAL's library, physically and/or in electronic form. Most of them are, however, freely available through the ACL Anthology, including all volumes of the Computational Linguistics journal and the new Transactions of the ACL journal.

  • ACL (Association for Computational Linguistics)
  • European Chapter of the ACL
  • North American Chapter of the ACL
  • EMNLP (Empirical Methods in Natural Language Processing)
  • COLING (International Committee on Computational Linguistics)
  • ANLP (Applied Natural Language Processing, by ACL)
  • ACL SIGDAT and other SIG (Special Interest Group) Workshops, such as WVLC (Workshop on Very Large Corpora)
  • DARPA HLT (Defense Advanced Research Projects Agency Human Language Technology Workshops)

Other Resources