# NPFL067 Introduction to Statistical NLP I

## Winter semester exam: Jan 14, 2010 12:20pm (60 min.), S4

Name: ___________________________________________ Year: _____________

If you are asked to compute something for which you have the numbers, that means computing the final number, not just writing down the formula. If you are asked for a formula, write down the formula.

## 1. Probability

Let S = { a, b, c } (the sample space), and let p be the joint distribution on a sequence of two events (i.e., on S × S, ordered). You know that p(a,a) [a followed by a] = 0.25, p(c,c) [c followed by c] = 0.25, p(b,a) [b followed by a] = 0.125, p(b,b) [b followed by b] = 0, p(a,c) [a followed by c] = 0.25, pL(a) [unigram probability of a as the left-hand bigram member] = 0.5, and pR(b) [unigram probability of b as the right-hand bigram member] = 0.125. Is this enough information to compute p(b|c) (i.e., the probability of seeing b if we already know that the preceding event generated c)?
• Yes / No: _______

• why? _________________________________________________________________

______________________________________________________________________

______________________________________________________________________

• If yes, compute: p(b|c) = _________________________________________
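For reference, the kind of computation this question calls for can be sketched in Python. The table-completion steps below follow only from the constraints stated above (the nine cells must sum to 1 and match the given marginals); all variable names are illustrative:

```python
# Sketch: recovering the full joint bigram table p(left, right) on S = {a, b, c}
# from the values given in question 1, then computing p(b | c).

# Known joint probabilities p(left, right)
p = {('a', 'a'): 0.25, ('c', 'c'): 0.25, ('b', 'a'): 0.125,
     ('b', 'b'): 0.0, ('a', 'c'): 0.25}

# pL(a) = p(a,a) + p(a,b) + p(a,c)  =>  p(a,b) = 0.5 - 0.25 - 0.25 = 0
p[('a', 'b')] = 0.5 - p[('a', 'a')] - p[('a', 'c')]

# pR(b) = p(a,b) + p(b,b) + p(c,b)  =>  p(c,b) = 0.125 - 0 - 0 = 0.125
p[('c', 'b')] = 0.125 - p[('a', 'b')] - p[('b', 'b')]

# All nine cells must sum to 1; the two remaining cells, p(b,c) and p(c,a),
# therefore sum to 1 - (sum of the seven known cells) = 0. Probabilities
# are non-negative, so each of them is 0.
rest = 1.0 - sum(p.values())
p[('b', 'c')] = p[('c', 'a')] = rest / 2

# p(b | c) = p(c, b) / pL(c), where pL(c) sums over the right-hand members
pL_c = p[('c', 'a')] + p[('c', 'b')] + p[('c', 'c')]
print(p[('c', 'b')] / pL_c)  # 0.125 / 0.375 = 1/3
```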

## 2. Estimation and Cross-entropy

Use the bigram distribution p from question 1.
• Write one example of a data sequence that faithfully follows the distribution (i.e., training data from which the MLE method would yield the above bigram distribution):

_____   _____   _____   _____   _____   _____   _____   _____   _____

• What are the cross-entropy Hdata(p) in bits and the perplexity¹ Gdata(p) of the bigram distribution from question 1, computed against the following data (use the data-oriented formula for the conditional distribution derived from p):

data = b a a a

Hdata(p) = ___________________________    Gdata(p) = ___________________________

¹ The cross-entropy and perplexity computation is the only one here for which you might need a calculator; it is OK to leave the result as an expression (but plug in the appropriate (integer) numbers!).
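For reference, the cross-entropy computation can be sketched as follows, with the joint table completed as in question 1. One assumption is made about the "data-oriented formula": the sum of log conditionals is normalized by the number of bigram predictions in the data.

```python
import math

# Sketch: cross-entropy of the question-1 bigram model against data = b a a a,
# using the conditional p(y | x) = p(x, y) / pL(x) and normalizing by the
# number of bigram predictions (an assumption about the intended formula).

p = {('a', 'a'): 0.25, ('a', 'b'): 0.0, ('a', 'c'): 0.25,
     ('b', 'a'): 0.125, ('b', 'b'): 0.0, ('b', 'c'): 0.0,
     ('c', 'a'): 0.0, ('c', 'b'): 0.125, ('c', 'c'): 0.25}

def p_left(x):
    """Unigram probability of x as the left-hand bigram member."""
    return sum(p[(x, y)] for y in 'abc')

data = ['b', 'a', 'a', 'a']
pairs = list(zip(data, data[1:]))    # (b,a), (a,a), (a,a)

H = -sum(math.log2(p[pair] / p_left(pair[0])) for pair in pairs) / len(pairs)
G = 2 ** H                           # perplexity

print(H, G)  # H = 2/3 bits, G = 2**(2/3) ≈ 1.587
```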

## 3. Mutual information

• What is the pointwise mutual information of b and a (in this order), using the bigram distribution from question 1?

Ipointwise(b,a) = ___________________________________________
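The quantity asked for here can be sketched directly from the definition, I(b,a) = log2( p(b,a) / (pL(b) · pR(a)) ), again with the joint table completed as in question 1:

```python
import math

# Sketch: pointwise mutual information of the ordered pair (b, a) under the
# question-1 bigram distribution.

p = {('a', 'a'): 0.25, ('a', 'b'): 0.0, ('a', 'c'): 0.25,
     ('b', 'a'): 0.125, ('b', 'b'): 0.0, ('b', 'c'): 0.0,
     ('c', 'a'): 0.0, ('c', 'b'): 0.125, ('c', 'c'): 0.25}

pL_b = sum(p[('b', y)] for y in 'abc')   # left-hand marginal of b: 0.125
pR_a = sum(p[(x, 'a')] for x in 'abc')   # right-hand marginal of a: 0.375

pmi = math.log2(p[('b', 'a')] / (pL_b * pR_a))
print(pmi)  # log2(8/3) ≈ 1.415
```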

## 4. Smoothing and the sparse data problem

• If you were to design a trigram language model, how would the final smoothed distribution be defined if you use the "Linear Interpolation" smoothing method?

• _______________________________________________________________________
• Name at least one more smoothing method:

• _________________________________________________
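As a reference point, linear interpolation combines the trigram, bigram, uniform and unigram estimates with non-negative weights summing to 1. The sketch below uses toy training data and hand-picked weights (both illustrative; in practice the weights are estimated by EM on held-out data):

```python
from collections import Counter

# Sketch: a linearly interpolated trigram model,
#   p'(w3 | w1, w2) = l3*p3(w3|w1,w2) + l2*p2(w3|w2) + l1*p1(w3) + l0/|V|.
# Training data and lambda weights below are illustrative only.

tokens = 'a b a a c a b a'.split()
V = sorted(set(tokens))

uni = Counter(tokens)
bi = Counter(zip(tokens, tokens[1:]))
tri = Counter(zip(tokens, tokens[1:], tokens[2:]))

def p1(w3):
    return uni[w3] / len(tokens)

def p2(w3, w2):
    return bi[(w2, w3)] / uni[w2] if uni[w2] else 0.0

def p3(w3, w1, w2):
    return tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0

def p_smooth(w3, w1, w2, l=(0.1, 0.2, 0.3, 0.4)):  # l0..l3, summing to 1
    l0, l1, l2, l3 = l
    return l3 * p3(w3, w1, w2) + l2 * p2(w3, w2) + l1 * p1(w3) + l0 / len(V)

# The smoothed probabilities over the vocabulary sum to 1 for this history,
# and unseen trigrams get non-zero probability.
print(sum(p_smooth(w, 'a', 'b') for w in V))
```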

## 5. Classes based on Mutual Information

Suppose you have the following data:

It is interesting to watch , at least from the social point of view , how the current financial crisis differs from the big financial meltdown following the 1929 stock market crash .

What is the best pair of candidates for the first merge if you use the greedy algorithm for classes based on bigram mutual information? Use your judgment, not computation; if there are two or more equally good candidates, list as many as you can find.

• Word(s) 1: ____________________________________________________________
Word(s) 2: ____________________________________________________________
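The greedy step the question refers to can be sketched as follows, on a toy fragment of the sentence (the code is illustrative, not the intended exam computation). Each word starts in its own class, and the first merge is the pair whose merge keeps the bigram mutual information of the class sequence highest; intuitively, words sharing a context (e.g. both following "financial") are strong candidates:

```python
import math
from collections import Counter
from itertools import combinations

# Sketch: one greedy step of MI-based class merging on a toy word sequence.

def bigram_mi(tokens):
    """Mutual information (in bits) of the bigram class distribution."""
    pairs = list(zip(tokens, tokens[1:]))
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(l for l, _ in pairs)
    right = Counter(r for _, r in pairs)
    return sum((c / n) * math.log2(c * n / (left[l] * right[r]))
               for (l, r), c in joint.items())

def mi_after_merge(tokens, u, v):
    # Merging classes u and v = relabeling every v as u in the sequence.
    return bigram_mi([u if w == v else w for w in tokens])

tokens = 'the financial crisis differs from the financial meltdown'.split()
best = max(combinations(sorted(set(tokens)), 2),
           key=lambda uv: mi_after_merge(tokens, *uv))
print(best)
```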

## 6. Hidden Markov Models

• What is the Viterbi algorithm good for? (Use max. 5 sentences for the answer.)

_________________________________________________________________________

_________________________________________________________________________

_________________________________________________________________________

_________________________________________________________________________

_________________________________________________________________________
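For reference, the algorithm the question asks about can be sketched on a small hand-made HMM (states, probabilities, and observations below are all illustrative). Given an observed output sequence, it finds the single most likely hidden state sequence by dynamic programming, in O(T · N²) time:

```python
import math

# Sketch: Viterbi decoding on a toy two-state HMM (all numbers illustrative).
states = ['N', 'V']
start = {'N': 0.6, 'V': 0.4}
trans = {'N': {'N': 0.3, 'V': 0.7}, 'V': {'N': 0.8, 'V': 0.2}}
emit = {'N': {'fish': 0.7, 'swim': 0.3}, 'V': {'fish': 0.2, 'swim': 0.8}}

def viterbi(obs):
    # delta[s] = log-probability of the best state path ending in state s
    delta = {s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}
    back = []  # back-pointers, one dict per time step after the first
    for o in obs[1:]:
        prev, step = delta, {}
        back.append(step)
        delta = {}
        for s in states:
            r = max(states, key=lambda r: prev[r] + math.log(trans[r][s]))
            step[s] = r
            delta[s] = prev[r] + math.log(trans[r][s]) + math.log(emit[s][o])
    # Follow the back-pointers from the best final state.
    path = [max(states, key=lambda s: delta[s])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return list(reversed(path))

print(viterbi(['fish', 'swim', 'fish']))  # ['N', 'V', 'N']
```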