NPTEL Natural Language Processing Week 4 Assignment Answers 2023


1. The Baum-Welch algorithm is an example of – [Marks 1]

a. Forward-backward algorithm
b. Special case of the Expectation-maximization algorithm
c. Both A and B
d. None

Answer :- For Answer Click Here
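
For context on why options (a) and (b) are related: Baum-Welch trains an HMM by expectation-maximization, and its E-step is exactly the forward-backward algorithm. Below is a minimal numpy sketch of that E-step; the toy parameters and function name are illustrative, not from the assignment.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """E-step of Baum-Welch: forward and backward passes over one
    observation sequence. A: N x N transitions, B: N x V emissions,
    pi: length-N initial distribution, obs: list of symbol indices."""
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))  # forward probabilities
    beta = np.zeros((T, N))   # backward probabilities

    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # per-step state posteriors
    return gamma

# Toy 2-state, 2-symbol model.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(forward_backward(A, B, pi, [0, 1, 0]))
```

The M-step would then re-estimate A, B, and pi from these posteriors, which is what makes the full procedure a special case of EM.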

2.

[Question shown as an image in the original post.]
Answer :- For Answer Click Here

3.

[Question shown as an image in the original post.]
Answer :- For Answer Click Here

4. Let us define an HMM model with K classes for hidden states and T data points as observations. The dataset is defined as X = {x1, x2, ..., xT} and the corresponding hidden states are Z = {z1, z2, ..., zT}. Please note that each xi is an observed variable and each zi can belong to one of K classes for the hidden state. What will be the sizes of the state transition matrix and the emission matrix, respectively, for this example?

A) K × K, K × T
B) K × T, K × T
C) K × K, K × K
D) K × T, K × K

Answer :- For Answer Click Here
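
One way to reason about the sizes: the transition matrix needs a row and a column per hidden class, while each emission entry ties a hidden class to an observable value. A small numpy sketch, assuming the T data points act as the distinct observable symbols (the K and T values are hypothetical):

```python
import numpy as np

K, T = 3, 5  # hypothetical: K hidden classes, T observations

# Transition matrix: P(z_t = j | z_{t-1} = i), one row per source
# class and one column per target class.
transition = np.full((K, K), 1.0 / K)

# Emission matrix: P(x_t | z = k), one row per hidden class and one
# column per observable value.
emission = np.full((K, T), 1.0 / T)

print(transition.shape, emission.shape)  # (3, 3) (3, 5)
```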

5. You are building a model distribution for an infinite stream of word tokens. You know that the source of this stream has a vocabulary of size 1000. Out of these 1000 words, you know 100 words to be stop words, each of which has a probability of 0.0019. With only this knowledge, what is the maximum possible entropy of the modelled distribution? (Use log base 10 for the entropy calculation.) [Marks 2]

a. 5.079
b. 0
c. 2.984
d. 12.871

Answer :- For Answer Click Here
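
A quick sanity check on the arithmetic: the 100 stop words fix 100 × 0.0019 = 0.19 of the probability mass, and entropy is maximized when the remaining 0.81 is spread uniformly over the other 900 words. The snippet below just evaluates that expression with base-10 logs:

```python
import numpy as np

p_stop = 0.0019
rest_mass = 1.0 - 100 * p_stop  # 0.81 left for the other 900 words
p_rest = rest_mass / 900        # uniform spread maximizes entropy

H = -(100 * p_stop * np.log10(p_stop)
      + 900 * p_rest * np.log10(p_rest))
print(round(H, 3))  # 2.984
```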

6. For an HMM model with N hidden states and V observable states, what are the dimensions of the parameter matrices A, B, and π? A: transition matrix, B: emission matrix, π: initial probability matrix. [Marks 1]

a. N × V, N × V, N × N
b. N × N, N × V, N × 1
c. N × N, V × V, N × 1
d. N × V, V × V, V × 1

Answer :- For Answer Click Here
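
To see where each dimension comes from, it helps to initialize the parameters the way one would before running Baum-Welch: every row of A and B, and the vector π, must be a valid probability distribution. A hedged numpy sketch with hypothetical N and V:

```python
import numpy as np

rng = np.random.default_rng(0)
N, V = 4, 10  # hypothetical: N hidden states, V observable symbols

def row_normalize(M):
    """Scale each row to sum to 1 so it is a valid distribution."""
    return M / M.sum(axis=1, keepdims=True)

A = row_normalize(rng.random((N, N)))     # state-to-state transitions
B = row_normalize(rng.random((N, V)))     # state-to-symbol emissions
pi = row_normalize(rng.random((1, N))).T  # initial probabilities, N x 1

print(A.shape, B.shape, pi.shape)  # (4, 4) (4, 10) (4, 1)
```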

7.

[Question shown as an image in the original post.]
Answer :- For Answer Click Here

8. In Hidden Markov Models, or HMMs, the joint likelihood of an observed sequence O with a hidden state sequence Q is written as P(O, Q; θ). In many applications, like POS tagging, one is interested in finding the hidden state sequence Q, for a given observation sequence, that maximizes P(O, Q; θ). What is the time required to compute the most likely Q using an exhaustive search? The required notations are N: number of possible hidden states, T: length of the observed sequence. [Marks 1]

a. Of the order of TN^T
b. Of the order of N^2T
c. Of the order of T^N
d. Of the order of N^2

Answer :- For Answer Click Here
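
The counting behind the options: an exhaustive search scores every one of the N^T possible state sequences, and each sequence costs O(T) multiplications, giving roughly T·N^T work; the Viterbi algorithm cuts this to N^2T with dynamic programming. A toy sketch contrasting the two (model values are illustrative only):

```python
import itertools
import numpy as np

def exhaustive_best(A, B, pi, obs):
    """Score all N**T state sequences: O(T * N**T) time."""
    N, T = A.shape[0], len(obs)
    best_p, best_q = -1.0, None
    for q in itertools.product(range(N), repeat=T):
        p = pi[q[0]] * B[q[0], obs[0]]
        for t in range(1, T):
            p *= A[q[t - 1], q[t]] * B[q[t], obs[t]]
        if p > best_p:
            best_p, best_q = p, q
    return best_q

def viterbi(A, B, pi, obs):
    """Dynamic programming over states: O(N**2 * T) time."""
    N, T = A.shape[0], len(obs)
    delta = pi * B[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] * A  # prev state x next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) * B[:, obs[t]]
    q = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        q.append(int(back[t, q[-1]]))
    return tuple(reversed(q))

# Both find the same best path on a toy model; only the cost differs.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = [0, 1, 1, 0]
assert exhaustive_best(A, B, pi, obs) == viterbi(A, B, pi, obs)
```
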
Course Name: Natural Language Processing
Category: NPTEL Assignment Answer
