NPTEL Deep Learning Week 4 Assignment Answers 2023


1. Which of the following cannot be realized with a single-layer perceptron (only input and output layers)?

a. AND
b. OR
c. NAND
d. XOR

Answer :- d
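XOR is not linearly separable, which is why no single-layer perceptron can realize it. As a quick sanity check (a sketch, not from the assignment), the snippet below brute-forces a small grid of weights and biases and finds a separating threshold unit for AND, OR, and NAND, but none for XOR:

```python
from itertools import product

# A single threshold unit: output 1 iff w1*x1 + w2*x2 + b > 0.
def perceptron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Search an illustrative grid of weights/biases in {-2.0, -1.5, ..., 2.0}
# for parameters that reproduce the target truth table exactly.
def realizable(target):
    grid = [v / 2 for v in range(-4, 5)]
    for w1, w2, b in product(grid, repeat=3):
        if all(perceptron(w1, w2, b, x1, x2) == target(x1, x2)
               for x1, x2 in product([0, 1], repeat=2)):
            return True
    return False

print(realizable(lambda a, b: a & b))        # AND  -> True
print(realizable(lambda a, b: a | b))        # OR   -> True
print(realizable(lambda a, b: 1 - (a & b)))  # NAND -> True
print(realizable(lambda a, b: a ^ b))        # XOR  -> False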

2. For a function f(θ₀, θ₁), if θ₀ and θ₁ are initialized at a local minimum, what should the values of θ₀ and θ₁ be after a single iteration of gradient descent?

a. θ₀ and θ₁ will update as per the gradient descent rule
b. θ₀ and θ₁ will remain the same
c. Depends on the values of θ₀ and θ₁
d. Depends on the learning rate

Answer :- b
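At a local minimum the gradient is zero, so the update θ ← θ − η·∇f changes nothing regardless of the learning rate. A minimal sketch (the function f(θ₀, θ₁) = θ₀² + θ₁² is an illustrative assumption, not from the question):

```python
# f(theta0, theta1) = theta0**2 + theta1**2 has its only minimum at (0, 0),
# where both partial derivatives vanish.
def grad(theta0, theta1):
    return 2 * theta0, 2 * theta1

# One gradient descent step: theta <- theta - lr * grad.
def gd_step(theta0, theta1, lr=0.1):
    g0, g1 = grad(theta0, theta1)
    return theta0 - lr * g0, theta1 - lr * g1

print(gd_step(0.0, 0.0))  # (0.0, 0.0): at the minimum, nothing moves
print(gd_step(1.0, 2.0))  # away from the minimum, the step moves toward (0, 0)
```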

3. Choose the correct option:
i) Inability of a model to obtain sufficiently low training error is termed as Overfitting
ii) Inability of a model to reduce large margin between training and testing error is termed as Overfitting
iii) Inability of a model to obtain sufficiently low training error is termed as Underfitting
iv) Inability of a model to reduce large margin between training and testing error is termed as Underfitting

a. Only option (i) is correct
b. Both options (ii) and (iii) are correct
c. Both options (ii) and (iv) are correct
d. Only option (iv) is correct

Answer :- b

4. [Image-based question; figure not reproduced here.]
Answer :- a

5. Choose the correct option. The gradient of a continuous and differentiable function:
i) is zero at a minimum
ii) is non-zero at a maximum
iii) is zero at a saddle point
iv) decreases in magnitude as you get closer to the minimum

a. Only option (i) is correct
b. Options (i), (iii) and (iv) are correct
c. Options (i) and (iv) are correct
d. Only option (ii) is correct

Answer :- b

6. Input to SoftMax activation function is [3,1,2]. What will be the output?

a. [0.58,0.11, 0.31]
b. [0.43,0.24, 0.33]
c. [0.60, 0.10, 0.30]
d. [0.67, 0.09,0.24]

Answer :- d
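Option (d) can be verified directly from the definition softmax(z)ᵢ = exp(zᵢ) / Σⱼ exp(zⱼ). A short sketch:

```python
import math

# Softmax with the standard max-subtraction trick for numerical stability
# (subtracting a constant from every input leaves the output unchanged).
def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

out = softmax([3, 1, 2])
print([round(v, 2) for v in out])  # [0.67, 0.09, 0.24]
```

The outputs are positive and sum to 1, as a probability distribution should.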

7. [Image-based question; figure not reproduced here.]
Answer :- d

8. Which of the following options is true?

a. In Stochastic Gradient Descent, a small batch of samples is selected randomly instead of the whole data set for each iteration, and the larger weight updates lead to faster convergence.
b. In Stochastic Gradient Descent, the whole data set is processed together for update in each iteration.
c. Stochastic Gradient Descent considers only one sample for updates and has noisier updates.
d. Stochastic Gradient Descent is a non-iterative process

Answer :- c
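In its strict form, SGD updates the weights after every single sample, which is what makes the updates noisy compared with full-batch gradient descent. An illustrative sketch (the toy problem y = 3x and all names are assumptions, not from the question):

```python
import random

# One SGD epoch fitting y = w*x with squared loss: the weight is updated
# on one sample at a time, in a random order.
def sgd_epoch(w, data, lr=0.05):
    random.shuffle(data)
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)**2
        w -= lr * grad
    return w

random.seed(0)
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]  # true weight is 3
w = 0.0
for _ in range(50):
    w = sgd_epoch(w, data)
print(round(w, 3))  # 3.0
```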

9. What are the steps for using a gradient descent algorithm?

  1. Calculate error between the actual value and the predicted value
  2. Re-iterate until you find the best weights of network
  3. Pass an input through the network and get values from output layer
  4. Initialize random weight and bias
  5. Go to each neuron that contributes to the error and change its respective values to reduce the error

    a. 1, 2, 3, 4, 5
    b. 5, 4, 3, 2, 1
    c. 3, 2, 1, 5, 4
    d. 4, 3, 1, 5, 2
Answer :- d
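The ordering 4, 3, 1, 5, 2 can be sketched for a one-weight "network" (the toy setup and names below are illustrative assumptions, not from the question):

```python
import random

random.seed(1)
x, target = 2.0, 8.0               # single training pair; true weight is 4

w = random.uniform(-1, 1)          # step 4: initialize a random weight
for _ in range(200):               # step 2: re-iterate
    pred = w * x                   # step 3: pass input through, get the output
    error = pred - target          # step 1: error between actual and predicted
    w -= 0.01 * 2 * error * x      # step 5: change the weight to reduce the error

print(round(w, 2))  # 4.0
```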

10. [Image-based question; figure not reproduced here.]
Answer :- d
