# NPTEL Deep Learning – IIT Ropar Week 4 Assignment Answer 2023


1. Which step does Nesterov accelerated gradient descent perform before finding the update size?

• Increase the momentum
• Estimate the next position of the parameters
• Decrease the step size
`Answer :- For Answer Click Here`
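
Question 1 concerns the look-ahead step that distinguishes Nesterov accelerated gradient (NAG) from plain momentum. For reference, here is a minimal NumPy sketch of a single NAG update; the `grad` callable and the values of `eta` and `gamma` are illustrative assumptions, not part of the assignment.

```python
import numpy as np

def nag_step(w, v, grad, eta=0.1, gamma=0.9):
    """One Nesterov accelerated gradient update (illustrative sketch).

    w    : current parameters
    v    : current velocity (momentum buffer)
    grad : callable returning the gradient of the loss at a given point
    """
    # Look ahead: estimate where momentum alone would take the parameters,
    # and evaluate the gradient at that estimated next position.
    w_lookahead = w - gamma * v
    g = grad(w_lookahead)

    # Combine momentum with the look-ahead gradient to form the update.
    v_new = gamma * v + eta * g
    w_new = w - v_new
    return w_new, v_new

# Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w.
w, v = np.array([1.0, 2.0]), np.zeros(2)
w, v = nag_step(w, v, grad=lambda x: 2 * x)
```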

2. Which parameter of vanilla gradient descent controls the step size in the direction of the gradient?

• Learning rate
• Momentum
• Gamma
• None of the above
`Answer :- For Answer Click Here`
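
Question 2 is about the role of the learning rate in vanilla gradient descent. A minimal sketch of the update rule w ← w − η∇w, assuming a toy quadratic loss:

```python
import numpy as np

def vanilla_gd(w, grad, eta=0.1, steps=100):
    """Plain gradient descent: the learning rate eta scales the step
    taken in the direction of the (negative) gradient."""
    for _ in range(steps):
        w = w - eta * grad(w)
    return w

# Toy example: minimise f(w) = ||w||^2, whose gradient is 2w.
w_opt = vanilla_gd(np.array([3.0, -4.0]), grad=lambda w: 2 * w)
```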

3. What does the distance between two contour lines on a contour map represent?

• The change in the output of function
• The direction of the function
• The rate of change of the function
• None of the above
`Answer :- For Answer Click Here`
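
Question 3 relates the spacing of contour lines to how fast the function changes. A small numerical check, using f(x, y) = x² + y² as an assumed example: where contours of equally spaced levels sit close together, the gradient magnitude is large.

```python
import numpy as np

# f(x, y) = x^2 + y^2; the contour at level c is a circle of radius sqrt(c).
def grad_norm(x, y):
    return np.hypot(2 * x, 2 * y)

# Distance between the level-1 and level-2 contours along the x-axis,
# compared with the gradient magnitude in that region.
r1, r2 = np.sqrt(1.0), np.sqrt(2.0)
print("contour spacing near origin:", r2 - r1)            # ~0.41
print("gradient magnitude there   :", grad_norm(r1, 0.0))  # 2.0

# Farther out, the same unit change in level needs less distance,
# because the function is steeper there.
r9, r10 = np.sqrt(9.0), np.sqrt(10.0)
print("contour spacing farther out:", r10 - r9)            # ~0.16
print("gradient magnitude there   :", grad_norm(r9, 0.0))  # 6.0
```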

4. Which of the following represents the contour plot of the function f(x, y) = x² − y?

`Answer :- For Answer Click Here`
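
The image options for Question 4 are not reproduced here, but the contour plot of f(x, y) = x² − y can be generated directly. A minimal matplotlib sketch (the grid range is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
y = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, y)
Z = X**2 - Y                      # f(x, y) = x^2 - y

# Each contour f(x, y) = c is the parabola y = x^2 - c,
# so the plot shows a family of vertically shifted parabolas.
cs = plt.contour(X, Y, Z, levels=10)
plt.clabel(cs, inline=True, fontsize=8)
plt.xlabel("x"); plt.ylabel("y")
plt.title("Contours of f(x, y) = x^2 - y")
plt.show()
```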

• It converges faster than other optimization algorithms.
• It is less sensitive to the choice of hyperparameters (learning rate).
• It is more memory-efficient than other optimization algorithms.
• It is less likely to get stuck in local optima than other optimization algorithms.
`Answer :- For Answer Click Here`

6. We are training a neural network using the vanilla gradient descent algorithm. We observe that the change in weights is small in successive iterations. What are the possible causes of this phenomenon?

• η is large
• ∇w is small
• ∇w is large
• η is small
`Answer :- For Answer Click Here`
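
For Question 6, the size of a vanilla gradient-descent step is the product η ∗ ∇w, so a small change in weights can come from either factor being small. A tiny numerical illustration (the values are made up):

```python
# Step size in vanilla gradient descent is eta * grad_w.
eta_small, eta_large = 1e-4, 1.0
grad_small, grad_large = 1e-3, 5.0

print(eta_large * grad_small)   # 0.001  -> small update because grad_w is small
print(eta_small * grad_large)   # 0.0005 -> small update because eta is small
print(eta_large * grad_large)   # 5.0    -> large update when both are large
```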

7. You are given labeled data, which we call X, where rows are data points and columns are features. One column has most of its values equal to 0. Which algorithm should we use here for faster convergence to the optimal value of the loss function?

• NAG
`Answer :- For Answer Click Here`
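
Question 7 hinges on spotting that one feature column of X is sparse (mostly zeros). A quick NumPy check for per-column sparsity; the data shape and the choice of column are made-up examples:

```python
import numpy as np

# X: rows are data points, columns are features (toy random example).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X[:, 2] *= (rng.random(1000) < 0.02)   # make column 2 roughly 98% zeros

sparsity = (X == 0).mean(axis=0)       # fraction of zeros per column
print(sparsity)                        # column 2 stands out as sparse
```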

8. What is the update rule for the ADAM optimizer?

• wt = wt−1 − lr ∗ (mt / (√vt + ϵ))
• wt = wt−1 − lr ∗ mt
• wt = wt−1 − lr ∗ (mt / (vt + ϵ))
• wt = wt−1 − lr ∗ (vt / (mt + ϵ))
`Answer :- For Answer Click Here`
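
Question 8 asks for the ADAM update rule. For reference, a minimal sketch of the standard Adam step (Kingma & Ba), with bias-corrected first and second moment estimates; the hyperparameter values are the usual defaults, not anything specified by the assignment:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters w given gradient g at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * g        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * g**2     # second moment (mean of squared gradients)
    m_hat = m / (1 - beta1**t)             # bias correction
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```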

• Mini-batch gradient descent is more computationally efficient than batch gradient descent.
`Answer :- For Answer Click Here`
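
The last statement compares mini-batch and batch gradient descent on computational cost. A minimal sketch of a mini-batch loop, assuming a user-supplied `grad(w, X_batch, y_batch)` callable:

```python
import numpy as np

def minibatch_gd(w, X, y, grad, eta=0.01, batch_size=32, epochs=10, seed=0):
    """Mini-batch gradient descent: each update uses only `batch_size`
    examples, so a single step is much cheaper than a full-batch step."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            w = w - eta * grad(w, X[batch], y[batch])
    return w
```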