These are some of the most common interview questions for Deep Learning Engineers.
PART 2/5
- How do we overcome underfitting and overfitting?
- Does more data solve a high-variance or a high-bias problem?
- What is Regularization?
- What are the types of regularization?
- What is the learning rate in NN?
- Is it better to have a high or a low learning rate?
- Does increasing the learning rate get you results faster?
- What are dropouts?
- What do we do with dropouts in the training and testing phases?
- Can you give an intuition of dropouts?
- What is data augmentation?
- What are the different types of Data Augmentation?
- What is Early stopping? How do we achieve that?
- Which framework is better: TensorFlow, PyTorch, or another? Or is it better to code it yourself for finer control over the results?
- What is gradient checking? How do we do that?
- Is it required to normalize the training set? If so, why? Explain the process.
- Is it required to normalize the test set? If so, why? Explain the process.
- What is vanishing and exploding gradients?
- What are the Optimization Algorithms in ML?
- What is Batch Gradient Descent?
- What is Mini-Batch Gradient Descent?
- What is Stochastic Gradient Descent?
- What is Gradient Descent with Momentum? What are the hyperparameters involved in it? Explain.
- What is RMSProp? What are the hyperparameters involved in it?
- What is Adam Optimization?
- What is Adagrad?
- What is Learning Rate Decay?
- What is a saddle point in Loss landscape?
- How does Adam Optimization help?
- What are the different hyperparameters involved in a NN?
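Several of these questions are easiest to answer with a few lines of code. For the dropout questions: a minimal sketch of inverted dropout, assuming a simple NumPy setup (function name and signature are my own, not from any framework). At training time, units are zeroed with probability `p_drop` and the survivors are scaled up; at test time, activations pass through unchanged.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout (illustrative sketch).

    Training: zero each unit with probability p_drop and scale the
    survivors by 1/(1 - p_drop) so the expected activation is unchanged.
    Testing: return x as-is -- no masking and no extra scaling needed,
    which is exactly why the 'inverted' variant is popular.
    """
    if not training or p_drop == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    # Boolean keep-mask, already divided by the keep probability.
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask
```

Because of the `1/(1 - p_drop)` scaling, the mean activation stays roughly the same in both phases, which is the usual intuition asked for in the dropout questions above.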
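For the early-stopping question: one common way to achieve it is to track the validation loss and stop once it has failed to improve for a set number of epochs (the "patience"). A minimal sketch, with class and parameter names of my own choosing:

```python
class EarlyStopping:
    """Stop training when validation loss stops improving (sketch)."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In practice you would also checkpoint the weights at the best epoch and restore them when stopping, which is what ready-made callbacks in the major frameworks do.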
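For the gradient-checking question: the standard approach is to compare the analytic gradient from backpropagation against a centered finite-difference estimate. A sketch of the numerical side, assuming the loss is exposed as a plain function of the weights:

```python
import numpy as np

def numerical_grad(f, w, eps=1e-5):
    """Centered finite-difference gradient of scalar loss f at weights w."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_plus = w.copy()
        w_plus.flat[i] += eps
        w_minus = w.copy()
        w_minus.flat[i] -= eps
        # (f(w + eps) - f(w - eps)) / (2 * eps), one coordinate at a time.
        grad.flat[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return grad
```

Usage: evaluate both gradients and check that their relative error is tiny (commonly below about 1e-7 for a correct implementation); gradient checking is far too slow to run during normal training, so it is used only to debug the backprop code.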
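For the normalization questions: the key point is that the statistics are fitted on the training set only, and the test set is transformed with those same training statistics. A small sketch (function names are illustrative):

```python
import numpy as np

def fit_standardizer(X_train):
    """Compute per-feature mean and std from the TRAINING set only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0) + 1e-8  # avoid division by zero
    return mu, sigma

def standardize(X, mu, sigma):
    """Apply the training-set statistics to any split (train, val, or test)."""
    return (X - mu) / sigma
```

Reusing the training-set `mu` and `sigma` on the test set avoids leaking test information into preprocessing and keeps both splits on the same scale, which is the usual answer to why the test set is normalized this way.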
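For the optimizer questions: Adam is a convenient one to sketch because it combines the momentum term (a running mean of gradients) with the RMSProp term (a running mean of squared gradients). A single-parameter update step, written as a plain function for illustration:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for weight w at step t (t starts at 1).

    m is the momentum-style EMA of gradients; v is the RMSProp-style
    EMA of squared gradients; both are bias-corrected before use.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias correction for m
    v_hat = v / (1 - beta2 ** t)   # bias correction for v
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

The hyperparameters are the learning rate `lr`, the two decay rates `beta1` and `beta2`, and the numerical-stability constant `eps`; setting `beta1 = 0` recovers an RMSProp-like update, which is one way to frame the comparison questions above.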
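For the learning-rate-decay question: one common schedule shrinks the learning rate as a function of the epoch number, so early epochs take large steps and later epochs fine-tune. A sketch of the standard `1/(1 + decay * epoch)` form:

```python
def decayed_lr(lr0, epoch, decay_rate=0.1):
    """Inverse-time decay: lr0 at epoch 0, shrinking each epoch."""
    return lr0 / (1.0 + decay_rate * epoch)
```

Other popular schedules include step decay (halving every few epochs) and exponential decay; all of them address the same trade-off raised in the "high or low learning rate" question above.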