https://youtube.com/playlist?list=PLkDaE6sCZn6Hn0vK8co82zjQtt3T2Nkqc
https://www.coursera.org/learn/deep-neural-network?specialization=deep-learning
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Course 2)
Contents
Week 1: Practical Aspects of Deep Learning
- Train/Dev/Test Sets (C2W1L01)
- Bias/Variance (C2W1L02)
- Basic Recipe for Machine Learning (C2W1L03)
- Regularization (C2W1L04)
- Why Regularization Reduces Overfitting (C2W1L05)
- Dropout Regularization (C2W1L06), illustrated in the sketch after this week's list
- Understanding Dropout (C2W1L07)
- Other Regularization Methods (C2W1L08)
- Normalizing Inputs (C2W1L09)
- Vanishing/Exploding Gradients (C2W1L10)
- Weight Initialization in a Deep Network (C2W1L11)
- Numerical Approximations of Gradients (C2W1L12)
- Gradient Checking (C2W1L13)
- Gradient Checking Implementation Notes (C2W1L14)
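
To make the regularization topics above concrete, here is a minimal NumPy sketch of inverted dropout as covered in C2W1L06/L07; the function name, the (units x batch) activation shape, and the toy example are assumptions for illustration, not code from the course.

```python
import numpy as np

def inverted_dropout(a, keep_prob):
    """Inverted dropout on activations a (shape: units x batch) at training time."""
    mask = np.random.rand(*a.shape) < keep_prob  # keep each unit with probability keep_prob
    a = a * mask                                 # drop the other units
    a = a / keep_prob                            # rescale so the expected activation is unchanged,
                                                 # which is why no extra scaling is needed at test time
    return a

# Example: 4 hidden units, batch of 3, keep 80% of units.
a = np.ones((4, 3))
print(inverted_dropout(a, keep_prob=0.8))
```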
Week 2: Optimization Algorithms
- Mini Batch Gradient Descent (C2W2L01)
- Understanding Mini-Batch Gradient Descent (C2W2L02)
- Exponentially Weighted Averages (C2W2L03)
- Understanding Exponentially Weighted Averages (C2W2L04)
- Bias Correction of Exponentially Weighted Averages (C2W2L05)
- Gradient Descent With Momentum (C2W2L06)
- RMSProp (C2W2L07)
- Adam Optimization Algorithm (C2W2L08), illustrated in the sketch after this week's list
- Learning Rate Decay (C2W2L09)
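
Below is a minimal NumPy sketch of a single Adam update (C2W2L08), which combines this week's momentum and RMSProp ideas with bias correction of the exponentially weighted averages; the function name `adam_step` and the toy values are assumptions for illustration.

```python
import numpy as np

def adam_step(w, dw, v, s, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; v and s are the running first/second moment estimates."""
    v = beta1 * v + (1 - beta1) * dw        # momentum-style EWA of gradients
    s = beta2 * s + (1 - beta2) * dw ** 2   # RMSProp-style EWA of squared gradients
    v_hat = v / (1 - beta1 ** t)            # bias correction (t = step number, 1-based)
    s_hat = s / (1 - beta2 ** t)
    w = w - alpha * v_hat / (np.sqrt(s_hat) + eps)
    return w, v, s

# Example: one step on a toy parameter vector.
w = np.array([1.0, -2.0])
dw = np.array([0.5, -0.5])
v = np.zeros_like(w)
s = np.zeros_like(w)
w, v, s = adam_step(w, dw, v, s, t=1)
print(w)
```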
Week 3: Hyperparameter Tuning, Batch Normalization and Programming Frameworks
- Tuning Process (C2W3L01)
- Using an Appropriate Scale (C2W3L02)
- Hyperparameter Tuning in Practice (C2W3L03)
- Normalizing Activations in a Network (C2W3L04), illustrated in the sketch after this week's list
- Fitting Batch Norm Into Neural Networks (C2W3L05)
- Why Does Batch Norm Work? (C2W3L06)
- Batch Norm At Test Time (C2W3L07)
- Softmax Regression (C2W3L08)
- Training Softmax Classifier (C2W3L09)
- The Problem of Local Optima (C2W3L10)
- TensorFlow (C2W3L11)
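
Here is a minimal NumPy sketch of the training-time batch-norm transform from C2W3L04/L05; `batch_norm_forward` and the array shapes are illustrative assumptions. At test time the per-batch mean and variance would be replaced by running exponentially weighted averages, as discussed in C2W3L07.

```python
import numpy as np

def batch_norm_forward(z, gamma, beta, eps=1e-8):
    """Normalize each unit's pre-activations across the mini-batch, then scale and shift."""
    mu = z.mean(axis=1, keepdims=True)        # per-unit mean over the batch
    var = z.var(axis=1, keepdims=True)        # per-unit variance over the batch
    z_norm = (z - mu) / np.sqrt(var + eps)    # zero mean, unit variance per unit
    return gamma * z_norm + beta              # learned scale/shift, fed into the activation

# Example: 3 units, batch of 4.
z = np.random.randn(3, 4)
gamma = np.ones((3, 1))
beta = np.zeros((3, 1))
print(batch_norm_forward(z, gamma, beta))
```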
ref.
https://www.deeplearning.ai/thebatch
https://github.com/amanchadha/coursera-deep-learning-specialization