Optimizers in Machine Learning and Deep Learning
Last updated: 2024-09-29
Course price: ¥2.90
Course Content
8 sections, 33 lectures (illustrative sketches of the update rules and one gradient derivation follow the outline)
1 - Introduction
2 - Stochastic Gradient Descent
3 - Momentum
4 - NAG
5 - Adagrad
6 - RMSprop
7 - Adam
8 - Gradient derivation for different loss and activation functions
- 1 - Gradient derivation - Intro
- 2 - SGD with Mean Absolute Error
- 3 - SGD with Root Mean Squared Error
- 4 - SGD with ReLU Activation and Mean Absolute Error
- 5 - SGD with Sigmoid Activation and Binary Log Loss - Part 1
- 6 - SGD with Sigmoid Activation and Binary Log Loss - Part 2
- 7 - Summary of gradients
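
For orientation, below is a minimal NumPy sketch of the update rules covered in sections 2 through 7. This is not the course's own code; every function name and hyperparameter default (lr, beta, eps, and so on) is an assumption made for illustration.

```python
import numpy as np

# Minimal, self-contained update rules for the optimizers in sections 2-7.
# All names and default hyperparameters here are illustrative assumptions.

def sgd(w, grad, lr=0.01):
    # Section 2: step against the (stochastic) gradient.
    return w - lr * grad

def momentum(w, grad, v, lr=0.01, beta=0.9):
    # Section 3: an exponentially decaying velocity accumulates past gradients.
    v = beta * v + grad
    return w - lr * v, v

def nag(w, grad_fn, v, lr=0.01, beta=0.9):
    # Section 4 (Nesterov): evaluate the gradient at a look-ahead point.
    g = grad_fn(w - lr * beta * v)
    v = beta * v + g
    return w - lr * v, v

def adagrad(w, grad, g2, lr=0.01, eps=1e-8):
    # Section 5: per-parameter step size shrinks as squared gradients accumulate.
    g2 = g2 + grad ** 2
    return w - lr * grad / (np.sqrt(g2) + eps), g2

def rmsprop(w, grad, g2, lr=0.001, beta=0.9, eps=1e-8):
    # Section 6: replace Adagrad's running sum with an exponential moving average.
    g2 = beta * g2 + (1 - beta) * grad ** 2
    return w - lr * grad / (np.sqrt(g2) + eps), g2

def adam(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Section 7: momentum plus RMSprop, with bias correction for step t >= 1.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)  # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(w) = w**2 with Adam, incrementing t each step.
w, m, v = np.array([5.0]), 0.0, 0.0
for t in range(1, 1001):
    grad = 2 * w  # gradient of f(w) = w**2
    w, m, v = adam(w, grad, m, v, t)
```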
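
As a taste of section 8, here is the standard derivation behind lectures 8.5 and 8.6 (sigmoid activation with binary log loss), written out for a single example. The setup assumed here, z = w^T x + b with label y in {0, 1}, is a guess at the course's notation rather than taken from it.

```latex
% Sigmoid activation with binary log loss, single example.
% Assumed setup: z = w^T x + b, \hat{y} = \sigma(z), y \in \{0, 1\}.
\[
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
L = -\bigl[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\bigr]
\]
\[
\frac{\partial L}{\partial \hat{y}} = -\frac{y}{\hat{y}} + \frac{1 - y}{1 - \hat{y}},
\qquad
\frac{\partial \hat{y}}{\partial z} = \hat{y}\,(1 - \hat{y})
\]
\[
\frac{\partial L}{\partial z}
= \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z}
= \hat{y} - y,
\qquad
\frac{\partial L}{\partial w} = (\hat{y} - y)\, x,
\qquad
\frac{\partial L}{\partial b} = \hat{y} - y
\]
```

Note how the sigmoid's derivative cancels the denominators of the loss gradient, leaving the simple residual \hat{y} - y; this cancellation is the reason the sigmoid and binary log loss pairing gets its own two-part treatment in the outline above.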