From: Ilqar Ramazanli
Date: Wed, 8 Sep 2021 22:20:52 +0000 (-0700)
Subject: To add Stochastic Gradient Descent to Documentation (#63805)

Summary:
It has been discussed before that adding descriptions of the optimization algorithms to the PyTorch core documentation could serve as a nice optimization research tutorial. The tracking issue https://github.com/pytorch/pytorch/issues/63236 lists all the necessary algorithms together with links to the originally published papers.

In this PR we add a description of Stochastic Gradient Descent (SGD) to the documentation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63805

Reviewed By: albanD

Differential Revision: D30818947

Pulled By: iramazanli

fbshipit-source-id: 3812028e322c8a64f4343552b0c8c4582ea382f3
---

diff --git a/torch/optim/sgd.py b/torch/optim/sgd.py
index a7a67ff..a4a7222 100644
--- a/torch/optim/sgd.py
+++ b/torch/optim/sgd.py
@@ -6,6 +6,32 @@ from .optimizer import Optimizer, required
 class SGD(Optimizer):
     r"""Implements stochastic gradient descent (optionally with momentum).
 
+    .. math::
+       \begin{aligned}
+            &\rule{110mm}{0.4pt}                                                                 \\
+            &\textbf{input}      : \gamma \text{ (lr)}, \: \theta_0 \text{ (params)}, \: f(\theta)
+                \text{ (objective)}, \: \lambda \text{ (weight decay)},                          \\
+            &\hspace{13mm} \:\mu \text{ (momentum)}, \:\tau \text{ (dampening)}, \: nesterov\\[-1.ex]
+            &\rule{110mm}{0.4pt}                                                                 \\
+            &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do}                         \\
+            &\hspace{5mm}g_t \leftarrow \nabla_{\theta} f_t (\theta_{t-1})                       \\
+            &\hspace{5mm}\textbf{if} \: \lambda \neq 0                                           \\
+            &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1}                             \\
+            &\hspace{5mm}\textbf{if} \: \mu \neq 0                                               \\
+            &\hspace{10mm}\textbf{if} \: t > 1                                                   \\
+            &\hspace{15mm} \textbf{b}_t \leftarrow \mu \textbf{b}_{t-1} + (1-\tau) g_t           \\
+            &\hspace{10mm}\textbf{else}                                                          \\
+            &\hspace{15mm} \textbf{b}_t \leftarrow g_t                                           \\
+            &\hspace{10mm}\textbf{if} \: nesterov                                                \\
+            &\hspace{15mm} g_t \leftarrow g_t + \mu \textbf{b}_t                                 \\
+            &\hspace{10mm}\textbf{else}                                                  \\[-1.ex]
+            &\hspace{15mm} g_t \leftarrow \textbf{b}_t                                           \\
+            &\hspace{5mm}\theta_t \leftarrow \theta_{t-1} - \gamma g_t                    \\[-1.ex]
+            &\rule{110mm}{0.4pt}                                                          \\[-1.ex]
+            &\bf{return} \: \theta_t                                                      \\[-1.ex]
+            &\rule{110mm}{0.4pt}                                                          \\[-1.ex]
+       \end{aligned}
+
     Nesterov momentum is based on the formula from
     `On the importance of initialization and momentum in deep learning`__.
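For reference, below is a minimal Python sketch of the update rule spelled out by the pseudocode above, using the same parameters (lr, momentum, dampening, weight decay, nesterov). It mirrors the documented algorithm rather than the actual torch.optim.SGD implementation; the helper name sgd_step and the toy objective are assumptions made only for this illustration.

import torch

def sgd_step(param, grad, buf, lr, momentum=0.0, dampening=0.0,
             weight_decay=0.0, nesterov=False):
    """Apply one step of the documented update; returns (new_param, new_buf)."""
    g = grad.clone()
    if weight_decay != 0:                        # g_t <- g_t + lambda * theta_{t-1}
        g = g + weight_decay * param
    if momentum != 0:
        if buf is None:                          # t = 1: b_t <- g_t
            buf = g.clone()
        else:                                    # b_t <- mu * b_{t-1} + (1 - tau) * g_t
            buf = momentum * buf + (1 - dampening) * g
        if nesterov:                             # g_t <- g_t + mu * b_t
            g = g + momentum * buf
        else:                                    # g_t <- b_t
            g = buf
    return param - lr * g, buf                   # theta_t <- theta_{t-1} - gamma * g_t

# Toy usage: minimize f(theta) = theta^2 / 2, whose gradient is theta itself.
theta, buf = torch.tensor([1.0]), None
for _ in range(5):
    theta, buf = sgd_step(theta, theta, buf, lr=0.1, momentum=0.9, nesterov=True)

The sketch keeps the momentum buffer b_t explicit as a second return value, which makes the correspondence with the pseudocode direct; the real optimizer instead stores it per parameter in its state dict.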