==Extensions and variants==
Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a [[learning rate]] (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge.<ref>{{Cite book|last1=Goodfellow|first1=Ian|url=https://www.deeplearningbook.org|title=Deep Learning|last2=Bengio|first2=Yoshua|last3=Courville|first3=Aaron|publisher=MIT Press|year=2016|isbn=978-0262035613|pages=291|author-link=Ian Goodfellow}}</ref> A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function {{mvar|η<sub>t</sub>}} of the iteration number {{mvar|t}}, giving a ''learning rate schedule'', so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on [[K-means clustering|{{mvar|k}}-means clustering]].<ref>Cited by {{cite conference |last1=Darken |first1=Christian |first2=John |last2=Moody |title=Fast adaptive k-means clustering: some empirical results |year=1990 |conference=Int'l Joint Conf. on Neural Networks (IJCNN) |publisher=IEEE|doi=10.1109/IJCNN.1990.137720 }}</ref> Practical guidance on choosing the step size in several variants of SGD is given by Spall.<ref>{{cite book|last=Spall|first=J. C.|title=Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control|publisher=Wiley|year=2003|isbn=0-471-33052-3|location=Hoboken, NJ|pages=Sections 4.4, 6.6, and 7.5}}</ref> [[File:Optimizer Animations.gif|thumb|A graph visualizing the behavior of a selected set of optimizers, using a 3D perspective projection of a loss function f(x, y)]] [[File:Optimizer Animations Birds-Eye.gif|thumb|A graph visualizing the behavior of a selected set of optimizers]]
===Implicit updates (ISGD)===
As mentioned earlier, classical stochastic gradient descent is generally sensitive to the [[learning rate]] {{mvar|η}}. Fast convergence requires large learning rates, but this may induce numerical instability. The problem can be largely solved<ref>{{cite journal |last1=Toulis |first1=Panos |first2=Edoardo |last2=Airoldi|title=Asymptotic and finite-sample properties of estimators based on stochastic gradients |journal=Annals of Statistics |volume=45 |issue=4 |year=2017 |pages=1694–1727|doi=10.1214/16-AOS1506|arxiv=1408.2923 |s2cid=10279395 }}</ref> by considering ''implicit updates'' whereby the stochastic gradient is evaluated at the next iterate rather than the current one: <math display="block">w^\text{new} := w^\text{old} - \eta\, \nabla Q_i(w^\text{new}).</math> This update is implicit since <math>w^\text{new}</math> appears on both sides of the equation. It is a stochastic form of the [[proximal gradient method]] since the update can also be written as: <math display="block">w^\text{new} := \arg\min_w \left\{ Q_i(w) + \frac{1}{2\eta} \left\|w - w^\text{old}\right\|^2 \right\}.</math> As an example, consider least squares with features <math>x_1, \ldots, x_n \in\mathbb{R}^p</math> and observations <math>y_1, \ldots, y_n\in\mathbb{R}</math>. We wish to solve: <math display="block">\min_w \sum_{j=1}^n \left(y_j - x_j'w\right)^2,</math> where <math>x_j' w = x_{j,1} w_1 + x_{j,2} w_2 + \cdots + x_{j,p} w_p</math> indicates the inner product. Note that <math>x</math> could have "1" as the first element to include an intercept.
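A minimal numerical sketch of this least-squares example is given below. It contrasts the classical update with the closed-form implicit update derived in the following paragraph; the synthetic data, step sizes, and iteration count are illustrative choices only, not taken from the cited sources.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.normal(size=(n, p))                  # features x_1, ..., x_n
w_true = rng.normal(size=p)
y = X @ w_true + 0.1 * rng.normal(size=n)    # observations y_1, ..., y_n

def sgd_least_squares(eta, implicit, iters=5000):
    w = np.zeros(p)
    for _ in range(iters):
        i = rng.integers(n)                  # sample i uniformly at random
        residual = y[i] - X[i] @ w           # y_i - x_i' w_old
        if implicit:
            # implicit (ISGD) update: the step is normalized by 1 + eta * ||x_i||^2
            w = w + (eta / (1.0 + eta * (X[i] @ X[i]))) * residual * X[i]
        else:
            # classical SGD update; may diverge if eta is too large
            w = w + eta * residual * X[i]
    return w

for eta in (0.01, 1.0):
    w_sgd = sgd_least_squares(eta, implicit=False)
    w_isgd = sgd_least_squares(eta, implicit=True)
    print(eta, np.linalg.norm(w_sgd - w_true), np.linalg.norm(w_isgd - w_true))
# With the large step size the classical iterate typically blows up (inf/nan),
# while the implicit iterate remains stable.
</syntaxhighlight>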
Classical stochastic gradient descent proceeds as follows: <math display="block">w^\text{new} = w^\text{old} + \eta \left(y_i - x_i'w^\text{old}\right) x_i</math> where <math>i</math> is uniformly sampled between 1 and <math>n</math>. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when <math>\eta</math> is misspecified so that <math>I - \eta x_i x_i'</math> has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, ''implicit stochastic gradient descent'' (shortened as ISGD) can be solved in closed form as: <math display="block">w^\text{new} = w^\text{old} + \frac{\eta}{1 + \eta \left\|x_i\right\|^2} \left(y_i - x_i'w^\text{old}\right) x_i.</math> This procedure remains numerically stable for virtually all <math>\eta</math> as the [[learning rate]] is now normalized. This comparison between classical and implicit stochastic gradient descent in the least-squares problem is very similar to the comparison between [[Least mean squares filter|least mean squares (LMS)]] and the [[Least mean squares filter#Normalized least mean squares filter (NLMS)|normalized least mean squares filter (NLMS)]]. Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that <math>Q_i(w)</math> depends on <math>w</math> only through a linear combination with features <math>x_i</math>, so that we can write <math>\nabla_w Q_i(w) = -q(x_i'w) x_i</math>, where <math>q() \in\mathbb{R}</math> may depend on <math>x_i, y_i</math> as well but not on <math>w</math> except through <math>x_i'w</math>. Least squares obeys this rule, as do [[logistic regression]] and most [[generalized linear model]]s. For instance, in least squares, <math>q(x_i'w) = y_i - x_i'w</math>, and in logistic regression <math>q(x_i'w) = y_i - S(x_i'w)</math>, where <math>S(u) = e^u/(1+e^u)</math> is the [[logistic function]]. In [[Poisson regression]], <math>q(x_i'w) = y_i - e^{x_i'w}</math>, and so on. In such settings, ISGD is simply implemented as follows. Let <math>f(\xi) = \eta q(x_i'w^\text{old} + \xi \|x_i\|^2)</math>, where <math>\xi</math> is a scalar. Then, ISGD is equivalent to: <math display="block">w^\text{new} = w^\text{old} + \xi^\ast x_i,~\text{where}~\xi^\ast = f(\xi^\ast).</math> The scaling factor <math>\xi^\ast\in\mathbb{R}</math> can be found through the [[bisection method]] since in most regular models, such as the aforementioned generalized linear models, the function <math>q()</math> is decreasing, and thus the search bounds for <math>\xi^\ast</math> are <math>[\min(0, f(0)), \max(0, f(0))]</math>.
===Momentum===
{{Anchor|Momentum|Nesterov}} Further proposals include the ''momentum method'' or the ''heavy ball method'', which in the machine learning context appeared in [[David Rumelhart|Rumelhart]], [[Geoffrey Hinton|Hinton]] and [[Ronald J.
Williams|Williams]]' paper on backpropagation learning<ref name="Rumelhart1986">{{cite journal|last=Rumelhart|first=David E.|author2=Hinton, Geoffrey E.|author3=Williams, Ronald J.|title=Learning representations by back-propagating errors|journal=Nature|date=8 October 1986|volume=323|issue=6088|pages=533–536|doi=10.1038/323533a0|bibcode=1986Natur.323..533R|s2cid=205001834}}</ref> and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations.<ref>{{cite web | url=https://boostedml.com/2020/07/gradient-descent-and-momentum-the-heavy-ball-method.html | title=Gradient Descent and Momentum: The Heavy Ball Method | date=13 July 2020 }}</ref> Stochastic gradient descent with momentum remembers the update {{math|Δ''w''}} at each iteration, and determines the next update as a [[linear combination]] of the gradient and the previous update:<ref name="Sutskever2013">{{cite conference|last=Sutskever|first=Ilya|author2=Martens, James|author3=Dahl, George|author4=Hinton, Geoffrey E.|editor=Sanjoy Dasgupta and David Mcallester|title=On the importance of initialization and momentum in deep learning|conference=In Proceedings of the 30th international conference on machine learning (ICML-13)|date=June 2013|volume=28|location=Atlanta, GA|pages=1139–1147|url=http://www.cs.utoronto.ca/~ilya/pubs/2013/1051_2.pdf|access-date=14 January 2016}}</ref><ref name="SutskeverPhD">{{cite thesis|last=Sutskever|first=Ilya|title=Training recurrent neural networks|date=2013|publisher=University of Toronto|url=http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf|type=Ph.D.|page=74}}</ref> <math display="block">\Delta w := \alpha \Delta w - \eta\, \nabla Q_i(w)</math> <math display="block">w := w + \Delta w </math> that leads to: <math display="block">w := w - \eta\, \nabla Q_i(w) + \alpha \Delta w </math> where the [[parametric statistics|parameter]] <math>w</math> which minimizes <math>Q(w)</math> is to be [[estimator|estimated]], <math>\eta</math> is a step size (sometimes called the ''[[learning rate]]'' in machine learning) and <math>\alpha</math> is an exponential [[Learning rate#Learning rate schedule|decay factor]] between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change. The name momentum stems from an analogy to [[momentum]] in physics: the weight vector <math>w</math>, thought of as a particle traveling through parameter space,{{r|Rumelhart1986}} incurs acceleration from the gradient of the loss ("[[force]]"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of [[artificial neural networks]] for several decades.<ref name="Zeiler 2012">{{cite arXiv |last=Zeiler |first=Matthew D. 
|eprint=1212.5701 |title=ADADELTA: An adaptive learning rate method |year=2012|class=cs.LG }}</ref> The ''momentum method'' is closely related to [[Langevin dynamics|underdamped Langevin dynamics]], and may be combined with [[simulated annealing]].<ref name="Borysenko2021">{{cite journal|last=Borysenko|first=Oleksandr|author2=Byshkin, Maksym|title=CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing|journal=Scientific Reports|date=2021|volume=11|issue=1|pages=10705|doi=10.1038/s41598-021-90144-3|pmid=34021212|pmc=8139967|arxiv=2005.14605|bibcode=2021NatSR..1110705B}}</ref> In the mid-1980s, the method was modified by [[Yurii Nesterov]] to use the gradient predicted at the next point, and the resulting so-called ''Nesterov Accelerated Gradient'' was sometimes used in machine learning in the 2010s.<ref>{{cite web | url=https://paperswithcode.com/method/nesterov-accelerated-gradient | title=Papers with Code - Nesterov Accelerated Gradient Explained }}</ref>
===Averaging===
''Averaged stochastic gradient descent'', invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of<ref>{{cite journal |last1=Polyak |first1=Boris T. |first2=Anatoli B. |last2=Juditsky |title=Acceleration of stochastic approximation by averaging |journal=SIAM J. Control Optim. |volume=30 |issue=4 |year=1992 |pages=838–855 |url=http://www.meyn.ece.ufl.edu/archive/spm_files/Courses/ECE555-2011/555media/poljud92.pdf |doi=10.1137/0330046 |s2cid=3548228 |access-date=2018-02-14 |archive-date=2016-01-12 |archive-url=https://web.archive.org/web/20160112091615/http://www.meyn.ece.ufl.edu/archive/spm_files/Courses/ECE555-2011/555media/poljud92.pdf |url-status=dead }}</ref> <math display="block">\bar{w} = \frac{1}{t} \sum_{i=0}^{t-1} w_i.</math> When optimization is done, this averaged parameter vector takes the place of {{mvar|w}}.
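The following minimal sketch illustrates the momentum update above together with a Polyak–Ruppert running average of the iterates. The quadratic objective, synthetic data, and hyper-parameter values are illustrative assumptions, not values taken from the cited works.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative objective: Q(w) = mean_i (y_i - x_i'w)^2 on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

eta, alpha = 0.01, 0.9       # learning rate and momentum decay factor (assumed)
w = np.zeros(3)
delta_w = np.zeros(3)        # remembered update Delta w
w_bar = np.zeros(3)          # Polyak-Ruppert running average of the iterates

for t in range(1, 10001):
    i = rng.integers(len(y))
    grad = -2.0 * (y[i] - X[i] @ w) * X[i]     # gradient of Q_i at w
    delta_w = alpha * delta_w - eta * grad     # Delta w := alpha * Delta w - eta * grad
    w = w + delta_w                            # w := w + Delta w
    w_bar += (w - w_bar) / t                   # incremental running mean of the iterates

print(w, w_bar)   # after optimization, the average w_bar takes the place of w
</syntaxhighlight>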
===AdaGrad===
''AdaGrad'' (for adaptive [[Gradient descent|gradient]] algorithm) is a modified stochastic gradient descent algorithm with a per-parameter [[learning rate]], first published in 2011.<ref name="duchi">{{cite journal |last1=Duchi |first1=John |first2=Elad |last2=Hazan |first3=Yoram |last3=Singer |title=Adaptive subgradient methods for online learning and stochastic optimization |journal=[[Journal of Machine Learning Research|JMLR]] |volume=12 |year=2011 |pages=2121–2159 |url=http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf}}</ref> Informally, this increases the learning rate for parameters whose accumulated gradients are small (for example, parameters tied to sparse features that are rarely active) and decreases the learning rate for parameters whose accumulated gradients are large. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition.<ref name="duchi"/> It still has a base learning rate {{mvar|η}}, but this is multiplied by the elements of a vector {{math|{''G''<sub>''j'',''j''</sub>} }} which is the diagonal of the [[outer product]] matrix <math display="block">G = \sum_{\tau=1}^t g_\tau g_\tau^\mathsf{T}</math> where <math>g_\tau = \nabla Q_i(w)</math> is the gradient at iteration {{mvar|τ}}. The diagonal is given by <math display="block">G_{j,j} = \sum_{\tau=1}^t g_{\tau,j}^2.</math> This vector essentially stores a historical sum of squared gradients by dimension and is updated after every iteration. The formula for an update is now{{efn|<math>\odot</math> denotes the [[Hadamard product (matrices)|element-wise product]].}} <math display="block">w := w - \eta\, \mathrm{diag}(G)^{-\frac{1}{2}} \odot g</math> or, written as per-parameter updates, <math display="block">w_j := w_j - \frac{\eta}{\sqrt{G_{j,j}}} g_j.</math> Each {{math|{''G''<sub>''j'',''j''</sub>} }} gives rise to a scaling factor for the learning rate that applies to a single parameter {{math|''w''<sub>''j''</sub>}}. Since the denominator in this factor, <math display="inline">\sqrt{G_{j,j}} = \sqrt{\sum_{\tau=1}^t g_{\tau,j}^2}</math>, is the [[Norm (mathematics)#Euclidean norm|''ℓ''<sub>2</sub> norm]] of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates.<ref name="Zeiler 2012"/> While designed for [[convex optimization|convex problems]], AdaGrad has been successfully applied to non-convex optimization.<ref>{{cite journal |last1=Gupta |first1=Maya R. |first2=Samy |last2=Bengio |first3=Jason |last3=Weston |title=Training highly multiclass classifiers |journal=JMLR |volume=15 |issue=1 |year=2014 |pages=1461–1492 |url=http://jmlr.org/papers/volume15/gupta14a/gupta14a.pdf}}</ref>
===RMSProp===
''RMSProp'' (for Root Mean Square Propagation) is a method invented in 2012 by James Martens and [[Ilya Sutskever]], at the time both PhD students in Geoffrey Hinton's group, in which the [[learning rate]] is, as in AdaGrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.<ref name=rmsprop>{{Cite web|url=http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf|title=Lecture 6e rmsprop: Divide the gradient by a running average of its recent magnitude|last=Hinton|first=Geoffrey|author-link=Geoffrey Hinton|pages=26|access-date=19 March 2020}}</ref> Unusually, it was not published in an article but merely described in a [[Coursera]] lecture.<ref>https://deepai.org/machine-learning-glossary-and-terms/rmsprop#:~:text=The%20RMSProp%20algorithm%20was%20introduced,its%20effectiveness%20in%20various%20applications.</ref><ref>This video at 36:37: https://www.youtube.com/watch?v=-eyhCTvrEtE&t=36m37s</ref> First, the running average is calculated in terms of the mean square, <math display="block">v(w,t):=\gamma v(w,t-1) + \left(1-\gamma\right) \left(\nabla Q_i(w)\right)^2</math> where <math>\gamma</math> is the forgetting factor. The concept of storing the historical gradient as a sum of squares is borrowed from AdaGrad, but "forgetting" is introduced to address AdaGrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data.{{cn|date=June 2024}} The parameters are then updated as <math display="block">w:=w-\frac{\eta}{\sqrt{v(w,t)}}\nabla Q_i(w)</math> RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of [[Rprop]] and is capable of working with mini-batches as well, as opposed to only full batches.<ref name="rmsprop" />
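The per-parameter scaling of AdaGrad and RMSProp can be sketched as follows. This is an illustrative example only: the data, base learning rates, forgetting factor, and the small constant added to the denominator for numerical safety are assumptions, not values prescribed by the original publications.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

def grad_Qi(w, i):
    return -2.0 * (y[i] - X[i] @ w) * X[i]

eta_adagrad, eta_rmsprop = 0.5, 0.01   # illustrative base learning rates (assumed)
gamma, eps = 0.9, 1e-8                 # forgetting factor and small safety constant (assumed)

# AdaGrad: accumulate the sum of squared gradients per coordinate.
w, G_diag = np.zeros(3), np.zeros(3)
for _ in range(5000):
    g = grad_Qi(w, rng.integers(len(y)))
    G_diag += g ** 2                              # G_{j,j} = sum_tau g_{tau,j}^2
    w -= eta_adagrad / (np.sqrt(G_diag) + eps) * g

# RMSProp: replace the sum by an exponentially decaying average.
w2, v = np.zeros(3), np.zeros(3)
for _ in range(5000):
    g = grad_Qi(w2, rng.integers(len(y)))
    v = gamma * v + (1.0 - gamma) * g ** 2        # running mean-square v(w, t)
    w2 -= eta_rmsprop / (np.sqrt(v) + eps) * g

print(w, w2)
</syntaxhighlight>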
===Adam===
''Adam''<ref name="Adam2014">{{cite arXiv |first1=Diederik |last1=Kingma |first2=Jimmy |last2=Ba |eprint=1412.6980 |title=Adam: A Method for Stochastic Optimization |year=2014 |class=cs.LG }}</ref> (short for Adaptive Moment Estimation) is a 2014 update to the ''RMSProp'' optimizer combining it with the main feature of the ''momentum method''.<ref>{{cite web | url=https://www.oreilly.com/library/view/fundamentals-of-deep/9781491925607/ch04.html | title=4. Beyond Gradient Descent - Fundamentals of Deep Learning [Book] }}</ref> In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters <math>w^{(t)}</math> and a loss function <math>L^{(t)}</math>, where <math>t</math> indexes the current training iteration (indexed at <math>1</math>), Adam's parameter update is given by: <math display="block">m_w^{(t)} := \beta_1 m_w^{(t-1)} + \left(1 - \beta_1\right) \nabla_w L^{(t-1)}</math> <math display="block">v_w^{(t)} := \beta_2 v_w^{(t-1)} + \left(1 - \beta_2\right) \left(\nabla_w L^{(t-1)}\right)^2</math> <math display="block">\hat{m}_w^{(t)} = \frac{m_w^{(t)}}{1 - \beta_1^t}</math> <math display="block">\hat{v}_w^{(t)} = \frac{v_w^{(t)}}{1 - \beta_2^t}</math> <math display="block">w^{(t)} := w^{(t-1)} - \eta \frac{\hat{m}_w^{(t)}}{\sqrt{\hat{v}_w^{(t)}} + \varepsilon}</math> where <math>\varepsilon</math> is a small scalar (e.g. <math>10^{-8}</math>) used to prevent division by 0, and <math>\beta_1</math> (e.g. 0.9) and <math>\beta_2</math> (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting are done element-wise. As the exponential moving averages of the gradient <math>m_w^{(t)}</math> and the squared gradient <math>v_w^{(t)}</math> are initialized with a vector of 0's, the moment estimates are biased towards zero in the first training iterations. The factors <math>\tfrac{1}{1 - \beta_1^t}</math> and <math>\tfrac{1}{1 - \beta_2^t}</math> are introduced to compensate for this bias and obtain the corrected estimates <math>\hat{m}_w^{(t)}</math> and <math>\hat{v}_w^{(t)}</math>. The initial proof establishing the convergence of Adam was incomplete, and subsequent analysis has revealed that Adam does not converge for all convex objectives.<ref>{{cite conference |last1=Reddi |first1=Sashank J. |last2=Kale |first2=Satyen |last3=Kumar |first3=Sanjiv |date=2018 |title=On the Convergence of Adam and Beyond |url=https://openreview.net/forum?id=ryQu7f-RZ |conference=6th International Conference on Learning Representations (ICLR 2018) |arxiv=1904.09237 |doi=}}</ref><ref>{{Cite thesis |last=Rubio |first=David Martínez |title=Convergence Analysis of an Adaptive Method of Gradient Descent |date=2017 |access-date=5 January 2024 |degree=Master |publisher=University of Oxford |url=https://damaru2.github.io/convergence_analysis_hypergradient_descent/dissertation_hypergradients.pdf}}</ref> Despite this, ''Adam'' continues to be used due to its strong performance in practice.<ref>{{cite conference |last1=Zhang |first1=Yushun |last2=Chen |first2=Congliang |last3=Shi |first3=Naichen |last4=Sun |first4=Ruoyu |last5=Luo |first5=Zhi-Quan |date=2022 |title=Adam Can Converge Without Any Modification On Update Rules |conference=Advances in Neural Information Processing Systems 35 (NeurIPS 2022) |arxiv=2208.09632 |book-title=Advances in Neural Information Processing Systems 35}}</ref>
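A minimal sketch of the update above follows; the least-squares loss, synthetic data, and learning rate are illustrative assumptions (the values <math>\beta_1 = 0.9</math>, <math>\beta_2 = 0.999</math>, <math>\varepsilon = 10^{-8}</math> are the examples quoted above).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

eta, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8
w = np.zeros(3)
m = np.zeros(3)   # first-moment estimate m_w
v = np.zeros(3)   # second-moment estimate v_w

for t in range(1, 5001):
    i = rng.integers(len(y))
    g = -2.0 * (y[i] - X[i] @ w) * X[i]        # stochastic gradient of the loss
    m = beta1 * m + (1.0 - beta1) * g          # exponential average of gradients
    v = beta2 * v + (1.0 - beta2) * g ** 2     # exponential average of squared gradients
    m_hat = m / (1.0 - beta1 ** t)             # bias-corrected estimates
    v_hat = v / (1.0 - beta2 ** t)
    w -= eta * m_hat / (np.sqrt(v_hat) + eps)

print(w)
</syntaxhighlight>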
==== Variants ====
The popularity of ''Adam'' inspired many variants and enhancements. Some examples include:
* Nesterov-enhanced gradients: ''NAdam'',<ref>{{Cite journal |last=Dozat |first=T. |date=2016 |title=Incorporating Nesterov Momentum into Adam |s2cid=70293087 |language=en}}</ref> ''FASFA''<ref>{{Cite journal |last=Naveen |first=Philip |date=2022-08-09 |title=FASFA: A Novel Next-Generation Backpropagation Optimizer |url=http://dx.doi.org/10.36227/techrxiv.20427852.v1 |access-date=2022-11-19 |doi=10.36227/techrxiv.20427852.v1 }}</ref>
* varying interpretations of second-order information: ''Powerpropagation''<ref>{{Cite book |last=Whye |first=Schwarz, Jonathan Jayakumar, Siddhant M. Pascanu, Razvan Latham, Peter E. Teh, Yee |url=http://worldcat.org/oclc/1333722169 |title=Powerpropagation: A sparsity inducing weight reparameterisation |date=2021-10-01 |oclc=1333722169}}</ref> and ''AdaSqrt''<ref>{{Cite journal |last1=Hu |first1=Yuzheng |last2=Lin |first2=Licong |last3=Tang |first3=Shange |date=2019-12-20 |title=Second-order Information in First-order Optimization Methods |arxiv=1912.09926 }}</ref>
* using the [[Uniform norm|infinity norm]]: ''AdaMax''<ref name="Adam2014" />
* ''AMSGrad'',<ref>{{Cite journal |last1=Reddi |first1=Sashank J. |last2=Kale |first2=Satyen |last3=Kumar |first3=Sanjiv |date=2018 |title=On the Convergence of Adam and Beyond |arxiv=1904.09237 }}</ref> which improves convergence over ''Adam'' by using the maximum of past squared gradients instead of the exponential average.<ref>{{cite web | url=https://www.ruder.io/optimizing-gradient-descent/#amsgrad | title=An overview of gradient descent optimization algorithms | date=19 January 2016 }}</ref> ''AdamX''<ref>{{Cite journal |last1=Tran |first1=Phuong Thi |last2=Phong |first2=Le Trieu |date=2019 |title=On the Convergence Proof of AMSGrad and a New Version |url=https://ieeexplore.ieee.org/document/8713445 |journal=IEEE Access |volume=7 |pages=61706–61716 |doi=10.1109/ACCESS.2019.2916341 |issn=2169-3536|arxiv=1904.03590 |bibcode=2019IEEEA...761706T }}</ref> further improves convergence over ''AMSGrad''.
* ''AdamW'',<ref name="AdamW">{{cite journal |last1=Loshchilov |first1=Ilya |last2=Hutter |first2=Frank |date=4 January 2019 |title=Decoupled Weight Decay Regularization |arxiv=1711.05101}}</ref> which decouples the [[weight decay]] from the gradient-based update (see the sketch after this list).
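As one example of how such variants modify the base algorithm, ''AdamW'' applies the weight-decay term to the parameters directly instead of folding an ''ℓ''<sub>2</sub> penalty into the gradient. The sketch below reuses the setup of the Adam sketch above and differs only in the final update line; the decay coefficient <code>lam</code> is an illustrative assumption.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

eta, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8
lam = 0.01                     # weight-decay coefficient (illustrative assumption)
w, m, v = np.zeros(3), np.zeros(3), np.zeros(3)

for t in range(1, 5001):
    i = rng.integers(len(y))
    g = -2.0 * (y[i] - X[i] @ w) * X[i]
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g ** 2
    m_hat, v_hat = m / (1.0 - beta1 ** t), v / (1.0 - beta2 ** t)
    # Decoupled weight decay: the decay term acts on w directly,
    # rather than being added to the gradient g.
    w -= eta * (m_hat / (np.sqrt(v_hat) + eps) + lam * w)

print(w)
</syntaxhighlight>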
===Sign-based stochastic gradient descent===
Even though sign-based optimization goes back to the aforementioned ''Rprop'', in 2018 researchers tried to simplify Adam by removing the magnitude of the stochastic gradient from consideration and using only its sign.<ref>{{cite web | url=https://openreview.net/forum?id=S1EwLkW0W | title=Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients | date=15 February 2018 | last1=Balles | first1=Lukas | last2=Hennig | first2=Philipp }}</ref><ref>{{cite web | url=https://proceedings.mlr.press/v80/bernstein18a.html | title=SignSGD: Compressed Optimisation for Non-Convex Problems | date=3 July 2018 | pages=560–569 }}</ref> {{expand section|date=June 2023}}<!--https://arxiv.org/pdf/2305.14342.pdf has more literature-->
===Backtracking line search===
[[Backtracking line search]] is another variant of gradient descent; it is based on a condition known as the Armijo–Goldstein condition. Both backtracking line search and adaptive SGD allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which backtracking line search enjoys – namely that <math>f(x_{n+1})\leq f(x_n)</math> for all <math>n</math>. If the gradient of the cost function is globally Lipschitz continuous with Lipschitz constant <math>L</math>, and the learning rate is chosen to be of the order <math>1/L</math>, then the standard version of SGD is a special case of backtracking line search.
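A hedged sketch of a single backtracking step under the Armijo condition for a full-gradient update on a simple objective is given below; the quadratic test function and the constants <code>c</code> and <code>tau</code> are illustrative assumptions.
<syntaxhighlight lang="python">
import numpy as np

A = np.diag([1.0, 10.0])          # simple quadratic test objective (assumed)

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def backtracking_step(x, t0=1.0, c=1e-4, tau=0.5):
    """One gradient step with the step size chosen by backtracking."""
    g = grad_f(x)
    t = t0
    # Shrink t until the Armijo (sufficient decrease) condition holds,
    # which guarantees the descent property f(x_new) <= f(x).
    while f(x - t * g) > f(x) - c * t * (g @ g):
        t *= tau
    return x - t * g

x = np.array([3.0, -2.0])
for _ in range(50):
    x = backtracking_step(x)
print(x, f(x))   # x approaches the minimizer at the origin
</syntaxhighlight>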
===Second-order methods===
A stochastic analogue of the standard (deterministic) [[Newton's method in optimization|Newton–Raphson algorithm]] (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation{{Citation needed|date=April 2020}}. A method that uses direct measurements of the [[Hessian matrix|Hessian matrices]] of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer.<ref>{{cite journal |first1=R. H. |last1=Byrd |first2=S. L. |last2=Hansen |first3=J. |last3=Nocedal |first4=Y. |last4=Singer |title=A Stochastic Quasi-Newton method for Large-Scale Optimization |journal=SIAM Journal on Optimization |volume=26 |issue=2 |pages=1008–1031 |year=2016 |doi=10.1137/140954362 |arxiv=1401.7020 |s2cid=12396034 }}</ref> However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others.<ref>{{cite journal |first=J. C. |last=Spall |year=2000 |title=Adaptive Stochastic Approximation by the Simultaneous Perturbation Method |journal=IEEE Transactions on Automatic Control |volume=45 |issue= 10|pages=1839−1853 |doi=10.1109/TAC.2000.880982 }}</ref><ref>{{cite journal |first=J. C. |last=Spall |year=2009 |title=Feedback and Weighting Mechanisms for Improving Jacobian Estimates in the Adaptive Simultaneous Perturbation Algorithm |journal=IEEE Transactions on Automatic Control |volume=54 |issue=6 |pages=1216–1229 |doi=10.1109/TAC.2009.2019793 |s2cid=3564529 }}</ref><ref>{{cite book |first1=S. |last1=Bhatnagar |first2=H. L. |last2=Prasad |first3=L. A. |last3=Prashanth |year=2013 |title=Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods |location=London |publisher=Springer |isbn=978-1-4471-4284-3 }}</ref> (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.<ref>{{cite journal |first=D. |last=Ruppert |title=A Newton-Raphson Version of the Multivariate Robbins-Monro Procedure |journal=[[Annals of Statistics]] |volume=13 |issue=1 |pages=236–245 |year=1985 |doi=10.1214/aos/1176346589 |doi-access=free }}</ref>) Another approach to approximating the Hessian matrix is to replace it with the [[Fisher information]] matrix, which transforms the usual gradient into the natural gradient.<ref>{{cite journal |first1 = S. |last1 = Amari |year=1998 |title = Natural gradient works efficiently in learning |journal = Neural Computation| volume=10 | issue=2 |pages=251–276 |doi=10.1162/089976698300017746|s2cid = 207585383 }}</ref> These methods, which do not require direct Hessian information, are based on either the values of the summands in the above empirical risk function or the values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function. When the objective is a [[Non-linear least squares|nonlinear least-squares]] loss <math display="block"> Q(w) = \frac{1}{n} \sum_{i=1}^n Q_i(w) = \frac{1}{n} \sum_{i=1}^n (m(w;x_i)-y_i)^2, </math> where <math>m(w;x_i)</math> is the predictive model (e.g., a [[Neural network (machine learning)|deep neural network]]), the objective's structure can be exploited to estimate second-order information using gradients only. The resulting methods are simple and often effective.<ref>{{cite conference |last1=Brust |first1=J.J. |date=2021 |title=Nonlinear least squares for large-scale machine learning using stochastic Jacobian estimates |conference=ICML 2021 |arxiv=2107.05598 |book-title=Workshop: Beyond First Order Methods in Machine Learning}}</ref>
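A hedged sketch of one generic way to use this structure (not the specific algorithm of the cited work): for a single sample, the Gauss–Newton approximation of the Hessian of <math>(m(w;x_i)-y_i)^2</math> is <math>2\,\nabla m\,\nabla m^\mathsf{T}</math>, which is built from the model gradient alone, and a damped step with this rank-one curvature estimate has a closed form. The simple model, the damping <code>lam</code>, and the data below are illustrative assumptions.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative nonlinear model m(w; x) = w0 * (1 - exp(-w1 * x)) with synthetic data.
rng = np.random.default_rng(5)
x_data = rng.uniform(0.1, 3.0, size=300)
w_true = np.array([2.0, 1.5])

def model(w, x):
    return w[0] * (1.0 - np.exp(-w[1] * x))

def model_grad(w, x):                      # gradient of m with respect to w
    e = np.exp(-w[1] * x)
    return np.array([1.0 - e, w[0] * x * e])

y_data = model(w_true, x_data) + 0.05 * rng.normal(size=x_data.size)

lam = 1e-2                                 # Levenberg-Marquardt-style damping (assumed)
w = np.array([1.0, 1.0])
for _ in range(20000):
    i = rng.integers(x_data.size)
    r = model(w, x_data[i]) - y_data[i]    # residual m(w; x_i) - y_i
    u = model_grad(w, x_data[i])
    # Damped Gauss-Newton step for the single-sample loss (m - y)^2:
    # (2*u*u^T + lam*I)^{-1} * 2*r*u = 2*r*u / (lam + 2*||u||^2)  (Sherman-Morrison).
    w -= 2.0 * r * u / (lam + 2.0 * (u @ u))

print(w)   # typically close to w_true = [2.0, 1.5]
</syntaxhighlight>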