Volume 16, Issue 4
A Convergence Study of SGD-Type Methods for Stochastic Optimization

Tiannan Xiao & Guoguo Yang

Numer. Math. Theor. Meth. Appl., 16 (2023), pp. 914-930.

Published online: 2023-11

  • Abstract

In this paper, we first reinvestigate the convergence of the vanilla SGD method in the $L^2$ sense under more general learning-rate conditions and a more general convexity assumption, which relaxes the restrictions on the learning rates and does not require the problem to be strongly convex. Then, using the Lyapunov function technique, we establish the convergence of the momentum SGD and Nesterov accelerated SGD methods for convex and non-convex problems under the $L$-smoothness assumption, which relaxes the bounded-gradient restriction to a certain extent. The convergence of time-averaged SGD is also analyzed.
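For orientation, the SGD-type iterations named in the abstract can be written in their standard generic forms (a sketch only; the momentum weights, learning-rate schedules, and exact parameterizations analyzed in the paper may differ). With stochastic gradient $g_k = \nabla f(x_k;\xi_k)$ and learning rate $\eta_k$:

$$\text{vanilla SGD:}\quad x_{k+1} = x_k - \eta_k\, g_k,$$

$$\text{momentum SGD:}\quad v_{k+1} = \beta\, v_k + g_k,\qquad x_{k+1} = x_k - \eta_k\, v_{k+1},$$

$$\text{Nesterov accelerated SGD:}\quad y_k = x_k + \beta_k\,(x_k - x_{k-1}),\qquad x_{k+1} = y_k - \eta_k\, \nabla f(y_k;\xi_k),$$

$$\text{time-averaged SGD:}\quad \bar{x}_K = \frac{1}{K}\sum_{k=1}^{K} x_k.$$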

  • AMS Subject Headings

60F05, 60J22, 37N40

  • Copyright

COPYRIGHT: © Global Science Press

  • Keywords

SGD, momentum SGD, Nesterov acceleration, time averaged SGD, convergence analysis, non-convex

  • BibTeX
@Article{NMTMA-16-914,
  author  = {Xiao, Tiannan and Yang, Guoguo},
  title   = {A Convergence Study of SGD-Type Methods for Stochastic Optimization},
  journal = {Numerical Mathematics: Theory, Methods and Applications},
  year    = {2023},
  volume  = {16},
  number  = {4},
  pages   = {914--930},
  issn    = {2079-7338},
  doi     = {10.4208/nmtma.OA-2022-0179},
  url     = {http://global-sci.org/intro/article_detail/nmtma/22116.html}
}
Xiao, Tiannan and Yang, Guoguo (2023). A Convergence Study of SGD-Type Methods for Stochastic Optimization. Numerical Mathematics: Theory, Methods and Applications, 16(4), 914-930. doi:10.4208/nmtma.OA-2022-0179