We introduce a new algorithm, extended regularized dual averaging (XRDA), for solving regularized stochastic optimization problems, which generalizes the regularized dual averaging (RDA) method. The main novelty of the method is that it allows flexible control of the backward step size. For instance, the backward step size used in RDA grows without bound, whereas in XRDA it can be kept bounded. We demonstrate experimentally that this additional control over the backward step size can speed up the convergence of the algorithm while preserving desired properties of the iterates, such as sparsity. Theoretically, we show that XRDA achieves the same convergence rate as RDA for general convex objectives.
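For context, the sketch below shows the standard closed-form l1-RDA update from the regularized dual averaging literature, in which the backward (proximal) scaling sqrt(t)/gamma grows without bound as t increases. The `bounded_backward_iterate` variant, the cap `s_max`, and all function names are illustrative assumptions added here to show what "keeping the backward step size bounded" could look like; they are not the paper's XRDA scheme, which should be consulted for the actual step-size control and its convergence analysis.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise soft-thresholding, the proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_rda_iterate(g_bar, t, lam, gamma):
    """Closed-form l1-RDA iterate (Xiao, 2010) with prox-function (1/2)||x||^2
    and beta_t = gamma * sqrt(t): the averaged subgradient g_bar is
    soft-thresholded at lam and scaled by sqrt(t)/gamma, so the effective
    backward step size grows without bound with t."""
    return -(np.sqrt(t) / gamma) * soft_threshold(g_bar, lam)

def bounded_backward_iterate(g_bar, t, lam, gamma, s_max):
    """Hypothetical illustration only (NOT the paper's XRDA update):
    capping the scaling at s_max keeps the backward step size bounded."""
    return -min(np.sqrt(t) / gamma, s_max) * soft_threshold(g_bar, lam)

# Toy usage: maintain a running average of noisy subgradients and apply the update.
rng = np.random.default_rng(0)
g_bar = np.zeros(5)
for t in range(1, 101):
    g = rng.normal(size=5) + np.array([1.0, -1.0, 0.0, 0.2, -0.2])  # noisy subgradient
    g_bar += (g - g_bar) / t                                        # running average
    x_rda = l1_rda_iterate(g_bar, t, lam=0.5, gamma=5.0)
    x_bnd = bounded_backward_iterate(g_bar, t, lam=0.5, gamma=5.0, s_max=1.0)
print("RDA iterate:     ", np.round(x_rda, 3))
print("bounded variant: ", np.round(x_bnd, 3))
```

In both versions the soft-thresholding step zeroes out coordinates whose averaged subgradient stays below lam, which is the source of the sparsity property mentioned in the abstract; the bounded variant only changes how aggressively the surviving coordinates are scaled.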