Volume 35, Issue 4
ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks

Jinshan Zeng & Wotao Yin

J. Comp. Math., 35 (2017), pp. 383-396.

Published online: 2017-08

[An open-access article; the PDF is free to any online user.]

  • Abstract

In this note, we extend the algorithms Extra [13] and subgradient-push [10] to a new algorithm, ExtraPush, for consensus optimization with convex differentiable objective functions over a directed network. When the stationary distribution of the network can be computed in advance, we propose a simplified algorithm called Normalized ExtraPush. Like Extra, both ExtraPush and Normalized ExtraPush can iterate with a fixed step size. Unlike Extra, however, they can take a column-stochastic mixing matrix, which is not necessarily doubly stochastic; they therefore remove Extra's restriction to undirected networks. Subgradient-push, although it also works for directed networks, is slower on the same type of problem because it must use a sequence of diminishing step sizes.
We present a preliminary analysis of ExtraPush under the assumption that its iterate sequence is bounded. For Normalized ExtraPush, we show that it naturally produces a bounded, linearly convergent sequence provided that the objective function is strongly convex.
In our numerical experiments, ExtraPush and Normalized ExtraPush performed similarly well. They are significantly faster than subgradient-push, even when we hand-optimize the step sizes for the latter.
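To make the iteration concrete, here is a minimal Python sketch, written for this page rather than taken from the paper, of an ExtraPush-style update: an Extra-type correction driven by a column-stochastic mixing matrix A, with push-sum weights w de-biasing the iterates. The function name, the initialization, and the toy problem at the bottom are illustrative assumptions; see the paper for the exact recursion and step-size conditions.

import numpy as np

def extrapush_sketch(A, grad_f, x0, alpha=0.01, iters=500):
    """ExtraPush-style iteration (illustrative sketch, not the paper's pseudocode).

    A      : (n, n) column-stochastic mixing matrix (columns sum to 1);
             A[i, j] > 0 only if node j can send to node i in the digraph.
    grad_f : grad_f(X) returns the stacked local gradients; row i is the
             gradient of node i's objective evaluated at X[i].
    x0     : (n, p) initial local variables, one row per node.
    """
    n = A.shape[0]
    A_bar = 0.5 * (np.eye(n) + A)      # averaged mixing matrix, as in Extra
    w = np.ones(n)                     # push-sum weights, w^0 = 1
    z_prev = x0.copy()
    w = A @ w                          # first step: push-sum plus a gradient step
    z = A @ z_prev - alpha * grad_f(z_prev)
    x_prev, x = z_prev, z / w[:, None]
    for _ in range(iters):
        # Extra-type correction with a *fixed* step size alpha.
        z_next = (z + A @ z - A_bar @ z_prev
                  - alpha * (grad_f(x) - grad_f(x_prev)))
        w = A @ w                      # weights track A's stationary vector
        z_prev, z = z, z_next
        x_prev, x = x, z / w[:, None]  # de-bias: x = z / w, node by node
    return x

# Toy check: node i minimizes 0.5*||x - b[i]||^2, so the consensus
# minimizer of the summed objective is the average of the rows of b.
rng = np.random.default_rng(0)
n, p = 5, 3
M = (rng.random((n, n)) < 0.6) * rng.random((n, n))   # random digraph weights
np.fill_diagonal(M, 1.0)                              # keep self-loops
A = M / M.sum(axis=0, keepdims=True)                  # make column-stochastic
b = rng.standard_normal((n, p))
x = extrapush_sketch(A, lambda X: X - b, np.zeros((n, p)), alpha=0.1, iters=2000)
print(np.abs(x - b.mean(axis=0)).max())               # near 0 if converged

In the Normalized ExtraPush variant described above, the running weights w would be replaced by the pre-computed stationary distribution of A, held fixed across iterations.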

  • AMS Subject Headings

90C25, 90C30.

  • Copyright

COPYRIGHT: © Global Science Press

  • Email address

jinshanzeng@jxnu.edu.cn (Jinshan Zeng)

wotaoyin@math.ucla.edu (Wotao Yin)

  • BibTex
  • RIS
  • TXT
@Article{JCM-35-383,
  author  = {Zeng, Jinshan and Yin, Wotao},
  title   = {ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks},
  journal = {Journal of Computational Mathematics},
  year    = {2017},
  volume  = {35},
  number  = {4},
  pages   = {383--396},
  issn    = {1991-7139},
  doi     = {https://doi.org/10.4208/jcm.1606-m2015-0452},
  url     = {http://global-sci.org/intro/article_detail/jcm/10022.html}
}
TY  - JOUR
T1  - ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks
AU  - Zeng, Jinshan
AU  - Yin, Wotao
JO  - Journal of Computational Mathematics
VL  - 35
IS  - 4
SP  - 383
EP  - 396
PY  - 2017
DA  - 2017/08
SN  - 1991-7139
DO  - 10.4208/jcm.1606-m2015-0452
UR  - https://global-sci.org/intro/article_detail/jcm/10022.html
KW  - Decentralized optimization
KW  - Directed graph
KW  - Consensus
KW  - Non-doubly stochastic
KW  - Extra
ER  -

Zeng, Jinshan and Yin, Wotao. (2017). ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks. Journal of Computational Mathematics. 35 (4). 383-396. doi:10.4208/jcm.1606-m2015-0452