Commun. Comput. Phys., 31 (2022), pp. 1296-1316.
Published online: 2022-03
Objective functions in large-scale machine-learning and artificial-intelligence applications often live in high dimensions with strong non-convexity and a massive number of local minima. Gradient-based methods, such as the stochastic gradient method and Adam [15], and gradient-free methods, such as the consensus-based optimization (CBO) method, can be employed to find minima. In this work, based on the CBO method and Adam, we propose a consensus-based global optimization method with adaptive momentum estimation (Adam-CBO). Advantages of the Adam-CBO method include:
• It is capable of finding global minima of non-convex objective functions with high success rates and low costs. This is verified by finding the global minimizer of the 1000-dimensional Rastrigin function with a 100% success rate, at a cost that grows only linearly with the dimensionality.
• It can handle non-differentiable activation functions and thus approximate low-regularity functions with better accuracy. This is confirmed by solving a machine-learning task for partial differential equations with low-regularity solutions, where the Adam-CBO method provides better results than Adam.
• It is robust in the sense that its convergence is insensitive to the learning rate, as shown by a linear stability analysis. This is confirmed by finding the minimizer of a quadratic function.
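
To make the combination of CBO and Adam concrete, below is a minimal sketch of a CBO-style particle update in which Adam-like first- and second-moment estimates are applied to the consensus drift, tested on a low-dimensional Rastrigin function. It is only an illustrative reading of the abstract, not the authors' exact Adam-CBO scheme; the parameter names (`beta`, `lr`, `sigma`, `n_particles`) and the specific form of the noise term are assumptions.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin test function; global minimum 0 at the origin."""
    return 10.0 * x.shape[-1] + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x), axis=-1)

def adam_cbo_sketch(f, dim, n_particles=200, n_steps=2000, beta=30.0,
                    lr=0.1, beta1=0.9, beta2=0.999, sigma=1.0, eps=1e-8, seed=0):
    """Illustrative CBO update with Adam-style momentum on the consensus drift.
    A sketch of the idea only, not the authors' exact Adam-CBO algorithm."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.12, 5.12, size=(n_particles, dim))  # particle ensemble
    m = np.zeros_like(X)  # first-moment estimate of the drift
    v = np.zeros_like(X)  # second-moment estimate of the drift
    for t in range(1, n_steps + 1):
        fx = f(X)
        # Weighted consensus point: Gibbs weights concentrate on low-f particles.
        w = np.exp(-beta * (fx - fx.min()))              # shift for numerical stability
        x_bar = (w[:, None] * X).sum(axis=0) / w.sum()
        drift = X - x_bar                                # pull toward the consensus point
        # Adam-style first/second moment estimates with bias correction.
        m = beta1 * m + (1.0 - beta1) * drift
        v = beta2 * v + (1.0 - beta2) * drift**2
        m_hat = m / (1.0 - beta1**t)
        v_hat = v / (1.0 - beta2**t)
        # Deterministic step toward the consensus plus scaled exploration noise.
        noise = sigma * np.sqrt(lr) * np.abs(drift) * rng.standard_normal(X.shape)
        X = X - lr * m_hat / (np.sqrt(v_hat) + eps) + noise
    return x_bar

if __name__ == "__main__":
    x_star = adam_cbo_sketch(rastrigin, dim=10)
    print("f(x*) =", rastrigin(x_star))  # near 0 if the consensus found the global minimum
```

The point of the sketch is only to show where the Adam-style moment estimates enter the CBO dynamics: the gradient of the objective is never evaluated, and the drift toward the weighted consensus point plays the role that the stochastic gradient plays in Adam.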