Commun. Comput. Phys., 36 (2024), pp. 200-220.
Published online: 2024-07
Algebraic multigrid (AMG) is one of the most widely used iterative algorithms for solving large sparse linear systems $Ax=b$. In AMG, the coarse grid is a key component that determines the efficiency of the algorithm, and its construction relies on the strong threshold parameter $\theta$. This parameter is generally chosen empirically; many current AMG solvers default to 0.25 for 2D problems and 0.5 for 3D problems. However, for many practical problems the quality of the coarse grid and the efficiency of the AMG algorithm are sensitive to $\theta$: the default value is rarely optimal and is sometimes far from it. How to choose a better $\theta$ is therefore an important question. In this paper, we propose a deep-learning-based auto-tuning method, AutoAMG($\theta$), for the multiscale sparse linear systems that are common in practical problems. The method uses a graph neural network (GNN) to extract matrix features and a multilayer perceptron (MLP) to build the mapping between matrix features and the optimal $\theta$, so that $\theta$ can be predicted adaptively for different matrices. Numerical experiments show that AutoAMG($\theta$) achieves significant speedups compared to the default $\theta$ value.
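To make the role of $\theta$ concrete, the classical Ruge-Stüben strength-of-connection test that the strong threshold controls can be sketched as follows. This is a minimal dense-matrix illustration, not the paper's implementation; the function name and the small Poisson example are our own:

```python
import numpy as np

def strong_connections(A, theta):
    """Classical strength-of-connection test: j is strongly connected
    to i when  -A[i, j] >= theta * max_{k != i} (-A[i, k])."""
    n = A.shape[0]
    strong = []
    for i in range(n):
        # Largest negative off-diagonal magnitude in row i
        bound = theta * max(-A[i, k] for k in range(n) if k != i)
        strong.append({j for j in range(n)
                       if j != i and -A[i, j] >= bound})
    return strong

# 1D Poisson matrix: 2 on the diagonal, -1 on the off-diagonals
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# With theta = 0.25 every off-diagonal neighbour is strong,
# so the coarsening sees the full grid connectivity.
print(strong_connections(A, 0.25))
```

Larger $\theta$ values discard weaker connections, which changes which points survive to the coarse grid; this is why solver efficiency can depend sharply on the chosen threshold.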