Volume 28, Issue 5
Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning

Yufei Wang, Ziju Shen, Zichao Long & Bin Dong

Commun. Comput. Phys., 28 (2020), pp. 2158-2179.

Published online: 2020-11

  • Abstract

Conservation laws are considered to be fundamental laws of nature, with broad applications in many fields, including physics, chemistry, biology, geology, and engineering. Solving the differential equations associated with conservation laws is a major branch of computational mathematics. The recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing has attracted much attention from the computational mathematics community and inspired many intriguing works combining machine learning with traditional methods. In this paper, we are the first to view numerical PDE solvers as a Markov decision process (MDP) and to use (deep) reinforcement learning (RL) to learn new solvers. As a proof of concept, we focus on 1-dimensional scalar conservation laws. We deploy the machinery of deep reinforcement learning to train a policy network that decides how the numerical solution should be approximated in a sequential and spatio-temporally adaptive manner. We show that the problem of solving conservation laws can be naturally viewed as a sequential decision-making process, and that numerical schemes learned in this way can easily enforce long-term accuracy. Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach. In other words, the proposed method is capable of learning how to discretize for a given situation, mimicking human experts. Finally, we provide details on how the policy network is trained, how well it performs compared with state-of-the-art numerical solvers such as WENO schemes as well as the supervised-learning-based approach L3D and PINN, and how well it generalizes.
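
To make the abstract's MDP framing concrete, below is a minimal sketch of how solving a 1D scalar conservation law can be cast as an environment: the per-interface state is a local stencil of solution values, the action supplies weights over candidate numerical fluxes, the transition is one conservative explicit time step, and the reward penalizes deviation from a reference solution. Everything in the sketch (the class name StencilFluxEnv, the linear-advection test problem with a known exact solution, the two candidate fluxes, and the reward definition) is an illustrative assumption, not the authors' implementation; the paper works with Burgers-type equations, WENO-style stencil weights, and a fine-grid reference solution.

```python
import numpy as np

class StencilFluxEnv:
    """Toy MDP: advance 1D linear advection (u_t + u_x = 0, periodic BCs)
    one explicit time step per environment step."""

    def __init__(self, n_cells=100, cfl=0.4, t_final=1.0):
        self.n = n_cells
        self.dx = 1.0 / n_cells
        self.dt = cfl * self.dx          # unit advection speed
        self.t_final = t_final
        self.x = np.linspace(0.0, 1.0, n_cells, endpoint=False)

    def reset(self):
        self.t = 0.0
        self.u = np.sin(2.0 * np.pi * self.x)   # smooth initial condition
        return self._states()

    def _states(self):
        # Per-interface state: a 4-point stencil of neighboring cell values.
        u = self.u
        return np.stack([np.roll(u, 2), np.roll(u, 1), u, np.roll(u, -1)], axis=1)

    def step(self, actions):
        # actions: (n, 2) nonnegative weights over two candidate fluxes
        # at interface i-1/2 (flux f(u) = u for unit-speed advection).
        u = self.u
        f1 = np.roll(u, 1)                                # first-order upwind
        f2 = 1.5 * np.roll(u, 1) - 0.5 * np.roll(u, 2)    # second-order upwind-biased
        w = actions / (actions.sum(axis=1, keepdims=True) + 1e-12)
        flux = w[:, 0] * f1 + w[:, 1] * f2
        # Conservative update: u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2}).
        self.u = u - self.dt / self.dx * (np.roll(flux, -1) - flux)
        self.t += self.dt
        # Reward: negative pointwise error against the exact solution (the paper
        # instead measures error relative to a fine-grid reference solution).
        exact = np.sin(2.0 * np.pi * (self.x - self.t))
        reward = -np.abs(self.u - exact)
        done = self.t >= self.t_final
        return self._states(), reward, done

# Rollout with a fixed policy (always the first-order flux); an RL-trained
# policy network would instead map each stencil state to the flux weights.
env = StencilFluxEnv()
s = env.reset()
done = False
while not done:
    a = np.tile([1.0, 0.0], (env.n, 1))   # placeholder for policy(s)
    s, r, done = env.step(a)
print("mean per-cell reward at final time:", float(r.mean()))
```

In the paper's setting, a policy network trained with deep RL replaces the fixed placeholder policy above, choosing the local discretization cell by cell from the current state of the solution.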

  • AMS Subject Headings

65M06, 68T05, 49N90

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTeX
@Article{CiCP-28-2158, author = {Wang, Yufei and Shen, Ziju and Long, Zichao and Dong, Bin}, title = {Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning}, journal = {Communications in Computational Physics}, year = {2020}, volume = {28}, number = {5}, pages = {2158--2179}, issn = {1991-7120}, doi = {10.4208/cicp.OA-2020-0194}, url = {http://global-sci.org/intro/article_detail/cicp/18408.html}}

  • RIS
TY - JOUR
T1 - Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning
AU - Wang, Yufei
AU - Shen, Ziju
AU - Long, Zichao
AU - Dong, Bin
JO - Communications in Computational Physics
VL - 28
IS - 5
SP - 2158
EP - 2179
PY - 2020
DA - 2020/11
SN - 1991-7120
DO - 10.4208/cicp.OA-2020-0194
UR - https://global-sci.org/intro/article_detail/cicp/18408.html
KW - Conservation laws
KW - deep reinforcement learning
KW - finite difference approximation
KW - WENO
ER -

  • TXT
Wang, Yufei, Shen, Ziju, Long, Zichao and Dong, Bin. (2020). Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning. Communications in Computational Physics. 28 (5). 2158-2179. doi:10.4208/cicp.OA-2020-0194