East Asian J. Appl. Math., 14 (2024), pp. 788-819.
Published online: 2024-09
In this article we introduce a novel numerical method that uses neural networks to solve the optimal transport problem and the related elliptic Monge-Ampère equation. It is one of the few numerical algorithms capable of solving this problem efficiently with the proper transport boundary condition. Unlike traditional deep learning approaches that recast partial differential equations (PDEs) as a single optimization problem, we adopt a relaxation algorithm that splits the problem into three sub-optimization problems, each of which is easy to solve. The algorithm not only obtains the mapping that solves the optimal mass transport problem, but can also recover the unique convex solution of the related elliptic Monge-Ampère equation from that mapping using deep input convex neural networks, so that second-order partial derivatives are avoided. The method scales to high-dimensional problems and has the additional advantage that the target domain may be non-convex. We present the method and several numerical experiments.
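For readers unfamiliar with the architecture mentioned above, the following is a minimal PyTorch sketch of an input convex neural network (in the style of Amos et al.) and of recovering a transport map as the gradient of the learned convex potential, which is why only first-order derivatives of the network are needed. The class and function names (ICNN, transport_map) and all hyperparameters are illustrative assumptions; the sketch does not implement the paper's three-subproblem relaxation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input convex neural network: the output is convex in the input because
    the weights acting on hidden activations are kept non-negative (via softplus)
    and the activation function is convex and non-decreasing."""
    def __init__(self, dim, hidden=64, depth=3):
        super().__init__()
        # Direct input-to-hidden connections (unconstrained weights are allowed here).
        self.input_layers = nn.ModuleList(
            [nn.Linear(dim, hidden) for _ in range(depth)])
        # Hidden-to-hidden weights, mapped through softplus to stay non-negative.
        self.hidden_weights = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(hidden, hidden)) for _ in range(depth - 1)])
        # Final non-negative combination of the last hidden layer.
        self.out_weight = nn.Parameter(0.1 * torch.randn(1, hidden))

    def forward(self, x):
        z = F.softplus(self.input_layers[0](x))
        for lin_x, W in zip(self.input_layers[1:], self.hidden_weights):
            # Non-negative weights on z preserve convexity in x.
            z = F.softplus(lin_x(x) + F.linear(z, F.softplus(W)))
        return F.linear(z, F.softplus(self.out_weight)).squeeze(-1)

def transport_map(phi, x):
    """Illustrative map T(x) = grad phi(x): the gradient of a convex potential,
    computed with autograd, so no second-order derivatives are formed."""
    x = x.requires_grad_(True)
    (grad,) = torch.autograd.grad(phi(x).sum(), x, create_graph=True)
    return grad

# Example usage (hypothetical dimensions):
# phi = ICNN(dim=2)
# x = torch.randn(128, 2)
# T_x = transport_map(phi, x)   # candidate transport of the sample points
```

This only illustrates the building block named in the abstract; training the potential against the source and target densities is the subject of the paper itself.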