TY - JOUR
T1 - Sparse Deep Neural Network for Nonlinear Partial Differential Equations
AU - Xu, Yuesheng
AU - Zeng, Taishan
JO - Numerical Mathematics: Theory, Methods and Applications
VL - 16
IS - 1
SP - 58
EP - 78
PY - 2023
DA - 2023/01
DO - 10.4208/nmtma.OA-2022-0104
UR - https://global-sci.org/intro/article_detail/nmtma/21343.html
KW - Sparse approximation
KW - deep learning
KW - nonlinear partial differential equations
KW - sparse regularization
KW - adaptive approximation
AB -
More capable learning models are in demand as the amount of data available in applications continues to grow. The data we encounter often have embedded sparsity structures; that is, when represented in an appropriate basis, their energy concentrates on a small number of basis functions. This paper presents a numerical study of the adaptive approximation, by deep neural networks (DNNs) with a multi-parameter sparse regularization, of solutions of nonlinear partial differential equations that may have singularities. Noting that DNNs possess an intrinsic multi-scale structure favorable for the adaptive representation of functions, we employ a penalty with multiple parameters to develop DNNs with a multi-scale sparse regularization (SDNN) that effectively represent functions with certain singularities. We then apply the proposed SDNN to the numerical solution of the Burgers equation and the Schrödinger equation. Numerical examples confirm that the solutions generated by the proposed SDNN are sparse and accurate.