Finite Neuron Method and Convergence Analysis

Year:    2020

Author:    Jinchao Xu

Communications in Computational Physics, Vol. 28 (2020), Iss. 5, pp. 1707–1745

Abstract

We study a family of $H^m$-conforming piecewise polynomials based on the artificial neural network, referred to as the finite neuron method (FNM), for the numerical solution of $2m$-th-order partial differential equations in $\mathbb{R}^d$ for any $m,d\geq 1$, and then provide a convergence analysis for this method. Given a general domain $\Omega\subset\mathbb{R}^d$ and a partition $\mathcal{T}_h$ of $\Omega$, it is still an open problem in general how to construct a conforming finite element subspace of $H^m(\Omega)$ that has adequate approximation properties. By using techniques from artificial neural networks, we construct a family of $H^m$-conforming functions consisting of piecewise polynomials of degree $k$ for any $k\geq m$, and we further obtain error estimates when they are applied to solve the elliptic boundary value problem of any order in any dimension. For example, the error estimate $\|u-u_N\|_{H^m(\Omega)}=\mathcal{O}(N^{-\frac{1}{2}-\frac{1}{d}})$ is obtained for the error between the exact solution $u$ and the finite neuron approximation $u_N$. A discussion is also provided on the differences and relationship between the finite neuron method and finite element methods (FEM). For example, for the finite neuron method, the underlying finite element grids are not given a priori, and the discrete solution can be obtained only by solving a non-linear and non-convex optimization problem. Despite the many desirable theoretical properties of the finite neuron method analyzed in the paper, its practical value requires further investigation, as the aforementioned non-linear and non-convex optimization problem can be expensive and challenging to solve. For completeness and the convenience of the reader, some basic known results and their proofs are included.
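The abstract describes obtaining the discrete solution by minimizing a non-convex functional over a neural-network function class. The following is a minimal illustrative sketch (not the paper's method or code) of this Ritz-style idea in one dimension: a shallow $\mathrm{ReLU}^k$ network is fitted to $-u''=f$ on $(0,1)$ by minimizing a boundary-penalized energy. All names, the penalty weight, and the choice of optimizer are assumptions made for the example.

```python
# Illustrative 1-D finite-neuron (Ritz) sketch: approximate -u'' = f on (0,1),
# u(0) = u(1) = 0, with a shallow ReLU^k network
#   u_N(x) = sum_i a_i * relu(w_i * x + b_i)**k,
# by minimizing the penalized energy
#   E(u) = int_0^1 (1/2 * u'^2 - f*u) dx + beta * (u(0)^2 + u(1)^2).
# The parameters (k, N, beta, optimizer) are illustrative choices.
import numpy as np
from scipy.optimize import minimize

k, N = 2, 12                        # ReLU^k power and number of neurons
x = np.linspace(0.0, 1.0, 401)      # quadrature nodes
dx = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)    # source so the exact solution is sin(pi*x)

def unpack(p):
    return p[:N], p[N:2 * N], p[2 * N:]          # outer weights a, inner w, b

def u_and_du(p, t):
    a, w, b = unpack(p)
    z = np.maximum(w[None, :] * t[:, None] + b[None, :], 0.0)
    u = (a * z**k).sum(axis=1)
    du = (a * k * z**(k - 1) * w).sum(axis=1)    # derivative of each neuron
    return u, du

def energy(p, beta=100.0):
    u, du = u_and_du(p, x)
    ritz = dx * np.sum(0.5 * du**2 - f * u)      # variational (Ritz) energy
    return ritz + beta * (u[0]**2 + u[-1]**2)    # penalized boundary conditions

rng = np.random.default_rng(0)
p0 = rng.normal(scale=0.5, size=3 * N)
res = minimize(energy, p0, method="BFGS")        # non-convex: depends on p0

u_h, _ = u_and_du(res.x, x)
err = np.max(np.abs(u_h - np.sin(np.pi * x)))
print(f"max error vs sin(pi x): {err:.3e}")
```

Because the objective is non-convex, the quality of the minimizer depends on the random initialization; this is exactly the practical difficulty the abstract notes for the finite neuron method.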

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/10.4208/cicp.OA-2020-0191

Published online:    2020-01

Copyright:    © Global Science Press

Pages:    39

Keywords:    Finite neuron method, finite element method, neural network, error estimate.

Author Details

Jinchao Xu

  1. Characterization of the Variation Spaces Corresponding to Shallow Neural Networks

    Siegel, Jonathan W. | Xu, Jinchao

    Constructive Approximation, Vol. 57 (2023), Iss. 3 P.1109

    https://doi.org/10.1007/s00365-023-09626-4 [Citations: 4]
  2. High-order approximation rates for shallow neural networks with cosine and ReLU activation functions

    Siegel, Jonathan W. | Xu, Jinchao

    Applied and Computational Harmonic Analysis, Vol. 58 (2022), Iss. P.1

    https://doi.org/10.1016/j.acha.2021.12.005 [Citations: 20]
  3. Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks

    Parhi, Rahul | Nowak, Robert D.

    IEEE Transactions on Information Theory, Vol. 69 (2023), Iss. 2 P.1125

    https://doi.org/10.1109/TIT.2022.3208653 [Citations: 8]
  4. Computational Science – ICCS 2022

    CNNs with Compact Activation Function

    Wang, Jindong | Xu, Jinchao | Zhu, Jianqing

    2022

    https://doi.org/10.1007/978-3-031-08754-7_40 [Citations: 2]
  5. Approximation error estimates by noise‐injected neural networks

    Akiyama, Keito

    Mathematical Methods in the Applied Sciences, Vol. 47 (2024), Iss. 18 P.14563

    https://doi.org/10.1002/mma.10288 [Citations: 0]
  6. Neural control of discrete weak formulations: Galerkin, least squares & minimal-residual methods with quasi-optimal weights

    Brevis, Ignacio | Muga, Ignacio | van der Zee, Kristoffer G.

    Computer Methods in Applied Mechanics and Engineering, Vol. 402 (2022), Iss. P.115716

    https://doi.org/10.1016/j.cma.2022.115716 [Citations: 6]
  7. Generalization of PINNs for elliptic interface problems

    Jiang, Xuelian | Wang, Ziming | Bao, Wei | Xu, Yingxiang

    Applied Mathematics Letters, Vol. 157 (2024), Iss. P.109175

    https://doi.org/10.1016/j.aml.2024.109175 [Citations: 0]
  8. A deep First-Order System Least Squares method for solving elliptic PDEs

    Bersetche, Francisco M. | Borthagaray, Juan Pablo

    Computers & Mathematics with Applications, Vol. 129 (2023), Iss. P.136

    https://doi.org/10.1016/j.camwa.2022.11.014 [Citations: 5]
  9. $H^m$-Conforming Virtual Elements in Arbitrary Dimension

    Chen, Chunyu | Huang, Xuehai | Wei, Huayi

    SIAM Journal on Numerical Analysis, Vol. 60 (2022), Iss. 6 P.3099

    https://doi.org/10.1137/21M1440323 [Citations: 12]
  10. SVD-PINNs: Transfer Learning of Physics-Informed Neural Networks via Singular Value Decomposition

    Gao, Yihang | Cheung, Ka Chun | Ng, Michael K.

    2022 IEEE Symposium Series on Computational Intelligence (SSCI), (2022), P.1443

    https://doi.org/10.1109/SSCI51031.2022.10022281 [Citations: 5]
  11. A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems

    Lu, Jianfeng | Lu, Yulong

    Communications of the American Mathematical Society, Vol. 2 (2022), Iss. 1 P.1

    https://doi.org/10.1090/cams/5 [Citations: 9]
  12. Adaptive two-layer ReLU neural network: II. Ritz approximation to elliptic PDEs

    Liu, Min | Cai, Zhiqiang

    Computers & Mathematics with Applications, Vol. 113 (2022), Iss. P.103

    https://doi.org/10.1016/j.camwa.2022.03.010 [Citations: 7]
  13. ReLU deep neural networks from the hierarchical basis perspective

    He, Juncai | Li, Lin | Xu, Jinchao

    Computers & Mathematics with Applications, Vol. 120 (2022), Iss. P.105

    https://doi.org/10.1016/j.camwa.2022.06.006 [Citations: 19]
  14. Uniform convergence guarantees for the deep Ritz method for nonlinear problems

    Dondl, Patrick | Müller, Johannes | Zeinhofer, Marius

    Advances in Continuous and Discrete Models, Vol. 2022 (2022), Iss. 1

    https://doi.org/10.1186/s13662-022-03722-8 [Citations: 3]
  15. Neural Network Methods Based on Efficient Optimization Algorithms for Solving Impulsive Differential Equations

    Xing, Baixue | Liu, Hongliang | Tang, Xiao | Shi, Long

    IEEE Transactions on Artificial Intelligence, Vol. 5 (2024), Iss. 3 P.1067

    https://doi.org/10.1109/TAI.2022.3217207 [Citations: 3]
  16. Neural and spectral operator surrogates: unified construction and expression rate bounds

    Herrmann, Lukas | Schwab, Christoph | Zech, Jakob

    Advances in Computational Mathematics, Vol. 50 (2024), Iss. 4

    https://doi.org/10.1007/s10444-024-10171-2 [Citations: 0]
  17. Deep Ritz method with adaptive quadrature for linear elasticity

    Liu, Min | Cai, Zhiqiang | Ramani, Karthik

    Computer Methods in Applied Mechanics and Engineering, Vol. 415 (2023), Iss. P.116229

    https://doi.org/10.1016/j.cma.2023.116229 [Citations: 3]
  18. Imaging conductivity from current density magnitude using neural networks*

    Jin, Bangti | Li, Xiyao | Lu, Xiliang

    Inverse Problems, Vol. 38 (2022), Iss. 7 P.075003

    https://doi.org/10.1088/1361-6420/ac6d03 [Citations: 7]
  19. A lowest-degree quasi-conforming finite element de Rham complex on general quadrilateral grids by piecewise polynomials

    Quan, Qimeng | Ji, Xia | Zhang, Shuo

    Calcolo, Vol. 59 (2022), Iss. 1

    https://doi.org/10.1007/s10092-021-00447-0 [Citations: 0]
  20. A Linear Finite Difference Scheme for the Two-Dimensional Nonlinear Schrödinger Equation with Fractional Laplacian

    Wang, Yanyan | Hao, Zhaopeng | Du, Rui

    Journal of Scientific Computing, Vol. 90 (2022), Iss. 1

    https://doi.org/10.1007/s10915-021-01703-9 [Citations: 8]
  21. PINNs and GaLS: A Priori Error Estimates for Shallow Physics Informed Neural Networks Applied to Elliptic Problems

    Zerbinati, U.

    IFAC-PapersOnLine, Vol. 55 (2022), Iss. 20 P.61

    https://doi.org/10.1016/j.ifacol.2022.09.072 [Citations: 3]
  22. Computational Science – ICCS 2023

    Fast Solver for Advection Dominated Diffusion Using Residual Minimization and Neural Networks

    Służalec, Tomasz | Paszyński, Maciej

    2023

    https://doi.org/10.1007/978-3-031-36021-3_52 [Citations: 0]
  23. An Efficient and Fast Sparse Grid Algorithm for High-Dimensional Numerical Integration

    Zhong, Huicong | Feng, Xiaobing

    Mathematics, Vol. 11 (2023), Iss. 19 P.4191

    https://doi.org/10.3390/math11194191 [Citations: 0]
  24. Learn bifurcations of nonlinear parametric systems via equation-driven neural networks

    Hao, Wenrui | Zheng, Chunyue

    Chaos: An Interdisciplinary Journal of Nonlinear Science, Vol. 32 (2022), Iss. 1

    https://doi.org/10.1063/5.0078306 [Citations: 0]
  25. Randomized Newton’s Method for Solving Differential Equations Based on the Neural Network Discretization

    Chen, Qipin | Hao, Wenrui

    Journal of Scientific Computing, Vol. 92 (2022), Iss. 2

    https://doi.org/10.1007/s10915-022-01905-9 [Citations: 2]
  26. Two-layer networks with the $\text{ReLU}^k$ activation function: Barron spaces and derivative approximation

    Li, Yuanyuan | Lu, Shuai | Mathé, Peter | Pereverzev, Sergei V.

    Numerische Mathematik, Vol. 156 (2024), Iss. 1 P.319

    https://doi.org/10.1007/s00211-023-01384-6 [Citations: 1]
  27. A Construction of $C^r$ Conforming Finite Element Spaces in Any Dimension

    Hu, Jun | Lin, Ting | Wu, Qingyu

    Foundations of Computational Mathematics, Vol. (2023), Iss.

    https://doi.org/10.1007/s10208-023-09627-6 [Citations: 1]
  28. Optimal Convergence Rates for the Orthogonal Greedy Algorithm

    Siegel, Jonathan W. | Xu, Jinchao

    IEEE Transactions on Information Theory, Vol. 68 (2022), Iss. 5 P.3354

    https://doi.org/10.1109/TIT.2022.3147984 [Citations: 8]
  29. A finite difference scheme for the two-dimensional Gray-Scott equation with fractional Laplacian

    Lei, Su | Wang, Yanyan | Du, Rui

    Numerical Algorithms, Vol. 94 (2023), Iss. 3 P.1185

    https://doi.org/10.1007/s11075-023-01532-x [Citations: 1]
  30. Uniform approximation rates and metric entropy of shallow neural networks

    Ma, Limin | Siegel, Jonathan W. | Xu, Jinchao

    Research in the Mathematical Sciences, Vol. 9 (2022), Iss. 3

    https://doi.org/10.1007/s40687-022-00346-y [Citations: 3]
  31. ChebNet: Efficient and Stable Constructions of Deep Neural Networks with Rectified Power Units via Chebyshev Approximation

    Tang, Shanshan | Li, Bo | Yu, Haijun

    Communications in Mathematics and Statistics, Vol. (2024), Iss.

    https://doi.org/10.1007/s40304-023-00392-0 [Citations: 0]
  32. Gauss Newton Method for Solving Variational Problems of PDEs with Neural Network Discretizations

    Hao, Wenrui | Hong, Qingguo | Jin, Xianlin

    Journal of Scientific Computing, Vol. 100 (2024), Iss. 1

    https://doi.org/10.1007/s10915-024-02535-z [Citations: 0]
  33. Local randomized neural networks with hybridized discontinuous Petrov–Galerkin methods for Stokes–Darcy flows

    Dang, Haoning | Wang, Fei

    Physics of Fluids, Vol. 36 (2024), Iss. 8

    https://doi.org/10.1063/5.0218131 [Citations: 0]
  34. Greedy training algorithms for neural networks and applications to PDEs

    Siegel, Jonathan W. | Hong, Qingguo | Jin, Xianlin | Hao, Wenrui | Xu, Jinchao

    Journal of Computational Physics, Vol. 484 (2023), Iss. P.112084

    https://doi.org/10.1016/j.jcp.2023.112084 [Citations: 13]
  35. Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks

    Siegel, Jonathan W. | Xu, Jinchao

    Foundations of Computational Mathematics, Vol. 24 (2024), Iss. 2 P.481

    https://doi.org/10.1007/s10208-022-09595-3 [Citations: 18]
  36. Error analysis of deep Ritz methods for elliptic equations

    Jiao, Yuling | Lai, Yanming | Lo, Yisu | Wang, Yang | Yang, Yunfei

    Analysis and Applications, Vol. 22 (2024), Iss. 01 P.57

    https://doi.org/10.1142/S021953052350015X [Citations: 4]
  37. Approximation properties of deep ReLU CNNs

    He, Juncai | Li, Lin | Xu, Jinchao

    Research in the Mathematical Sciences, Vol. 9 (2022), Iss. 3

    https://doi.org/10.1007/s40687-022-00336-0 [Citations: 10]