Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
- Authors: Dinh Dũng
- Affiliations: Vietnam National University
- Issue: Vol 214, No 4 (2023)
- Pages: 38-75
- Section: Articles
- URL: https://bakhtiniada.ru/0368-8666/article/view/133511
- DOI: https://doi.org/10.4213/sm9791
- ID: 133511
Abstract
We find the convergence rates of the collocation approximation by deep ReLU neural networks of solutions to elliptic PDEs with lognormal inputs, parametrized by …
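For orientation, here is a minimal sketch, in generic notation that is not taken from the paper itself, of the two objects the abstract refers to. A deep ReLU neural network of depth $L$ is a composition of affine maps with the rectifier activation applied componentwise,
\[
\sigma(t) = \max\{0, t\}, \qquad
\Phi(y) = W_L\,\sigma\bigl(W_{L-1}\,\sigma(\cdots\,\sigma(W_1 y + b_1)\,\cdots) + b_{L-1}\bigr) + b_L,
\]
where the $W_\ell$ are weight matrices and the $b_\ell$ bias vectors. A collocation approximation reconstructs the parametric solution $u(y)$ from its values at finitely many sample (collocation) points $y_1, \dots, y_n$,
\[
u(y) \approx \sum_{j=1}^{n} u(y_j)\, \varphi_j(y),
\]
with fixed basis functions $\varphi_j$; in results of this kind the approximation error is typically bounded in terms of the size (number of nonzero weights) of a network $\Phi$ realizing such an approximant.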
About the authors
Dũng Dinh
Vietnam National University
Author for correspondence.
Email: dinhzung@gmail.com
Doctor of physico-mathematical sciences, Professor
References
- M. Ali, A. Nouy, “Approximation of smoothness classes by deep rectifier networks”, SIAM J. Numer. Anal., 59:6 (2021), 3032–3051
- R. Arora, A. Basu, P. Mianjy, A. Mukherjee, Understanding deep neural networks with rectified linear units, Electronic colloquium on computational complexity, report No. 98, 2017, 21 pp.
- M. Bachmayr, A. Cohen, Dinh Dũng, Ch. Schwab, “Fully discrete approximation of parametric and stochastic elliptic PDEs”, SIAM J. Numer. Anal., 55:5 (2017), 2151–2186
- M. Bachmayr, A. Cohen, R. DeVore, G. Migliorati, “Sparse polynomial approximation of parametric elliptic PDEs. Part II: Lognormal coefficients”, ESAIM Math. Model. Numer. Anal., 51:1 (2017), 341–363
- M. Bachmayr, A. Cohen, G. Migliorati, “Sparse polynomial approximation of parametric elliptic PDEs. Part I: Affine coefficients”, ESAIM Math. Model. Numer. Anal., 51:1 (2017), 321–339
- A. R. Barron, “Complexity regularization with application to artificial neural networks”, Nonparametric functional estimation and related topics (Spetses, 1990), NATO Adv. Sci. Inst. Ser. C: Math. Phys. Sci., 335, Kluwer Acad. Publ., Dordrecht, 1991, 561–576
- A. Chkifa, A. Cohen, R. DeVore, Ch. Schwab, “Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs”, ESAIM Math. Model. Numer. Anal., 47:1 (2013), 253–280
- A. Chkifa, A. Cohen, Ch. Schwab, “High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs”, Found. Comput. Math., 14:4 (2014), 601–633
- A. Chkifa, A. Cohen, Ch. Schwab, “Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs”, J. Math. Pures Appl. (9), 103:2 (2015), 400–428
- A. Cohen, R. DeVore, “Approximation of high-dimensional parametric PDEs”, Acta Numer., 24 (2015), 1–159
- A. Cohen, R. DeVore, Ch. Schwab, “Convergence rates of best $N$-term Galerkin approximations for a class of elliptic sPDEs”, Found. Comput. Math., 10:6 (2010), 615–646
- A. Cohen, R. DeVore, Ch. Schwab, “Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDE's”, Anal. Appl. (Singap.), 9:1 (2011), 11–47
- G. Cybenko, “Approximation by superpositions of a sigmoidal function”, Math. Control Signals Systems, 2:4 (1989), 303–314
- Dinh Dũng, “Linear collective collocation approximation for parametric and stochastic elliptic partial differential equations”, Mat. Sb., 210:4 (2019), 103–127 (in Russian)
- Dinh Dũng, “Sparse-grid polynomial interpolation approximation and integration for parametric and stochastic elliptic PDEs with lognormal inputs”, ESAIM Math. Model. Numer. Anal., 55:3 (2021), 1163–1198
- Dinh Dũng, Van Kien Nguyen, “Deep ReLU neural networks in high-dimensional approximation”, Neural Netw., 142 (2021), 619–635
- Dinh Dũng, Van Kien Nguyen, Duong Thanh Pham, Deep ReLU neural network approximation of parametric and stochastic elliptic PDEs with lognormal inputs
- Dinh Dũng, Van Kien Nguyen, Ch. Schwab, J. Zech, Analyticity and sparsity in uncertainty quantification for PDEs with Gaussian random field inputs
- Dinh Dũng, Van Kien Nguyen, Mai Xuan Thao, “Computation complexity of deep ReLU neural networks in high-dimensional approximation”, J. Comp. Sci. Cybern., 37:3 (2021), 292–320
- I. Daubechies, R. DeVore, S. Foucart, B. Hanin, G. Petrova, “Nonlinear approximation and (deep) ReLU networks”, Constr. Approx., 55:1 (2022), 127–172
- R. DeVore, B. Hanin, G. Petrova, “Neural network approximation”, Acta Numer., 30 (2021), 327–444
- Weinan E, Qingcan Wang, “Exponential convergence of the deep neural network approximation for analytic functions”, Sci. China Math., 61:10 (2018), 1733–1740
- D. Elbrächter, P. Grohs, A. Jentzen, Ch. Schwab, DNN expression rate analysis of high-dimensional PDEs: application to option pricing, SAM res. rep. 2018-33, Seminar for Applied Mathematics, ETH Zürich, Zürich, 2018, 50 pp.
- O. G. Ernst, B. Sprungk, L. Tamellini, “Convergence of sparse collocation for functions of countably many Gaussian random variables (with application to elliptic PDEs)”, SIAM J. Numer. Anal., 56:2 (2018), 877–905
- K.-I. Funahashi, “Approximate realization of identity mappings by three-layer neural networks”, Electron. Comm. Japan Part III Fund. Electron. Sci., 73:11 (1990), 61–68
- M. Geist, P. Petersen, M. Raslan, R. Schneider, G. Kutyniok, “Numerical solution of the parametric diffusion equation by deep neural networks”, J. Sci. Comput., 88:1 (2021), 22, 37 pp.
- L. Gonon, Ch. Schwab, Deep ReLU network expression rates for option prices in high-dimensional, exponential Lévy models, SAM res. rep. 2020-52 (rev. 1), Seminar for Applied Mathematics, ETH Zürich, Zürich, 2021, 35 pp.
- L. Gonon, Ch. Schwab, Deep ReLU neural network approximation for stochastic differential equations with jumps, SAM res. rep. 2021-08, Seminar for Applied Mathematics, ETH Zürich, Zürich, 2021, 35 pp.
- R. Gribonval, G. Kutyniok, M. Nielsen, F. Voigtländer, “Approximation spaces of deep neural networks”, Constr. Approx., 55:1 (2022), 259–367
- P. Grohs, L. Herrmann, “Deep neural network approximation for high-dimensional elliptic PDEs with boundary conditions”, IMA J. Numer. Anal., 42:3 (2022), 2055–2082
- D. Elbrächter, D. Perekrestenko, P. Grohs, H. Bölcskei, “Deep neural network approximation theory”, IEEE Trans. Inform. Theory, 67:5 (2021), 2581–2623
- I. Gühring, G. Kutyniok, P. Petersen, “Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms”, Anal. Appl. (Singap.), 18:5 (2020), 803–859
- L. Herrmann, J. A. A. Opschoor, Ch. Schwab, Constructive deep ReLU neural network approximation, SAM res. rep. 2021-04, Seminar for Applied Mathematics, ETH Zürich, Zürich, 2021, 32 pp.
- L. Herrmann, Ch. Schwab, J. Zech, “Deep neural network expression of posterior expectations in Bayesian PDE inversion”, Inverse Problems, 36:12 (2020), 125011, 32 pp.
- E. Hewitt, K. Stromberg, Real and abstract analysis. A modern treatment of the theory of functions of a real variable, Springer-Verlag, New York, 1965, vii+476 pp.
- Viet Ha Hoang, Ch. Schwab, “$N$-term Wiener chaos approximation rates for elliptic PDEs with lognormal Gaussian random inputs”, Math. Models Methods Appl. Sci., 24:4 (2014), 797–826
- K. Hornik, M. Stinchcombe, H. White, “Multilayer feedforward networks are universal approximators”, Neural Netw., 2:5 (1989), 359–366
- G. Kutyniok, P. Petersen, M. Raslan, R. Schneider, “A theoretical analysis of deep neural networks and parametric PDEs”, Constr. Approx., 55:1 (2022), 73–125
- Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang, “Deep network approximation for smooth functions”, SIAM J. Math. Anal., 53:5 (2021), 5465–5506
- D. M. Matjila, “Bounds for Lebesgue functions for Freud weights”, J. Approx. Theory, 79:3 (1994), 385–406
- D. M. Matjila, “Convergence of Lagrange interpolation for Freud weights in weighted $L_p(\mathbb{R})$, $0 < p \le 1$”, Nonlinear numerical methods and rational approximation. II (Wilrijk, 1993), Math. Appl., 296, Kluwer Acad. Publ., Dordrecht, 1994, 25–35
- H. N. Mhaskar, “Neural networks for optimal approximation of smooth and analytic functions”, Neural Comput., 8 (1996), 164–177
- H. Montanelli, Qiang Du, “New error bounds for deep ReLU networks using sparse grids”, SIAM J. Math. Data Sci., 1:1 (2019), 78–92
- G. Montufar, R. Pascanu, Kyunghyun Cho, Yoshua Bengio, “On the number of linear regions of deep neural networks”, NIPS 2014, Adv. Neural Inf. Process. Syst., 27, MIT Press, Cambridge, MA, 2014, 2924–2932
- J. A. A. Opschoor, Ch. Schwab, J. Zech, Deep learning in high dimension: ReLU network expression rates for Bayesian PDE inversion, SAM res. rep. 2020-47, Seminar for Applied Mathematics, ETH Zürich, Zürich, 2020, 50 pp.
- J. A. A. Opschoor, Ch. Schwab, J. Zech, “Exponential ReLU DNN expression of holomorphic maps in high dimension”, Constr. Approx., 55:1 (2022), 537–582
- P. C. Petersen, Neural network theory, 2022, 60 pp.
- P. Petersen, F. Voigtlaender, “Optimal approximation of piecewise smooth functions using deep ReLU neural networks”, Neural Netw., 108 (2018), 296–330
- Ch. Schwab, J. Zech, “Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ”, Anal. Appl. (Singap.), 17:1 (2019), 19–55
- Ch. Schwab, J. Zech, Deep learning in high dimension: neural network approximation of analytic functions in $L^2(\mathbb{R}^d, \gamma_d)$
- Zuowei Shen, Haizhao Yang, Shijun Zhang, “Deep network approximation characterized by number of neurons”, Commun. Comput. Phys., 28:5 (2020), 1768–1811
- J. Sirignano, K. Spiliopoulos, “DGM: a deep learning algorithm for solving partial differential equations”, J. Comput. Phys., 375 (2018), 1339–1364
- T. Suzuki, Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality, ICLR 2019: International conference on learning representations (New Orleans, LA, 2019)
- J. Szabados, “Weighted Lagrange and Hermite–Fejér interpolation on the real line”, J. Inequal. Appl., 1:2 (1997), 99–123
- G. Szegő, Orthogonal polynomials, Fizmatlit, Moscow, 1962, 500 pp. (in Russian)
- M. Telgarsky, Representation benefits of deep feedforward networks
- M. Telgarsky, “Benefits of depth in neural nets”, 29th annual conference on learning theory (Columbia Univ., New York, NY, 2016), Proceedings of Machine Learning Research (PMLR), 49, 2016, 1517–1539
- R. K. Tripathy, I. Bilionis, “Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification”, J. Comput. Phys., 375 (2018), 565–588
- D. Yarotsky, “Error bounds for approximations with deep ReLU networks”, Neural Netw., 94 (2017), 103–114
- D. Yarotsky, “Optimal approximation of continuous functions by very deep ReLU networks”, 31st annual conference on learning theory, Proceedings of Machine Learning Research (PMLR), 75, 2018, 639–649
- J. Zech, Dinh Dũng, Ch. Schwab, “Multilevel approximation of parametric and stochastic PDEs”, Math. Models Methods Appl. Sci., 29:9 (2019), 1753–1817
- J. Zech, Ch. Schwab, “Convergence rates of high dimensional Smolyak quadrature”, ESAIM Math. Model. Numer. Anal., 54:4 (2020), 1259–1307