
Principled interpolation of Green’s functions learned from data. (English) Zbl 1539.65197

Summary: We present a data-driven approach to mathematically model physical systems whose governing partial differential equations are unknown, by learning their associated Green's function. The systems of interest are observed by collecting input-output pairs of responses to excitations drawn from a Gaussian process. Two methods are proposed to learn the Green's function. In the first method, we use the proper orthogonal decomposition (POD) modes of the system as surrogates for the eigenvectors of the Green's function and subsequently fit the eigenvalues using data. In the second, we employ a generalization of the randomized singular value decomposition (SVD) to operators in order to construct a low-rank approximation to the Green's function. We then propose a manifold interpolation scheme for use in an offline-online setting, where offline excitation-response data, taken at specific model parameter instances, are compressed into empirical eigenmodes. These eigenmodes are subsequently used within the manifold interpolation scheme to uncover suitable eigenmodes at unseen model parameters. The approximation and interpolation techniques are demonstrated numerically on several examples in one and two dimensions.
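To make the first method concrete, the following is a minimal numerical sketch (not the authors' implementation) of the POD-based approach summarized above, under simplifying assumptions: a self-adjoint one-dimensional operator on a uniform grid, a known reference kernel used only to synthesize the input-output pairs, smoothed white noise standing in for the Gaussian-process excitations, and a plain rectangle-rule quadrature weight. The POD modes of the response snapshots act as surrogate eigenvectors of the Green's function, and the eigenvalues are then fitted to the data by least squares.

```python
# Minimal sketch of the POD-based Green's function reconstruction described in
# the summary.  Everything below (grid size, forcing model, reference kernel)
# is an illustrative assumption, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n, N, r = 200, 50, 10                 # grid points, input-output pairs, rank kept
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]                       # crude quadrature weight

# Reference Green's function used only to synthesize data (assumption):
# G(x, y) = min(x, y) * (1 - max(x, y)), i.e. -u'' = f with u(0) = u(1) = 0.
G_true = np.minimum.outer(x, x) * (1.0 - np.maximum.outer(x, x))

# Smooth random forcings (a crude stand-in for Gaussian-process samples).
F = rng.standard_normal((n, N))
F = np.cumsum(np.cumsum(F, axis=0), axis=0)
F -= F.mean(axis=0)

# Observed responses u_j(x) = ∫ G(x, y) f_j(y) dy, discretized.
U = h * G_true @ F

# POD modes of the response snapshots serve as surrogate eigenvectors of G.
phi, _, _ = np.linalg.svd(U, full_matrices=False)
phi = phi[:, :r]

# Fit each eigenvalue by least squares on  phi_k^T u_j ≈ lam_k * h * (phi_k^T f_j).
a = h * (phi.T @ F)                   # projected forcings,  shape (r, N)
b = phi.T @ U                         # projected responses, shape (r, N)
lam = np.sum(a * b, axis=1) / np.sum(a * a, axis=1)

# Low-rank reconstruction of the Green's function and its relative error.
G_approx = phi @ np.diag(lam) @ phi.T
print("relative error:", np.linalg.norm(G_approx - G_true) / np.linalg.norm(G_true))
```

In this sketch the eigenvalue fit decouples mode by mode because the surrogate eigenvectors are orthonormal; the paper's second method and the manifold interpolation across model parameters are not shown here.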

MSC:

65N80 Fundamental solutions, Green’s function methods, etc. for boundary value problems involving PDEs
65D05 Numerical interpolation
65M80 Fundamental solutions, Green’s function methods, etc. for initial value and initial-boundary value problems involving PDEs
