GPLaSDI: Gaussian process-based interpretable latent space dynamics identification through deep autoencoder. (English) Zbl 1539.65085

Summary: Numerically solving partial differential equations (PDEs) can be challenging and computationally expensive. This has led to the development of reduced-order models (ROMs) that are accurate but faster than full-order models (FOMs). Recently, advances in machine learning have enabled non-linear projection methods, such as Latent Space Dynamics Identification (LaSDI). LaSDI maps full-order PDE solutions to a latent space using autoencoders and learns the system of ODEs governing the latent-space dynamics. By interpolating and solving the ODE system in the reduced latent space, fast and accurate ROM predictions can be made by feeding the predicted latent-space dynamics into the decoder. In this paper, we introduce GPLaSDI, a novel LaSDI-based framework that relies on Gaussian processes (GPs) to interpolate the latent-space ODEs across the parameter space. Using GPs offers two significant advantages. First, it enables the quantification of uncertainty over the ROM predictions. Second, leveraging this prediction uncertainty allows for efficient adaptive training through a greedy selection of additional training data points. This approach does not require prior knowledge of the underlying PDEs. Consequently, GPLaSDI is inherently non-intrusive and can be applied to problems without a known PDE or its residual. We demonstrate the effectiveness of our approach on the Burgers equation, the Vlasov equation for plasma physics, and a rising thermal bubble problem. Our proposed method achieves speed-ups between 200 and 100,000 times, with up to 7% relative error.
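The GP interpolation and greedy sampling described in the summary can be illustrated with a minimal sketch (not the authors' implementation). Assume each training PDE parameter `mu` has an associated latent-ODE coefficient `c(mu)` identified from the autoencoder's latent trajectories; a GP then interpolates `c` over parameter space, and its predictive standard deviation picks where to run the next full-order simulation. All names and the toy coefficient function are hypothetical.

```python
# Hedged sketch of GP-based coefficient interpolation with greedy sampling,
# as described in the GPLaSDI summary. The coefficient data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training parameters and one latent-ODE coefficient per point.
mu_train = np.array([[0.5], [0.7], [0.9], [1.1]])
c_train = np.sin(3.0 * mu_train).ravel()  # stand-in for learned coefficients

# GP regressor interpolating the coefficient over the parameter space.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
    normalize_y=True,
)
gp.fit(mu_train, c_train)

# Candidate parameter grid; the predictive std quantifies ROM uncertainty.
mu_cand = np.linspace(0.4, 1.2, 50).reshape(-1, 1)
c_mean, c_std = gp.predict(mu_cand, return_std=True)

# Greedy step: run the next FOM simulation where the GP is least certain.
mu_next = mu_cand[np.argmax(c_std)]
print("next training parameter:", mu_next)
```

In the full framework this is repeated per ODE coefficient, and the sampled parameter triggers a new FOM solve, autoencoder retraining, and GP refit until the uncertainty budget is met.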

MSC:

65M06 Finite difference methods for initial value and initial-boundary value problems involving PDEs
62G08 Nonparametric regression and quantile regression
