GPU-based molecular dynamics of turbulent liquid flows with OpenMM. (English) Zbl 1533.65009

Wyrzykowski, Roman (ed.) et al., Parallel processing and applied mathematics. 14th international conference, PPAM 2022, Gdansk, Poland, September 11–14, 2022. Revised selected papers. Part I. Cham: Springer. Lect. Notes Comput. Sci. 13826, 346-358 (2023).
Summary: In this paper we describe a computational framework for GPU-based molecular dynamics of turbulent flows. The framework is based on the open-source molecular dynamics library OpenMM. The implementation of a special type of open boundary conditions is presented, together with a generic case of a turbulent flow of a Lennard-Jones liquid. We compare the computational efficiency of OpenMM with that of another popular MD library, LAMMPS, and of other legacy MD programs used for studying turbulence.
For the entire collection see [Zbl 1517.68031].
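As an illustration of the kind of setup the paper builds on, the following minimal sketch (not code from the paper) shows how a periodic Lennard-Jones liquid can be configured through OpenMM's Python API and run on a GPU. All parameter values, the particle count, the lattice initialization and the box size are illustrative assumptions; the authors' open-boundary modifications (refs. [1] and [2] below) are not part of the stock library and are not reproduced here.

    import numpy as np
    import openmm
    import openmm.unit as unit

    # Illustrative argon-like Lennard-Jones parameters (assumed, not from the paper)
    sigma = 0.34 * unit.nanometer
    epsilon = 1.0 * unit.kilojoule_per_mole
    mass = 39.9 * unit.amu
    box = 4.0  # cubic box edge, nm

    system = openmm.System()
    system.setDefaultPeriodicBoxVectors(
        openmm.Vec3(box, 0, 0) * unit.nanometer,
        openmm.Vec3(0, box, 0) * unit.nanometer,
        openmm.Vec3(0, 0, box) * unit.nanometer)

    # Pure LJ fluid: zero charges, periodic cutoff at 3 sigma
    nb = openmm.NonbondedForce()
    nb.setNonbondedMethod(openmm.NonbondedForce.CutoffPeriodic)
    nb.setCutoffDistance(3 * sigma)
    n_side = 10  # 10^3 = 1000 particles
    for _ in range(n_side ** 3):
        system.addParticle(mass)
        nb.addParticle(0.0, sigma, epsilon)  # charge, sigma, epsilon
    system.addForce(nb)

    # Thermostatted dynamics on the CUDA platform (OpenMM also ships
    # Reference, CPU and OpenCL platforms; HIP support is the subject of [16])
    integrator = openmm.LangevinMiddleIntegrator(
        120 * unit.kelvin, 1.0 / unit.picosecond, 2.0 * unit.femtosecond)
    platform = openmm.Platform.getPlatformByName('CUDA')
    context = openmm.Context(system, integrator, platform)

    # Start from a simple cubic lattice to avoid particle overlaps
    grid = np.arange(n_side) * (box / n_side)
    positions = np.array([[x, y, z] for x in grid for y in grid for z in grid])
    context.setPositions(positions * unit.nanometer)
    context.setVelocitiesToTemperature(120 * unit.kelvin)

    integrator.step(1000)
    state = context.getState(getEnergy=True)
    print(state.getPotentialEnergy())

A production turbulence study would replace the thermostat, system size and boundary handling; the point here is only that the whole simulation is driven from a few library calls, which is what makes the open-boundary extension of refs. [1] and [2] feasible as a patch to the library.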

MSC:

65C05 Monte Carlo methods
76-04 Software, source code, etc. for problems pertaining to fluid mechanics
76F99 Turbulence
Full Text: DOI

References:

[1] https://github.com/dann239/openmm/tree/open-boundary
[2] https://github.com/openmm/openmm/pull/3577
[3] Abraham, M., GROMACS: high performance molecular simulations through multi-level parallelism from laptops to supercomputers, SoftwareX, 1-2, 19-25 (2015) · doi:10.1016/j.softx.2015.06.001
[4] Anderson, JA; Lorenz, CD; Travesset, A., General purpose molecular dynamics simulations fully implemented on graphics processing units, J. Comput. Phys., 227, 10, 5342-5359 (2008) · Zbl 1148.81301 · doi:10.1016/j.jcp.2008.01.047
[5] Berendsen, H.; van der Spoel, D.; van Drunen, R., GROMACS: a message-passing parallel molecular dynamics implementation, Comput. Phys. Commun., 91, 1, 43-56 (1995) · doi:10.1016/0010-4655(95)00042-E
[6] Brown, WM; Kohlmeyer, A.; Plimpton, SJ; Tharrington, AN, Implementing molecular dynamics on hybrid high performance computers - Particle-particle particle-mesh, Comput. Phys. Commun., 183, 3, 449-459 (2012) · doi:10.1016/j.cpc.2011.10.012
[7] Brown, WM; Wang, P.; Plimpton, SJ; Tharrington, AN, Implementing molecular dynamics on hybrid high performance computers – short range forces, Comput. Phys. Commun., 182, 4, 898-911 (2011) · Zbl 1221.82008 · doi:10.1016/j.cpc.2010.12.021
[8] Brown, WM; Yamada, M., Implementing molecular dynamics on hybrid high performance computers - three-body potentials, Comput. Phys. Commun., 184, 12, 2785-2793 (2013) · doi:10.1016/j.cpc.2013.08.002
[9] Eastman, P., OpenMM 4: a reusable, extensible, hardware independent library for high performance molecular simulation, J. Chem. Theory Comput., 9, 1, 461-469 (2013) · doi:10.1021/ct300857j
[10] Eastman, P., Pande, V.S.: Efficient nonbonded interactions for molecular dynamics on a graphics processing unit. J. Comput. Chem. 31, 1268-1272 (2010). doi:10.1002/jcc.21413
[11] Eastman, P., et al.: OpenMM 7: rapid development of high performance algorithms for molecular dynamics. PLOS Comput. Biol. 13, 1-17 (2017). doi:10.1371/journal.pcbi.1005659
[12] Glaser, J., Strong scaling of general-purpose molecular dynamics simulations on GPUs, Comput. Phys. Commun., 192, 97-107 (2015) · doi:10.1016/j.cpc.2015.02.028
[13] Grinberg, L., et al.: A new computational paradigm in multiscale simulations: Application to brain blood flow. In: Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-5 (2011)
[14] Hitz, T., Heinen, M., Vrabec, J., Munz, C.D.: Comparison of macro- and microscopic solutions of the Riemann problem I. Supercritical shock tube and expansion into vacuum. J. Comput. Phys. 402, 109077 (2020) · Zbl 1453.76095
[15] Hitz, T., Jöns, S., Heinen, M., Vrabec, J., Munz, C.D.: Comparison of macro- and microscopic solutions of the Riemann problem II. Two-phase shock tube. J. Comput. Phys. 429, 110027 (2021) · Zbl 07500759
[16] Johar, A.: Final HIP Platform implementation for AMD GPUs on ROCm. OpenMM pull request #3338 (2021). https://github.com/openmm/openmm/pull/3338
[17] Kadau, K.; Barber, JL; Germann, TC; Holian, BL; Alder, BJ, Atomistic methods in fluid simulation, Philos. Trans. R. Soc. A Math. Phys. Eng. Sci., 368, 1916, 1547-1560 (2010) · Zbl 1192.76035 · doi:10.1098/rsta.2009.0218
[18] Kondratyuk, N.; Nikolskiy, V.; Pavlov, D.; Stegailov, V., GPU-accelerated molecular dynamics: state-of-art software performance and porting from Nvidia CUDA to AMD HIP, Int. J. High Perform. Comput. Appl., 35, 4, 312-324 (2021) · doi:10.1177/10943420211008288
[19] Kostenetskiy, P., Chulkevich, R., Kozyrev, V.: HPC resources of the Higher School of Economics. J. Phys.: Conf. Ser. 1740, 012050 (2021)
[20] Kutzner, C.; Páll, S.; Fechner, M.; Esztermann, A.; de Groot, BL; Grubmüller, H., Best bang for your buck: GPU nodes for GROMACS biomolecular simulations, J. Comput. Chem., 36, 26, 1990-2008 (2015) · doi:10.1002/jcc.24030
[21] Kutzner, C.; Páll, S.; Fechner, M.; Esztermann, A.; de Groot, BL; Grubmüller, H., More bang for your buck: Improved use of GPU nodes for GROMACS 2018, J. Comput. Chem., 40, 27, 2418-2431 (2019) · doi:10.1002/jcc.26011
[22] Moon, B.; Jagadish, H.; Faloutsos, C.; Saltz, J., Analysis of the clustering properties of the Hilbert space-filling curve, IEEE Trans. Knowl. Data Eng., 13, 1, 124-141 (2001) · doi:10.1109/69.908985
[23] Nikolskiy, V.P., Stegailov, V.V., Vecher, V.S.: Efficiency of the Tegra K1 and X1 systems-on-chip for classical molecular dynamics. In: 2016 International Conference on High Performance Computing & Simulation (HPCS), pp. 682-689. IEEE (2016)
[24] OpenMM team: OpenMM application layer Python API. http://docs.openmm.org/latest/api-python/app.html
[25] OpenMM team: OpenMM library level C++/Python API. http://docs.openmm.org/development/api-c++/
[26] Perdikaris, P.; Grinberg, L.; Karniadakis, GE, Multiscale modeling and simulation of brain blood flow, Phys. Fluids, 28, 2 (2016) · doi:10.1063/1.4941315
[27] Plimpton, S., Fast parallel algorithms for short-range molecular dynamics, J. Comput. Phys., 117, 1, 1-19 (1995) · Zbl 0830.65120 · doi:10.1006/jcph.1995.1039
[28] Rapaport, D.C., Clementi, E.: Eddy formation in obstructed fluid flow: A molecular-dynamics study. Phys. Rev. Lett. 57, 695-698 (1986). doi:10.1103/PhysRevLett.57.695
[29] Shamsutdinov, A.; Balandin, D.; Barkalov, K.; Gergel, V.; Meyerov, I., Performance of supercomputers based on Angara interconnect and novel AMD CPUs/GPUs, Mathematical Modeling and Supercomputer Technologies, 401-416 (2021), Cham: Springer, Cham · doi:10.1007/978-3-030-78759-2_33
[30] Smith, E., A molecular dynamics simulation of the turbulent Couette minimal flow unit, Phys. Fluids, 27, 11 (2015) · doi:10.1063/1.4935213
[31] Smith, E.; Trevelyan, D.; Ramos-Fernandez, E.; Sufian, A.; O’Sullivan, C.; Dini, D., CPL library – a minimal framework for coupled particle and continuum simulation, Comput. Phys. Commun., 250 (2020) · doi:10.1016/j.cpc.2019.107068
[32] Stegailov, V., Angara interconnect makes GPU-based Desmos supercomputer an efficient tool for molecular dynamics calculations, Int. J. High Perform. Comput. Appl., 33, 3, 507-521 (2019) · doi:10.1177/1094342019826667
[33] Tchipev, N., et al.: TweTriS: twenty trillion-atom simulation. Int. J. High Perform. Comput. Appl. 0(0), 1094342018819741 (2019). doi:10.1177/1094342018819741
[34] Thompson, A.P. et al.: LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comput. Phys. Commun. 271, 108171 (2022) · Zbl 1516.74108
[35] Trott, C.R., et al.: Kokkos 3: programming model extensions for the exascale era. IEEE Trans. Parallel Distrib. Syst. 33(4), 805-817 (2022). doi:10.1109/TPDS.2021.3097283