Learning to accelerate the simulation of PDEs - Tailin Wu (Stanford)


Abstract: Simulating the time evolution of Partial Differential Equations (PDEs) for large-scale systems is crucial in many scientific and engineering domains, such as plasma physics and fluid dynamics. Such large-scale simulations are computationally intensive, requiring massive computing resources and often becoming infeasible for full-scale 3D dynamics. However, such systems often exhibit characteristics suggesting that the dynamics lie on a lower-dimensional manifold. In this talk, I will present our early work that introduces Latent Evolution of PDEs (LE-PDE), which learns a compact representation of the dynamical system and evolves it with learned latent evolution models to predict the state of the system at future time steps. LE-PDE's objective consists of a multi-step prediction loss, a consistency loss in latent space, and a spectral regularization on the latent evolution model to improve generalization. We test our method starting with small and medium-sized systems. For a Navier-Stokes flow, the model is able to learn the evolution in a latent space with a 2000-fold compression ratio. For plasma studies, we aim to evolve the kinetic Vlasov equation by learning the dynamics of the electron distribution in phase space and the electric field, achieving compression of the velocity-space dimension. We also explore methods that encourage the model to obey the conservation laws of the system.
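The training objective sketched in the abstract (encode, evolve in latent space, decode, then penalize multi-step prediction error plus latent inconsistency) can be illustrated with a toy example. Everything below is an illustrative assumption, not the authors' actual architecture: the linear encoder Q, decoder G, latent evolution map F, the state and latent dimensions, and the consistency-loss weight are all made up for the sketch, and the spectral regularization term is omitted.

```python
# Toy sketch of an LE-PDE-style objective: multi-step prediction loss
# plus a latent-space consistency loss. All maps, shapes, and weights
# are illustrative assumptions; the real model uses learned networks.

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Assumed toy linear maps: encoder Q (4 -> 2), decoder G (2 -> 4),
# latent evolution F (2 -> 2).
Q = [[0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5]]
G = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 0.0],
     [0.0, 1.0]]
F = [[0.9, 0.1],
     [-0.1, 0.9]]

def le_pde_loss(trajectory, horizon, consistency_weight=0.1):
    """Roll the latent state forward `horizon` steps and accumulate:
      - prediction loss: decoded latent vs. the true future state u_k
      - consistency loss: evolved latent vs. the encoding of u_k
    `consistency_weight` is an assumed hyperparameter."""
    z = matvec(Q, trajectory[0])  # encode the initial full state
    pred_loss, consist_loss = 0.0, 0.0
    for k in range(1, horizon + 1):
        z = matvec(F, z)                                  # evolve in latent space
        u_hat = matvec(G, z)                              # decode to full space
        pred_loss += mse(u_hat, trajectory[k])            # multi-step prediction loss
        consist_loss += mse(z, matvec(Q, trajectory[k]))  # latent consistency loss
    return pred_loss + consistency_weight * consist_loss

# A short synthetic trajectory of full states u_0, u_1, u_2.
traj = [[1.0, 0.0, 1.0, 0.0],
        [0.9, 0.1, 0.9, 0.1],
        [0.8, 0.2, 0.8, 0.2]]
loss = le_pde_loss(traj, horizon=2)
```

Because all rollouts and losses live in the low-dimensional latent space (here 2 instead of 4; thousands-fold in the paper's Navier-Stokes setting), each evolution step is far cheaper than stepping the full-order solver.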

Bio: Tailin Wu<https://tailin.org/> is a postdoctoral scholar in the Department of Computer Science at Stanford University, working with Professor Jure Leskovec. His research interests include representation learning, reasoning, and AI for science, using tools from graph neural networks and information theory. He obtained his PhD in Physics from MIT, where his thesis focused on applying machine learning to physics and introducing physics insights and techniques into machine learning.

Recorded Meeting Video: https://youtu.be/Q8yA9cjOR5I