Code for a few simple examples is available in a Colab notebook on my GitHub and may be updated over time.
The Evolution of Physics-Informed Learning: Literature Review
Deep learning has impacted many fields with its ability to find patterns in massive datasets. However, when it comes to modeling complex physical systems, purely data-driven “black-box” methods often fall short. Here are a few reasons why:
- Physically inconsistent results: Standard deep learning models can produce predictions that do not follow known physical laws.
- Overfitting and poor generalization: These models are prone to learning shortcuts that hold only within the training and test distributions. Without physical principles to constrain them, they often struggle to make accurate predictions outside the scope of the data they were trained on.
- Missing physical insights: Understanding how and why a model makes its predictions is essential, especially in scientific applications.
The intersection of neural networks and physics-based modeling has been developing for decades, laying a theoretical and practical foundation for today’s Physics-Informed Neural Networks (PINNs).
The foundational concept that enables neural networks to be applied in scientific domains stems from the universal approximation theorem, which shows that feedforward networks can approximate any measurable function [1]. This study established neural networks as approximators for modeling complex and nonlinear physical systems.
Shortly after, researchers explored applying neural networks to differential equations, using them to minimize finite-difference formulations of the governing equations [2]. This early work demonstrated that neural networks could satisfy numerical constraints derived from physical laws, not just fit data.
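The core idea of minimizing a finite-difference formulation can be sketched in miniature. Below, the grid values of the solution are treated as trainable parameters (no actual network; the toy equation y'(x) = -y(x) with y(0) = 1, the grid size, and the learning rate are all illustrative choices), and plain gradient descent drives the squared finite-difference residuals to zero.

```python
# Sketch: minimize squared finite-difference residuals of y' + y = 0,
# y(0) = 1, treating the grid values as the trainable parameters.
# Grid spacing, iteration count, and step size are illustrative.
h, n = 0.1, 10
y = [1.0] * (n + 1)            # y[0] = 1.0 stays fixed (boundary value)

def residuals(y):
    # Forward-difference residual (y[i+1] - y[i]) / h + y[i] at each point
    return [(y[i + 1] - y[i]) / h + y[i] for i in range(n)]

for _ in range(4000):
    r = residuals(y)
    for j in range(1, n + 1):  # only interior/right grid values are free
        # Analytic gradient of sum(r_i^2) with respect to y[j]:
        # y[j] appears in r[j-1] (coeff 1/h) and, if j < n, in r[j] (coeff 1 - 1/h)
        g = 2 * r[j - 1] / h
        if j < n:
            g += 2 * r[j] * (1 - 1 / h)
        y[j] -= 0.002 * g

# At the minimum the residuals vanish, which reproduces the explicit
# Euler recursion y[i+1] = (1 - h) * y[i], i.e. y[i] close to 0.9 ** i.
```

The driving of a discretized residual to zero, rather than a fit to labeled data, is exactly the shift this early work introduced.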
This approach was extended to solving ordinary and partial differential equations (ODEs and PDEs), training networks to meet boundary conditions while learning the equation solutions through parameter tuning [3]. Though limited in scale and scope, this was an early form of embedding physics into a learning framework.
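One way this family of methods meets boundary conditions is through a trial-function construction: for y'(x) = -y(x) with y(0) = 1, the ansatz psi(x) = 1 + x * N(x) satisfies the boundary condition for any choice of N, so only the equation residual needs to be minimized. The sketch below is a toy version of that idea: N is a small quadratic stand-in for a neural network, and the collocation grid, iteration count, and step size are illustrative.

```python
import math

# Trial solution psi(x) = 1 + x * N(x) for y' = -y, y(0) = 1:
# the boundary condition holds by construction for any N.
# N(x) = c0 + c1*x + c2*x^2 is a stand-in for a neural network.
xs = [i / 10 for i in range(11)]   # collocation points on [0, 1]
c = [0.0, 0.0, 0.0]

def residual(c, x):
    # psi = 1 + c0*x + c1*x^2 + c2*x^3, so psi' = c0 + 2*c1*x + 3*c2*x^2
    psi = 1 + x * (c[0] + c[1] * x + c[2] * x ** 2)
    dpsi = c[0] + 2 * c[1] * x + 3 * c[2] * x ** 2
    return dpsi + psi               # residual of y' + y = 0

def features(x):
    # d(residual)/dc_i — the residual is linear in the parameters
    return [1 + x, 2 * x + x ** 2, 3 * x ** 2 + x ** 3]

# Gradient descent on the mean squared residual (convex in c here)
for _ in range(20000):
    grads = [0.0, 0.0, 0.0]
    for x in xs:
        r = residual(c, x)
        f = features(x)
        for i in range(3):
            grads[i] += 2 * f[i] * r / len(xs)
    c = [ci - 0.05 * gi for ci, gi in zip(c, grads)]

def psi(x):
    return 1 + x * (c[0] + c[1] * x + c[2] * x ** 2)

# psi(0) is exactly 1 by design; psi(1) should land close to exp(-1)
```

In the actual method the derivative of the trial solution comes from differentiating the network analytically; here it is written out by hand because N is a polynomial.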
A major leap forward occurred with the development of Physics-Informed Neural Networks [4] that incorporated physical laws directly in the training process. This methodology has been successfully applied to problems in fluid mechanics, quantum physics, reaction–diffusion systems, and nonlinear wave propagation.
Recent developments have combined PINNs with sparse regression techniques to learn PDEs from limited and noisy data [5], enabling models not only to approximate solutions but also to discover the underlying closed-form equations in real-world applications.
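The sparse-regression half of that idea can be illustrated in a few lines: regress a measured time derivative onto a small library of candidate terms and keep only the coefficients above a threshold. The toy data (a trajectory of y' = -y), the two-term library, and the threshold below are all illustrative assumptions, not the method of [5] itself.

```python
import math

# Toy equation discovery: data sampled from y' = -y, candidate
# library {y, y^2}, ordinary least squares, then hard thresholding.
h = 0.01
t = [i * h for i in range(101)]
y = [math.exp(-ti) for ti in t]

# Central-difference estimate of dy/dt at interior points
dy = [(y[i + 1] - y[i - 1]) / (2 * h) for i in range(1, 100)]
a = y[1:100]                       # candidate term y
b = [yi ** 2 for yi in y[1:100]]   # candidate term y^2

# Least squares for dy ≈ c1*y + c2*y^2 via the 2x2 normal equations
saa = sum(x * x for x in a); sbb = sum(x * x for x in b)
sab = sum(x * z for x, z in zip(a, b))
say = sum(x * z for x, z in zip(a, dy))
sby = sum(x * z for x, z in zip(b, dy))
det = saa * sbb - sab * sab
c1 = (say * sbb - sby * sab) / det
c2 = (sby * saa - say * sab) / det

# Sparsity step: keep only coefficients above a (hypothetical) threshold.
# Here this recovers the single term y with coefficient close to -1.
kept = {name: v for name, v in [("y", c1), ("y^2", c2)] if abs(v) > 0.05}
```

The PINN component of [5] additionally reconstructs the solution field from scarce, noisy measurements before this regression is performed; the sketch above assumes the trajectory is already available.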
A recent review highlights the growing maturity of the field [6], noting advances such as mesh-free implementations and scalable solvers via domain decomposition, promising research directions such as operator regression, and progress toward benchmarks and theoretical foundations for next-generation models.
Incorporating Physics into Deep Learning
As machine learning advances in scientific fields, a key challenge emerges: how can known physics be embedded into deep learning models to improve accuracy and efficiency? PINNs address this by integrating physical laws directly into the neural network's training, bridging data-driven models and physics-based approaches.
This integration is especially valuable when data is scarce, noisy, or incomplete. Depending on available knowledge or data, PINNs operate in three main regimes [6].
- Full physics, small or no data: Governing equations are known and data is available primarily for initial/boundary conditions.
- Partial physics, partial data: Some equations or parameters are missing; PINNs leverage a combination of data and physical constraints.
- Big data, little or no physics: Equations are unknown or too complex; data-driven methods dominate, but PINNs can help discover hidden physical structures within the data.
Key benefits include leveraging prior knowledge to enhance model performance, reducing reliance on data, and improving interpretability and trust in scientific applications [6].
PINNs use a feedforward neural network trained by minimizing a loss function. The loss L is commonly constructed as a weighted sum of a data loss and a PDE residual loss:
\[ L = \lambda_{\text{data}} \cdot \mathcal{L}_{\text{data}} + \lambda_{\text{PDE}} \cdot \mathcal{L}_{\text{PDE}}\]
Minimizing this loss yields a model that fits data while satisfying the governing PDE throughout the domain. Alternatively, it’s also possible to train the neural network using only physical constraints.
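This weighted sum can be made concrete with a toy problem. Below, for y'(x) = -y(x) with a single observation y(0) = 1, the network is replaced by a one-parameter trial family y(x; a) = exp(a * x) so the whole loss fits in a few lines; the weights, collocation grid, and search range are illustrative choices (a real PINN would use automatic differentiation and a gradient-based optimizer).

```python
import math

# Composite PINN-style loss for y' = -y with data point y(0) = 1,
# using the trial family y(x; a) = exp(a * x) in place of a network.
lam_data, lam_pde = 1.0, 1.0               # illustrative loss weights
xs = [i / 10 for i in range(11)]           # collocation points for the PDE term
data = [(0.0, 1.0)]                        # (x, y) observations

def loss(a):
    y = lambda x: math.exp(a * x)
    dy = lambda x: a * math.exp(a * x)     # analytic derivative (autodiff in practice)
    l_data = sum((y(x) - t) ** 2 for x, t in data) / len(data)
    l_pde = sum((dy(x) + y(x)) ** 2 for x in xs) / len(xs)  # residual of y' + y = 0
    return lam_data * l_data + lam_pde * l_pde

# Coarse search over a: the loss vanishes at a = -1, the exact solution
best_a = min(range(-300, 101), key=lambda k: loss(k / 100)) / 100
```

Dropping the data term (or setting lam_data to zero and enforcing the initial condition by construction) corresponds to the purely physics-constrained training mentioned above.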
With this background, you can explore some examples that use PINNs to approximate solutions for governing equations and boundary conditions in the GitHub repository linked above.
References
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks. 2, 359–366 (1989). https://doi.org/10.1016/0893-6080(89)90020-8
- Lee, H., Kang, I.S.: Neural algorithm for solving differential equations. Journal of Computational Physics. 91, 110–131 (1990). https://doi.org/10.1016/0021-9991(90)90007-N
- Lagaris, I.E., Likas, A., Fotiadis, D.I.: Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. IEEE Trans. Neural Netw. 9, 987–1000 (1998). https://doi.org/10.1109/72.712178
- Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics. 378, 686–707 (2019). https://doi.org/10.1016/j.jcp.2018.10.045
- Chen, Z., Liu, Y., Sun, H.: Physics-informed learning of governing equations from scarce data. Nat Commun. 12, 6136 (2021). https://doi.org/10.1038/s41467-021-26434-1
- Karniadakis, G.E., Kevrekidis, I.G., Lu, L., Perdikaris, P., Wang, S., Yang, L.: Physics-informed machine learning. Nat Rev Phys. 3, 422–440 (2021). https://doi.org/10.1038/s42254-021-00314-5