PDE Modelling
What do forecasting catastrophic storms, harnessing the power of stars on Earth for cleaner energy, and designing vehicles that slice through the air with minimal resistance have in common? The answer lies in Partial Differential Equations (PDEs), the mathematical language that expresses many fundamental physical laws, enabling scientists and engineers to model complex physical phenomena—from the swirling winds of a storm to the flow of air around a speeding car—and solve some of our world’s most pressing challenges.
However, in most realistic scenarios, PDE modelling alone provides an incomplete description of the relevant physics and often needs to be augmented, for example with empirical parametrisations or closures, to match observations. In addition, the traditional numerical approaches used to solve these equations, such as the Finite Element Method (FEM), are typically expensive to run and cannot readily take advantage of the wealth of historical observations.
Developing efficient methods to accurately describe physical systems is essential for scientists, engineers, and domain experts across a wide range of fields. Whether it is predicting the weather, modelling fluid dynamics, or understanding nuclear fusion, reliable simulations are crucial for informed decision-making. Take weather forecasting, for example: it plays a pivotal role in key economic sectors, including energy, transportation, and agriculture. Beyond economics, the ability to accurately predict the weather is also vital for mitigating the impact of extreme weather events. This is particularly important in a changing global climate, where record-breaking temperatures, storms, floods, and wildfires are becoming increasingly common, affecting millions of people around the world.
Integrating AI with PDE Systems
In recent years, artificial intelligence algorithms have become a popular tool for addressing complex scientific challenges, from weather forecasting to nuclear fusion. Their growing popularity stems largely from their ability to harness the large volumes of data available in many fields (such as ECMWF’s ERA5 archive, which contains historical global weather data going back to the 1940s) and to solve problems more efficiently than traditional numerical methods. Despite these advantages, many existing machine learning approaches fall short in one respect: they either ignore the fundamental physical laws underlying these systems (usually expressed as PDEs) or model them in overly simplistic ways. However, by embedding physical prior knowledge into ML models, we can significantly enhance their performance: improve interpretability, increase robustness, and enable better generalisation, all while requiring less training data.
This is what my fellowship project focused on: combining physical laws expressed as PDEs, which offer theoretically grounded representations of real-world systems, with advanced AI algorithms to construct hybrid methods that leverage the strengths of both approaches. This emerging field is often referred to as physics-driven machine learning.
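To make the idea concrete, one common way of embedding a PDE prior into a neural network is to penalise the PDE residual during training, in the spirit of physics-informed neural networks. The sketch below is purely illustrative (it is not the framework described in the next section): a small PyTorch model is trained so that its output approximately satisfies the one-dimensional heat equation u_t = alpha * u_xx at randomly sampled collocation points.

```python
# Illustrative only: a PINN-style loss penalising the residual of the 1D heat
# equation u_t = alpha * u_xx, one common way of embedding a PDE prior into a
# neural network's training objective.
import torch

torch.manual_seed(0)
alpha = 0.1  # diffusivity for this toy example (an arbitrary choice)

# Small fully connected network mapping (x, t) -> u(x, t)
model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual_loss(model, n_collocation=256):
    # Random collocation points in space-time at which the PDE is enforced
    x = torch.rand(n_collocation, 1, requires_grad=True)
    t = torch.rand(n_collocation, 1, requires_grad=True)
    u = model(torch.cat([x, t], dim=1))
    # First and second derivatives via automatic differentiation
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    # Penalise deviation from the heat equation: u_t - alpha * u_xx = 0
    return ((u_t - alpha * u_xx) ** 2).mean()

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    optimiser.zero_grad()
    loss = pde_residual_loss(model)  # in practice, add data and boundary losses
    loss.backward()
    optimiser.step()
```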

Physics-driven Machine Learning
The simulation of complex physical systems through physics-driven machine learning involves combining advanced numerics for PDEs with state-of-the-art machine learning algorithms, effectively bringing together specialised PDE-solving frameworks and industry-standard ML tools.
During my postdoctoral fellowship, I developed a software framework designed to provide scientists and engineers with an efficient, accessible platform for building and running physics-driven ML models. This software marries traditional numerical methods, such as the Finite Element Method (FEM), with state-of-the-art AI algorithms. In doing so, it unlocks the potential of physics-driven ML and allows scientists and engineers from different backgrounds to address a wide range of problems that were previously unsolvable. Our framework has been adopted in the well-established Firedrake finite-element library and supports coupling with some of the most popular machine learning ecosystems, such as PyTorch and JAX.
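As a flavour of what this coupling looks like in practice, the sketch below wraps a simple Firedrake solve as an operator that PyTorch can differentiate through and compose with neural networks. It is a minimal sketch rather than a definitive recipe: the module path firedrake.ml.pytorch and the helpers fem_operator and to_torch follow the Firedrake machine learning interface but may differ between versions, so check the documentation of your installation.

```python
# Minimal sketch: differentiate a PyTorch computation through a Firedrake solve.
# NOTE: module and helper names (firedrake.ml.pytorch, fem_operator, to_torch)
# may vary between Firedrake versions; consult the documentation of your install.
import torch
from firedrake import *
from firedrake.adjoint import *
from firedrake.ml.pytorch import fem_operator, to_torch

continue_annotation()  # start recording operations on the adjoint tape

# A simple Helmholtz-type problem: given a source term f, solve for u and
# return a scalar functional J(u) that PyTorch can backpropagate through.
mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)
f = Function(V)
u = Function(V)
v = TestFunction(V)
F = (inner(grad(u), grad(v)) + inner(u, v) - inner(f, v)) * dx
solve(F == 0, u)
J = assemble(inner(u, u) * dx)

# Wrap the recorded solve as a differentiable PyTorch operator: f (tensor) -> J
G = fem_operator(ReducedFunctional(J, Control(f)))

# Use it inside an ordinary PyTorch computation
f_torch = to_torch(f).requires_grad_(True)
loss = G(f_torch)
loss.backward()               # gradients flow back through the FEM solve
print(f_torch.grad.shape)     # gradient of J with respect to the dofs of f
```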
In parallel, I have also worked on developing a novel family of AI architectures called Structure-Preserving Operator Networks (SPON). These models are designed to learn complex dynamics driven by PDEs directly from data while preserving key continuous properties at the discrete level by leveraging finite element discretisations. As a result, SPON architectures can operate effectively on complex geometries, enforce certain physical constraints exactly, and provide theoretical guarantees, all while offering an explicit trade-off between performance and efficiency.
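To give a concrete sense of the kind of discrete object such architectures operate on, the toy model below maps the finite element coefficients of an input field to those of an output field. It is a deliberately generic operator network acting on nodal coefficient vectors, not the structure-preserving SPON construction itself, and the mesh size is an arbitrary choice for illustration.

```python
# Toy illustration only: a generic neural operator acting on finite element
# coefficient vectors (input field coefficients -> output field coefficients).
# This is NOT the SPON architecture; it only shows the kind of discrete data
# (nodal coefficients of a finite element space) that such models consume.
import torch

n_dofs = 289  # e.g. nodal coefficients of a P1 space on a 16x16 unit square mesh

operator_net = torch.nn.Sequential(
    torch.nn.Linear(n_dofs, 256), torch.nn.GELU(),
    torch.nn.Linear(256, 256), torch.nn.GELU(),
    torch.nn.Linear(256, n_dofs),
)

# A batch of input fields, each represented by its finite element coefficients
u_in = torch.randn(8, n_dofs)
u_out = operator_net(u_in)   # predicted coefficients of the output fields
print(u_out.shape)           # torch.Size([8, 289])
```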
Towards Real-World Impact
By combining partial differential equations with the power of artificial intelligence, physics-driven machine learning is opening new ways to model, understand, and solve some of the world’s most pressing scientific challenges.
The software interface I developed during my fellowship has already begun to unlock a wide range of applications that were previously out of reach. Integrated into the open-source Firedrake project, the framework is being used by researchers on a range of real-world problems, such as simulating material deformations for CO2 storage and other subsurface applications. I am also collaborating with the start-up Tanuki Technologies (tanuki.ai), using my interface and the structure-preserving framework to tackle scientific problems, including the simulation of Arctic sea ice as part of a £10M project funded by ARIA, the UK’s Advanced Research and Invention Agency.
As for SPON, we believe it will provide a foundation for a new generation of AI methods for scientific simulation that are not only more powerful and data-efficient but also more interpretable. With these new tools, physics-driven machine learning is moving beyond theory and into real-world impact, paving the way for smarter, faster, and more trustworthy scientific discovery.