Notices of the American Mathematical Society
- Volume 72 | Number 4 | April 2025
Taming Uncertainty in a Complex World: The Rise of Uncertainty Quantification—A Tutorial for Beginners
Communicated by Notices Associate Editor Reza Malek-Madani
George Box, a British statistician, wrote the famous aphorism, “All models are wrong, but some are useful.” The aphorism acknowledges that models, regardless of whether they are qualitative, quantitative, dynamical, or statistical, always fall short of the complexities of reality. The ubiquitous imperfections of models come from various sources, including the lack of a perfect understanding of nature, limited spatiotemporal model resolutions due to computational power, inaccuracy in the initial and boundary conditions, etc. Therefore, uncertainty quantification (UQ), which quantitatively characterizes and estimates uncertainties, is essential to identify the usefulness of the model. UQ also defines the range of possible outcomes when certain aspects of the system are not precisely known. A key objective of UQ is to explore how uncertainties propagate, both through time evolution and across different quantities via complex nonlinear dependencies.
Characterizing Uncertainties
When uncertainty appears in the model and the input, the output can potentially take different values with typically unequal chances. Therefore, it is natural to characterize the output as a random variable, and the UQ of the model output can be based on the associated probability density function (PDF).
Intuitively, the spread of a distribution, which describes how close the possible values of the output are to each other, measures the uncertainty of a variable. If a PDF is Gaussian, variance is a natural indicator describing the spread. Therefore, the uncertainty associated with the three Gaussian distributions is expected to increase from Panel (a) to Panel (c) in Figure 1. On the other hand, non-Gaussian distributions are widely seen in practice due to the intrinsic nonlinearity of the underlying system. The non-Gaussian features should also be highlighted in the rigorous quantification of the uncertainty.
Figure 1. Quantifying the uncertainty using Shannon's entropy (S.E.) for Gaussian and non-Gaussian distributions. In all panels, the x-axis spans the same range.

Shannon’s entropy: Quantifying the uncertainty in one PDF
Denote by $p(x)$ the PDF of a random variable $X$. Shannon's entropy, an information measurement, is a natural choice to rigorously quantify uncertainty. It is defined as
$$\mathcal{S}(p) = -\int p(x)\ln p(x)\,dx. \tag{1}$$
Shannon's entropy originates from the theory of communication, where a "word" is represented as a sequence of binary digits of length $m$, so the set of all words of length $m$ has $2^m$ elements. Therefore, the amount of information needed to characterize one element, which is the number of digits, is $m = \log_2(2^m)$. Consider the case where the entire set is divided into disjoint subsets, the $i$th of which has $n_i$ elements in total. The chance of randomly taking one element that belongs to the $i$th subset is $p_i = n_i/2^m$. If an element is known to belong to the $i$th subset, then the additional information needed to determine it is $\log_2(n_i)$. Therefore, the average amount of information needed to determine an element, given its subset, is $\sum_i p_i\log_2(n_i)$.
Recall that $m$ is the information needed to determine an element in the full set. Thus, the corresponding average lack of information is $m - \sum_i p_i\log_2(n_i) = -\sum_i p_i\log_2(p_i)$, which is the uncertainty. The formal definition of Shannon's entropy (1) generalizes the above argument. It exploits the negative of a natural logarithm function, $-\ln p(x)$, to characterize the lack of information of each event $x$, and then takes the continuous limit to replace the finite summation with an integral that represents the uncertainty averaged over all events.
For certain distributions, $\mathcal{S}$ can be written down explicitly. If $p$ is an $n$-dimensional Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, where $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are the mean and covariance, then Shannon's entropy has the following form (where "det" denotes the matrix determinant):
$$\mathcal{S} = \frac{n}{2}\bigl(1 + \ln(2\pi)\bigr) + \frac{1}{2}\ln\bigl(\det\boldsymbol{\Sigma}\bigr). \tag{3}$$
In the one-dimensional situation, (3) implies that the uncertainty is uniquely determined by the variance and is independent of the mean value, consistent with intuition. Therefore, Shannon's entropy confirms that the uncertainty increases from Panel (a) to Panel (c) in Figure 1, and that the uncertainty has the same value in Panels (a) and (d). However, without a systematic information-based measurement, there is typically no single empirical indicator, like variance (or equivalently, peak height), that can fully characterize the uncertainty in complex non-Gaussian PDFs. Therefore, Shannon's entropy provides a systematic way for UQ. Panel (e) shows a Gamma distribution, which is skewed and has a one-sided fat tail. The fat tail usually corresponds to extreme events, which are farther from the mean value, naturally increasing the uncertainty. Therefore, despite the same peak height of the two PDFs in Panels (e) and (f), the fat-tailed Gamma PDF has a larger entropy.
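As a sanity check, the one-dimensional case of the Gaussian entropy formula (3) can be verified against a direct discretization of the entropy integral (1). The following Python sketch compares the two; the value of $\sigma$ and the grid are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np

# Standard Gaussian PDF with an illustrative standard deviation on a wide grid
sigma = 1.5
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# S = -∫ p ln p dx, approximated by a Riemann sum (0 * ln 0 treated as 0)
integrand = np.where(p > 0, p * np.log(p), 0.0)
entropy_numerical = -np.sum(integrand) * dx

# One-dimensional case of (3): S = 0.5 * ln(2 * pi * e * sigma^2)
entropy_formula = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(entropy_numerical, entropy_formula)
```

Both values agree to high precision, and repeating the experiment with a larger $\sigma$ increases the entropy, matching the Panel (a) to Panel (c) trend in Figure 1.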
Relative entropy: Measuring how one PDF is different from another
In many practical problems, the interest lies in estimating the lack of information in one distribution, $q$, related to another, $p$. Typically, $p$ is associated with a full probabilistic model, while $q$ comes from a reduced-order approximate model. The latter is often less informative but is widely used in practice to accelerate computations.
Figure 2. Solutions of the linear system (8). Panel (a): time evolution of $u$ with a deterministic initial condition. Panel (b): time evolution of $u$ with the initial condition given by a Gaussian distribution. The ensemble size is 1000.

Recall that the uncertainty about an event $x$ in $p$ is $-\ln p(x)$. The average lack of information, i.e., Shannon's entropy, is
$$\mathcal{S}(p) = -\int p(x)\ln p(x)\,dx. \tag{4}$$
However, while the uncertainty about $x$ in $q$ is $-\ln q(x)$, the expected lack of information under $p$ is
$$\mathcal{S}(p, q) = -\int p(x)\ln q(x)\,dx. \tag{5}$$
This is because, even though the approximate model $q$ is used to measure the information, the actual probability of $x$ to appear always comes from the full system, namely $p$, which is objective and independent of the choice of the model. In other words, the role of the approximate model $q$ is to provide the lack of information for each event $x$. In contrast, the underlying distribution of the occurrence of $x$ is objective regardless of the approximate model used. Relative entropy, also known as the Kullback-Leibler (KL) divergence, characterizes the difference between these two entropies:
$$\mathcal{P}(p, q) = \mathcal{S}(p, q) - \mathcal{S}(p) = \int p(x)\ln\frac{p(x)}{q(x)}\,dx. \tag{6}$$
The relative entropy is nonnegative. It becomes larger when $p$ and $q$ are more distinct. Since $p$ appears in both $\mathcal{S}(p, q)$ and $\mathcal{S}(p)$, the relative entropy is not symmetric. One desirable feature of $\mathcal{P}(p, q)$ is that it is invariant under a general nonlinear change of variables. As a remark, if $q$ is utilized in (5) to compute the expected lack of information under $q$, then the analog to (6) is called the Shannon entropy difference.
To illustrate that the relative entropy (6) is a more appropriate definition of the lack of information than the Shannon entropy difference, consider a model to bet on a soccer game: Team A vs. Team B. A comprehensive model gives the odds for Team A: 10% (win), 10% (draw), and 80% (lose). However, someone unfamiliar with soccer may have a biased model, which gives the odds for Team A: 80% (win), 10% (draw), and 10% (lose). If the entropy difference is used, the resulting lack of information will be precisely zero, since the two distributions are permutations of each other and therefore have identical entropies. In contrast, the relative entropy considers the lack of information in $q$ related to $p$ regarding each outcome (win/draw/lose).
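The soccer example can be verified in a few lines. The sketch below, using the odds quoted above, shows that the entropy difference vanishes while the relative entropy is strictly positive:

```python
import numpy as np

p = np.array([0.10, 0.10, 0.80])   # comprehensive model: win/draw/lose for Team A
q = np.array([0.80, 0.10, 0.10])   # biased model

shannon = lambda d: -np.sum(d * np.log(d))

# Zero, since q is a permutation of p and permutations have equal entropy
entropy_difference = shannon(q) - shannon(p)

# Strictly positive: the two models genuinely disagree outcome by outcome
kl_divergence = np.sum(p * np.log(p / q))

print(entropy_difference, kl_divergence)
```

The KL divergence here equals $0.7\ln 8 \approx 1.46$, a substantial lack of information that the entropy difference completely misses.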
When both $p = \mathcal{N}(\boldsymbol{\mu}_p, \boldsymbol{\Sigma}_p)$ and $q = \mathcal{N}(\boldsymbol{\mu}_q, \boldsymbol{\Sigma}_q)$ are $n$-dimensional Gaussians, the relative entropy has the following explicit formula:
$$\mathcal{P}(p, q) = \frac{1}{2}(\boldsymbol{\mu}_p - \boldsymbol{\mu}_q)^{\mathsf{T}}\boldsymbol{\Sigma}_q^{-1}(\boldsymbol{\mu}_p - \boldsymbol{\mu}_q) + \frac{1}{2}\Bigl(\mathrm{tr}\bigl(\boldsymbol{\Sigma}_p\boldsymbol{\Sigma}_q^{-1}\bigr) - n - \ln\det\bigl(\boldsymbol{\Sigma}_p\boldsymbol{\Sigma}_q^{-1}\bigr)\Bigr), \tag{7}$$
where "tr" denotes the trace of a matrix. The first term on the right-hand side of (7) is called the "signal," which measures the lack of information in the mean, weighted by the model covariance. The second term, involving the covariance ratio, is called the "dispersion."
UQ in Dynamical Systems
From now on, UQ will be discussed in the context of dynamical systems. The uncertainty propagates in different ways in linear and nonlinear systems.
Examples of uncertainty propagation in linear and nonlinear dynamical systems
Consider a linear ordinary differential equation,
$$\frac{du}{dt} = -au + f, \tag{8}$$
where $a > 0$ is the damping coefficient and $f$ is an external forcing. The solution of (8) can be written down explicitly; for a constant forcing, $u(t) = f/a + \bigl(u(0) - f/a\bigr)e^{-at}$, which converges to $f/a$ at an exponential rate. Consider the time evolution of the linear dynamics (8) with two different initial conditions. A deterministic initial condition is given in the first case. The time evolution of $u$ is shown in Panel (a) of Figure 2. In the second case, uncertainty appears in the initial condition, which is given by a Gaussian distribution. Different initial values are drawn from this distribution and evolved under the governing equation (8). The black curves in Panel (b) show the time evolution of different ensemble members, while the red curve is the ensemble average. The uncertainty dissipates over time, and the time evolution of the ensemble average follows the same trajectory as in the deterministic case. In other words, the uncertainty does not change the mean dynamics.
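This behavior is easy to reproduce. The following minimal Python sketch integrates an ensemble of the linear model (8) with a forward Euler scheme; the damping, forcing, and initial distribution are illustrative choices, not the article's settings.

```python
import numpy as np

rng = np.random.default_rng(42)
a, f = 1.0, 1.0                      # damping and constant forcing (illustrative)
dt, steps = 0.01, 500

u = rng.normal(2.0, 0.5, 1000)       # Gaussian initial ensemble
u0_std = np.std(u)
u_det = 2.0                          # deterministic run from the mean initial value

for _ in range(steps):               # forward Euler on du/dt = -a*u + f
    u = u + dt * (-a * u + f)
    u_det = u_det + dt * (-a * u_det + f)

# The ensemble mean tracks the deterministic run, and the spread decays
print(np.mean(u), u_det, np.std(u))
```

Because the dynamics are linear, averaging commutes with the time stepping, so the ensemble mean and the deterministic trajectory coincide up to sampling error, while the ensemble spread shrinks exponentially.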
Figure 3. Solutions of the nonlinear chaotic Lorenz 63 model (9). Panel (a): the Lorenz attractor. Panel (b): time evolution of $x$ with a deterministic initial condition. Panel (c): time evolution of $x$ with the initial condition given by a Gaussian distribution. The ensemble size is 1000.

Next, consider the Lorenz 63 model, which is a nonlinear chaotic system:
$$\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z. \tag{9}$$
Again, consider the time evolution of the solution starting from two sets of initial conditions, one deterministic and one containing uncertainty. By taking the standard parameter values $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$, the system displays chaotic behavior. Panel (a) of Figure 3 shows the attractor of the system, which resembles a butterfly. Panel (b) shows the time evolution of $x$ starting from the deterministic initial condition. In contrast, the black curves in Panel (c) show ensemble members starting from different values drawn from the given initial distribution, while the red curve is the ensemble average. Although the ensemble mean follows the trajectory of the deterministic case and has a small uncertainty within the first few time units, the ensemble members diverge quickly. Notably, the ensemble average eventually differs significantly from any model trajectory. This implies that uncertainty has a large impact on the mean dynamics. In other words, the mean evolution and the uncertainty cannot be considered separately, as in the linear case.
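The rapid divergence of ensemble members can be reproduced with a few lines of Python. The sketch below integrates an ensemble of the Lorenz 63 model (9) with the standard parameters; the initial condition, perturbation size, and integration time are illustrative choices.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 model (9); works on arrays of states."""
    x, y, z = s[..., 0], s[..., 1], s[..., 2]
    return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=-1)

def rk4_step(s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
dt = 0.005
ens = np.array([1.0, 1.0, 1.0]) + 0.01 * rng.standard_normal((1000, 3))
spread0 = np.std(ens[:, 0])          # tiny initial spread

for _ in range(4000):                # integrate to t = 20
    ens = rk4_step(ens, dt)

print(spread0, np.std(ens[:, 0]))    # the spread is vastly amplified
```

After a few e-folding times of the leading Lyapunov exponent, the members decorrelate and the ensemble spread saturates at the size of the attractor, in sharp contrast to the decaying spread of the linear example.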
Impact of nonlinearity on uncertainty propagation
To understand the impact of nonlinearity on uncertainty propagation, let us decompose a random variable $u$ into its mean and fluctuation parts via the Reynolds decomposition
$$u = \langle u\rangle + u',$$
where $\langle\cdot\rangle$ represents the ensemble average, computed by summing over all ensemble members and dividing by the ensemble size. Accordingly, $\langle u'^2\rangle$ is the variance of $u$. For the linear system (8), taking the ensemble average leads to the mean dynamics
$$\frac{d\langle u\rangle}{dt} = -a\langle u\rangle + f,$$
which implies the time evolution of the mean is not affected by the uncertainty described by the fluctuation part. However, the situation becomes very different if nonlinear terms appear in the dynamics. Consider a quadratic nonlinear term on the right-hand side of the dynamics,
$$\frac{du}{dt} = -au + u^2 + f. \tag{12}$$
Since
$$\langle u^2\rangle = \langle u\rangle^2 + \langle u'^2\rangle, \tag{13}$$
taking the ensemble average of (12) leads to
$$\frac{d\langle u\rangle}{dt} = -a\langle u\rangle + \langle u\rangle^2 + \langle u'^2\rangle + f, \tag{14}$$
which indicates that the higher-order moments, containing the information of the uncertainties, affect the time evolution of the lower-order moments (e.g., the mean dynamics) via the nonlinearity. It also reveals that the moment equations for general nonlinear dynamics are never closed. In practice, approximations are made to handle the terms involving higher-order moments in the governing equations of the lower-order moments to form a solvable closed system.
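The role of the variance term in (14) can be illustrated numerically. The sketch below uses a bare quadratic system $du/dt = -u^2$ rather than the full right-hand side of (12), with made-up initial statistics, and compares the true ensemble mean against a "closure" that evolves the mean alone and ignores the fluctuations:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.001, 1000
u = rng.normal(1.0, 0.3, 100_000)    # uncertain initial condition
u_mean_only = 1.0                    # evolve the mean and drop <u'^2>

for _ in range(steps):               # forward Euler on du/dt = -u^2
    u = u + dt * (-u**2)
    u_mean_only = u_mean_only + dt * (-u_mean_only**2)

# d<u>/dt = -<u>^2 - <u'^2>: the variance adds extra decay to the true mean
print(np.mean(u), u_mean_only)
```

The true ensemble mean lies visibly below the mean-only trajectory because the neglected variance term $\langle u'^2\rangle$ contributes additional decay, exactly the closure issue described above.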
In chaotic systems, small uncertainties are quickly amplified by positive Lyapunov exponents, making an accurate state forecast/estimation challenging. Additional resources can be combined with models to facilitate uncertainty reduction in state estimation.
Uncertainty Reduction Via Data Assimilation (DA)
Models and observational data are widely utilized to solve practical problems. However, neither models nor observations are close to perfect in most applications. Models are typically chaotic and involve large uncertainties. Observations contain noise and are often sparse, incomplete, and indirect. Nevertheless, when a numerical model and observations are optimally combined, the estimation of the state can be significantly improved. This is known as data assimilation (DA). DA was initially developed in numerical weather prediction, where it improves the initialization of a numerical forecast model. Since then, DA has become an essential tool for many applications, including dynamical interpolation of missing data, inference of unobserved variables, parameter estimation, assisting control and machine learning, etc.
Figure 4. Posterior distributions from Bayes' formula (15). Panels (a)–(d): the PDFs with different observational uncertainties and numbers of observations. To keep the figure concise, some of the PDFs corresponding to the noisy observations are omitted from Panels (c)–(d). Panel (e): the asymptotic behavior of the posterior mean and the posterior variance as a function of the number of observations $L$. Due to the randomness in the observations, the shaded area shows the variation of the results over 100 sets of independent observations for each fixed $L$.

Denote by $\mathbf{u}$ the state variable and by $\mathbf{v} = \mathcal{G}(\mathbf{u}) + \boldsymbol{\varepsilon}$ the noisy observation, with $\mathcal{G}$ the observational operator and $\boldsymbol{\varepsilon}$ a random noise. The underlying principle of DA is given by Bayes' theorem:
$$p(\mathbf{u}\mid\mathbf{v}) \propto p(\mathbf{u})\,p(\mathbf{v}\mid\mathbf{u}), \tag{15}$$
where "$\propto$" stands for "proportional to." In (15), $p(\mathbf{u})$ is the forecast distribution from a model built upon prior knowledge, while $p(\mathbf{v}\mid\mathbf{u})$ is the probability of the observation under the model assumption. Their combination gives the conditional distribution of $\mathbf{u}$ given $\mathbf{v}$, which is called the posterior distribution. Notably, due to the additional information from the observation, the uncertainty is expected to be reduced from the prior to the posterior distribution.
Assume the prior distribution $p(\mathbf{u}) = \mathcal{N}(\boldsymbol{\mu}_f, \mathbf{R}_f)$ and the observational noise $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_o)$ are both Gaussian, with a linear observational operator $\mathbf{G}$. Plugging these into (15), it is straightforward to show that the posterior distribution is also Gaussian. The posterior mean $\boldsymbol{\mu}_a$ and covariance $\mathbf{R}_a$ can be written down explicitly as
$$\boldsymbol{\mu}_a = \boldsymbol{\mu}_f + \mathbf{K}(\mathbf{v} - \mathbf{G}\boldsymbol{\mu}_f), \qquad \mathbf{R}_a = (\mathbf{I} - \mathbf{K}\mathbf{G})\mathbf{R}_f, \tag{16}$$
where $\mathbf{I}$ is an identity matrix of size $n$, with $n$ being the dimension of $\mathbf{u}$, and the gain is given by $\mathbf{K} = \mathbf{R}_f\mathbf{G}^{\mathsf{T}}(\mathbf{G}\mathbf{R}_f\mathbf{G}^{\mathsf{T}} + \mathbf{R}_o)^{-1}$. Now consider the case with $n = 1$ and $\mathbf{G} = 1$, writing the prior variance as $r_f$ and the observational noise variance as $r_o$. Then all three distributions in (15) are one-dimensional Gaussians, and the observation equals the truth plus noise. In such a case, the posterior mean becomes a weighted sum of the prior mean and the observation, with weights $r_o/(r_f + r_o)$ and $r_f/(r_f + r_o)$, respectively. When the observational noise is much smaller than the prior uncertainty, i.e., $r_o \ll r_f$, the posterior mean almost fully trusts the observation. In contrast, if the observation is highly polluted, i.e., $r_o \gg r_f$, the observational information can almost be ignored, and the posterior mean nearly equals the prior mean. Essentially, the weights are fully determined by the uncertainties. On the other hand, the posterior variance in (16) is always no bigger than the prior variance, indicating that the observation helps reduce the uncertainty. Panels (a)–(b) of Figure 4 validate the above conclusions. In some applications, repeated measurements with independent noises are available, which further reduce the posterior uncertainty. For example, in the above scalar case, when $L$ repeated observations are used, the observational operator becomes an $L$-dimensional vector, and similarly for the noise. According to Panels (a), (c), and (d), the posterior uncertainty shrinks as the number of observations $L$ increases. Panel (e) shows that as $L$ becomes large, the posterior mean converges to the truth, and the posterior variance decreases to zero. Correspondingly, when characterizing the uncertainty reduction in the posterior distribution relative to the prior via the relative entropy (7), the signal part converges to a constant while the dispersion part grows logarithmically in $L$. See the online supplementary document for more details.
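The scalar update and the effect of repeated observations can be sketched as follows; the truth, prior, and noise level are hypothetical numbers, not those used in Figure 4.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = 1.0
prior_mean, prior_var = 0.0, 1.0
obs_var = 0.25

post_vars = {}
for L in (1, 10, 100, 1000):
    # L independent noisy observations of the same scalar truth
    v = truth + np.sqrt(obs_var) * rng.standard_normal(L)

    # Scalar Bayesian update: precisions (inverse variances) add
    post_var = 1.0 / (1.0 / prior_var + L / obs_var)
    post_mean = post_var * (prior_mean / prior_var + v.sum() / obs_var)

    post_vars[L] = post_var
    print(L, post_mean, post_var)
```

The posterior variance is always below the prior variance and shrinks like $r_o/L$, while the posterior mean converges to the truth, mirroring Panel (e) of Figure 4.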
When the forecast model and the observational operator are linear, and the noises are Gaussian, the above procedure is called the Kalman filter.
In the following, UQ will be discussed in the context of Lagrangian DA (LaDA).
Figure 5. UQ in LaDA. Panel (a): uncertainty reduction in the signal and dispersion parts as a function of the number of tracers. Panels (b)–(e): comparing the true flow field with the recovered ones using 2, 10, and 30 tracers.

Lagrangian data assimilation (LaDA)
Lagrangian tracers are moving drifters, such as robotic instruments, balloons, sea ice floes, and even litter. They are often used to recover the flow field that drives their motion. However, recovering the entire flow field based solely on Lagrangian tracers is challenging. This is because tracers are usually sparse, which prevents a direct estimation of the flow velocity in regions with no observations. In addition, the measured quantities are the tracer displacements, and observational noise propagates through the time derivative from displacement to velocity. A flow model, despite being generally turbulent, can provide prior knowledge about the possible range of velocities in the entire domain. Observational information dominates the state estimation at the locations covered by tracers. The observations also serve as constraints, conditioned on which the uncertainty in estimating the flow field through the model in regions without observations can be significantly reduced. LaDA is widely used to provide a more accurate recovery of the flow field, facilitating the study of flow properties. For simplicity, assume the tracer velocities equal the underlying flow field. The LaDA scheme consists of two sets of equations: one for the observational process and the other for the flow model, described as follows:
$$\frac{d\mathbf{x}_l}{dt} = \mathbf{u}(\mathbf{x}_l, t) + \sigma_x\dot{\mathbf{W}}_l, \qquad l = 1, \ldots, L, \tag{17}$$
coupled with a prior model for the flow field $\mathbf{u}$, where $L$ is the number of tracers and $\dot{\mathbf{W}}_l$ is a white noise representing the observational noise. Each $\mathbf{x}_l$ is a two-dimensional vector containing the displacement of the $l$th tracer, and $\mathbf{u}(\mathbf{x}, t)$ is the two-dimensional velocity field (e.g., at the surface of the ocean). Notably, the flow velocity in the observational process is typically a highly nonlinear function of the displacement, making LaDA a challenging nonlinear problem.
UQ in LaDA
One important UQ topic in LaDA is quantifying the uncertainty reduction in the estimated flow field as a function of the number of tracers. The comparison is between the posterior distribution from LaDA and the prior one. The latter is the statistical equilibrium solution (i.e., the attractor) of the chaotic flow model, which is the best inference of the flow field in the absence of observations. With two distributions involved in the comparison, the relative entropy (6) is a natural choice to measure the uncertainty reduction.
Under certain conditions
The Role of Uncertainty in Diagnostics
Parameter estimation
Consider the following two-dimensional model,
$$\frac{dx}{dt} = a_{11}x + a_{12}y, \qquad \frac{dy}{dt} = a_{21}x + a_{22}y, \tag{18}$$
which is a linear system with respect to both the parameters $(a_{11}, a_{12}, a_{21}, a_{22})$ and the state variables $(x, y)$. Assume the observational data of $x$ and $y$ and their time derivatives $\dot{x}$ and $\dot{y}$ are available at times $t_j$ for $j = 1, \ldots, J$. Define a matrix $\mathbf{M}_j$ that takes the values of the state variables at time $t_j$. To estimate the parameters, linear regression can be applied to find the least-squares solution,
Writing down the component-wise form of (19) yields
Now consider a different situation, where only $x$ and $\dot{x}$ are observed. This is typical in many applications where only partial observations are available. Since estimating the parameters requires the information of $y$, as shown in (20), the state of $y$ at each time needs to be estimated, which naturally introduces uncertainty. Assume the estimated state of $y$ is given by a Gaussian distribution and is written in the Reynolds decomposition form as $y = \langle y\rangle + y'$. The parameter estimation problem can then be regarded as repeatedly drawing samples from the distribution of $y$, plugging them into (20), and taking the average of the terms involving $y$ to reach the maximum likelihood solution. In light of (13), this procedure essentially gives the following modified version of (20),
where the additional term arises due to the average of the nonlinear (quadratic) function of $y$ in (20). This is somewhat surprising at first glance, since the underlying model (18) is linear. Nevertheless, the formula in (21) is intuitive. Consider estimating $a$ from $\dot{x} = ay$ with data at only one time instant. If $y$ is deterministic, then $a = \dot{x}/y$. If $y$ contains uncertainty and satisfies a Gaussian distribution, then the reciprocal $1/y$ is no longer Gaussian, and the variance of $y$ affects the mean value $\langle 1/y\rangle$, which is different from $1/\langle y\rangle$. Following the same logic, as $y$ appears nonlinearly on the right-hand side of the least-squares solution (20), the additional term appears in (21), which affects the parameter estimation skill. Therefore, uncertainty can play a significant role even in diagnosing a linear system, when nonlinearity appears in the diagnostics (the parameter estimation formula).
As a numerical illustration, assume enough data is generated from (18) with known true parameter values. As the variance of the estimated state of $y$ increases, the estimated parameters become less accurate, and the residual term in the regression increases, accounting for the effect of the input uncertainty.
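The bias mechanism can be imitated with a toy regression. This is a simplified, hypothetical setup in the spirit of (20)-(21), not the article's model (18): estimate $a$ in $\dot{y} = ax$ when the state $x$ is only known through a Gaussian distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, x_bar = 2.0, 1.0
dy = a_true * x_bar                  # noise-free "observation" of the derivative

a_hats = []
for sig in (0.0, 0.5, 1.0):          # increasing uncertainty in the state x
    x = x_bar + sig * rng.standard_normal(1_000_000)
    # Least-squares estimate averaged over the uncertain state; the denominator
    # picks up the variance, since <x^2> = x_bar^2 + sig^2 (cf. (13), (21))
    a_hats.append(np.mean(x * dy) / np.mean(x**2))

print(a_hats)   # approximately [2.0, 1.6, 1.0]: larger uncertainty, larger bias
```

The estimate shrinks by the factor $\bar{x}^2/(\bar{x}^2 + \sigma^2)$, showing how the variance of the estimated state degrades the parameter estimation skill even though the model itself is linear.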
In practice, expectation-maximization (EM) iterative algorithms can be applied to alternately update the estimated parameter values and recover the unobserved states of $y$.
Eddy identification
Let us now explore how simple UQ tools advance the study of realistic problems. Oceanic eddies are dynamic rotating structures in the ocean. The primary goal is to show that nonlinearity in the eddy diagnostic will make UQ play a crucial role. Providing a rigorous definition of an eddy and discussing the pros and cons of different eddy identification methods are not the main focus here. Mesoscale eddies are major drivers of the transport of momentum, heat, and mass, as well as biochemical and biomass transport and production in the ocean. The study of ocean eddies is increasingly important due to climate change and the vital role eddies play in the rapidly changing polar regions and the global climate system.
Due to the complex spatial and dynamical structure of eddies, there is no universal criterion for eddy identification. One widely used diagnostic is the Okubo-Weiss (OW) parameter,
$$\mathrm{OW} = s_n^2 + s_s^2 - \omega^2, \tag{22}$$
where the normal strain, the shear strain, and the relative vorticity are given by
$$s_n = u_x - v_y, \qquad s_s = v_x + u_y, \qquad \omega = v_x - u_y,$$
respectively, with $(u, v)$ the two-dimensional velocity field and the shorthand notation $u_x = \partial u/\partial x$, etc. When the OW parameter is negative, the relative vorticity dominates the strain components, indicating vortical flow. The OW parameter is an Eulerian quantity based solely on a snapshot of the ocean velocity field. There are other quantities to identify eddies, such as the Lagrangian descriptor.
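The OW parameter (22) is straightforward to evaluate on a gridded velocity field. The sketch below checks the sign convention on an idealized vortex and a pure strain field; the grid spacing and the two synthetic flows are illustrative choices.

```python
import numpy as np

def okubo_weiss(u, v, dx):
    """OW parameter (22) on a uniform grid via centered differences.
    Arrays are indexed [y, x]; np.gradient returns derivatives along each axis."""
    u_y, u_x = np.gradient(u, dx)
    v_y, v_x = np.gradient(v, dx)
    s_n = u_x - v_y        # normal strain
    s_s = v_x + u_y        # shear strain
    omega = v_x - u_y      # relative vorticity
    return s_n**2 + s_s**2 - omega**2

dx = 0.1
y, x = np.meshgrid(np.arange(-1, 1, dx), np.arange(-1, 1, dx), indexing="ij")

ow_vortex = okubo_weiss(-y, x, dx)   # solid-body rotation: an idealized eddy
ow_strain = okubo_weiss(x, -y, dx)   # pure strain field: no eddy
print(ow_vortex[10, 10], ow_strain[10, 10])
```

In the interior, the rotating flow gives OW = -4 (vorticity dominates, vortical) and the strain flow gives OW = +4 (strain dominates), matching the sign criterion described above.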
Figure 6. Eddy identification using the OW parameter. Panel (a): the OW parameter based on the true flow field. Panel (b): the OW parameter based on the estimated mean flow field from LaDA using a small number of observed tracers. Panel (c): the expected value of the OW parameter based on multiple flow fields sampled from the posterior distribution.

The use of these eddy identification diagnostics requires knowing the exact flow field. However, uncertainties may appear in state estimation, for example, in the marginal ice zone when the ocean is estimated via the LaDA using a limited number of ice floe trajectories. Eddy identification exploiting the mean estimate of the flow field can lead to large biases.
The contribution of the uncertainty to the OW parameter can be seen by applying the Reynolds decomposition to each component in (22), e.g., $\omega = \langle\omega\rangle + \omega'$. By sampling multiple realizations from the posterior distribution of the estimated flow field, the expectation of the OW parameter applied to each sampled flow field is given by
$$\langle\mathrm{OW}\rangle = \langle s_n\rangle^2 + \langle s_s\rangle^2 - \langle\omega\rangle^2 + \langle s_n'^2\rangle + \langle s_s'^2\rangle - \langle\omega'^2\rangle. \tag{23}$$
The first three terms on the right-hand side of (23) constitute the OW parameter applied to the estimated mean flow field. The remaining terms are all nonlinear (quadratic) functions of the fluctuation parts due to the uncertainty in the state estimation. In general, as most eddy identification criteria are nonlinear with respect to the flow field, as in (22), changing the order of taking the expectation and applying the eddy diagnostic criterion will significantly impact the results. Notably, even though (21) and (23) arise in entirely different contexts, they share the same essence: uncertainty plays a significant role when nonlinear terms appear in the diagnostics.
Panels (b)–(c) of Figure 6 numerically illustrate the difference between the OW parameter computed from the estimated mean flow field and the expected OW parameter averaged over posterior samples, where the flow field is inferred from LaDA using a small number of observed tracers. In the presence of such a large uncertainty, the main contribution to the estimated flow field comes from the fluctuation part, while the mean estimate has weak amplitudes. Consequently, very few eddies are identified using the mean flow field alone. The expected value of the OW parameter improves the diagnostic results. However, it is still far from the truth in Panel (a), due to the large uncertainty across the OW fields computed from different flow realizations sampled from the posterior distribution of the LaDA. This is again similar to the intrinsic inaccuracy in the parameter estimation problem (21) when uncertainty appears. Notably, collecting the OW values associated with different sampled flow realizations allows us to compute the PDF of the OW parameter, which provides a probabilistic view of eddy identification. Such a probabilistic eddy identification framework allows for assigning a probability of occurrence to each eddy, as well as PDFs of the lifetime and size of each eddy.
UQ in Advancing Efficient Modeling
The governing equations of many complex turbulent systems are given by nonlinear partial differential equations (PDEs). The spectral method remains one of the primary schemes for finding numerical solutions. For simplicity, consider a scalar field $u(x, t)$ that satisfies a PDE with quadratic nonlinearities, as in many fluid and geophysical problems. Assume Fourier basis functions $e^{ikx}$ are utilized, where $k$ is the wavenumber and $x$ is the spatial coordinate. Denote by $\hat{u}_k(t)$ the time series of the spectral mode associated with wavenumber $k$. Apply a finite-dimensional truncation to retain only the modes with $|k| \le K$. The resulting equation of $\hat{u}_k$ typically has the following form,
$$\frac{d\hat{u}_k}{dt} = (-d_k + i\omega_k)\hat{u}_k + f_k + \sum_{\substack{m + n = k\\ |m|,\,|n| \le K}} c_{k,m,n}\,\hat{u}_m\hat{u}_n, \tag{24}$$
where $K$ represents the resolution of the model, $i$ is the imaginary unit, $d_k$ and $\omega_k$ are both real, while $f_k$ is complex. The first term on the right-hand side of (24) is linear, representing the effect of damping/dissipation and phase rotation. The second term is a deterministic forcing. The last term sums over all the quadratic nonlinear interactions projected onto mode $k$. Typically, a large number of such nonlinear terms appear in the summation, which is one of the main computational costs in solving (24).
Figure 7. Comparison of the simulations from a nonlinear PDE model and a stochastic model, showing that the latter can reproduce the forecast statistics of the former. Panels (a) and (c): snapshots of the stream functions of the upper-layer ocean. Panels (b) and (d): time series of one Fourier mode. These are two different realizations, so there is no pointwise correspondence between them in the snapshots or time series.

Due to the turbulent nature of these systems, many practical tasks, such as statistical forecasting and DA, require obtaining a forecast distribution by repeatedly running the governing equation. Each run is already quite costly, so the ensemble forecast is usually computationally prohibitive. Therefore, developing an appropriate stochastic surrogate model is desirable, aiming to significantly reduce the computational cost. Stochasticity can mimic many of the features of turbulent systems. Since the goal is to reproduce the forecast statistics rather than a single individual forecast trajectory, once appropriate UQ is applied to guide the development of the stochastic surrogate model, it can reproduce the statistical features of the underlying nonlinear deterministic system.
In many applications, a large portion of the energy is explained by only a small number of the Fourier modes. Yet, the entire set of Fourier modes has to be solved together in a direct numerical simulation to guarantee numerical stability. Therefore, the reduction in computational cost from stochastic surrogate models is twofold. First, in the governing equation of each mode, the heavy computational burden of calculating the summation of a large number of nonlinear terms is replaced by computing a few much cheaper stochastic terms. Second, as the governing equations of the modes become independent when the nonlinear coupling is replaced by stochastic terms, only the leading few Fourier modes need to be retained in such an approximate stochastic system, saving a large amount of computational storage. This also allows a larger numerical integration time step, since stiffness usually comes from the governing equations of the small-scale modes. One simple stochastic model replaces all the nonlinear terms in the governing equation of each mode by a single white-noise stochastic term, which works well if the statistics of the mode are nearly Gaussian. The resulting stochastic model reads
$$\frac{d\hat{u}_k}{dt} = (-d_k + i\omega_k)\hat{u}_k + f_k + \sigma_k\dot{W}_k. \tag{25}$$
For systems with strong non-Gaussian statistics, other systematic methods can be applied to develop stochastic surrogate models.
UQ plays a crucial role in calibrating the stochastic surrogate model. Specifically, the parameters in the stochastic surrogate model are optimized so that the two models have the same forecast uncertainty, which is crucial for DA and ensemble prediction. This is often achieved by matching a set of key statistics of the two models, especially the equilibrium PDFs and the decorrelation time. For 25, simple closed analytic formulae are available for model calibration in reproducing the forecast uncertainty, allowing it to be widely used in many practical problems. See the online supplementary document for more details.
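A scalar version of this calibration can be sketched as follows: a real-valued analog of (25) with zero forcing, where the target statistics are made-up numbers. Matching a target equilibrium variance $E$ and decorrelation time $\tau$ fixes the damping $d = 1/\tau$ and the noise amplitude $\sigma = \sqrt{2E/\tau}$.

```python
import numpy as np

rng = np.random.default_rng(11)
E_target, tau = 0.5, 2.0             # target equilibrium variance, decorrelation time
d = 1.0 / tau                        # damping set by the decorrelation time
sigma = np.sqrt(2.0 * E_target / tau)  # equilibrium variance = sigma^2 / (2 d)

dt = 0.01
u = np.zeros(5000)                   # 5000 independent realizations, started at 0
for _ in range(2000):                # Euler-Maruyama up to t = 20 = 10 * tau
    u = u - dt * d * u + sigma * np.sqrt(dt) * rng.standard_normal(u.size)

print(np.var(u))                     # close to the target variance E_target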
Panel (a) of Figure 7 shows a snapshot of the stream function from a two-layer quasi-geostrophic (QG) model, which is given by a set of nonlinear PDEs. In comparison, Panel (c) shows a snapshot of the spatial field obtained by running a set of stochastic surrogate models, each calibrated by capturing the forecast uncertainty of one Fourier mode of the two-layer QG model.
Conclusions
This paper exploits simple examples to provide the basic concepts of UQ. It does not aim to provide a single, comprehensive definition, as various communities currently have different interpretations of UQ. Nevertheless, the information quantities (1) and (6) are natural measurements to quantify the uncertainty. As a developing field within applied mathematics and interdisciplinary research, UQ continues to evolve. One of the goals of this paper is to offer new insights into how these ideas can be applied across different fields, helping to reveal the commonalities and practical advantages of diverse approaches. As uncertainty is ubiquitous, incorporating UQ into analysis or strategic planning is essential to facilitate the understanding of nature and problem-solving in almost all disciplines. UQ has become an important tool in tackling observational data.
Supplementary document and codes availability
The arXiv version of this article includes a supplementary document (https://arxiv.org/abs/2408.01823), which contains a comprehensive tutorial of many numerical examples shown in this article and beyond. The codes for these examples, written in both MATLAB and Python, are available from GitHub at https://github.com/marandmath/UQ_tutorial_code.
Acknowledgments
Nan Chen and Marios Andreou gratefully acknowledge the support of the Office of Naval Research (ONR) N00014-24-1-2244 and the Army Research Office (ARO) W911NF-23-1-0118. Stephen Wiggins acknowledges the financial support provided by the EPSRC Grant No. EP/P021123/1 and the support of the William R. Davis '68 Chair in the Department of Mathematics at the United States Naval Academy. The authors thank Dr. Jeffrey Covington for helping with some of the figures.
References
- [ABN16] Mark Asch, Marc Bocquet, and Maëlle Nodet, Data assimilation: methods, algorithms, and applications, Fundamentals of Algorithms, vol. 11, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2016, DOI 10.1137/1.9781611974546.pt1. MR3602006
- [AJS08] Amit Apte, Christopher K. R. T. Jones, and A. M. Stuart, A Bayesian approach to Lagrangian data assimilation, Tellus A: Dynamic Meteorology and Oceanography 60 (2008), no. 2, 336–347.
- [BHTG21] Amy Braverman, Jonathan Hobbs, Joaquim Teixeira, and Michael Gunson, Post hoc uncertainty quantification for remote sensing observing systems, SIAM/ASA J. Uncertain. Quantif. 9 (2021), no. 3, 1064–1093, DOI 10.1137/19M1304283. MR4296767
- [CCW22] Jeffrey Covington, Nan Chen, and Monica M. Wilhelmus, Bridging gaps in the climate observation network: a physics-based nonlinear dynamical interpolation of Lagrangian ice floe measurements via data-driven stochastic models, Journal of Advances in Modeling Earth Systems 14 (2022), no. 9, e2022MS003218.
- [CCWL24] Jeffrey Covington, Nan Chen, Stephen Wiggins, and Evelyn Lunasin, Probabilistic eddy identification with uncertainty quantification, Preprint, arXiv:2405.12342, 2024.
- [Che23] Nan Chen, Stochastic methods for modeling and predicting complex dynamical systems: uncertainty quantification, state estimation, and reduced-order models, Synthesis Lectures on Mathematics and Statistics, Springer, Cham, 2023, DOI 10.1007/978-3-031-22249-8. MR4572946
- [CMT14] Nan Chen, Andrew J. Majda, and Xin T. Tong, Information barriers for noisy Lagrangian tracers in filtering random incompressible flows, Nonlinearity 27 (2014), no. 9, 2133–2163, DOI 10.1088/0951-7715/27/9/2133. MR3247074
- [CT06] Thomas M. Cover and Joy A. Thomas, Elements of information theory, 2nd ed., Wiley-Interscience, Hoboken, NJ, 2006. MR2239987
- [DS11] M. Dashti and A. M. Stuart, Uncertainty quantification and weak approximation of an elliptic inverse problem, SIAM J. Numer. Anal. 49 (2011), no. 6, 2524–2542, DOI 10.1137/100814664. MR2873245
- [Gar04] C. W. Gardiner, Handbook of stochastic methods for physics, chemistry and the natural sciences, 3rd ed., Springer Series in Synergetics, vol. 13, Springer-Verlag, Berlin, 2004, DOI 10.1007/978-3-662-05389-8. MR2053476
- [Gra11] Robert M. Gray, Entropy and information theory, Springer Science & Business Media, 2011.
- [Jef73] Harold Jeffreys, Scientific inference, Cambridge University Press, 1973.
- [Kal60] R. E. Kalman, A new approach to linear filtering and prediction problems, Trans. ASME Ser. D. J. Basic Engrg. 82 (1960), no. 1, 35–45. MR3931993
- [Lor63] Edward N. Lorenz, Deterministic nonperiodic flow, J. Atmospheric Sci. 20 (1963), no. 2, 130–141, DOI 10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2. MR4021434
- [LSZ15] Kody Law, Andrew Stuart, and Konstantinos Zygalakis, Data assimilation: a mathematical introduction, Texts in Applied Mathematics, vol. 62, Springer, Cham, 2015, DOI 10.1007/978-3-319-20325-6. MR3363508
- [MB12] Andrew J. Majda and Michal Branicki, Lessons in uncertainty quantification for turbulent dynamical systems, Discrete and Continuous Dynamical Systems 32 (2012), no. 9, 3133–3221.
- [MKC02] Andrew Majda, Richard Kleeman, and David Cai, A mathematical framework for quantifying predictability through relative entropy, Methods Appl. Anal. 9 (2002), no. 3, 425–444, DOI 10.4310/MAA.2002.v9.n3.a8. Special issue dedicated to Daniel W. Stroock and Srinivasa S. R. Varadhan on the occasion of their 60th birthday. MR2023134
- [MMQ19] Andrew J. Majda, M. N. J. Moore, and Di Qi, Statistical dynamical model to predict extreme events and anomalous features in shallow water waves with abrupt depth change, Proc. Natl. Acad. Sci. USA 116 (2019), no. 10, 3982–3987, DOI 10.1073/pnas.1820467116. MR3923854
- [MQ18] Andrew J. Majda and Di Qi, Strategies for reduced-order models for predicting the statistical responses and uncertainty quantification in complex turbulent dynamical systems, SIAM Rev. 60 (2018), no. 3, 491–549, DOI 10.1137/16M1104664. MR3841156
- [Mül06] Peter Müller, The equations of oceanic motions, Cambridge University Press, 2006.
- [Oku70] Akira Okubo, Horizontal dispersion of floatable particles in the vicinity of velocity singularities such as convergences, Deep-Sea Research and Oceanographic Abstracts 17 (1970), 445–454.
- [PRS13] Omiros Papaspiliopoulos, Gareth O. Roberts, and Osnat Stramer, Data augmentation for diffusions, J. Comput. Graph. Statist. 22 (2013), no. 3, 665–688, DOI 10.1080/10618600.2013.783484. MR3173736
- [Sha48] C. E. Shannon, A mathematical theory of communication, Bell System Tech. J. 27 (1948), 379–423, 623–656, DOI 10.1002/j.1538-7305.1948.tb01338.x. MR26286
- [VKGF16] Rahel Vortmeyer-Kley, Ulf Gräwe, and Ulrike Feudel, Detecting and tracking eddies in oceanic flow fields: a Lagrangian descriptor based on the modulus of vorticity, Nonlinear Processes in Geophysics 23 (2016), no. 4, 159–173.
- [Wei91] John Weiss, The dynamics of enstrophy transfer in two-dimensional hydrodynamics, Phys. D 48 (1991), no. 2-3, 273–294, DOI 10.1016/0167-2789(91)90088-Q. MR1102165
Credits
Figures 1–7 are courtesy of the authors.
Photo of Nan Chen is courtesy of Nan Chen.
Photo of Stephen Wiggins is courtesy of Stephen Wiggins.
Photo of Marios Andreou is courtesy of Marios Andreou.