Begell House Inc.
International Journal for Uncertainty Quantification (IJUQ)
ISSN 2152-5080
Volume 3, Issue 4, 2013
PRIOR AND POSTERIOR ROBUST STOCHASTIC PREDICTIONS FOR DYNAMICAL SYSTEMS USING PROBABILITY LOGIC
Pages 271-288, DOI: 10.1615/Int.J.UncertaintyQuantification.2012003641
James L. Beck, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, California 91125, USA
Alexandros Taflanidis, Department of Civil and Environmental Engineering and Earth Sciences, University of Notre Dame, 156 Fitzpatrick Hall, Notre Dame, IN 46556, USA
Keywords: dynamical systems, stochastic modeling, robust stochastic analysis, system identification, Bayesian updating, model class assessment, stochastic simulation
An overview is given of a powerful unifying probabilistic framework for treating modeling uncertainty, along with input uncertainty, when using dynamic models to predict the response of a system during its design or operation. This framework uses probability as a multivalued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the system's uncertain behavior are specified by the choice of a stochastic system model class: a set of input–output probability models for the system and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parametrized deterministic system model by stochastic embedding which utilizes Jaynes' principle of maximum information entropy. Robust predictive analyses use the entire model class with the probabilistic predictions of each model being weighted by its prior probability, or if response data are available, by its posterior probability from Bayes' theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates weighted by the prior or posterior probability of the model class, the latter being computed from Bayes' theorem. This higher-level application of Bayes' theorem automatically applies a quantitative Ockham razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically by Laplace's method of asymptotic approximation or by Markov chain Monte Carlo methods. These computational tools are demonstrated in an illustrative example involving the vertical dynamic response of a car being driven along a rough road.
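The model-class weighting described in this abstract can be sketched numerically. The snippet below (with hypothetical log-evidence and prediction values, not taken from the paper) shows how Bayes' theorem at the model-class level turns log-evidences and prior probabilities into posterior weights for a robust prediction:

```python
import numpy as np

# Hypothetical log-evidences p(D | M_j) for three candidate model classes
# (illustrative numbers only) and equal prior probabilities P(M_j).
log_evidence = np.array([-120.3, -118.7, -119.5])
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# Bayes' theorem at the model-class level: P(M_j | D) ∝ p(D | M_j) P(M_j).
# Work in log space and subtract the maximum for numerical stability.
log_post = log_evidence + np.log(prior)
log_post -= log_post.max()
posterior = np.exp(log_post)
posterior /= posterior.sum()

# Posterior-robust prediction: weight each model class's prediction
# by its posterior probability.
predictions = np.array([0.82, 0.91, 0.87])  # hypothetical per-class predictions
robust_prediction = posterior @ predictions
```

In a real application the log-evidences would come from Laplace asymptotics or stochastic simulation, as the abstract notes; the weighting step itself is exactly this soft-max over log-evidence plus log-prior.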
AN OVERVIEW OF INVERSE MATERIAL IDENTIFICATION WITHIN THE FRAMEWORKS OF DETERMINISTIC AND STOCHASTIC PARAMETER ESTIMATION
Pages 289-319, DOI: 10.1615/Int.J.UncertaintyQuantification.2012003668
Miguel A. Aguilo, Optimization and Uncertainty Quantification, Sandia National Laboratories, P.O. Box 5800, MS 1318, Albuquerque, New Mexico 87185-1320, USA
Laura P. Swiler, Optimization and Uncertainty Quantification Department, Center for Computing Research, Sandia National Laboratories, P.O. Box 5800, Albuquerque, New Mexico 87123-1320, USA
Angel Urbina, Optimization and Uncertainty Quantification, Sandia National Laboratories, P.O. Box 5800, MS 1318, Albuquerque, New Mexico 87185-1320, USA
Keywords: inverse problems, Bayesian calibration, maximum a posteriori estimate, error in constitutive equation, nonlinear least squares, regularization
This work investigates the problem of parameter estimation within the frameworks of deterministic and stochastic parameter estimation methods. For the deterministic methods, we look at constrained and unconstrained optimization approaches. For the constrained optimization approaches we study three different formulations: L2, the error in constitutive equation (ECE) method, and the modified error in constitutive equation (MECE) method. We investigate these formulations in the context of both Tikhonov and total variation (TV) regularization. The constrained optimization approaches are compared with an unconstrained nonlinear least-squares (NLLS) approach. In the least-squares framework we investigate three different formulations: standard, MECE, and ECE. With the stochastic methods, we first investigate Bayesian calibration, where we use Markov chain Monte Carlo (MCMC) methods to calculate the posterior parameter estimates. For the Bayesian methods, we investigate the use of a standard likelihood function, a likelihood function that incorporates MECE, and a likelihood function that incorporates ECE. Furthermore, we investigate the maximum a posteriori (MAP) approach, in which the parameters' full posterior distribution is not generated via sampling; instead, point estimates are computed by searching for the values that maximize the parameters' posterior distribution. Finally, to achieve dimension reduction in both the MCMC and NLLS approaches, we approximate the parameter field with radial basis functions (RBFs). This transforms the parameter estimation problem into one of determining the governing parameters of the RBFs.
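The RBF dimension-reduction idea in this abstract can be illustrated with a minimal sketch. All choices below (a 1D grid, Gaussian RBFs, the centers, width, and "true" field) are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Approximate a spatially varying parameter field kappa(x) by a constant
# plus a few Gaussian radial basis functions, so an inverse solver (MCMC
# or NLLS) only needs to estimate the small weight vector.
x = np.linspace(0.0, 1.0, 200)        # 1D spatial grid
centers = np.array([0.2, 0.5, 0.8])   # RBF centers (assumed)
width = 0.15                          # shared RBF width (assumed)

def rbf_basis(x, centers, width):
    """Gaussian RBF design matrix, shape (len(x), len(centers))."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Constant column plus RBF columns: 4 governing parameters in total.
Phi = np.column_stack([np.ones_like(x), rbf_basis(x, centers, width)])

# A "true" field and its least-squares projection onto the reduced basis;
# the weight vector is what the estimation problem reduces to.
kappa_true = 1.0 + 0.5 * np.sin(2 * np.pi * x)
weights, *_ = np.linalg.lstsq(Phi, kappa_true, rcond=None)
kappa_approx = Phi @ weights          # 200 field values from 4 parameters
```

The 200-dimensional field estimation problem becomes a 4-dimensional one, which is what makes MCMC sampling over the parameter field tractable.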
EFFICIENT NUMERICAL METHODS FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS THROUGH TRANSFORMATION TO EQUATIONS DRIVEN BY CORRELATED NOISE
Pages 321-339, DOI: 10.1615/Int.J.UncertaintyQuantification.2012003670
Ju Ming, Department of Scientific Computing, Florida State University, Tallahassee, Florida 32306-4120
Max Gunzburger, Department of Scientific Computing, Florida State University, Tallahassee, Florida 32306-4120
Keywords: finite element methods, Monte Carlo method, Karhunen-Loève expansion, Smolyak quadrature rule
A procedure is provided for the efficient approximation of solutions of a broad class of stochastic partial differential equations (SPDEs), that is, partial differential equations driven by additive white noise. The first step is to transform the given SPDE into an equivalent SPDE driven by a correlated random process, specifically, the Ornstein-Uhlenbeck process. This allows for the use of truncated Karhunen-Loève expansions and sparse-grid methods for the efficient and accurate approximation of the input stochastic process in terms of few random variables. Details of the procedure are given and its efficacy is demonstrated through computational experiments involving the stochastic heat equation and the stochastic Navier-Stokes equations.
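The key enabler mentioned in this abstract, a truncated Karhunen-Loève expansion of a correlated process, can be sketched in a few lines. The discrete approach below (eigendecomposition of the covariance matrix on a grid, with illustrative parameter values) stands in for the paper's continuous construction:

```python
import numpy as np

# Discrete Karhunen-Loève expansion of a stationary Ornstein-Uhlenbeck
# process on [0, 1]; theta and sigma are illustrative.
theta, sigma = 1.0, 1.0
t = np.linspace(0.0, 1.0, 128)
# Stationary OU covariance: C(s, t) = sigma^2 / (2 theta) * exp(-theta |s - t|)
C = sigma**2 / (2 * theta) * np.exp(-theta * np.abs(t[:, None] - t[None, :]))

# Eigendecomposition gives the discrete KL modes; keep the M largest.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

M = 10  # truncation level: a few random variables capture most variance
rng = np.random.default_rng(0)
xi = rng.standard_normal(M)
# Truncated KL realization: sum_k sqrt(lambda_k) * xi_k * phi_k(t)
sample_path = eigvecs[:, :M] @ (np.sqrt(eigvals[:M]) * xi)

captured = eigvals[:M].sum() / eigvals.sum()  # fraction of variance retained
```

Because the OU covariance decays smoothly, the eigenvalues fall off quickly and a small M retains nearly all of the variance, which is exactly why the transformation from white to correlated noise pays off.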
AN ENSEMBLE KALMAN FILTER USING THE CONJUGATE GRADIENT SAMPLER
Pages 357-370, DOI: 10.1615/Int.J.UncertaintyQuantification.2012003889
Johnathan M. Bardsley, Department of Mathematical Sciences, The University of Montana, Missoula, Montana 59812-0864, USA
Antti Solonen, Laboratory of Applied Mathematics, Lappeenranta University of Technology, Finland
Albert Parker, Center for Biofilm Engineering, Montana State University, Bozeman, Montana 59717, USA
Heikki Haario, Department of Computational and Process Engineering, Lappeenranta University of Technology, Lappeenranta, Finland; Earth Observation Research, Finnish Meteorological Institute, Helsinki 00560, Finland
Marylesa Howard, Department of Mathematical Sciences, University of Montana, Missoula, Montana 59812
Keywords: ensemble Kalman filter, data assimilation, conjugate gradient iteration, conjugate gradient sampler
The ensemble Kalman filter (EnKF) is a technique for dynamic state estimation. The EnKF approximates the standard extended Kalman filter (EKF) by creating an ensemble of model states whose mean and empirical covariance are then used within the EKF formulas. The technique has a number of advantages for large-scale, nonlinear problems. First, large-scale covariance matrices required within the EKF are replaced by low-rank and low-storage approximations, making implementation of the EnKF more efficient. Moreover, for a nonlinear state space model, implementation of the EKF requires the associated tangent linear and adjoint codes, while implementation of the EnKF does not. However, for the EnKF to be effective, the choice of the ensemble members is extremely important. In this paper, we show how to use the conjugate gradient (CG) method, and the recently introduced CG sampler, to create the ensemble members at each filtering step. This requires the use of a variational formulation of the EKF. The effectiveness of the method is demonstrated on both a large-scale linear problem and a small-scale, nonlinear, chaotic problem. In our examples, the CG-EnKF performs better than the standard EnKF, especially when the ensemble size is small.
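The EnKF mechanism this abstract describes, using the ensemble mean and empirical covariance inside the Kalman formulas, can be sketched for a single analysis step. The snippet below is a generic stochastic-EnKF update with made-up dimensions and data, not the paper's CG-based ensemble construction:

```python
import numpy as np

# One EnKF analysis step for a linear observation operator H: the ensemble's
# empirical mean and covariance stand in for the Kalman filter's state
# estimate and covariance. Dimensions and data are illustrative.
rng = np.random.default_rng(1)
n, m, N = 4, 2, 50                 # state dim, observation dim, ensemble size

H = rng.standard_normal((m, n))    # observation operator (assumed linear)
R = 0.1 * np.eye(m)                # observation-error covariance
y = rng.standard_normal(m)         # observed data

X = rng.standard_normal((n, N))    # forecast ensemble (columns = members)
x_mean = X.mean(axis=1, keepdims=True)
A = X - x_mean                     # ensemble anomalies
P = A @ A.T / (N - 1)              # low-rank empirical covariance

# Kalman gain built from the empirical covariance.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against perturbed observations (stochastic EnKF).
Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
X_a = X + K @ (Y_pert - H @ X)
```

The storage advantage the abstract mentions is visible here: P never needs to be formed explicitly in large problems, since every product with P can be computed through the n-by-N anomaly matrix A.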
STATISTICAL SURROGATE MODELS FOR PREDICTION OF HIGH-CONSEQUENCE CLIMATE CHANGE
Pages 341-355, DOI: 10.1615/Int.J.UncertaintyQuantification.2012003829
Richard V. Field Jr., Sandia National Laboratories, Albuquerque, New Mexico 87185, USA
Paul Constantine, Colorado School of Mines
M. Boslough, Sandia National Laboratories, Albuquerque, New Mexico 87185, USA
Keywords: Bayesian analysis, climate model, Karhunen-Loève expansion, non-Gaussian random field, risk analysis
In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. An SSM is different from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a general circulation model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.
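The core computational point of this abstract, that a cheap surrogate makes brute-force tail sampling feasible, can be sketched in miniature. The distribution, threshold, and sample count below are all assumed for illustration and have nothing to do with the paper's fitted SSM:

```python
import numpy as np

# Estimate a low-probability exceedance by drawing many independent
# realizations from a (hypothetical) fitted scalar surrogate; for a real
# SSM each draw would be a space/time field, not a single number.
rng = np.random.default_rng(42)

mu, sigma = 1.2, 0.4        # surrogate's fitted mean and std (assumed)
threshold = 2.5             # "high-consequence" exceedance level (assumed)
n_samples = 1_000_000       # cheap for a surrogate, infeasible for a GCM

samples = rng.normal(mu, sigma, size=n_samples)
p_exceed = np.mean(samples > threshold)

# Monte Carlo standard error of the tail-probability estimate.
se = np.sqrt(p_exceed * (1 - p_exceed) / n_samples)
```

A million independent GCM runs is out of the question, but a million surrogate draws costs milliseconds, which is the feasibility argument the abstract makes for tail-risk quantification.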