Begell House
International Journal for Uncertainty Quantification
2152-5080
5
5
2015
HIGH DIMENSIONAL SENSITIVITY ANALYSIS USING SURROGATE MODELING AND HIGH DIMENSIONAL MODEL REPRESENTATION
In this paper, a new non-intrusive method for the propagation of uncertainty and for sensitivity analysis is presented. The method is based on the cut-HDMR (high dimensional model representation) approach, which is derived here in a different way, leading to new conclusions. The cut-HDMR approach decomposes the stochastic space into sub-domains, which are interpolated separately via a selected interpolation technique. This dramatically reduces the number of samples required for high dimensional spaces and mitigates the curse of dimensionality. The proposed non-intrusive method couples an interpolation technique with the cut-HDMR approach. The conclusions obtained from the new derivation allow each stochastic domain to be interpolated separately, including all stochastic variables and the interactions between them. Moreover, the same conclusions allow non-important stochastic domains to be neglected, drastically reducing the number of samples needed to detect and interpolate the higher order interactions. A new sampling strategy is introduced, which is based on a tensor product but uses the idea of the Smolyak sparse grid for higher order domains. In this work, the multi-dimensional Lagrange interpolation technique is selected and applied to all parts of the cut-HDMR approach; however, the nature of the method allows a combination of various interpolation techniques to be used. The sensitivity analysis is performed on the surrogate model using Monte Carlo sampling. Sobol's approach is followed, and sensitivity indices are established for each variable and interaction. Moreover, the separate surrogate models allow the uncertainty in the high dimensional space to be visualized via histograms; using a histogram for each stochastic domain establishes the full statistical properties of that domain.
This helps the user to better understand the propagation of uncertainty for the model of interest. The proposed interpolation technique and sensitivity analysis approach are tested on a simple example and applied to the well-known Borehole problem. Results of the proposed method are compared to Monte Carlo sampling using the mean value and the standard deviation. Results of the sensitivity analysis of the Borehole case are compared to literature results, and the statistical visualization of each variable is provided.
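As a rough sketch of the first-order part of such a decomposition (hypothetical Python with a made-up separable test function, not the paper's cases), each first-order component is sampled along a line through the cut point while all other variables are held at the cut point, and then interpolated with a 1D Lagrange polynomial:

```python
# First-order cut-HDMR sketch: f(x) ~ f0 + sum_i f_i(x_i),
# where f_i is sampled along coordinate i through the cut point c.

def f(x):  # hypothetical additively separable test function
    return 1.0 + sum((i + 1) * xi**2 for i, xi in enumerate(x))

dim = 4
cut = [0.5] * dim          # anchor ("cut") point
f0 = f(cut)                # zeroth-order term

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]

def line_samples(i):
    """Sample f_i(t) = f(c with x_i = t) - f0 along coordinate i."""
    vals = []
    for t in nodes:
        x = list(cut)
        x[i] = t
        vals.append(f(x) - f0)
    return vals

def lagrange(t, xs, ys):
    """Evaluate the 1D Lagrange interpolant through (xs, ys) at t."""
    s = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        w = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                w *= (t - xk) / (xj - xk)
        s += yj * w
    return s

components = [line_samples(i) for i in range(dim)]

def surrogate(x):
    return f0 + sum(lagrange(x[i], nodes, components[i]) for i in range(dim))

x_test = [0.1, 0.9, 0.3, 0.7]
print(f(x_test), surrogate(x_test))
```

Because this test function is additively separable, the first-order expansion is already exact; in general the higher order terms the abstract describes capture the remaining interactions between variables.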
Martin
Kubicek
University of Strathclyde, James Weir Building, 75 Montrose Street, Glasgow G1 1XJ, Scotland, United Kingdom
Edmondo
Minisci
University of Strathclyde, James Weir Building, 75 Montrose Street, Glasgow G1 1XJ, Scotland, United Kingdom
Marco
Cisternino
OPTIMAD Engineering s.r.l., Via Giacinto Collegno 18, 10143 Torino, Italy
393-414
A POSTERIORI ERROR ESTIMATION FOR A CUT CELL FINITE VOLUME METHOD WITH UNCERTAIN INTERFACE LOCATION
We study a simple diffusive process in which the diffusivity is discontinuous across an interface interior to the domain. In many situations, the location of the interface is measured at a small number of locations, and these measurements contain error. Thus the location of the interface, and the solution itself, are subject to uncertainty. Further, the location of the interface may have a strong impact on the accuracy of the solution. A Monte Carlo approach is employed that requires solving a large number of sample problems, each with a different interface location. To solve these problems, a mixed finite element cut-cell method has been developed that does not require the mesh to conform to the interface. An efficient adjoint-based a posteriori technique is used to estimate the error in a quantity of interest for each sample problem. This error has a component due to the numerical approximation of the diffusive process and a component arising from the uncertainty in the interface location. Recognizing these separate sources of error is necessary in order to construct effective adaptivity strategies.
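The adjoint-weighted residual identity underlying such a posteriori estimates can be illustrated in miniature (a tiny dense linear system in Python, not the authors' finite volume setting): for A u = f and a quantity of interest q(u) = g·u, the error in q(u_h) equals phi·r, where A^T phi = g and r = f - A u_h is the residual.

```python
# Adjoint-based error estimation on a 2x2 linear system (illustrative only).

A = [[4.0, 1.0], [2.0, 3.0]]
f = [1.0, 2.0]
g = [1.0, 0.0]          # quantity of interest: first solution component

def solve2(M, b):
    """Solve a 2x2 system by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - b[1]*M[0][1]) / det,
            (M[0][0]*b[1] - M[1][0]*b[0]) / det]

u = solve2(A, f)                       # exact solution
u_h = [u[0] + 0.05, u[1] - 0.03]       # a perturbed "numerical" solution

At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
phi = solve2(At, g)                    # adjoint solution, A^T phi = g
r = [f[i] - sum(A[i][j]*u_h[j] for j in range(2)) for i in range(2)]

est = sum(phi[i]*r[i] for i in range(2))             # adjoint-weighted residual
true_err = sum(g[i]*(u[i]-u_h[i]) for i in range(2))  # actual error in the QoI
print(est, true_err)   # identical for a linear problem
```

For a linear problem the identity is exact: q(u) - q(u_h) = (A^T phi)·(u - u_h) = phi·(f - A u_h); for nonlinear problems and discretization error it becomes an estimate.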
J. B.
Collins
Department of Mathematics, Chemistry, and Physics, West Texas A&M University, Canyon, Texas 79016, USA
Donald
Estep
Department of Statistics, Colorado State University, Fort Collins, Colorado 80523-1877, USA
Simon
Tavener
Department of Mathematics, Colorado State University, Fort Collins, Colorado 80523, USA
415-432
AN UNCERTAINTY VISUALIZATION TECHNIQUE USING POSSIBILITY THEORY: POSSIBILISTIC MARCHING CUBES
This paper opens the discussion of using fuzzy measure theory for isocontour/isosurface extraction in the field of uncertainty visualization. Specifically, we propose an uncertain marching cubes algorithm in the framework of possibility theory, called possibilistic marching cubes. The proposed algorithm uses the dual measures, possibility and necessity, to represent the uncertainty in the spatial location of the isocontour/isosurface, which is propagated from the uncertainty in the ensemble data. In addition, a novel parametric way of constructing marginal possibility distributions is proposed so that the epistemic uncertainty due to the limited size of the ensemble is taken into account. The effectiveness of the proposed possibilistic marching cubes algorithm is demonstrated using 2D and 3D examples.
Yanyan
He
Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112, USA
Mahsa
Mirzargar
Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112, USA
Sophia
Hudson
Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112, USA
Robert M.
Kirby
Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112, USA; School of Computing, University of Utah, Salt Lake City, Utah 84112, USA
Ross T.
Whitaker
Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah 84112, USA; School of Computing, University of Utah, Salt Lake City, Utah 84112, USA
433-451
A GRADIENT-ENHANCED SPARSE GRID ALGORITHM FOR UNCERTAINTY QUANTIFICATION
Adjoint-based gradient information has been successfully incorporated to create surrogate models of the outputs of expensive computer codes. Exploiting these surrogates offers the possibility of uncertainty quantification, optimization, and parameter estimation at reduced computational cost. At present, the most common choice of surrogate method that incorporates gradient information is gradient-enhanced Kriging (GEK). As a competitor, we develop a novel method: gradient-enhanced sparse grid interpolation. Results for two test functions, the Rosenbrock function and a test function based on the drag of a transonic airfoil with random shape deformations, show that gradient-enhanced sparse grid interpolation is a reliable surrogate that can incorporate gradient information efficiently for high-dimensional problems.
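One building block of gradient-enhanced interpolation is a 1D rule that matches both function values and derivatives. The sketch below (illustrative Python, not the authors' sparse-grid construction) uses cubic Hermite interpolation on a single interval, which reproduces any cubic exactly from two values and two slopes:

```python
# Cubic Hermite interpolation: uses values AND derivatives at the endpoints.

def hermite(t, x0, x1, f0, f1, d0, d1):
    """Cubic interpolant on [x0, x1] matching values f0, f1 and slopes d0, d1."""
    h = x1 - x0
    s = (t - x0) / h
    h00 = 2*s**3 - 3*s**2 + 1      # standard Hermite basis polynomials
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*f0 + h*h10*d0 + h01*f1 + h*h11*d1

f = lambda x: x**3 - x            # a cubic test function and its derivative
df = lambda x: 3*x**2 - 1

val = hermite(0.3, 0.0, 1.0, f(0.0), f(1.0), df(0.0), df(1.0))
print(val, f(0.3))                # the two values agree exactly
```

The practical payoff is that each (cheap) adjoint gradient evaluation substitutes for extra function samples, which is what makes the gradient-enhanced approach attractive in high dimensions.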
Jouke H. S.
de Baar
Mathematical Sciences Institute, Australian National University, John Dedman Building 27, Union Lane, Canberra, ACT, 2601, Australia
Brendan
Harding
Mathematical Sciences Institute, Australian National University, John Dedman Building 27, Union Lane, Canberra, ACT, 2601, Australia
453-468
RANDOM PREDICTOR MODELS FOR RIGOROUS UNCERTAINTY QUANTIFICATION
This paper proposes techniques for constructing linear parametric models that describe key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, random predictor models (RPMs) yield a random variable. The strategies proposed yield models in which the mean, the variance, and the range of the model's parameters, and thus of the random process describing the output, are rigorously prescribed. As such, these strategies encompass all RPMs conforming to the prescription of these metrics (e.g., random variables and probability boxes describing the model's parameters, and random processes describing the output). Strategies for calculating optimal RPMs by solving a sequence of optimization programs are developed. The RPMs are optimal in the sense that they yield the tightest output ranges containing all (or, depending on the formulation, most) of the observations. Extensions that eliminate the effects of outliers in the data set are also developed. When the data-generating mechanism is stationary, the data are independent, and the optimization program(s) used to calculate the RPM are convex (or their solutions coincide with the solutions of auxiliary convex programs), the reliability of the prediction, i.e., the probability that a future observation will fall within the predicted output range, is bounded rigorously using scenario optimization theory. This framework does not require making any assumptions on the underlying structure of the data-generating mechanism.
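The "tightest output range containing all observations" idea can be sketched for the simplest case (illustrative Python on synthetic data; the paper poses such problems as formal optimization programs, here replaced by a brute-force scan over one parameter): for a model y = b*x + a with uncertain intercept a, the tightest interval for a at a fixed slope b is [min(y_i - b*x_i), max(y_i - b*x_i)], and the slope is chosen to minimize that interval's width.

```python
import random

# A minimal interval-predictor sketch on synthetic data.
random.seed(0)
xs = [i / 19 for i in range(20)]
ys = [2.0 * x + 0.5 + random.uniform(-0.2, 0.2) for x in xs]

def band(b):
    """Tightest intercept interval enclosing every observation at slope b."""
    res = [y - b * x for x, y in zip(xs, ys)]
    return min(res), max(res)

# Scan slopes in [0, 4] and keep the one giving the narrowest band:
width, b_opt = min(((band(b)[1] - band(b)[0]), b)
                   for b in (k / 100 for k in range(401)))
a_lo, a_hi = band(b_opt)

# Every observation lies inside the band by construction:
assert all(a_lo - 1e-12 <= y - b_opt * x <= a_hi + 1e-12
           for x, y in zip(xs, ys))
print(b_opt, round(width, 3))
```

Predictions then take the form of the interval [b_opt*x + a_lo, b_opt*x + a_hi]; scenario optimization theory is what lets the paper bound the probability that a future observation escapes such a data-enclosing band.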
Luis G.
Crespo
NASA Langley Research Center, MS 308, Hampton, Virginia 23681, USA
Sean P.
Kenny
NASA Langley Research Center, MS 308, Hampton, Virginia 23681, USA
Daniel P.
Giesy
NASA Langley Research Center, MS 308, Hampton, Virginia 23681, USA
469-489