International Journal for Uncertainty Quantification

ISSN Print: 2152-5080
ISSN Online: 2152-5099

Open Access


DOI: 10.1615/Int.J.UncertaintyQuantification.v1.i2.30
pages 147-161


Bin Liang

Department of Civil and Environmental Engineering, Vanderbilt University, Nashville, TN 37235

Sankaran Mahadevan

Department of Civil and Environmental Engineering, Vanderbilt University, Nashville, TN 37235


Multiple sources of errors and uncertainty arise in mechanics computational models and contribute to the uncertainty in the final model prediction. This paper develops a systematic error quantification methodology for computational models. Some types of errors are deterministic, and some are stochastic. Appropriate procedures are developed to either correct the model prediction for deterministic errors or to account for the stochastic errors through sampling. First, input error, discretization error in finite element analysis (FEA), surrogate model error, and output measurement error are considered. Next, uncertainty quantification error, which arises due to the use of sampling-based methods, is also investigated. Model form error is estimated based on the comparison of corrected model prediction against physical observations and after accounting for solution approximation errors, uncertainty quantification errors, and experimental errors (input and output). Both local and global sensitivity measures are investigated to estimate and rank the contribution of each source of error to the uncertainty in the final result. Two numerical examples are used to demonstrate the proposed methodology by considering mechanical stress analysis and fatigue crack growth analysis.

KEYWORDS: error quantification, uncertainty quantification, sensitivity analysis, finite elements, discretization, surrogate model


1. Introduction

Computational models are widely used by engineers to capture the behavior of physical systems. For large systems, computational models are usually constructed based on sparse experimental data, sometimes even with no full-scale experiments. As a result, errors arise due to lack of data and lack of knowledge about system behavior. Numerical approaches used to solve the model equations also produce errors due to various assumptions and approximations. Natural variability in many physical variables, and data uncertainty due to sparse data and measurement errors, add further uncertainty in the model prediction.

The motivation of this paper is to develop a methodology that provides quantitative information regarding the relative contribution of various sources of error and uncertainty to the overall model prediction uncertainty. Such information can guide decisions regarding model improvement (e.g., model refinement, additional data collection) so as to enhance both accuracy and confidence in the prediction. The information sought is in the form of rankings of the various errors and uncertainty sources that contribute to the model prediction uncertainty. It is more advantageous to spend resources toward reducing an error with a higher ranking than one with a lower ranking. The rankings are based on systematic sensitivity analysis, which is possible only after quantifying the effect of each error source on the model prediction uncertainty.

The error in a computational model prediction consists of two parts: model form error (ϵmodel) and solution approximation error or numerical error (ϵnum) [1-4]. The model form error depends on whether the selected model correctly represents the real phenomenon (e.g., small deformation versus large deformation model, linear elastic versus elastoplastic model, or Euler versus Navier-Stokes equation). The solution approximation error arises when numerically solving the model equations. In other words, the model form error is related to the question—Did I choose the correct equation?—which is answered using validation experiments, while the solution approximation error is related to the question—Did I solve the equation correctly?—which is answered through verification studies.

Mathematical theory and methods have been discussed in [1] for quantifying numerical error and model form error in computational mechanics models, but these methods require access to the original partial differential equations (PDEs) of the system. A simplified approach to error quantification using a commercial computational code as a black box has been developed in [5]. If we denote ypred and ytrue as model prediction and the true response of the physical system, respectively, then we have

ytrue = ypred + ϵnum + ϵmodel    (1)

Note that the numerical error depends on the choice of the model form; thus, the two errors are not independent. The numerical error ϵnum is a nonlinear combination of various components [5]. This paper first considers three typical numerical error components and their quantification and combination, including input error, discretization error in FEA, and surrogate model error.

Assume yc is the model prediction corrected for the numerical error sources, and ypred is the raw model prediction. Besides yc, in order to quantify the model form error, observed data (yobs) are needed. However, there is a difference between yobs and ytrue, which is called output measurement error (ϵom). Thus, we have

yobs = ytrue + ϵom    (2)

Model form error can be quantified based on Eqs. (1) and (2). Implementation details are discussed in Section 4.

One concern in this paper is how to obtain a model prediction yc corrected for numerical error sources. Among all errors, some errors are stochastic, such as input error and surrogate model error, and some errors are deterministic, such as discretization error in FEA. In this paper, a simple but efficient approach is developed to obtain yc. The basic idea is to quantify and correct for each error where it arises. Stochastic error is corrected for by adding its randomly sampled values to the original result. Deterministic error is corrected for by directly adding it to the corresponding result. For example, to correct for the discretization error, every time a particular FEA result is obtained, the corresponding discretization error is calculated, added to the original result, and the corrected FEA result is used in further computation to obtain yc.

In addition to the model form and solution approximation errors mentioned above, another error arises due to Monte Carlo sampling used in the error quantification procedure itself. This error is referred to here as uncertainty quantification (UQ) error. For example, when estimating the cumulative distribution function (CDF) of a random variable from sparse data, there is error in the CDF value, and methods to quantify this UQ error are available in [6]. If more samples are then generated by the inverse CDF method using the CDF estimated from sparse data, the UQ error is propagated as sampling error to the newly generated samples. An approach is developed in Section 3 to quantify this sampling error. This method is particularly useful in quantifying model form error (Section 4).

After a probabilistic framework to manage all sources of uncertainty and error is established, sensitivity analyses are performed in Section 5 to assess the contribution of each source of uncertainty and error to the overall uncertainty in the corrected model prediction. The sensitivity analysis result can be used to guide resource allocation for different activities, such as model fidelity improvement, data collection, etc., according to the importance ranking of errors, in order to trade off between accuracy and computational/experimental effort. The proposed methods are demonstrated using two numerical examples in Section 6.

The contributions of this paper can be summarized as follows:

  1. A systematic methodology for error and uncertainty quantification and propagation in computational mechanics models is developed. Previous literature has developed methods to quantify the discretization error and to propagate input randomness through computational models. However, the combination of various error and uncertainty sources is not straightforward: some are additive, some multiplicative, some nonlinear, and some even nested. Also, some errors are deterministic and some are stochastic. The methodology in this paper provides a template to track the propagation of various error and uncertainty sources through the computational model.
  2. The error and uncertainty quantification methodology itself has an error due to the use of a limited number of samples when large finite element models are used; therefore, this UQ error is also quantified. A methodology is proposed in this paper to quantify the propagation of this UQ error through further calculations in model prediction and assessment; it is used in the quantification of model form error.
  3. Sensitivity analysis methods are developed to identify the contribution of each error and uncertainty source to the overall uncertainty in the model prediction. Previous literature in global sensitivity analysis has only considered the effect of input random variables, and this paper extends the methodology to include data uncertainty and model errors. The sensitivity information is helpful in identifying the dominant contributors to model prediction uncertainty and in guiding resource allocation for model improvement. The sensitivity analysis method is further enhanced to compare the contributions of both deterministic and stochastic errors on the same level, in order to facilitate model improvement decision making.


2. Solution Approximation Errors

Three typical sources of numerical error (i.e., solution approximation error) and their quantification methods are discussed in this section, namely, input error, discretization error in FEA, and surrogate model error. In this paper, to quantify the numerical error, instead of comparing the original model prediction ypred with known solutions, it is compared to the corrected model prediction yc. Generally, if the error is deterministic, then it will be corrected by adding it to its corresponding quantity; if the error is stochastic, then it will be accounted for by including its uncertainty in the output through sampling. After the corrected model prediction is obtained, the numerical error can be estimated as

ϵnum = yc − ypred    (3)

The raw model prediction ypred is either deterministic or stochastic, depending on the model inputs. However, yc and ϵnum are random variables even for deterministic input, if both stochastic and deterministic error sources contribute to the overall numerical error and corrected model prediction.

2.1 Input Error (ϵin)

In a validation exercise, the inputs to the computational model should ideally have the same values as those to the physical system. However, the inputs to the physical system model are subject to experimental variability and measurement error; therefore, there is a discrepancy between what is measured or reported as the input to the physical system and its actual value. The use of this measured value in the computational model gives rise to an error in the model prediction, because the model and the actual system had different inputs.

If no prior information is available, then the input errors are represented by random variables based on knowledge about experimental variability, the measurement process, and instrument errors. Usually, normal distributions with a mean value of zero or a constant bias are assumed. The input error can be accounted for by including its uncertainty in the final model output (i.e., by propagating the randomness of the input error through the computational model). Suppose that a computational model has the form ypred = f(x1, x2, ..., xm), in which x1, x2, ..., xm are the model inputs and ypred is the model prediction. Then, the model output that accounts for the input errors is

yc = f(x1 + ϵ1, x2 + ϵ2, ..., xm + ϵm)    (4)

where ϵ1, ϵ2, ..., ϵm are the (stochastic) input errors of the respective input variables. Equation (4) is evaluated through Monte Carlo sampling of the error terms.
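This Monte Carlo evaluation of Eq. (4) can be sketched in a few lines. The two-input model f and the error standard deviations below are hypothetical stand-ins, assumed only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model y = f(x1, x2): a simple beam-deflection-style response.
def f(x1, x2):
    return x1 * x2**3 / 48.0

n = 100_000
x1, x2 = 2.0, 1.5                      # measured (reported) input values

# Stochastic input errors, assumed zero-mean normal (Section 2.1).
eps1 = rng.normal(0.0, 0.05, n)
eps2 = rng.normal(0.0, 0.02, n)

# Eq. (4): propagate the input-error randomness through the model.
y = f(x1 + eps1, x2 + eps2)
print(y.mean(), y.std())
```

The spread of `y` is the contribution of input error to the prediction uncertainty; the sample values of `y` would be carried forward into the succeeding analyses.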

2.2 Discretization Error (ϵh)

Many engineering problems involve solving differential equations numerically using a discretization method, such as the finite difference or finite element methods. The approximation error due to coarse discretization of the domain is denoted as discretization error. Different methods have been proposed for a posteriori error estimation in boundary value problems in mechanics, including explicit error estimation methods [7], element residual methods [8], recovery based methods [9], and extrapolation methods [10]. A comprehensive review of a posteriori error estimation is given by Ainsworth and Oden [11]. But most methods provide relative error and are only useful for adaptive mesh refinement, not for quantifying the actual error [4]. A convenient method to approximate the actual error is the Richardson extrapolation [12-15], especially when a commercial finite element code is used.

In this paper, the Richardson extrapolation method is employed to quantify the discretization error for the sake of illustration. The emphasis of this paper is more on error combination than on methods to quantify individual errors; therefore, other preferred methods can also be used in the proposed overall framework. The following concerns should be noted while using the Richardson extrapolation method: (i) it performs well for one-dimensional cases, but is less reliable for higher dimensions; (ii) it was first developed for the finite difference method, and its result is less credible for the finite element method; and (iii) it requires the model result to be monotonically convergent as the mesh is refined, while in many problems the result converges with fluctuation.

In the Richardson extrapolation method, in order to quantify the discretization error ϵh in an FEA result φ1 with mesh size h1, two more FEA runs are needed to obtain φ2 with a finer mesh h2 and φ3 with the finest mesh h3. Then, the discretization error ϵh is given by ϵh = (φ1 - φ2)/(rp - 1). As a result, the exact solution can be expressed as

yexact = φ1 + ϵh = φ1 + (φ1 − φ2)/(r^p − 1)    (5)

Here, r = h2/h1 = h3/h2 is the mesh refinement ratio, and the order of convergence p can be estimated as p = ln[(φ3 − φ2)/(φ2 − φ1)]/ln r.

As illustration, consider the deflection of a slender cantilever beam with a tip load F = 3.92 N at the free end. The length of the beam is L = 30 cm. The beam has a rectangular cross section of width b = 3.04 cm and height h = 0.0078 cm. The Young’s modulus is 200 GPa, and Poisson’s ratio is 0.3. Ignoring the self-weight of the beam, the theoretical result for the deflection at the free end is available in [16] as yth = -12.16 cm. This theoretical result has also been experimentally validated in [16].

A finite element model is created for the beam in ANSYS, including a large deflection effect. The problem is solved with three mesh sizes (7.5, 3.75, and 1.875 cm) and the corresponding deflections are 12.22, 12.17, and 12.16 cm, respectively. The Richardson extrapolation method is used to estimate the exact solution. The mesh refinement ratio r and the order of convergence p are calculated to be 0.5 and 2.32, respectively. The estimated exact solution is calculated using Eq. (5) to be -12.16 cm, which matches the theoretical solution, thus illustrating the efficacy and accuracy of the Richardson extrapolation method for quantifying discretization error.
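The three-mesh calculation above can be verified in a few lines; the deflection magnitudes are those reported in this section:

```python
import math

# FEA tip deflection magnitudes (cm) for mesh sizes 7.5, 3.75, 1.875 cm.
phi1, phi2, phi3 = 12.22, 12.17, 12.16
r = 0.5                                  # mesh refinement ratio h2/h1 = h3/h2

# Observed order of convergence: p = ln[(phi3 - phi2)/(phi2 - phi1)] / ln r
p = math.log((phi3 - phi2) / (phi2 - phi1)) / math.log(r)

# Discretization error and Richardson-extrapolated "exact" solution, Eq. (5).
eps_h = (phi1 - phi2) / (r**p - 1.0)
phi_exact = phi1 + eps_h
print(p, eps_h, phi_exact)   # p ≈ 2.32, phi_exact ≈ 12.16
```

Note that Eq. (5) needs only the three solutions and the refinement ratio, which is why it is convenient with a commercial code treated as a black box.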

2.3 Surrogate Model Prediction Error (ϵsu)

Some engineering analyses—uncertainty quantification, sensitivity analysis, optimization etc.,—require repeated runs of the finite element model, which can be prohibitively expensive. Therefore, a surrogate model, which is usually computationally much cheaper than FEA, is constructed to provide a closed-form relationship between the inputs and outputs of the original model or system. The difference between surrogate model prediction and the original model prediction is denoted as surrogate model error ϵsu. Because the true response of the original model is unknown at an untrained point (within the bounds of the surrogate model training), the surrogate model error is treated as a stochastic quantity.

In this study, for the sake of illustration, a polynomial chaos expansion (PCE) [17] is used as a surrogate to the time-consuming finite element analysis in order to generate enough Monte Carlo samples needed for quantifying the uncertainty in the model output. PCE is a regression-based surrogate model that represents the output of a model with a series expansion in terms of standard random variables (SRVs). Consider a model y = f(x) in which y is the output from a numerical simulation f(x) and x = [x1,x2,...,xk]T is a vector of input variables that follow certain probability distributions. Suppose PCE is constructed to replace f(x) using n multidimensional Hermite polynomials as basis functions:

y ≈ Θ^Tϕ(ξ) = θ0φ0(ξ) + θ1φ1(ξ) + ... + θnφn(ξ)    (6)

where ξ is a vector of independent standard normal random variables corresponding to the original inputs [18]. ϕ(·) = {φ0(·), φ1(·), ..., φn(·)}^T are the Hermite basis functions, and Θ = {θ0, θ1, ..., θn}^T are the corresponding coefficients, which can be estimated by the least-squares method. A collocation point method can be used to efficiently select training points where the original model is evaluated [19]. Suppose that m training points (ξi, yi), i = 1, 2, ..., m are available. Under the Gauss-Markov assumption [20], the surrogate model error ϵsu asymptotically follows a normal distribution with zero mean and variance given by

Var(ϵsu) = s^2 ϕ(ξ)^T(Ψ^TΨ)^−1 ϕ(ξ)    (7)

where Ψ = {ϕ(ξ1), ϕ(ξ2), ..., ϕ(ξm)}^T and s^2 = [1/(m − n)] Σ_{i=1}^{m} [yi − Θ^Tϕ(ξi)]^2.
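The least-squares fit and the prediction-error variance can be sketched as follows. The one-dimensional "expensive" model g, the basis order, and the point counts are illustrative assumptions, and the variance line is the standard least-squares estimate s^2 ϕ^T(Ψ^TΨ)^−1ϕ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "expensive" model to be replaced by the surrogate.
g = lambda xi: np.exp(0.3 * xi)

# 1D Hermite basis up to order 2: H0 = 1, H1 = xi, H2 = xi^2 - 1
def phi(xi):
    return np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0])

m, n_terms = 12, 3
xi_train = rng.normal(size=m)
y_train = g(xi_train)

Psi = phi(xi_train)                      # m x n design matrix
Theta, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)  # PCE coefficients

# Residual variance s^2 = [1/(m - n)] * sum of squared residuals
resid = y_train - Psi @ Theta
s2 = resid @ resid / (m - n_terms)

# Variance of the surrogate error at a new point (sketch of Eq. (7))
xi_new = np.array([0.5])
var_su = s2 * (phi(xi_new) @ np.linalg.inv(Psi.T @ Psi) @ phi(xi_new).T)[0, 0]
pred = (phi(xi_new) @ Theta)[0]
print(pred, s2, var_su)
```

Sampling a zero-mean normal with variance `var_su` and adding it to `pred` is then the mechanism described below for carrying ϵsu into subsequent computations.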

As illustration, consider the slender cantilever beam problem described in Section 2.2. Assume the concentrated force follows a normal distribution with mean 3.5 N and standard deviation 0.4 N. Fifteen samples of F are generated and 15 corresponding beam tip deflections are calculated using the finite element model with mesh size h = 7.5 cm. The first nine results are used to construct a first-order PCE surrogate model, and the last six are used to verify the PCE model, which is then used to predict the beam tip deflection over the force range of 3 to 4.5 N. The error associated with each prediction is calculated, and 90% confidence bounds are constructed from it. The results are plotted in Fig. 1, showing excellent agreement between the surrogate model predictions and the original FEA predictions at the validation points.

FIG. 1: Surrogate model prediction and confidence bounds.

In order to account for the surrogate model error, random samples of ϵsu are generated and added to the surrogate model prediction. As a result, a number of sample values of the surrogate model prediction, rather than a single value, are obtained for a single input, and all of these sample values are carried through the succeeding analyses to obtain sample values of the final model prediction. This approach to accounting for the surrogate model prediction error also works for other stochastic surrogate models, such as the Gaussian process model [21], which provide an estimate of the prediction variance.


3. Uncertainty Quantification Error

Section 2 discussed errors that arise in the solution procedure and methods to quantify them. Even the error quantification process, which requires uncertainty propagation analysis (e.g., Monte Carlo sampling), has error due to the limited number of samples in practical analysis. We denote the uncertainty quantification error ϵuq as the difference between the empirical CDF value and the “true” CDF value. In this section, first an error estimation method is developed to quantify ϵuq. Then the propagation of ϵuq is studied when samples are generated using the empirical CDF and used in further analysis.

3.1 Error in Empirical CDF

Suppose that an empirical CDF F(x) is constructed from Ns samples of a random variable X. Let FX(x) refer to the corrected CDF that includes the error in the empirical CDF. If n of the Monte Carlo samples are smaller than a specific value x, then n follows a binomial distribution with mean NsFX(x) and standard deviation √(NsFX(x)[1 − FX(x)]), considering that each sample falling below x is the result of a Bernoulli trial [22]. The binomial distribution approaches the normal distribution as Ns increases; when Ns > 20, the normal distribution n ~ N(NsFX(x), NsFX(x)[1 − FX(x)]) is a good approximation to the original binomial distribution [6]. Noting that F(x) = n/Ns, we have

F(x) = n/Ns ~ N( FX(x), FX(x)[1 − FX(x)]/Ns )    (8)

Therefore, ϵuq, the error associated with the CDF value FX(x), can be represented as a normal random variable with zero mean and variance σ^2, which can be estimated from

σ^2 = F(x)[1 − F(x)]/Ns    (9)

Note that the variance of this error is a function of x, and it goes to zero at both ends of the CDF curve. Alternatively, FX(x) can be treated directly as a random variable with the empirical CDF value as its mean and variance σ^2 as in Eq. (9).

For example, suppose X ~ N(0,1) and 21 samples of X are available. The empirical CDF and the 90% confidence bounds are constructed using the above method. The result is compared to the true CDF curve in Fig. 2, and shows that the CDF curve lies between the confidence bounds.

FIG. 2: Empirical CDF with 90% confidence bounds versus true CDF.
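The construction behind Fig. 2 can be sketched numerically; the 90% bounds use the normal approximation of Eq. (8) with z = 1.645:

```python
import numpy as np

rng = np.random.default_rng(2)

Ns = 21
x = np.sort(rng.normal(size=Ns))          # sparse samples of X ~ N(0, 1)
F_hat = np.arange(1, Ns + 1) / Ns         # empirical CDF values at the samples

# Eq. (9): variance of the UQ error in each CDF value
sigma2 = F_hat * (1.0 - F_hat) / Ns

# 90% confidence bounds on the CDF (normal approximation, z = 1.645)
lower = np.clip(F_hat - 1.645 * np.sqrt(sigma2), 0.0, 1.0)
upper = np.clip(F_hat + 1.645 * np.sqrt(sigma2), 0.0, 1.0)
print(np.c_[x, lower, F_hat, upper][:3])
```

As noted above, the bound width shrinks to zero at the upper end of the CDF, where F(x)[1 − F(x)] vanishes.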

3.2 Propagation of ϵuq

If more realizations of x are generated from the empirical CDF to calculate the distribution of another variable y = g(x), then ϵuq is propagated through the sampling procedure and results in sampling error ϵs in the new samples of x. For the case in which the inverse CDF method is used for sampling, an approximate method for quantifying the sampling error ϵs is proposed below.

In Fig. 3, let (ϵuq)i and (ϵuq)i′ be the errors corresponding to the CDF values at xi and xi + (ϵs)i, respectively. From Eq. (9), we know that (ϵuq)i ~ N[0, σ^2(xi)] and (ϵuq)i′ ~ N{0, σ^2[xi + (ϵs)i]}. If we let (ϵuq)i and (ϵuq)i′ have the same percentile value [assuming (ϵs)i is very small], then we have

F[xi + (ϵs)i] + (ϵuq)i′ = F(xi),  (ϵuq)i′ = (ϵuq)i σ[xi + (ϵs)i]/σ(xi)    (10)

In Eq. (10), (ϵs)i is the unknown quantity to be estimated. By using Eq. (10), one can calculate (ϵs)i for a given (ϵuq)i. That is, samples of (ϵs)i can be obtained by sampling (ϵuq)i. Thus, the corrected value of xi is [xi + (ϵs)i], which includes the sampling error (ϵs)i. Now y = g(x) is calculated using the corrected realization.

FIG. 3: Sampling error quantification.
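One way to realize this resampling numerically is sketched below, under a first-order simplification: the perturbation of the CDF value is treated as small, σ is treated as locally constant, and the empirical CDF is inverted by linear interpolation. The data set is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

data = np.sort(rng.normal(size=21))       # sparse data defining the empirical CDF
Ns = data.size
F_hat = np.arange(1, Ns + 1) / Ns

def inv_cdf(u):
    # linear interpolation of the inverse empirical CDF
    return np.interp(u, F_hat, data)

# Draw a realization x_i by the inverse-CDF method at percentile u ...
u = 0.5
x_i = inv_cdf(u)

# ... then perturb the CDF value by sampled UQ errors (Eq. (9)) and re-invert,
# so each corrected realization x_i + (eps_s)_i carries the sampling error.
sigma = np.sqrt(u * (1.0 - u) / Ns)
eps_uq = rng.normal(0.0, sigma, size=1000)
x_corr = inv_cdf(np.clip(u - eps_uq, F_hat[0], F_hat[-1]))
eps_s = x_corr - x_i
print(x_i, eps_s.std())
```

The spread of `eps_s` is the extra uncertainty that the UQ error injects into the newly generated samples.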

In the context of this paper, the model form error statistics are obtained through Eq. (10), using samples of corrected model prediction yc and observations yobs (details in Section 4). The method developed in this section offers a generic approach to resampling from sparse data and includes the extra uncertainty due to UQ error in the new samples. A kernel density may be used to obtain a smoothed empirical CDF before applying this method, if desired.

In Section 4, the proposed method is used in model form error estimation to include the error caused by a limited number of experimental observations yobs and samples of the corrected model output yc.


4. Model Form Error

As mentioned in Section 1, model form error is calculated using Eqs. (1) and (2), by comparing model predictions against experimental observations of the same output quantity, for the “same” set of inputs. In this case, output measurement error needs to be taken into account.

From Eqs. (1) and (2), we have

yobs − ϵom = ypred + ϵnum + ϵmodel    (11)

Rearranging the terms gives

ϵmodel = yobs − ϵom − yc    (12)

Equation (12) is evaluated by sampling each of the three terms on the right-hand side, and samples of ϵmodel are then obtained. The output measurement error ϵom is commonly assumed to be a normal random variable with zero mean and an assumed variance [20], based on the measurement process and equipment.
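Sampling Eq. (12) is straightforward once samples of each term are available. The numbers below are hypothetical stand-ins, loosely patterned on the beam example of Section 6 (a stochastic corrected prediction, a few resampled observations, and an assumed zero-mean measurement error):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Hypothetical samples: corrected model prediction y_c (stochastic because it
# carries input-error and surrogate-error samples) and observed data y_obs.
y_c = rng.normal(-1.74, 0.12, n)
y_obs = rng.choice([-1.70, -1.78, -1.73], size=n)   # resampled sparse observations
eps_om = rng.normal(0.0, 0.01, n)                   # output measurement error

# Eq. (12): model form error by sampling each term on the right-hand side
eps_model = y_obs - eps_om - y_c
print(eps_model.mean(), eps_model.std())
```

The resulting samples characterize the distribution of ϵmodel; Eq. (13) below refines this by adding the sampling-error terms of Section 3 to yobs and yc.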

However, in most cases only the distribution of ϵom might be available, and only a few point-valued data are available for yobs. Moreover, if the procedure of computing yc requires considerable computational effort, then only a limited number of samples can be obtained. A resampling approach is needed to generate more samples of yobs and yc from sparse data, but resampling could also cause another error. Therefore, the approach developed in Section 3 for quantifying the sampling error is used. Then, Eq. (12) becomes

ϵmodel = [yobs + (ϵs)obs] − ϵom − [yc + (ϵs)c]    (13)

where (ϵs)obs and (ϵs)c are the errors due to the limited number of samples of yobs and yc, respectively; these errors are calculated using Eq. (10).

After the model form error statistics are calculated using Eq. (13), for future prediction purposes, the overall corrected prediction that includes both solution approximation error and model form error may be computed as yc + ϵmodel. In this expression, yc includes the contributions of various solution approximation errors and input error, and ϵmodel includes the contributions of model form error, UQ error and output measurement error.


5. Sensitivity Analysis

The previous sections developed a methodology to quantify different errors in model prediction. In this section, sensitivity analysis is performed to estimate the contribution of each error source to the model prediction uncertainty. Previous studies in stochastic sensitivity analysis have only considered the effect of input random variables; this paper extends those methods to include sources of solution approximation error. The sensitivity analysis is done in terms of the contribution to the uncertainty in the corrected model prediction yc [see Eq. (3)]. Because the UQ error and model form error are computed only after the computation of yc, this sensitivity analysis is only with respect to solution approximation errors and does not include UQ error or model form error.

Two local and two global sensitivity measures are studied in this section. A generic expression Y = f(X1, X2, ..., Xn) is used in the discussion below, where Xi can be a model input, an error, or a model parameter with uncertainty, and Y refers to the corrected model output yc that accounts for all the contributing error sources.

5.1 Local Sensitivity Analysis

Two local sensitivity analysis methods are considered. The first one is variance-based, and the second one is entropy-based.

5.1.1 Change in Variance

The change of variance in the model prediction due to the ith input is defined as

(Δσ^2)i = Var(Y) − VarX~i(Y | Xi = xi)    (14)

Equation (14) provides a measure of the change in the variance of Y if the ith input is ignored. Ignoring Xi means fixing it at its mean value (usually zero) for a stochastic error, or not correcting for a deterministic error. The second term on the right-hand side of Eq. (14) is the conditional variance of Y given Xi, calculated over X~i (i.e., all X except Xi). The greater the value of (Δσ^2)i, the more important Xi is. However, this measure is only in terms of variance and ignores other uncertainty information, such as mean, skewness, kurtosis, etc. Also, the result depends on where Xi is fixed, and in occasional cases it is possible that Var(Y|Xi = xi) > Var(Y) [23]. But due to its simplicity, the change of variance (Δσ^2)i is still a practical scalar measure of the effect of each contributing source of uncertainty. Because Xi is fixed at a single value, the change of variance is a local sensitivity index.
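A minimal sketch of Eq. (14) on a hypothetical two-variable model, where X1 is "ignored" by fixing it at its mean:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Toy corrected-prediction model with two uncertain inputs/errors (hypothetical).
def model(x1, x2):
    return x1 + 0.5 * x2 + 0.2 * x1 * x2

x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)

var_full = model(x1, x2).var()

# Eq. (14): fix X1 at its mean (i.e., "ignore" it) and recompute the variance
var_fixed = model(0.0, x2).var()
delta_var_1 = var_full - var_fixed
print(var_full, delta_var_1)
```

Here the analytical values are Var(Y) = 1.29 and (Δσ^2)1 = 1.04, so fixing X1 removes most of the output variance.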

5.1.2 Kullback-Leibler Divergence

The Kullback-Leibler divergence (K-L divergence) [24], adapted from information theory, measures the difference between two probability density functions p(x) and q(x), in the sense of the relative information entropy (uncertainty) of p(x) with respect to q(x). It is defined as

DKL(p||q) = ∫ p(x) ln[p(x)/q(x)] dx    (15)

The K-L divergence is non-negative, and it is zero if and only if p(x) and q(x) are exactly the same. It is sensitive to both differences in mean value and in variance.

The K-L divergence has been used in sensitivity analysis [25] to measure the contribution of an individual source to the uncertainty in the model prediction. In this paper, the K-L divergence DKL[(Y|Xi = xi) || Y] is used to compare the entire distributions of Y and the conditional Y|Xi = xi, in which Xi is fixed at a particular value (zero for a deterministic error and the mean value for a stochastic error). A larger value of the K-L divergence implies that Y is more sensitive to Xi.
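As an illustration of Eq. (15), the divergence between two densities can be computed by direct numerical integration. The two normals below are arbitrary stand-ins for the conditional and unconditional prediction distributions:

```python
import numpy as np

# K-L divergence between two normal densities by numerical integration, Eq. (15).
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

p = normal_pdf(x, 0.0, 1.0)      # e.g., conditional Y | Xi = xi
q = normal_pdf(x, 0.5, 1.2)      # e.g., unconditional Y

d_kl = np.sum(p * np.log(p / q)) * dx
print(d_kl)
```

For two normals the divergence is also available in closed form, ln(s_q/s_p) + [s_p^2 + (mu_p − mu_q)^2]/(2 s_q^2) − 1/2, which the numerical result reproduces; note the asymmetry, DKL(p||q) ≠ DKL(q||p).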

5.2 Global Sensitivity Analysis

Instead of fixing Xi at a single value, we can average the conditional variance VarX~i(Y|Xi) over the entire distribution of Xi. This average is denoted as EXi[VarX~i(Y|Xi)]; it no longer depends on where Xi is fixed, so it is “global” over the range of Xi. Starting from this quantity, and based on variance decomposition [26], the variance of Y can be decomposed into two terms with respect to Xi [23],

Var(Y) = EXi[VarX~i(Y|Xi)] + VarXi(EX~i[Y|Xi])    (16)

The two terms are complementary to each other: either a smaller first term or a larger second term indicates a more important Xi. By normalizing the second term, we obtain the main effect sensitivity index,

Si = VarXi(EX~i[Y|Xi]) / Var(Y)    (17)

which is always between 0 and 1. The main effect is also referred to as first-order effect. Note that a low main effect index does not imply that the variable is not important. A variable with a low main effect might make a bigger contribution to the model output through interaction with other variables. Therefore, a more comprehensive sensitivity measure that includes the main effect and the interaction effect is the total effect index. If we swap Xi and X~i in Eq. (16), another way of decomposing the variance of Y is

Var(Y) = EX~i[VarXi(Y|X~i)] + VarX~i(EXi[Y|X~i])    (18)

By normalizing the first term, we obtain the total effect sensitivity index,

STi = EX~i[VarXi(Y|X~i)] / Var(Y)    (19)

The total effect index becomes valuable when the sum of individual main effect indices is not close to 1, which implies that strong interaction effects exist among variables.
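Both indices can be estimated by a common pick-freeze Monte Carlo scheme (the Saltelli estimator for the main effect and the Jansen estimator for the total effect; these specific estimators are a standard choice, not prescribed by this paper). The toy model below has a known interaction, so the main and total effects differ:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2**18

# Toy model with interaction: Y = X1 + X2 + X1*X2, Xi ~ N(0, 1) i.i.d.
# Analytically: Var(Y) = 3, S1 = S2 = 1/3, ST1 = ST2 = 2/3.
def model(x):
    return x[:, 0] + x[:, 1] + x[:, 0] * x[:, 1]

A = rng.normal(size=(n, 2))
B = rng.normal(size=(n, 2))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

S, ST = [], []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # replace column i with the B sample
    yABi = model(ABi)
    # Saltelli / Jansen estimators for Eqs. (17) and (19)
    S.append(np.mean(yB * (yABi - yA)) / var_y)
    ST.append(0.5 * np.mean((yA - yABi) ** 2) / var_y)
print(S, ST)
```

The gap between each STi and Si (here 1/3) is exactly the interaction contribution, which is why the sum of main effects falls short of 1 for this model.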

5.3 Deterministic and Stochastic Errors in Sensitivity Analysis

It was mentioned in Section 2 that deterministic errors (such as discretization error) are corrected, whereas stochastic errors are accounted for through sampling, in order to compute the corrected model prediction yc. With such a strategy, only local sensitivity measures can be calculated corresponding to deterministic errors, which obviously have only fixed values for fixed model inputs. The corrected model prediction is calculated with and without correcting for the deterministic error, and the corresponding change in variance or K-L distance can be computed.

For stochastic errors, it is more appropriate to compute global sensitivity measures instead of at one particular value (typically, the mean). However, this creates a difficulty in comparing the relative contributions of the deterministic versus stochastic errors to the overall model prediction uncertainty, when resource allocation decisions are needed for activities such as model refinement and data collection. Therefore, an approximate approach is proposed below to compute global sensitivity measures for even deterministic errors.

In global sensitivity analysis with stochastic variables, samples of the variables are generated based on their distributions. But deterministic variables do not have a distribution to sample from, such as discretization error ϵh. An approximate approach is to obtain samples of ϵh corresponding to randomly sampled stochastic inputs to the FEA model; these samples of ϵh are used to construct the distribution of ϵh.

Another interesting problem occurs in calculating global sensitivity measures with respect to the surrogate model error ϵsu, even though it is stochastic. This is because the distribution of ϵsu is local to a particular prediction of the surrogate model, which depends on the input. The overall distribution of ϵsu is not available over the entire range of the input, thus hindering global sensitivity analysis. To overcome this difficulty, an approximate approach is to obtain samples of ϵsu corresponding to sampled inputs to the surrogate model; these samples of ϵsu are used to construct the overall distribution of ϵsu.

In summary, the need to compare the various sources of error with the same sensitivity measure creates two issues. Local sensitivity measures are not appropriate for stochastic errors, and calculation of global sensitivity measures is not straightforward for deterministic errors. The approximation in this section offers a convenient way of including discretization error and surrogate model error into global sensitivity analysis, thus making the various error sources comparable under the same sensitivity measure.


6. NUMERICAL EXAMPLES
6.1 Cantilever Beam

The tip deflection δ of the cantilever beam shown in Fig. 4 is of interest. The beam has length L = 192 in., cross-sectional moment of inertia I = 300 in.⁴, and Poisson's ratio ν = 0.3. The Young's modulus E is assumed to be a normal random variable with mean value 29,000 ksi and COV 0.06. The self-weight w(x) is assumed to be a one-dimensional Gaussian random field, with mean value 75 lb/in. and COV 0.05, and an exponential covariance function C(Δx) = σ² exp(-|Δx|/b), in which σ² is the variance and b is the correlation length of the random field. In this example, b is assumed to be equal to the length of the beam. The concentrated load P at the beam tip is the input to the model; it is a normal random variable with mean 1000 lb and COV 0.16.

FIG. 4: Cantilever beam.
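For this configuration, a simple Monte Carlo sketch using the Euler-Bernoulli closed-form tip deflection (tip load plus uniform load) reproduces the scale of the model prediction; this is an illustrative stand-in for the paper's FEA model that collapses the random field w to a single normal variable and ignores its spatial correlation:

```python
import random
import statistics

random.seed(2)

L, I = 192.0, 300.0  # in., in.^4

def tip_deflection(E, w, P):
    # Euler-Bernoulli tip deflection of a cantilever under a uniform
    # load w and a tip load P (downward deflection taken as negative).
    return -(P * L**3 / (3.0 * E * I) + w * L**4 / (8.0 * E * I))

# Monte Carlo over the random inputs; E in psi (29,000 ksi = 29e6 psi).
d = [tip_deflection(random.gauss(29e6, 0.06 * 29e6),
                    random.gauss(75.0, 0.05 * 75.0),
                    random.gauss(1000.0, 0.16 * 1000.0))
     for _ in range(20000)]
print(f"tip deflection: mean={statistics.mean(d):.3f} in., "
      f"std={statistics.stdev(d):.3f} in.")
```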

For the sake of illustration, assume that an error ϵP arises when measuring the input P, which is also a normal random variable with zero mean and a variance equal to 20% of variance of P [27]. Similarly, the output (i.e., deflection) measurement error is also assumed to be a normal random variable with zero mean and standard deviation of 0.0 1in.

A finite element model is constructed with four beam elements of equal length, and a first-order polynomial chaos expansion with Hermite bases is constructed as the surrogate computational model,
δPCE = θ₀ + Σᵢ θᵢξᵢ,

where the ξᵢ are independent standard normal variables and the θᵢ are coefficients estimated from FEA training runs.
Nine training points are selected by the collocation method to run the FEA model for training the surrogate model [19]. Then, the surrogate model is used to generate samples of model predictions by sampling the inputs. The distribution of the model prediction is estimated from the samples to have a mean value of -1.74 and standard deviation of 1.23 × 10⁻¹. This is the raw model prediction.
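Collocation-based training of such a surrogate can be sketched in one dimension; the model, the three collocation points, and the first-order basis below are illustrative assumptions, not the paper's nine-point scheme:

```python
import math

def model(xi):
    # Hypothetical computational model to be emulated (stands in for FEA).
    return 2.0 + 0.5 * xi + 0.05 * xi**2

# Three collocation points in the standard-normal variable xi.
pts = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
ys = [model(x) for x in pts]

# Least-squares fit of the first-order Hermite PCE y = a0 + a1*xi
# (Hermite bases: psi0 = 1, psi1 = xi).
n = len(pts)
xbar = sum(pts) / n
ybar = sum(ys) / n
a1 = sum((x - xbar) * (y - ybar) for x, y in zip(pts, ys)) / \
     sum((x - xbar) ** 2 for x in pts)
a0 = ybar - a1 * xbar

# For a first-order PCE in one standard normal, the prediction mean is
# a0 and the standard deviation is |a1|.
print(f"a0={a0:.3f}, a1={a1:.3f}")
```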

6.1.1 Numerical Errors and Sensitivity

In this example, the numerical error in the model prediction is assumed to come from three sources: the discretization error in FEA ϵh, the surrogate model error ϵsu, and the input error ϵP. First, the Richardson extrapolation technique is used to correct for the discretization error in FEA. For each of the nine training points, two more FEA runs are made with eight and 16 elements, respectively (the original number of elements is four) to calculate ϵh. A new set of nine training points is obtained by adding the discretization error to the original training points, and a new surrogate model PCEh is built with the corrected training data. Then, ϵP and ϵsu are also included and, finally, the corrected model prediction δc is given by
δc = PCEh(P + ϵP) + ϵsu.
The numerical error can be calculated as ϵnum = δc - δpred, where δpred is the raw model prediction. By randomly generating samples of ϵnum, the mean and standard deviation of ϵnum are estimated to be -4.01 × 10⁻³ and 1.78 × 10⁻¹, respectively.
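The Richardson correction applied to each training point can be sketched as follows; the three mesh solutions below are hypothetical stand-ins for FEA outputs at 4, 8, and 16 elements (refinement ratio r = 2):

```python
import math

# Hypothetical FEA outputs of the tip deflection on three meshes of
# 4, 8, and 16 elements (refinement ratio r = 2).
u4, u8, u16 = -1.7200, -1.7355, -1.7394
r = 2.0

# Observed order of convergence from the three solutions.
p = math.log((u8 - u4) / (u16 - u8)) / math.log(r)

# Richardson extrapolation from the finest pair of meshes.
u_star = u16 + (u16 - u8) / (r**p - 1.0)

# Discretization error of the coarse 4-element model.
eps_h = u_star - u4
print(f"p={p:.2f}, u*={u_star:.4f}, eps_h={eps_h:.4f}")
```

Adding eps_h to the coarse-mesh training value gives the corrected training point used to build PCEh.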

Using the corrected model prediction δc, all four sensitivity analyses are performed on the three errors. The results are shown in Table 1. The first two sensitivity indices are local, and the next two are global. From the sensitivity ranking in Table 1, it is seen that the global measures indicate that the discretization error affects the model solution the most, whereas the local measures give the input measurement error the highest rank.

TABLE 1: Sensitivity analysis

Change of Variance KL Distance Main Effect Total Effect
Error Index Ranking Index Ranking Index Ranking Index Ranking
ϵP 5.35 × 10⁻³ 1 10.44 1 0.0296 2 0.104 3
ϵh 2.11 × 10⁻⁵ 2 5.48 2 0.1614 1 0.131 1
ϵsu 1.60 × 10⁻⁶ 3 3.50 3 0.0115 3 0.126 2

6.1.2 Model Form Error and UQ Error Estimation

Model form error can be estimated using Eq. (13); this also needs to take the uncertainty quantification errors into account. Assume nine experimental observations of the beam deflection δobs are available, and assume 20 samples of δc are taken from the error quantification procedure for resampling δc. Further assume that the output measurement error ϵom follows a normal distribution with zero mean and standard deviation 0.01 in. For comparison, samples of model form error without considering the uncertainty quantification error are also generated by ignoring the sampling errors in Eq. (13). The CDFs of model form error with and without considering uncertainty quantification error are plotted in Fig. 5a; 1000 samples of model form error were generated for each case. The PDFs of sampling error for δobs and δc are also plotted in Fig. 5b, and statistical results are summarized in Table 2. Because fewer data are available for δobs than for δc, the variance of the sampling error in δobs is significantly greater than that in δc. Note that including the UQ error (due to limited sampling) increased the variance of the estimated model form error, because more randomness was introduced. Finally, because the mean and variance of ϵmodel with ϵuq are estimated from 1000 samples, errors arise in these estimates as well; they are very small and are ignored.
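The effect of limited data on the sampling (UQ) error can be illustrated with a bootstrap sketch; the data below are hypothetical stand-ins for the nine observations and 20 corrected model samples, and the bootstrap is a generic substitute for the paper's Eq. (13)-based quantification:

```python
import random
import statistics

random.seed(3)

def bootstrap_se(data, n_boot=2000):
    # Sampling (UQ) error in the estimated mean from limited data,
    # quantified by bootstrap resampling from the empirical CDF.
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        means.append(statistics.mean(resample))
    return statistics.stdev(means)

# Hypothetical stand-ins: 9 observations and 20 corrected model samples.
d_obs = [random.gauss(-1.74, 0.12) for _ in range(9)]
d_c = [random.gauss(-1.74, 0.12) for _ in range(20)]

se_obs, se_c = bootstrap_se(d_obs), bootstrap_se(d_c)
print(f"sampling error in the mean: obs={se_obs:.4f}, model={se_c:.4f}")
```

With equal underlying scatter, the smaller observation set generally carries the larger sampling error, mirroring the comparison in Table 2.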

FIG. 5: Statistics of model form error and sampling errors.

TABLE 2: Statistics of model form error and sampling errors

Variable Mean Variance
ϵmodel without ϵuq 3.44 × 10⁻³ 5.19 × 10⁻²
ϵmodel with ϵuq 2.74 × 10⁻³ 5.76 × 10⁻²
ϵs for δobs -7.21 × 10⁻⁴ 1.09 × 10⁻²
ϵs for δc -0.20 × 10⁻⁴ 5.52 × 10⁻⁵

6.2 Crack Growth in an Airplane Wing Spar

In this example, quantification and sensitivity analysis of the errors in crack growth prediction for part of an airplane wing are of interest. In crack growth analysis, sensitivity analysis of errors faces a major difficulty because a different sample value of the same error enters the analysis in each cycle. This issue will be discussed in detail later. The wing spar plays a key role in connecting the wing to the fuselage and is subjected to cyclic loading during flight. The final crack size aN that grows from an initial size a0 = 0.05 in. under a given load history is studied. Because of the various uncertainties and errors arising in the analysis, aN is a random variable.

The analysis consists of two modules: structural stress analysis and fatigue crack growth analysis, as shown in Fig. 6. The errors and uncertainty considered in this example are input error, discretization error in FEA, surrogate model error, and uncertainty in the crack growth law parameter.

FIG. 6: Component analyses and associated sources of error and uncertainty.

The loads B and P are inputs to the structural model. The output of the structural analysis is the first principal stress in the critical region (hot spot). The analysis is carried out by finite element analysis. To quantify the discretization error in the finite element analysis using Richardson's extrapolation, the analysis is repeated for each training point with finer meshes at two levels of refinement. The refinement ratio is 0.75; i.e., the element size is multiplied by 0.75 in each refinement.

A surrogate model is constructed using a first-order polynomial chaos expansion with Hermite polynomial bases. The surrogate model is trained using FEA results and is used to replace the FEA in further analysis. The associated surrogate model error is stochastic.

Before proceeding with the crack growth analysis, the stress intensity factor is calculated first from the structural analysis result (in the context of linear elastic fracture mechanics). This is given by

ΔK = βs√(πa),    (22)
where s is the stress at the crack tip, a is the crack length, and β depends on the geometry and the crack length. In this example, β is available from [28]. Paris' law is used here to illustrate the crack growth calculation:

da/dN = C(ΔK)ⁿ.    (23)
The parameter n is assumed to be deterministic at 3.2. Because of fitting error, the parameter C is assumed to be a random variable that follows a lognormal distribution, with a mean value of 1.0 × 10⁻⁷ and COV of 0.24.

6.2.1 Correction of Errors

The following describes the procedure for calculating aN corrected for all errors. At the beginning, a random sample of C is generated, which stays constant throughout all cycles. In each cycle, first the mean values of B and P are obtained from a given variable-amplitude load history. Random samples of ϵB and ϵP are generated and added to B and P, respectively. The corrected B and P are then fed into the surrogate model, which is built with nine FEA results corrected for discretization error using Richardson extrapolation. The surrogate model outputs the stress prediction s, as well as the associated prediction error. Next, Eq. (22) is used to calculate the stress intensity factor, which is input to the crack growth law. To accelerate the computation, the increment of crack length da is calculated in blocks of 10 cycles [i.e., dN is set to 10 in Eq. (23)]. Finally, the crack length is updated until the end of the load history. The analysis is repeated to generate 10,000 such samples of aN.
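A minimal sketch of this cycle-by-cycle procedure; the stress range, geometry factor β, units, and number of blocks below are hypothetical stand-ins, since the paper's surrogate model and load spectrum are not reproduced here:

```python
import math
import random

random.seed(4)

# Paris' law parameters from the example: n deterministic, C lognormal
# with mean 1.0e-7 and COV 0.24.
n_exp = 3.2
cov_C = 0.24
zeta = math.sqrt(math.log(1.0 + cov_C**2))
lam = math.log(1.0e-7) - 0.5 * zeta**2

def grow_crack(a0=0.05, blocks=20, dN=10):
    # One corrected realization: C is sampled once and held constant
    # over all cycles; the stress is a hypothetical stand-in for the
    # surrogate output (with its prediction error).
    C = random.lognormvariate(lam, zeta)
    a = a0
    for _ in range(blocks):
        s = random.gauss(20.0, 1.0)   # hypothetical stress range, ksi
        beta = 1.12                   # hypothetical geometry factor
        dK = beta * s * math.sqrt(math.pi * a)  # cf. Eq. (22)
        a += C * dK**n_exp * dN       # cf. Eq. (23), block of dN cycles
    return a

samples = [grow_crack() for _ in range(200)]
print(f"aN over 200 samples: min={min(samples):.4f} in., "
      f"max={max(samples):.4f} in.")
```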

6.2.2 Results and Sensitivity Analysis

The aN obtained in Section 6.2.1 that is corrected for all errors is denoted as ac in the following discussion. However, the structure of the crack growth problem differs from that of the previous example because of the cycle-by-cycle analysis. A small example illustrates this special case. Let ai+1 = g(ai) + ϵ be a crack growth model, in which ai and ai+1 are the crack sizes in the ith and (i + 1)th cycles, and ϵ is a random error. If the crack grows from size a0 to a3 in three cycles, then we have
a1 = g(a0) + ϵ(1),  a2 = g(a1) + ϵ(2),  a3 = g(a2) + ϵ(3),
in which ϵ(1), ϵ(2), and ϵ(3) are three independently generated samples of the error ϵ. In other words, a3 is a function of four input variables: a3 = g3[a0, ϵ(1), ϵ(2), ϵ(3)]. Existing sensitivity methods are only able to estimate the sensitivity of a3 with respect to each of ϵ(1), ϵ(2), and ϵ(3) individually. However, the sensitivity of a3 to the overall error ϵ is what we are actually interested in. An approximate approach to overcome this hurdle is to assume that all samples of the same error have the same value throughout all cycles. This makes it possible to compute the sensitivity with respect to a single error term ϵ over all cycles.
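The repeated-error approximation can be illustrated on a toy recursion; the growth function g, the error standard deviation, and the cycle count below are hypothetical:

```python
import random
import statistics

random.seed(5)

def g(a):
    # Hypothetical one-cycle growth model.
    return 1.01 * a + 0.001

# Approximation: a single sample of the error eps is reused in every
# cycle, so the final size a3 is a function of one error variable and
# a single sensitivity index for eps can be computed.
vals = []
for _ in range(5000):
    e = random.gauss(0.0, 0.002)
    a = 0.05
    for _ in range(3):           # a_{i+1} = g(a_i) + eps, three cycles
        a = g(a) + e
    vals.append(a)

sd = statistics.stdev(vals)
print(f"std of a3 under the repeated-error approximation: {sd:.4f}")
```

For this linear g, the repeated error enters a3 with coefficient 1 + 1.01 + 1.01², so the resulting standard deviation is about 3.03 times that of ϵ itself.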

The results in Table 3 show that all four sensitivity measures indicate that the final crack size is most sensitive to the parameter C in Paris' law and least sensitive to the discretization error in FEA. This implies that in order to achieve a more accurate prediction of the final crack size, a more precise parameter C (i.e., with narrower scatter) is needed; in contrast, the contribution of the discretization error in FEA is much smaller.

TABLE 3: Sensitivity analysis results

Change of variance KL distance Main effect Total effect
Error Index Ranking Index Ranking Index Ranking Index Ranking
ϵB 0.17 × 10⁻³ 4 6.85 3 4.69 × 10⁻² 4 8.36 × 10⁻¹ 4
ϵP 0.32 × 10⁻³ 3 6.29 4 4.18 × 10⁻¹ 2 9.13 × 10⁻¹ 2
C 2.95 × 10⁻³ 1 37.54 1 9.69 × 10⁻¹ 1 9.88 × 10⁻¹ 1
ϵsu 2.82 × 10⁻³ 2 36.63 2 3.80 × 10⁻¹ 3 9.09 × 10⁻¹ 3
ϵh 8.36 × 10⁻⁵ 5 6.15 5 1.07 × 10⁻³ 5 2.48 × 10⁻¹ 5


7. CONCLUSION
This paper studied some of the errors that arise in mechanics computational models. There are three major contributions. The first is that the errors are clearly separated, and a quantification method is developed for each of them, including model form error, uncertainty quantification error, and three typical sources of numerical (solution approximation) error. Some of these errors are stochastic and some are deterministic. Deterministic error is corrected by adding it to the prediction, and stochastic error is included in the model output through sampling. By correcting or accounting for all the errors, the corrected model output is obtained. The corrected model outputs, together with observed data, are used to estimate the model form error through sampling, where uncertainty quantification error arises. Therefore, a methodology to quantify the UQ error is developed (this is the second contribution). The error in the CDF estimated from limited samples is first quantified; then the propagation of this error due to further sampling from the approximate CDF is also quantified.

The third major contribution of this paper is the extension of sensitivity analysis methods to rank the contribution of each error. Past work in global sensitivity analysis has only considered the influence of input random variables on output uncertainty; this paper extends the methods to include model errors. Methods for computing both local and global sensitivity measures are developed. Approximation approaches are developed for fatigue crack growth analysis, which requires repetition of error terms in each cycle. The proposed error quantification and sensitivity analysis framework helps guide resource allocation for model refinement and data collection, balancing computational effort and prediction accuracy. The framework is general and can be extended to include other sources of error and other methods of quantifying individual errors.


ACKNOWLEDGMENTS
The research reported in this paper was supported by funds from the Air Force Office of Scientific Research (Project Manager: Dr. Fariba Fahroo) through subcontract to Vextec Corporation (Investigators: Dr. Robert Tryon, Dr. Animesh Dey). The support is gratefully acknowledged. The authors are also grateful to Dr. Vadiraj Hombal and Shankar Sankararaman for valuable discussions and help.


REFERENCES
1. Oden, J., Babuska, I., Nobile, F., Feng, Y., and Tempone, R., Theory and methodology for estimation and control of errors due to modeling, approximation, and uncertainty, Comput. Methods Appl. Mech. Eng., 194(2-5):195-204, 2005.

2. Oberkampf, W. and Trucano, T., Verification and validation in computational fluid dynamics, Prog. Aerospace Sci., 38(3):209-272, 2002.

3. Oberkampf, W., Sindir, M., and Conlisk, A., Guide for the verification and validation of computational fluid dynamics simulations, AIAA, 1998.

4. Rebba, R., Model validation and design under uncertainty, PhD thesis, Vanderbilt University, 2005.

5. Rebba, R., Mahadevan, S., and Huang, S., Validation and error estimation of computational models, Reliab. Eng. Syst. Safety, 91(10-11):1390-1397, 2006.

6. Shooman, M., Probabilistic Reliability: An Engineering Approach, McGraw-Hill, New York, 1968.

7. Babuška, I. and Rheinboldt, W., A-posteriori error estimates for the finite element method, Int. J. Numer. Methods Eng., 12(10):1597-1615, 1978.

8. Demkowicz, L., Oden, J., and Strouboulis, T., Adaptive finite elements for flow problems with moving boundaries. I—Variational principles and a posteriori estimates, Comput. Methods Appl. Mech. Eng., 46:217-251, 1984.

9. Zienkiewicz, O. and Zhu, J., A simple error estimator and adaptive procedure for practical engineering analysis, Int. J. Numer. Methods Eng., 24(2):337-357, 1987.

10. Szabo, B., Mesh design for the p-version of the finite element method, Comput. Methods Appl. Mech. Eng., 55(1-2):181-197, 1986.

11. Ainsworth, M. and Oden, J., A posteriori error estimation in finite element analysis, Comput. Methods Appl. Mech. Eng., 142(1-2):1-88, 1997.

12. Richardson, L., The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam, Philos. Trans. R. Soc. London, Ser. A, 210:307-357, 1911.

13. Richards, S., Completed Richardson extrapolation in space and time, Commun. Numer. Methods Eng., 13(7):573-582, 1997.

14. Celik, I. and Karatekin, O., Numerical experiments on application of Richardson extrapolation with nonuniform grids, J. Fluids Eng., 119:584-590, 1997.

15. Alvin, K., Structural mechanics and materials—Method for treating discretization error in nondeterministic analysis, AIAA J., 38(5):910-916, 2000.

16. Beléndez, T., Neipp, C., and Beléndez, A., Large and small deflections of a cantilever beam, Eur. J. Phys., 23:371-379, 2002.

17. Ghanem, R. and Spanos, P., Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.

18. Isukapalli, S., Uncertainty analysis of transport-transformation models, PhD thesis, Rutgers, The State University of New Jersey, 1999.

19. Huang, S., Mahadevan, S., and Rebba, R., Collocation-based stochastic finite element analysis for random field problems, Prob. Eng. Mech., 22(2):194-205, 2007.

20. Seber, G. and Wild, C., Nonlinear Regression, Wiley, Hoboken, NJ, 1989.

21. Rasmussen, C., Gaussian processes in machine learning, Adv. Lect. Mach. Learning, 3174:63-71, 2004.

22. Haldar, A. and Mahadevan, S., Probability, Reliability, and Statistical Methods in Engineering Design, Wiley, Hoboken, NJ, 2000.

23. Saltelli, A. and Ratto, M., Global Sensitivity Analysis: The Primer, Wiley-Interscience, 2008.

24. Kullback, S. and Leibler, R., On information and sufficiency, Ann. Math. Stat., 22(1):79-86, 1951.

25. Liu, H., Chen, W., and Sudjianto, A., Relative entropy based method for probabilistic sensitivity analysis in engineering design, J. Mech. Des., 128:326-336, 2006.

26. Sobol, I., Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Math. Comput. Simul., 55(1-3):271-280, 2001.

27. Aeschliman, D. and Oberkampf, W., Experimental methodology for computational fluid dynamics code validation, Sandia Report No. SAND95-1189 UC-706, 2006.

28. Anderson, T., Fracture Mechanics: Fundamentals and Applications, CRC Press, Boca Raton, 2005.