ISSN Print: 2152-5080
International Journal for Uncertainty Quantification
DOI: 10.1615/Int.J.UncertaintyQuantification.v1.i2.30
pages 147-161

ERROR AND UNCERTAINTY QUANTIFICATION AND SENSITIVITY ANALYSIS IN MECHANICS COMPUTATIONAL MODELS

Abstract

Multiple sources of error and uncertainty arise in mechanics computational models and contribute to the uncertainty in the final model prediction. This paper develops a systematic error quantification methodology for computational models. Some types of errors are deterministic, and some are stochastic. Appropriate procedures are developed either to correct the model prediction for deterministic errors or to account for stochastic errors through sampling. First, input error, discretization error in finite element analysis (FEA), surrogate model error, and output measurement error are considered. Next, uncertainty quantification error, which arises due to the use of sampling-based methods, is also investigated. Model form error is estimated by comparing the corrected model prediction against physical observations, after accounting for solution approximation errors, uncertainty quantification errors, and experimental errors (input and output). Both local and global sensitivity measures are investigated to estimate and rank the contribution of each source of error to the uncertainty in the final result. Two numerical examples, mechanical stress analysis and fatigue crack growth analysis, are used to demonstrate the proposed methodology.

KEYWORDS: error quantification, uncertainty quantification, sensitivity analysis, finite elements, discretization, surrogate model
1. INTRODUCTION

Computational models are widely used by engineers to capture the behavior of physical systems. For large systems, computational models are usually constructed based on sparse experimental data, sometimes even with no full-scale experiments. As a result, errors arise due to lack of data and lack of knowledge about system behavior. Numerical approaches used to solve the model equations also produce errors due to various assumptions and approximations. Natural variability in many physical variables, and data uncertainty due to sparse data and measurement errors, add further uncertainty to the model prediction. The motivation of this paper is to develop a methodology that provides quantitative information regarding the relative contribution of various sources of error and uncertainty to the overall model prediction uncertainty. Such information can guide decisions regarding model improvement (e.g., model refinement, additional data collection) so as to enhance both accuracy and confidence in the prediction. The information sought is in the form of rankings of the various error and uncertainty sources that contribute to the model prediction uncertainty. It is more advantageous to spend resources on reducing an error with a higher ranking than one with a lower ranking. The rankings are based on systematic sensitivity analysis, which is possible only after quantifying the effect of each error source on the model prediction uncertainty. The error in a computational model prediction consists of two parts: model form error (ϵ_{model}) and solution approximation error, or numerical error (ϵ_{num}) [1-4]. The model form error depends on whether the selected model correctly represents the real phenomenon (e.g., small deformation versus large deformation model, linear elastic versus elastoplastic model, or Euler versus Navier-Stokes equations). The solution approximation error arises when numerically solving the model equations.
In other words, the model form error is related to the question "Did I choose the correct equation?", which is answered using validation experiments, while the solution approximation error is related to the question "Did I solve the equation correctly?", which is answered through verification studies. Mathematical theory and methods have been discussed in [1] for quantifying numerical error and model form error in computational mechanics models, but these methods require access to the original partial differential equations (PDEs) of the system. A simplified approach to error quantification using a commercial computational code as a black box has been developed in [5]. If we denote y_{pred} and y_{true} as the model prediction and the true response of the physical system, respectively, then we have

y_{true} = y_{pred} + ϵ_{num} + ϵ_{model}    (1)
Note that the numerical error depends on the choice of the model form; thus, the two errors are not independent. The numerical error ϵ_{num} is a nonlinear combination of various components [5]. This paper first considers three typical numerical error components and their quantification and combination: input error, discretization error in FEA, and surrogate model error. Let y_{c} denote the model prediction corrected for the numerical error sources, and y_{pred} the raw model prediction. Besides y_{c}, observed data (y_{obs}) are needed in order to quantify the model form error. However, there is a difference between y_{obs} and y_{true}, called the output measurement error (ϵ_{om}). Thus, we have

y_{obs} = y_{true} + ϵ_{om}    (2)
Model form error can be quantified based on Eqs. (1) and (2); implementation details are discussed in Section 4. One concern in this paper is how to obtain a model prediction y_{c} corrected for numerical error sources. Some errors are stochastic, such as input error and surrogate model error, and some are deterministic, such as discretization error in FEA. In this paper, a simple but efficient approach is developed to obtain y_{c}. The basic idea is to quantify and correct for each error where it arises. A stochastic error is accounted for by adding its randomly sampled values to the original result; a deterministic error is corrected for by directly adding it to the corresponding result. For example, to correct for the discretization error, every time a particular FEA result is obtained, the corresponding discretization error is calculated and added to the original result, and the corrected FEA result is used in further computation to obtain y_{c}. In addition to the model form and solution approximation errors mentioned above, another error arises due to the Monte Carlo sampling used in the error quantification procedure itself. This error is referred to here as uncertainty quantification (UQ) error. For example, when estimating the cumulative distribution function (CDF) of a random variable from sparse data, there is error in the CDF value, and methods to quantify this UQ error are available in [6]. If more samples are then generated by the inverse CDF method using the CDF estimated from sparse data, the UQ error is propagated as sampling error to the newly generated samples. An approach is developed in Section 3 to quantify this sampling error. This method is particularly useful in quantifying model form error (Section 4).
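The correction strategy described above (add a deterministic error directly, sample a stochastic error and add the samples) can be sketched as follows; the single-error setup and all numeric values are illustrative assumptions, not the paper's data:

```python
import random

# Sketch of the correction strategy: a deterministic error is added
# directly to the prediction; a stochastic error is sampled and added,
# yielding a population of corrected predictions y_c.
def corrected_predictions(y_pred, eps_det, eps_stoch_sampler, n_samples=1000):
    y_shifted = y_pred + eps_det  # deterministic correction, applied once
    return [y_shifted + eps_stoch_sampler() for _ in range(n_samples)]

random.seed(0)
# Illustrative values: raw prediction 10.0, deterministic (discretization)
# error -0.05, stochastic (surrogate) error ~ N(0, 0.1).
samples = corrected_predictions(10.0, -0.05, lambda: random.gauss(0.0, 0.1))
mean_yc = sum(samples) / len(samples)
```

The resulting population of corrected predictions, rather than a single value, is what carries forward into the later uncertainty and sensitivity analyses.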
After a probabilistic framework to manage all sources of uncertainty and error is established, sensitivity analyses are performed in Section 5 to assess the contribution of each source of uncertainty and error to the overall uncertainty in the corrected model prediction. The sensitivity analysis results can be used to guide resource allocation among different activities, such as model fidelity improvement and data collection, according to the importance ranking of errors, in order to trade off between accuracy and computational/experimental effort. The proposed methods are demonstrated using two numerical examples in Section 6. The contributions of this paper can be summarized as follows:
2. NUMERICAL ERROR QUANTIFICATION (ϵ_{num})

Three typical sources of numerical error (i.e., solution approximation error) and their quantification methods are discussed in this section: input error, discretization error in FEA, and surrogate model error. In this paper, to quantify the numerical error, the original model prediction y_{pred} is compared not with known solutions but with the corrected model prediction y_{c}. Generally, if an error is deterministic, it is corrected by adding it to its corresponding quantity; if an error is stochastic, it is accounted for by including its uncertainty in the output through sampling. After the corrected model prediction is obtained, the numerical error can be estimated as

ϵ_{num} = y_{c} − y_{pred}    (3)
The raw model prediction y_{pred} is either deterministic or stochastic, depending on the model inputs. However, y_{c} and ϵ_{num} are random variables even for deterministic input, if both stochastic and deterministic error sources contribute to the overall numerical error and corrected model prediction.

2.1 Input Error (ϵ_{in})

In a validation exercise, the inputs to the computational model should ideally have the same values as those to the physical system. However, the inputs to the physical system are subject to experimental variability and measurement error; therefore, there is a discrepancy between what is measured or reported as the input to the physical system and its actual value. The use of this measured value in the computational model gives rise to an error in the model prediction, because the model and the actual system had different inputs. If no prior information is available, the input errors are represented by random variables based on knowledge about experimental variability, the measurement process, and instrument errors. Usually, normal distributions with a mean value of zero or a constant bias are assumed. The input error can be accounted for by including its uncertainty in the final model output (i.e., by propagating the randomness of the input error through the computational model). Suppose that a computational model has the form y_{pred} = f(x_{1}, x_{2}, ..., x_{m}), in which x_{1}, x_{2}, ..., x_{m} are model inputs and y_{pred} is the model prediction. Then, the model output that accounts for the input errors is

y = f(x_{1} + ϵ_{1}, x_{2} + ϵ_{2}, ..., x_{m} + ϵ_{m})    (4)
where ϵ_{1}, ϵ_{2}, ..., ϵ_{m} are the (stochastic) input errors of the respective input variables. Equation (4) is evaluated through Monte Carlo sampling of the error terms.

2.2 Discretization Error (ϵ_{h})

Many engineering problems involve solving differential equations numerically using a discretization method, such as the finite difference or finite element method. The approximation error due to coarse discretization of the domain is denoted as discretization error. Different methods have been proposed for a posteriori error estimation in boundary value problems in mechanics, including explicit error estimation methods [7], element residual methods [8], recovery-based methods [9], and extrapolation methods [10]. A comprehensive review of a posteriori error estimation is given by Ainsworth and Oden [11]. However, most methods provide relative error and are only useful for adaptive mesh refinement, not for quantifying the actual error [4]. A convenient method to approximate the actual error is Richardson extrapolation [12-15], especially when a commercial finite element code is used. In this paper, the Richardson extrapolation method is employed to quantify the discretization error for the sake of illustration. The emphasis of this paper is more on error combination than on methods to quantify individual errors; therefore, other preferred methods can also be used within the proposed overall framework. The following concerns should be noted when using the Richardson extrapolation method: (i) it performs well for one-dimensional cases, but is less reliable for higher dimensions; (ii) it was first developed for the finite difference method, and its result is less credible for the finite element method; and (iii) it requires the model result to converge monotonically as the mesh is refined, while in many problems the result converges with fluctuation.
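As a minimal sketch of the extrapolation step (assuming the refinement ratio r = h_{2}/h_{1} = h_{3}/h_{2} and the error and convergence-order expressions used in this section), with illustrative deflection values:

```python
import math

def richardson(phi1, phi2, phi3, r):
    """Discretization error in the coarsest-mesh result phi1, given FEA
    results on three meshes with refinement ratio r = h2/h1 = h3/h2."""
    p = math.log((phi3 - phi2) / (phi2 - phi1)) / math.log(r)  # convergence order
    eps_h = (phi1 - phi2) / (r ** p - 1.0)                     # discretization error
    return p, eps_h, phi1 + eps_h                              # corrected estimate

# Illustrative coarse-to-fine tip deflections (cm) with r = 0.5
p, eps_h, exact = richardson(12.22, 12.17, 12.16, 0.5)
```

With these values the estimated order of convergence is about 2.32 and the corrected solution is about 12.16, matching the worked beam example below.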
In the Richardson extrapolation method, in order to quantify the discretization error ϵ_{h} in an FEA result φ_{1} with mesh size h_{1}, two more FEA runs are needed to obtain φ_{2} with a finer mesh size h_{2} and φ_{3} with the finest mesh size h_{3}. The discretization error is then given by ϵ_{h} = (φ_{1} − φ_{2})/(r^{p} − 1). As a result, the exact solution can be expressed as

y_{exact} = φ_{1} + ϵ_{h} = φ_{1} + (φ_{1} − φ_{2})/(r^{p} − 1)    (5)
Here, r = h_{2}/h_{1} = h_{3}/h_{2} is the mesh refinement ratio, and the order of convergence p can be estimated as p = ln[(φ_{3} − φ_{2})/(φ_{2} − φ_{1})]/ln r. As an illustration, consider the deflection of a slender cantilever beam with a tip load F = 3.92 N at the free end. The length of the beam is L = 30 cm. The beam has a rectangular cross section of width b = 3.04 cm and height h = 0.0078 cm. The Young's modulus is 200 GPa, and the Poisson's ratio is 0.3. Ignoring the self-weight of the beam, the theoretical result for the deflection at the free end is available in [16] as y_{th} = 12.16 cm; this theoretical result has also been experimentally validated in [16]. A finite element model is created for the beam in ANSYS, including the large-deflection effect. The problem is solved with three mesh sizes (7.5, 3.75, and 1.875 cm), and the corresponding deflections are 12.22, 12.17, and 12.16 cm, respectively. The Richardson extrapolation method is used to estimate the exact solution. The mesh refinement ratio r and the order of convergence p are calculated to be 0.5 and 2.32, respectively. The estimated exact solution is calculated using Eq. (5) to be 12.16 cm, which matches the theoretical solution, thus illustrating the efficacy and accuracy of the Richardson extrapolation method for quantifying discretization error.

2.3 Surrogate Model Prediction Error (ϵ_{su})

Some engineering analyses (uncertainty quantification, sensitivity analysis, optimization, etc.) require repeated runs of the finite element model, which can be prohibitively expensive. Therefore, a surrogate model, which is usually computationally much cheaper than FEA, is constructed to provide a closed-form relationship between the inputs and outputs of the original model or system. The difference between the surrogate model prediction and the original model prediction is denoted as the surrogate model error ϵ_{su}.
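A minimal one-dimensional sketch of regression-based surrogate fitting may help fix ideas; the linear "model", the basis truncation {1, ξ}, and all values are illustrative assumptions, not the paper's formulation:

```python
import random

# One-dimensional regression "PCE" sketch: fit y ≈ θ0·1 + θ1·ξ by
# ordinary least squares (normal equations) to samples of a model.
def fit_linear_pce(xi, y):
    m = len(xi)
    sx, sy = sum(xi), sum(y)
    sxx = sum(v * v for v in xi)
    sxy = sum(v * w for v, w in zip(xi, y))
    theta1 = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    theta0 = (sy - theta1 * sx) / m
    return theta0, theta1

random.seed(1)
xi = [random.gauss(0.0, 1.0) for _ in range(9)]   # standard normal inputs
y = [2.0 + 0.5 * v for v in xi]                   # hypothetical model response
theta0, theta1 = fit_linear_pce(xi, y)            # recovers 2.0 and 0.5
```

The multidimensional Hermite-basis formulation used in the paper follows the same least-squares logic with a larger basis.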
Because the true response of the original model is unknown at an untrained point (within the training bounds of the surrogate model), the surrogate model error is treated as a stochastic quantity. In this study, for the sake of illustration, a polynomial chaos expansion (PCE) [17] is used as a surrogate for the time-consuming finite element analysis in order to generate the number of Monte Carlo samples needed for quantifying the uncertainty in the model output. PCE is a regression-based surrogate model that represents the output of a model with a series expansion in terms of standard random variables (SRVs). Consider a model y = f(x), in which y is the output from a numerical simulation f(x) and x = [x_{1}, x_{2}, ..., x_{k}]^{T} is a vector of input variables that follow certain probability distributions. Suppose a PCE is constructed to replace f(x) using multidimensional Hermite polynomials as basis functions:

y ≈ Θ^{T}ϕ(ξ) = ∑^{n}_{i=0} θ_{i}φ_{i}(ξ)    (6)
where ξ is a vector of independent standard normal random variables corresponding to the original inputs [18], ϕ(·) = {φ_{0}(·), φ_{1}(·), ..., φ_{n}(·)}^{T} are the Hermite basis functions, and Θ = {θ_{0}, θ_{1}, ..., θ_{n}}^{T} are the corresponding coefficients, which can be estimated by the least-squares method. A collocation point method can be used to efficiently select training points where the original model is evaluated [19]. Suppose that m training points (ξ_{i}, y_{i}), i = 1, 2, ..., m are available. Under the Gauss-Markov assumption [20], the surrogate model error ϵ_{su} asymptotically follows a normal distribution with zero mean and variance given by

σ^{2}_{su}(ξ) = s^{2} ϕ(ξ)^{T}(Ψ^{T}Ψ)^{−1}ϕ(ξ)    (7)
where Ψ = {ϕ(ξ_{1}), ϕ(ξ_{2}), ..., ϕ(ξ_{m})}^{T} and s^{2} = [1/(m − n)] ∑^{m}_{i=1} [y_{i} − Θ^{T}ϕ(ξ_{i})]^{2}. As an illustration, consider the slender cantilever beam problem described in Section 2.2. Assume the concentrated force follows a normal distribution with mean 3.5 N and standard deviation 0.4 N. Fifteen samples of F are generated, and 15 corresponding beam tip deflections are calculated using the finite element model with mesh size h = 7.5 cm. The first nine results are used to construct a first-order PCE surrogate model, and the last six are used to verify the PCE model, which is then used to predict the beam tip deflection over the force range 3 to 4.5 N. The error associated with each prediction is calculated, based on which 90% confidence bounds are constructed. The results are plotted in Fig. 1, showing excellent agreement between the surrogate model predictions and the original FEA predictions at the validation points. FIG. 1: Surrogate model prediction and confidence bounds. In order to account for the surrogate model error, random samples of ϵ_{su} are generated and added to the surrogate model prediction. As a result, a number of sample values of the surrogate model prediction, instead of only one, are obtained for a single input, and all sample values should be used in the succeeding analyses to obtain sample values of the final model prediction. This approach to accounting for the surrogate model prediction error also works for other stochastic surrogate models, such as the Gaussian process model [21], which provide an estimate of the prediction variance.

3. UNCERTAINTY QUANTIFICATION ERROR (ϵ_{uq})

Section 2 discussed errors that arise in the solution procedure and methods to quantify them. Even the error quantification process itself, which requires uncertainty propagation analysis (e.g., Monte Carlo sampling), has error due to the limited number of samples in practical analysis.
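The limited-sample error in an empirical CDF value can be checked numerically. The following sketch (with assumed sample sizes) compares the observed spread of the empirical CDF at one point against the binomial-based standard error developed in Section 3.1:

```python
import random

# Empirical CDF value at a point x from Ns samples, repeated many times;
# its spread should match the binomial standard error sqrt(F(1-F)/Ns).
def ecdf_at(samples, x):
    return sum(1 for s in samples if s <= x) / len(samples)

random.seed(2)
Ns, reps, x = 50, 2000, 0.0                 # X ~ N(0,1); true F(0) = 0.5
vals = [ecdf_at([random.gauss(0.0, 1.0) for _ in range(Ns)], x)
        for _ in range(reps)]
mean_F = sum(vals) / reps
var_F = sum((v - mean_F) ** 2 for v in vals) / reps
predicted_var = 0.5 * 0.5 / Ns              # F(1-F)/Ns at the median
```

The empirically observed variance of the CDF estimate agrees closely with the binomial prediction, which is the basis of the UQ error model below.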
We denote the uncertainty quantification error ϵ_{uq} as the difference between the empirical CDF value and the "true" CDF value. In this section, an error estimation method is first developed to quantify ϵ_{uq}. Then the propagation of ϵ_{uq} is studied when samples are generated using the empirical CDF and used in further analysis.

3.1 Error in Empirical CDF

Suppose that an empirical CDF F(x) is constructed from N_{s} samples of a random variable X. Let F_{X}(x) refer to the corrected CDF that includes the error in the empirical CDF. If n of the samples are smaller than a specific value x, then n follows a binomial distribution [22] with mean N_{s}F_{X}(x) and standard deviation √(N_{s}F_{X}(x)[1 − F_{X}(x)]), considering that the value of each sample is the result of a Bernoulli trial. The binomial distribution approaches the normal distribution as N_{s} increases; when N_{s} > 20, a normal distribution with the same mean and standard deviation is a good approximation to the original binomial distribution [6]. Noting that F(x) = n/N_{s}, we have

F_{X}(x) ~ N(F(x), σ)    (8)
Therefore ϵ_{uq}, the error associated with the CDF value F_{X}(x), can be represented as a normal random variable with zero mean and variance σ^{2}, which can be estimated from

σ^{2} = F(x)[1 − F(x)]/N_{s}    (9)
Note that the variance of this error is actually a function of x, and it goes to zero at both ends of the CDF curve. Alternatively, we can directly treat F_{X}(x) as a random variable with the empirical CDF value as the mean and variance σ^{2} as in Eq. (9). For example, suppose X ~ N(0,1) and 21 samples of X are available. The empirical CDF and the 90% confidence bounds are constructed using the above method. The result is compared to the true CDF curve in Fig. 2, which shows that the true CDF curve lies between the confidence bounds. FIG. 2: Empirical CDF with 90% confidence bounds versus true CDF.

3.2 Propagation of ϵ_{uq}

If more realizations of x are generated from the empirical CDF to calculate the distribution of another variable y = g(x), then ϵ_{uq} is propagated through the sampling procedure and results in a sampling error ϵ_{s} in the new samples of x. For sampling by the inverse CDF method, an approximate method for quantifying the sampling error ϵ_{s} is proposed here. In Fig. 3, let (ϵ_{uq})_{i} and (ϵ_{uq})_{i}' be the errors corresponding to the CDF values at x_{i} and x_{i} + (ϵ_{s})_{i}, respectively. From Eq. (9), we know that (ϵ_{uq})_{i} ~ N[0, σ(x_{i})] and (ϵ_{uq})_{i}' ~ N{0, σ[x_{i} + (ϵ_{s})_{i}]}. Letting (ϵ_{uq})_{i} and (ϵ_{uq})_{i}' have the same percentile value [assuming (ϵ_{s})_{i} is very small] yields a relation, Eq. (10), between (ϵ_{s})_{i} and (ϵ_{uq})_{i}.
In Eq. (10), (ϵ_{s})_{i} is the unknown quantity to be estimated. Using Eq. (10), one can calculate (ϵ_{s})_{i} for a given (ϵ_{uq})_{i}; that is, samples of (ϵ_{s})_{i} can be obtained by sampling (ϵ_{uq})_{i}. Thus, the corrected value of x_{i} is [x_{i} + (ϵ_{s})_{i}], which includes the sampling error (ϵ_{s})_{i}. Now y = g(x) is calculated using the corrected realization. FIG. 3: Sampling error quantification. In the context of this paper, the model form error statistics are obtained through Eq. (10), using samples of the corrected model prediction y_{c} and observations y_{obs} (details in Section 4). The method developed in this section offers a generic approach to resampling from sparse data that includes the extra uncertainty due to UQ error in the new samples. A kernel density estimate may be used to obtain a smoothed empirical CDF before applying this method, if desired. In Section 4, the proposed method is used in model form error estimation to include the error caused by the limited number of experimental observations y_{obs} and samples of the corrected model output y_{c}.

4. MODEL FORM ERROR ESTIMATION

As mentioned in Section 1, model form error is calculated using Eqs. (1) and (2), by comparing model predictions against experimental observations of the same output quantity, for the "same" set of inputs. In this case, the output measurement error needs to be taken into account. From Eqs. (1) and (2), we have

y_{obs} − ϵ_{om} = y_{pred} + ϵ_{num} + ϵ_{model}    (11)
Rearranging the terms, and using Eq. (3), gives

ϵ_{model} = y_{obs} − ϵ_{om} − y_{c}    (12)
Equation (12) is evaluated by sampling each of the three terms on the right-hand side, and samples of ϵ_{model} are thereby obtained. The output measurement error ϵ_{om} is commonly assumed to be a normal random variable with zero mean and an assumed variance [20], based on the measurement process and equipment. However, in most cases only the distribution of ϵ_{om} is available, and only a few point-valued data are available for y_{obs}. Moreover, if the procedure for computing y_{c} requires considerable computational effort, only a limited number of its samples can be obtained. A resampling approach is needed to generate more samples of y_{obs} and y_{c} from sparse data, but resampling itself causes another error. Therefore, the approach developed in Section 3 for quantifying the sampling error is used, and Eq. (12) becomes

ϵ_{model} = [y_{obs} + (ϵ_{s})_{obs}] − ϵ_{om} − [y_{c} + (ϵ_{s})_{c}]    (13)
where (ϵ_{s})_{obs} and (ϵ_{s})_{c} are the errors due to the limited numbers of samples of y_{obs} and y_{c}, respectively; these errors are calculated using Eq. (10). After the model form error statistics are calculated using Eq. (13), for future prediction purposes, the overall corrected prediction that includes both solution approximation error and model form error may be computed as y_{c} + ϵ_{model}. In this expression, y_{c} includes the contributions of the various solution approximation errors and the input error, and ϵ_{model} includes the contributions of model form error, UQ error, and output measurement error.

5. SENSITIVITY ANALYSIS

The previous sections developed a methodology to quantify different errors in the model prediction. In this section, sensitivity analysis is performed to estimate the contribution of each error source to the model prediction uncertainty. Previous studies in stochastic sensitivity analysis have only considered the effect of input random variables; this paper extends those methods to include sources of solution approximation error. The sensitivity analysis is done in terms of the contribution to the uncertainty in the corrected model prediction y_{c} [defined in Eq. (3)]. Because the UQ error and model form error are computed only after the computation of y_{c}, this sensitivity analysis is only with respect to solution approximation errors and does not include UQ error or model form error. Two local and two global sensitivity measures are studied in this section. A generic expression Y = f(X_{1}, X_{2}, ..., X_{n}) is used in the discussion below, where X_{i} can be a model input, an error, or a model parameter with uncertainty, and Y refers to the corrected model output y_{c} that accounts for all the contributing error sources.

5.1 Local Sensitivity Analysis

Two local sensitivity analysis methods are considered. The first is variance-based, and the second is entropy-based.
5.1.1 Change in Variance

The change of variance in the model prediction due to the ith input is defined as

(Δσ^{2})_{i} = Var(Y) − Var_{X_{~i}}(Y|X_{i} = x_{i})    (14)
Equation (14) provides a measure of the change in the variance of Y if the ith input is ignored. Ignoring X_{i} means fixing it at its mean value (usually zero) for a stochastic error, or not correcting for a deterministic error. The second term on the right-hand side of Eq. (14) is the conditional variance of Y given X_{i}, calculated over X_{~i} (i.e., all X except X_{i}). The greater the value of (Δσ^{2})_{i}, the more important X_{i} is. However, this measure is only in terms of variance and ignores other uncertainty information, such as mean, skewness, kurtosis, etc. Also, the result depends on where X_{i} is fixed, and in occasional cases it is possible that Var(Y|X_{i} = x_{i}) > Var(Y) [23]. But due to its simplicity, the change of variance (Δσ^{2})_{i} is still a usable scalar measure of the effect of each contributing source of uncertainty. Because X_{i} is fixed at a single value, the change of variance is a local sensitivity index.

5.1.2 Kullback-Leibler Divergence

The Kullback-Leibler divergence (KL divergence) [24], adapted from information theory, measures the difference between two probability density functions p(x) and q(x), in the sense of the relative information entropy (uncertainty) of p(x) with respect to q(x). It is defined as

D_{KL}(p‖q) = ∫ p(x) ln[p(x)/q(x)] dx    (15)
The KL divergence is nonnegative, and it is zero if and only if p(x) and q(x) are exactly the same. It is sensitive to differences in both mean value and variance. The KL divergence has been used in sensitivity analysis [25] to measure the contribution of an individual source to the uncertainty in the model prediction. In this paper, the KL divergence D_{KL}[(Y|X_{i} = x_{i}) ‖ Y] is used to compare the difference between the entire distributions of Y and of the conditional (Y|X_{i} = x_{i}), in which X_{i} is fixed at a particular value (zero for a deterministic error and the mean value for a stochastic error). A larger value of the KL divergence implies that Y is more sensitive to X_{i}.

5.2 Global Sensitivity Analysis

Instead of fixing X_{i} at a single value, we can average the conditional variance Var_{X_{~i}}(Y|X_{i}) over the entire distribution of X_{i}. This average is denoted as E_{X_{i}}[Var_{X_{~i}}(Y|X_{i})]; it no longer depends on where X_{i} is fixed, and is therefore "global" over the range of X_{i}. Starting with this quantity and based on variance decomposition [26], the variance of Y can be decomposed into two terms with respect to X_{i} [23],

Var(Y) = E_{X_{i}}[Var_{X_{~i}}(Y|X_{i})] + Var_{X_{i}}[E_{X_{~i}}(Y|X_{i})]    (16)
Here the two terms are complementary to each other: either a smaller first term or a bigger second term indicates a more important X_{i}. By normalizing the second term, we obtain the main effect sensitivity index,

S_{i} = Var_{X_{i}}[E_{X_{~i}}(Y|X_{i})]/Var(Y)    (17)
which is always between 0 and 1. The main effect is also referred to as the first-order effect. Note that a low main effect index does not imply that the variable is unimportant: a variable with a low main effect might make a bigger contribution to the model output through interaction with other variables. Therefore, a more comprehensive sensitivity measure that includes both the main effect and the interaction effect is the total effect index. If we swap X_{i} and X_{~i} in Eq. (16), another way of decomposing the variance of Y is

Var(Y) = E_{X_{~i}}[Var_{X_{i}}(Y|X_{~i})] + Var_{X_{~i}}[E_{X_{i}}(Y|X_{~i})]    (18)
By normalizing the first term, we obtain the total effect sensitivity index,

S_{Ti} = E_{X_{~i}}[Var_{X_{i}}(Y|X_{~i})]/Var(Y)    (19)
The total effect index becomes valuable when the sum of the individual main effect indices is not close to 1, which implies that strong interaction effects exist among the variables.

5.3 Deterministic and Stochastic Errors in Sensitivity Analysis

It was mentioned in Section 2 that deterministic errors (such as discretization error) are corrected, whereas stochastic errors are accounted for through sampling, in order to compute the corrected model prediction y_{c}. With such a strategy, only local sensitivity measures can be calculated for deterministic errors, which obviously have only fixed values for fixed model inputs: the corrected model prediction is calculated with and without correcting for the deterministic error, and the corresponding change in variance or KL divergence is computed. For stochastic errors, it is more appropriate to compute global sensitivity measures instead of measures at one particular value (typically, the mean). However, this creates a difficulty in comparing the relative contributions of deterministic versus stochastic errors to the overall model prediction uncertainty when resource allocation decisions are needed for activities such as model refinement and data collection. Therefore, an approximate approach is proposed below to compute global sensitivity measures even for deterministic errors. In global sensitivity analysis with stochastic variables, samples of the variables are generated based on their distributions. But a deterministic variable, such as the discretization error ϵ_{h}, does not have a distribution to sample from. An approximate approach is to obtain samples of ϵ_{h} corresponding to randomly sampled stochastic inputs to the FEA model; these samples of ϵ_{h} are used to construct a distribution of ϵ_{h}. Another interesting problem occurs in calculating global sensitivity measures with respect to the surrogate model error ϵ_{su}, even though it is stochastic.
This is because the distribution of ϵ_{su} is local to a particular prediction of the surrogate model, and depends on the input; the overall distribution of ϵ_{su} is not available over the entire range of the input, thus hindering global sensitivity analysis. To overcome this difficulty, an approximate approach is to obtain samples of ϵ_{su} corresponding to sampled inputs to the surrogate model; these samples of ϵ_{su} are used to construct the overall distribution of ϵ_{su}. In summary, the need to compare the various sources of error with the same sensitivity measure creates two issues: local sensitivity measures are not appropriate for stochastic errors, and the calculation of global sensitivity measures is not straightforward for deterministic errors. The approximation in this section offers a convenient way of including discretization error and surrogate model error in global sensitivity analysis, thus making the various error sources comparable under the same sensitivity measure.

6. NUMERICAL EXAMPLES

6.1 Cantilever Beam

The tip deflection δ of the cantilever beam shown in Fig. 4 is of interest. The beam has length L = 192 in., cross-sectional moment of inertia I = 300 in.^{4}, and Poisson's ratio ν = 0.3. The Young's modulus E is assumed to be a normal random variable with mean value 29,000 ksi and COV 0.06. The self-weight w(x) is assumed to be a one-dimensional Gaussian random field, with mean value 75 lb/in. and COV 0.05, and an exponential covariance function defined by C(Δx) = σ^{2} exp(−|Δx|/b), in which σ^{2} is the variance and b is the correlation length of the random field. In this example, b is assumed to be equal to the length of the beam. The concentrated load P at the beam tip is the input to the model; it is a normal random variable with mean 1000 lb and COV 0.16. FIG. 4: Cantilever beam.
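The probabilistic inputs above can be sampled as follows. This is a hedged sketch in which the random field w(x) is collapsed to a single normal variable for illustration (an assumption; the example itself uses a full random field), with std = COV × mean for each input:

```python
import random

# Sampling the probabilistic inputs of the cantilever example
# (std = COV * mean for each normal variable); the random field w(x)
# is reduced to one normal variable here for illustration only.
def sample_inputs(n, seed=3):
    rng = random.Random(seed)
    E = [rng.gauss(29000.0, 0.06 * 29000.0) for _ in range(n)]  # ksi
    w = [rng.gauss(75.0, 0.05 * 75.0) for _ in range(n)]        # lb/in.
    P = [rng.gauss(1000.0, 0.16 * 1000.0) for _ in range(n)]    # lb
    return E, w, P

E, w, P = sample_inputs(5000)
meanP = sum(P) / len(P)
```

These input samples are what would be propagated through the surrogate model to obtain the raw prediction statistics quoted below.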
For the sake of illustration, assume that an error ϵ_{P} arises when measuring the input P; it is also a normal random variable, with zero mean and a variance equal to 20% of the variance of P [27]. Similarly, the output (i.e., deflection) measurement error is also assumed to be a normal random variable, with zero mean and standard deviation 0.01 in. A finite element model is constructed with four beam elements of equal length, and a first-order polynomial chaos expansion with Hermite bases [of the form in Eq. (6)] is constructed as the surrogate computational model.
Nine training points are selected by the collocation method to run the FEA model for training the surrogate model [19]. Then, the surrogate model is used to generate samples of model predictions by sampling the inputs. The distribution of the model prediction is estimated from these samples to have a mean value of 1.74 and standard deviation of 1.23 × 10^{−1}. This is the raw model prediction.

6.1.1 Numerical Errors and Sensitivity

In this example, the numerical error in the model prediction is assumed to come from three error sources: discretization error in FEA ϵ_{h}, surrogate model error ϵ_{su}, and input error ϵ_{P}. First, the Richardson extrapolation technique is used to correct the discretization error in FEA. For each of the nine training points, two more FEA runs are made with eight elements and 16 elements, respectively (the original number of elements is four) to calculate ϵ_{h}. A new set of nine training points is obtained by adding the discretization error to the original training points, and a new surrogate model PCE_{h} is built with the corrected training data. Then, ϵ_{P} and ϵ_{su} are also included and, finally, the corrected model prediction δ_{c} is given by
The numerical error can be calculated as ϵ_{num} = δ_{c} − δ_{pred}, where δ_{pred} is the raw model prediction. By randomly generating samples of ϵ_{num}, the mean and standard deviation of ϵ_{num} are estimated to be 4.01 × 10^{−3} and 1.78 × 10^{−1}, respectively. Using the corrected model prediction δ_{c}, all four sensitivity analyses are performed on the three errors. The results are shown in Table 1. The first two sensitivity indices are local, and the next two are global. From the sensitivity ranking in Table 1, it is seen that the global measures indicate that the discretization error affects the model solution the most, whereas the local measures give the input measurement error the highest rank. TABLE 1: Sensitivity analysis
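One way to compute a global measure of the kind ranked in Table 1 is a first-order Sobol' index. The following is a minimal double-loop Monte Carlo sketch, with hypothetical error standard deviations and an additive toy combination of the three errors (not the paper's actual estimator or values):

```python
import numpy as np

# Sketch (hypothetical magnitudes): first-order Sobol' index of one error
# source for an additive combined error y = eps_h + eps_su + eps_P, via the
# double-loop estimator S_i = Var(E[y | eps_i]) / Var(y).
rng = np.random.default_rng(1)

def first_order_index(sigma_i, sigmas_rest, n_outer=2000, n_inner=200):
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        eps_i = rng.normal(0.0, sigma_i)                   # fix eps_i
        rest = rng.normal(0.0, sigmas_rest,
                          size=(n_inner, len(sigmas_rest)))
        cond_means[k] = np.mean(eps_i + rest.sum(axis=1))  # E[y | eps_i]
    var_y = sigma_i**2 + np.sum(np.square(sigmas_rest))    # analytic Var(y)
    return np.var(cond_means) / var_y

# Hypothetical standard deviations: eps_h first, then eps_su and eps_P
S_h = first_order_index(0.05, [0.02, 0.01])
```

For the additive model the analytic answer is σ_h²/(σ_h² + σ_su² + σ_P²) ≈ 0.83, so the estimate should land near that value; in practice variance-based estimators such as Sobol's pick-freeze scheme [26] avoid the inner loop.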
6.1.2 Model Form Error and UQ Error Estimation

Model form error can be estimated using Eq. (13); this also needs to take the uncertainty quantification errors into account. Assume nine experimental observations of beam deflection δ_{obs} are available, and assume 20 samples of δ_{c} are taken from the error quantification procedure for resampling δ_{c}. Further assume that the output measurement error ϵ_{om} follows a normal distribution with zero mean and standard deviation 0.01. For comparison, samples of model form error without considering the uncertainty quantification error are also generated by ignoring the sampling errors in Eq. (13). The CDFs of model form error with and without considering uncertainty quantification error are plotted in Fig. 5a; one thousand samples of model form error were generated for each case. The PDFs of sampling error for δ_{obs} and δ_{c} are also plotted in Fig. 5b. Statistical results are summarized in Table 2. Because fewer data are available for δ_{obs} than for δ_{c}, the variance of sampling error in δ_{obs} is significantly greater than in δ_{c}. It is noted that including the UQ error (due to limited sampling) increased the variance of the estimated model form error, because more randomness was introduced. Finally, because the mean and variance of ϵ_{model} with ϵ_{uq} are estimated from 1000 samples, errors arise in their estimates; these errors are very small and are ignored. FIG. 5: Statistics of model form error and sampling errors. TABLE 2: Statistics of model form error and sampling errors
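The effect of limited data on a statistic of δ_{obs} can be illustrated with a simple bootstrap, here on synthetic stand-in observations (the values and the resampled statistic are assumptions of this sketch, not the paper's Eq. (13) procedure):

```python
import numpy as np

# Sketch (hypothetical data): quantify the sampling (UQ) error in the mean
# of the nine observations by bootstrap resampling; the spread of the
# resampled means plays the role of a sampling-error distribution.
rng = np.random.default_rng(2)
delta_obs = rng.normal(1.74, 0.15, size=9)   # stand-in for the 9 observations

boot_means = np.array([
    rng.choice(delta_obs, size=delta_obs.size, replace=True).mean()
    for _ in range(1000)
])
sampling_err_std = boot_means.std()          # spread due to limited data
```

Repeating the same resampling with the 20 samples of δ_{c} yields a noticeably smaller spread, consistent with the observation above that the sampling-error variance of δ_{obs} dominates.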
6.2 Crack Growth in an Airplane Wing Spar

In this example, quantification and sensitivity analysis of errors in crack growth prediction in part of an airplane wing is of interest. In crack growth analysis, sensitivity analysis of errors faces a major difficulty because different sample values of the same error enter the analysis in each cycle. This issue is discussed in detail later. The wing spar plays a key role in connecting the wing to the fuselage and is subjected to cyclic loading during flight. The final crack size a_{N} that grows from an initial size a_{0} = 0.05 in. under a given load history is studied. Because of the various uncertainties and errors arising in the analysis, a_{N} is a random variable. The analysis consists of two modules: structural stress analysis and fatigue crack growth analysis, as shown in Fig. 6. The errors and uncertainty considered in this example are input error, discretization error in FEA, surrogate model error, and uncertainty in the crack growth law parameter. FIG. 6: Component analyses and associated sources of error and uncertainty. The loads B and P are inputs to the structural model. The output of the structural analysis is the first principal stress in the critical region (hot spot). The analysis is carried out by finite element analysis. To quantify the discretization errors in the finite element analysis using Richardson's extrapolation, the finite element analysis is repeated for each training point with two levels of mesh refinement. The refinement ratio is 0.75; that is, the element size is multiplied by 0.75 in each refinement. A surrogate model is constructed using a first-order polynomial chaos expansion with Hermite polynomial bases. The surrogate model is trained using FEA results and is used to replace the FEA in further analysis. The associated surrogate model error is stochastic.
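The Richardson extrapolation used in both examples can be sketched in its standard form (the function and the synthetic check values below are illustrative assumptions; r is the coarse-to-fine grid ratio, r = 2 for the beam meshes and r = 1/0.75 for the wing spar):

```python
import numpy as np

# Sketch of Richardson extrapolation for discretization error: f1, f2, f3
# are solutions on successively refined meshes with constant ratio r.
def discretization_error(f1, f2, f3, r):
    """Estimate eps_h = f_exact - f1, the error in the coarsest solution."""
    p = np.log(abs((f1 - f2) / (f2 - f3))) / np.log(r)  # observed order
    f_exact = f3 + (f3 - f2) / (r**p - 1.0)             # extrapolated value
    return f_exact - f1

# Synthetic check: f(h) = 2 + 0.1 h^2 at h = 1, 0.5, 0.25 (order p = 2),
# so the exact value is 2.0 and the coarse-mesh error is -0.1
eps_h = discretization_error(2.1, 2.025, 2.00625, r=2.0)
```

In the paper's procedure this error is added to each training point before the surrogate model is rebuilt, so the correction is applied at the training-data level rather than to the surrogate output.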
Before proceeding with the crack growth analysis, the stress intensity factor is first calculated from the structural analysis result (in the context of linear elastic fracture mechanics). This is given by

K = βs√(πa)    (22)

where s is the stress at the crack tip, a is the crack length, and β depends on the geometry and crack length. In this example, β is available from [28]. Paris' law is used here to illustrate the crack growth calculation,

da/dN = C(ΔK)^{n}    (23)
The parameter n is assumed to be deterministic at 3.2. Because of fitting error, the parameter C is assumed to be a random variable that follows a lognormal distribution, with a mean value of 1.0 × 10^{−7} and COV of 0.24.

6.2.1 Correction of Errors

The following describes the procedure for calculating a_{N} corrected for all errors. At the beginning, a random sample of C is generated, which stays constant throughout all cycles. In each cycle, first the mean values of B and P are obtained from a given variable amplitude load history. Random samples of ϵ_{B} and ϵ_{P} are generated and added to B and P, respectively. The corrected B and P are then fed into the surrogate model, which is built with nine FEA results corrected for discretization error using Richardson extrapolation. The surrogate model outputs the stress prediction s, as well as the associated prediction error. Next, Eq. (22) is used to calculate the stress intensity factor, which is input to the crack growth law. To accelerate the computation, the increment of crack length da is calculated in blocks of 10 cycles [i.e., dN is set to 10 in Eq. (23)]. Finally, the crack length is updated until the end of the load history. The analysis is repeated to generate 10,000 such samples of a_{N}.

6.2.2 Results and Sensitivity Analysis

The a_{N} obtained in Section 6.2.1, corrected for all errors, is denoted as a_{c} in the following discussion. However, the structure of the crack growth problem differs from that of the previous example because of the cycle-by-cycle analysis. A small example illustrates this special case. Let a_{i+1} = g(a_{i}) + ϵ be a crack growth model, in which a_{i} and a_{i+1} are the crack sizes in the ith and (i + 1)th cycles, and ϵ is a random error. If the crack grows from size a_{0} to a_{3} in three cycles, then we have

a_{1} = g(a_{0}) + ϵ^{(1)}
a_{2} = g(a_{1}) + ϵ^{(2)}
a_{3} = g(a_{2}) + ϵ^{(3)}
in which ϵ^{(1)}, ϵ^{(2)}, and ϵ^{(3)} are three independently generated samples of the error ϵ. In other words, a_{3} is a function of four input variables: a_{3} = g_{3}[a_{0}, ϵ^{(1)}, ϵ^{(2)}, ϵ^{(3)}]. Existing sensitivity methods can only estimate the sensitivity of a_{3} with respect to each of ϵ^{(1)}, ϵ^{(2)}, and ϵ^{(3)} individually. However, the sensitivity of a_{3} to the overall error ϵ is what we are actually interested in. An approximate approach to overcome this hurdle is to assume that all samples of the same error take the same value throughout all cycles. This makes it possible to compute the sensitivity with respect to a single error term ϵ over all cycles. The results in Table 3 show that all four sensitivity measures indicate that the final crack size is most sensitive to the parameter C in Paris' law and least sensitive to the discretization error in FEA. This implies that in order to achieve a more accurate prediction of the final crack size, a more precise parameter C (i.e., narrower scatter) is needed. In contrast, the effect of the discretization error in FEA is much smaller. TABLE 3: Sensitivity analysis results
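The cycle-by-cycle procedure of Section 6.2.1, together with the frozen-error approximation just described, can be sketched as follows. Everything numeric here is a stand-in: the load magnitudes, error standard deviations, the linear placeholder for the PCE surrogate, and β = 1 are all assumptions of this sketch, not values from the paper:

```python
import numpy as np

# Sketch of block-wise crack growth with Paris' law (assumed magnitudes).
# freeze_errors=True applies the approximation above: each error keeps one
# sampled value for all cycles, so a_N becomes a function of single error
# terms and standard sensitivity methods apply.
def grow_crack(a0=0.05, n_blocks=500, dN=10, n=3.2,
               freeze_errors=False, seed=3):
    rng = np.random.default_rng(seed)
    C = rng.lognormal(np.log(1.0e-7), 0.24)      # Paris' law C, sampled once
    eps_B = rng.normal(0.0, 50.0)                # frozen input errors
    eps_P = rng.normal(0.0, 20.0)
    eps_su = rng.normal(0.0, 0.05)               # frozen surrogate error
    a = a0
    for _ in range(n_blocks):
        if not freeze_errors:                    # resample every block
            eps_B = rng.normal(0.0, 50.0)
            eps_P = rng.normal(0.0, 20.0)
            eps_su = rng.normal(0.0, 0.05)
        B, P = 5000.0 + eps_B, 1000.0 + eps_P    # mean loads + input errors
        s = 1.0e-3 * B + 2.0e-3 * P + eps_su     # placeholder PCE surrogate
        dK = s * np.sqrt(np.pi * a)              # Eq. (22) with beta = 1
        a += C * dK**n * dN                      # Eq. (23), block of dN cycles
    return a

a_N = grow_crack()
```

Repeating `grow_crack()` many times gives the sample set of a_{N}; with `freeze_errors=True`, varying one frozen error while holding the others fixed gives the sensitivity of a_{N} to that single error term.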
7. CONCLUSION

This paper studied some of the errors that arise in mechanics computational models. There are three major contributions. The first is that the errors are clearly separated, and a quantification method is developed for each of them, including model form error, uncertainty quantification error, and three typical sources of numerical (solution approximation) error. Some of these errors are random and some are deterministic. Deterministic error is corrected by adding it to the prediction, and random error is included in the model output through sampling. By correcting or accounting for all the errors, the corrected model output is obtained. The corrected model outputs, together with observed data, are used to estimate model form error through sampling, where uncertainty quantification error arises. Therefore, a methodology to quantify the UQ error is developed (this is the second contribution). The error in the CDF constructed from limited samples is first quantified; then the propagation of this error due to further sampling from the approximate CDF is also quantified. The third major contribution is the extension of sensitivity analysis methods to rank the contribution of each error. Past work in global sensitivity analysis has only considered the influence of input random variables on output uncertainty; this paper extends the methods to include model errors. Methods for computing both local and global sensitivity measures are developed, and approximate approaches are developed for fatigue crack growth analysis, which requires repetition of error terms in each cycle. The proposed error quantification and sensitivity analysis framework helps guide resource allocation for model refinement and data collection, balancing computational effort and prediction accuracy. The framework is general and can be extended to include other sources of error and other methods of quantifying individual errors.
ACKNOWLEDGMENTS

The research reported in this paper was supported by funds from the Air Force Office of Scientific Research (Project Manager: Dr. Fariba Fahroo) through a subcontract to Vextec Corporation (Investigators: Dr. Robert Tryon, Dr. Animesh Dey). The support is gratefully acknowledged. The authors are also grateful to Dr. Vadiraj Hombal and Shankar Sankararaman for valuable discussions and help.

REFERENCES

1. Oden, J., Babuska, I., Nobile, F., Feng, Y., and Tempone, R., Theory and methodology for estimation and control of errors due to modeling, approximation, and uncertainty, Comput. Methods Appl. Mech. Eng., 194(2–5):195–204, 2005.
2. Oberkampf, W. and Trucano, T., Verification and validation in computational fluid dynamics, Prog. Aerospace Sci., 38(3):209–272, 2002.
3. Oberkampf, W., Sindir, M., and Conlisk, A., Guide for the verification and validation of computational fluid dynamics simulations, AIAA, 1998.
4. Rebba, R., Model validation and design under uncertainty, PhD thesis, Vanderbilt University, 2005.
5. Rebba, R., Mahadevan, S., and Huang, S., Validation and error estimation of computational models, Reliab. Eng. Syst. Safety, 91(10–11):1390–1397, 2006.
6. Shooman, M., Probabilistic Reliability: An Engineering Approach, McGraw-Hill, New York, 1968.
7. Babuška, I. and Rheinboldt, W., A-posteriori error estimates for the finite element method, Int. J. Numer. Methods Eng., 12(10):1597–1615, 1978.
8. Demkowicz, L., Oden, J., and Strouboulis, T., Adaptive finite elements for flow problems with moving boundaries. I—Variational principles and a posteriori estimates, Comput. Methods Appl. Mech. Eng., 46:217–251, 1984.
9. Zienkiewicz, O. and Zhu, J., A simple error estimator and adaptive procedure for practical engineering analysis, Int. J. Numer. Methods Eng., 24(2):337–357, 1987.
10. Szabo, B., Mesh design for the p-version of the finite element method, Comput. Methods Appl. Mech. Eng., 55(1–2):181–197, 1986.
11. Ainsworth, M. and Oden, J., A posteriori error estimation in finite element analysis, Comput. Methods Appl. Mech. Eng., 142(1–2):1–88, 1997.
12. Richardson, L., The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam, Philos. Trans. R. Soc. London, Ser. A, 210:307–357, 1911.
13. Richards, S., Completed Richardson extrapolation in space and time, Commun. Numer. Methods Eng., 13(7):573–582, 1997.
14. Celik, I. and Karatekin, O., Numerical experiments on application of Richardson extrapolation with nonuniform grids, J. Fluids Eng., 119:584–590, 1997.
15. Alvin, K., Structural mechanics and materials—Method for treating discretization error in nondeterministic analysis, AIAA J., 38(5):910–916, 2000.
16. Beléndez, T., Neipp, C., and Beléndez, A., Large and small deflections of a cantilever beam, Eur. J. Phys., 23:371–379, 2002.
17. Ghanem, R. and Spanos, P., Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.
18. Isukapalli, S., Uncertainty analysis of transport-transformation models, PhD thesis, Rutgers, The State University of New Jersey, 1999.
19. Huang, S., Mahadevan, S., and Rebba, R., Collocation-based stochastic finite element analysis for random field problems, Prob. Eng. Mech., 22(2):194–205, 2007.
20. Seber, G. and Wild, C., Nonlinear Regression, Wiley, Hoboken, NJ, 1989.
21. Rasmussen, C., Gaussian processes in machine learning, Adv. Lect. Mach. Learning, 3174:63–71, 2004.
22. Haldar, A. and Mahadevan, S., Probability, Reliability, and Statistical Methods in Engineering Design, Wiley, Hoboken, NJ, 2000.
23. Saltelli, A. and Ratto, M., Global Sensitivity Analysis: The Primer, Wiley-Interscience, 2008.
24. Kullback, S. and Leibler, R., On information and sufficiency, Ann. Math. Stat., 22(1):79–86, 1951.
25. Liu, H., Chen, W., and Sudjianto, A., Relative entropy based method for probabilistic sensitivity analysis in engineering design, J. Mech. Des., 128:326–336, 2006.
26. Sobol, I., Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Math. Comput. Simul., 55(1–3):271–280, 2001.
27. Aeschliman, D. and Oberkampf, W., Experimental methodology for computational fluid dynamics code validation, Sandia Report No. SAND95I 189 UC706, 2006.
28. Anderson, T., Fracture Mechanics: Fundamentals and Applications, CRC Press, Boca Raton, 2005.