International Journal for Uncertainty Quantification

Published 6 issues per year

Print ISSN: 2152-5080

Online ISSN: 2152-5099



DESIGN UNDER UNCERTAINTY EMPLOYING STOCHASTIC EXPANSION METHODS

Michael S. Eldred

Optimization and Uncertainty Quantification Department, Sandia National Laboratories, Albuquerque, NM 87185-1318

Abstract

Nonintrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification due to their fast convergence properties and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, cubature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients and requires the use of structured collocation point sets derived from tensor product or sparse grids. Once PCE or SC representations have been obtained for a response metric of interest, analytic expressions can be derived for the moments of the expansion and for the design derivatives of these moments, allowing for efficient design under uncertainty formulations involving moment control (e.g., robust design). This paper presents two approaches for moment design sensitivities, one involving a single response function expansion over the full range of both the design and uncertain variables and one involving response function and derivative expansions over only the uncertain variables for each instance of the design variables. These two approaches present trade-offs involving expansion dimensionality, global versus local validity, collocation point data requirements, and L2 (mean, variance, probability) versus L∞ (minima, maxima) interrogation requirements. Given this capability for analytic moments and moment sensitivities, bilevel, sequential, and multifidelity formulations for design under uncertainty are explored. Computational results are presented for a set of algebraic benchmark test problems, with attention to design formulation, stochastic expansion type, stochastic sensitivity approach, and numerical integration method.

KEYWORDS: stochastic optimization, computational design, polynomial chaos, stochastic collocation, stochastic sensitivity analysis


1. Introduction

Uncertainty quantification (UQ) is the process of determining the effect of input uncertainties on response metrics of interest. These input uncertainties may be characterized as either aleatory uncertainties, which are irreducible variabilities inherent in nature, or epistemic uncertainties, which are reducible uncertainties resulting from a lack of knowledge. Because sufficient data are available for characterizing aleatory uncertainties, probabilistic methods are commonly used for computing response distribution statistics based on input probability distribution specifications. Conversely, for epistemic uncertainties, data are generally too sparse to support objective probabilistic input descriptions, leading either to subjective probabilistic descriptions (e.g., assumed priors in Bayesian analysis) or nonprobabilistic methods based on interval specifications.

One technique for the analysis of aleatory uncertainties using probabilistic methods is the polynomial chaos expansion (PCE) approach to UQ. For smooth functions (i.e., analytic, infinitely differentiable) in L2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest, such as mean, variance, and probability. In this work, generalized polynomial chaos using the Wiener-Askey scheme [1] provides a foundation in which Hermite, Legendre, Laguerre, Jacobi, and generalized Laguerre orthogonal polynomials are used for modeling the effect of continuous uncertain variables described by normal, uniform, exponential, beta, and gamma probability distributions, respectively. These polynomial selections are optimal for these distribution types since they are orthogonal with respect to an inner product weighting function that corresponds to the probability density functions for these continuous distributions. Orthogonal polynomials can be computed for any positive weight function; thus, these five classical orthogonal polynomials may be augmented with numerically generated polynomials for other probability distributions (e.g., for lognormal, extreme value, and histogram distributions). When independent standard random variables are used (or computed through transformation), the variable expansions are uncoupled, allowing the polynomial orthogonality properties to be applied on a per-dimension basis. This allows one to mix and match the polynomial basis used for each variable without interference with the spectral projection scheme for the response.

In nonintrusive PCE, simulations are used as black boxes and the calculation of chaos expansion coefficients for response metrics of interest is based on a set of simulation response evaluations. To calculate these response PCE coefficients, two primary classes of approaches have been proposed: spectral projection and linear regression. The spectral projection approach projects the response against each basis function using inner products and employs the polynomial orthogonality properties to extract each coefficient. Each inner product involves a multidimensional integral over the support range of the weighting function, which can be evaluated numerically using sampling, tensor-product quadrature, Smolyak sparse grid [2], or cubature [3] approaches. The linear regression approach uses a single linear least-squares solution to solve for the set of PCE coefficients that best match a set of response values obtained from either a design of computer experiments (“point collocation” [4]) or from the subset of tensor Gauss points with highest product weight (“probabilistic collocation” [5]).

Stochastic collocation (SC) [6] is a second stochastic expansion approach that is closely related to PCE. As for PCE, exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest, provided that the response functions are smooth with finite variance. The primary distinction is that, whereas PCE estimates coefficients for known orthogonal polynomial basis functions, SC forms Lagrange interpolation functions for known coefficients. Interpolation is performed on structured grids such as tensor-product or sparse grids. Starting from a tensor-product multidimensional Lagrange interpolant, we have the feature that the ith interpolation polynomial is 1 at collocation point i and 0 for all other collocation points, leading to the use of expansion coefficients that are just the response values at each of the collocation points. Sparse interpolants are weighted sums of these tensor interpolants; however, they are only interpolatory for sparse grids based on fully nested rules and will exhibit some interpolation error at the collocation points for sparse grids based on non-nested rules. A key to maximizing performance with SC is performing collocation using the Gauss points and weights from the same optimal orthogonal polynomials used in PCE. For use of standard Gauss integration rules (not nested variants such as Gauss-Patterson or Genz-Keister) within tensor-product quadrature, tensor PCE expansions and tensor SC interpolants are equivalent in that identical polynomial approximations are generated [7]. Moreover, this equivalence can be extended to sparse grids based on standard Gauss rules, provided that a sparse PCE is formed based on a weighted sum of tensor expansions [8].

Once PCE or SC representations have been obtained for the response metrics of interest, analytic expressions can be derived for the moments of the expansions (from integration over the aleatory/probabilistic random variables) as well as for various sensitivity measures. Local sensitivities (i.e., derivatives) and global sensitivities [9] (i.e., ANOVA, variance-based decomposition) of the response metrics may be computed with respect to the expansion variables, and local sensitivities of probabilistic response moments may be computed with respect to other nonprobabilistic variables (e.g., design or epistemic uncertain variables). This latter capability allows for efficient design under uncertainty and mixed aleatory-epistemic UQ formulations involving moment control or bounding. This paper presents two approaches for calculation of sensitivities of moments with respect to nonprobabilistic dimensions (design or epistemic), one involving response function expansions over both probabilistic and nonprobabilistic variables and one involving response derivative expansions over only the probabilistic variables. The ability to compute analytic statistics and their design derivatives using these two approaches enables bilevel, sequential, and multifidelity formulations for design under uncertainty. Relative to similar design optimization approaches based on local reliability UQ methods [10, 11], it is expected that new approaches based on stochastic expansions will provide fast convergence and mitigate algorithmic robustness issues due to nonsmoothness, multimodality, and high degrees of nonlinearity in the response metrics of interest.

Sections 2-4 overview the foundational components of stochastic expansions, Section 5 describes stochastic sensitivity analysis approaches and their usage within several design under uncertainty formulations, Section 6 presents computational results for several benchmark test problems, and Section 7 provides concluding remarks.

2. Polynomial Basis

2.1 Orthogonal Polynomials in the Askey Scheme

Table 1 shows the set of classical orthogonal polynomials that provides an optimal basis for different continuous probability distribution types. It is derived from the family of hypergeometric polynomials known as the Askey scheme [12], for which the Hermite polynomials originally employed by Wiener [13] are a subset. The optimality of these basis selections derives from their orthogonality with respect to weighting functions that correspond to the probability density functions (PDFs) of the continuous distributions when placed in a standard form. The density and weighting functions differ by a constant factor due to the requirement that the integral of the PDF over the support range is one.

TABLE 1: Linkage between standard forms of continuous probability distributions and the Askey scheme of continuous hypergeometric polynomials.

Distribution | Density function | Polynomial | Weight function | Support range
Normal | e^{-x^2/2} / sqrt(2π) | Hermite He_n(x) | e^{-x^2/2} | [-∞, ∞]
Uniform | 1/2 | Legendre P_n(x) | 1 | [-1, 1]
Beta | (1-x)^α (1+x)^β / [2^{α+β+1} B(α+1, β+1)] | Jacobi P_n^{(α,β)}(x) | (1-x)^α (1+x)^β | [-1, 1]
Exponential | e^{-x} | Laguerre L_n(x) | e^{-x} | [0, ∞]
Gamma | x^α e^{-x} / Γ(α+1) | Gen. Laguerre L_n^{(α)}(x) | x^α e^{-x} | [0, ∞]

Note that Legendre is a special case of Jacobi for α = β = 0, Laguerre is a special case of generalized Laguerre for α = 0, Γ(a) is the Gamma function which extends the factorial function to continuous values, and B(a,b) is the beta function defined as B(a,b) = [Γ(a)Γ(b)]/[Γ(a + b)]. Some care is necessary when specifying the α and β parameters for the Jacobi and generalized Laguerre polynomials because the orthogonal polynomial conventions [14] differ from the common statistical PDF conventions. The former conventions are used in Table 1.

2.2 Numerically Generated Orthogonal Polynomials

If all random inputs can be described using independent normal, uniform, exponential, beta, and gamma distributions, then generalized PCE can be directly applied. If correlation or other distribution types are present, then additional techniques are required. One solution is to employ nonlinear variable transformations as described in Section 3.3.1 such that an Askey basis can be applied in the transformed space. This can be effective as shown in [15], but convergence rates are typically degraded. In addition, correlation coefficients are warped by the nonlinear transformation [16], and simple expressions for these transformed correlation values are not always readily available. An alternative is to numerically generate the orthogonal polynomials (using Gauss-Wigert [17], discretized Stieltjes [18], Chebyshev [18], or Gram-Schmidt [19] approaches) and then compute their Gauss points and weights (using the Golub-Welsch [20] tridiagonal eigensolution). These solutions are optimal for given random variable sets having arbitrary PDFs and eliminate the need to induce additional nonlinearity through variable transformations, but performing this process for general joint density functions with correlation is a topic of ongoing research (refer to Section 3.3 for additional details).
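As a concrete illustration of the Golub-Welsch step, the sketch below computes Gauss points and weights from the three-term recurrence coefficients of a family of orthogonal polynomials via an eigensolution of the symmetric tridiagonal Jacobi matrix. The function name, the choice of the probabilists' Hermite recurrence as the test case, and the use of a dense eigensolver are illustrative assumptions, not the implementation of [20] or of DAKOTA.

```python
import numpy as np

def golub_welsch(alpha, beta, mu0=1.0):
    """Gauss nodes/weights from three-term recurrence coefficients.

    alpha[k], beta[k] are the recurrence coefficients of the monic
    orthogonal polynomials p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x);
    mu0 is the integral of the weight function over its support.
    """
    # Symmetric tridiagonal Jacobi matrix
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    # Weights are mu0 times the squared first components of the eigenvectors
    weights = mu0 * vecs[0, :] ** 2
    return nodes, weights

# Probabilists' Hermite recurrence (weight = standard normal PDF, mu0 = 1):
# alpha_k = 0, beta_k = k (beta[0] is not used by golub_welsch)
m = 5
alpha = np.zeros(m)
beta = np.arange(m, dtype=float)
x, w = golub_welsch(alpha, beta)
print(x)            # Gauss points for the standard normal density
print(w, w.sum())   # weights sum to 1 for a PDF-normalized weight
```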

2.3 Interpolation Polynomials

Lagrange polynomials interpolate a set of points in a single dimension using the functional form

(1)

where it is evident that L_j is 1 at ξ = ξ_j, is 0 at each of the other points ξ = ξ_k (k ≠ j), and has order m − 1.

For interpolation of a response function R in one dimension over m points, the expression

(2)

reproduces the response values r(ξ_j) at the interpolation points and smoothly interpolates between these values at other points. For interpolation in multiple dimensions, a tensor-product approach can be used wherein

(3)

where i = (m_1, m_2, ..., m_n) denotes the numbers of nodes used in the n-dimensional interpolation and ξ_j^i is the jth point in the ith direction. As will be seen later (Section 4.1.3), interpolation on sparse grids involves a summation of these tensor products with varying i levels.
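The following minimal sketch illustrates Eqs. (1)-(3) for a small tensor-product grid; the function names and the example response are assumptions for illustration only.

```python
import numpy as np
from itertools import product

def lagrange_basis(xi, nodes, j):
    """1D Lagrange polynomial L_j(xi): 1 at nodes[j], 0 at the other nodes."""
    L = 1.0
    for k, xk in enumerate(nodes):
        if k != j:
            L *= (xi - xk) / (nodes[j] - xk)
    return L

def tensor_interpolant(point, node_sets, values):
    """Tensor-product Lagrange interpolant in n dimensions.

    node_sets : list of n 1D abscissa arrays (m_1, ..., m_n points)
    values    : dict mapping a multi-index (j_1, ..., j_n) to the response
                value at the corresponding grid point
    """
    approx = 0.0
    for J in product(*[range(len(ns)) for ns in node_sets]):
        basis = 1.0
        for dim, j in enumerate(J):
            basis *= lagrange_basis(point[dim], node_sets[dim], j)
        approx += values[J] * basis
    return approx

# Usage: interpolate R(x, y) = x**2 * y on a 3x2 grid (exact for this R)
xs, ys = np.array([-1.0, 0.0, 1.0]), np.array([-1.0, 1.0])
vals = {(i, j): xs[i] ** 2 * ys[j] for i in range(3) for j in range(2)}
print(tensor_interpolant([0.5, 0.25], [xs, ys], vals))  # ~0.0625
```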

3. Stochastic Expansion Methods

3.1 Generalized Polynomial Chaos

The set of polynomials from Sections 2.1 and 2.2 is used as an orthogonal basis to approximate the functional form between the stochastic response output and each of its random inputs. The chaos expansion for a response R takes the form

(4)

where the random vector dimension is unbounded and each additional set of nested summations indicates an additional order of polynomials in the expansion. This expression can be simplified by replacing the order-based indexing with a term-based indexing

(5)

where there is a one-to-one correspondence between a_{i_1 i_2 ... i_n} and α_j and between B_n(ξ_{i_1}, ξ_{i_2}, ..., ξ_{i_n}) and Ψ_j(ξ). Each of the Ψ_j(ξ) is a multivariate polynomial that involves products of the one-dimensional polynomials. For example, a multivariate Hermite polynomial B(ξ) of order n is defined from

(6)

which can be shown to be a product of one-dimensional Hermite polynomials involving a multi-index m_i^j,

(7)

In the case of a mixed basis, the same multi-index definition is employed, although the one-dimensional polynomials ψ_{m_i^j} are heterogeneous in type.

3.1.1 Expansion Truncation and Tailoring

In practice, one truncates the infinite expansion at a finite number of random variables and a finite expansion order

(8)

Traditionally, the polynomial chaos expansion includes a complete basis of polynomials up to a fixed total-order specification. That is, for an expansion of total order p involving n random variables, the multi-index defining the set of Ψj is constrained by

(9)

The total number of terms Nt for this expansion is given by

(10)

This traditional approach will be referred to as a “total-order expansion.”

An important alternative approach is to employ a “tensor-product expansion,” in which polynomial order bounds are applied on a per-dimension basis (no total-order bound is enforced) and all combinations of the one-dimensional polynomials are included. That is, the multi-index defining the set of Ψj is constrained by

(11)

where pi is the polynomial order bound for the ith dimension. In this case, the total number of terms Nt is

(12)

It is apparent from Eq. (12) that the tensor-product expansion readily supports anisotropy in polynomial order for each dimension, since the polynomial order bounds for each dimension can be specified independently. It is also feasible to support anisotropy with total-order expansions, through pruning polynomials that satisfy the total-order bound but violate individual per-dimension bounds [the number of these pruned polynomials would then be subtracted from Eq. (10)]. Finally, custom tailoring of the expansion form can also be explored, e.g., to closely synchronize with monomial coverage in sparse grids through use of a summation of tensor expansions (see Section 4.1.3). In all cases, the specifics of the expansion are codified in the multi-index, and subsequent machinery for estimating response values and statistics from the expansion can be performed in a manner that is agnostic to the specific expansion form.
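To make the term counts concrete, the sketch below evaluates Eq. (10) for a total-order expansion, Eq. (12) for a tensor-product expansion, and enumerates the total-order multi-index set constrained by Eq. (9); the function names are illustrative.

```python
from math import comb, prod
from itertools import product

def total_order_terms(n, p):
    """Number of terms in a total-order expansion: (n + p)! / (n! p!)."""
    return comb(n + p, p)

def tensor_product_terms(p_per_dim):
    """Number of terms in a tensor-product expansion: prod(p_i + 1)."""
    return prod(pi + 1 for pi in p_per_dim)

def total_order_multi_indices(n, p):
    """Multi-indices m with |m| = m_1 + ... + m_n <= p [cf. Eq. (9)]."""
    return [m for m in product(range(p + 1), repeat=n) if sum(m) <= p]

print(total_order_terms(3, 2))               # 10
print(tensor_product_terms([2, 2, 2]))       # 27
print(len(total_order_multi_indices(3, 2)))  # 10, consistent with Eq. (10)
```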

3.2 Stochastic Collocation

The SC expansion is formed as a sum of a set of multidimensional Lagrange interpolation polynomials, one polynomial per unique collocation point. Because these polynomials have the feature of being equal to 1 at their particular collocation point and 0 at all other points, the coefficients of the expansion are just the response values at each of the collocation points. This can be written as

(13)

where the set of Np collocation points involves a structured multidimensional grid [a tensor-product grid as in Eq. (3) or a Smolyak sparse grid]. There is no need for tailoring of the expansion form as there is for PCE (i.e., to synchronize the expansion polynomials with the set of integrable monomials) because the polynomials that appear in the expansion are determined by the Lagrange construction [Eq. (1)]. That is, any tailoring or refinement of the expansion occurs through the selection of points in the interpolation grid and the polynomial orders of the basis are adapted implicitly.

3.3 Transformations to Independent Standard Variables

Polynomial chaos and stochastic collocation expansions are built from polynomials that are functions of independent standard random variables ξ. Thus, a key component of either approach is performing a transformation of variables from the original random variables x to independent standard random variables ξ and then applying the stochastic expansion in the transformed space. This notion of independent standard space extends the notion of “u-space” used in reliability methods [10, 11] in that the standardized set goes beyond standard normals. For distributions that are already independent, three different approaches are of interest, as follows:

  1. Extended basis: For each Askey distribution type, employ the corresponding Askey basis (Table 1). For non-Askey types, numerically generate an optimal polynomial basis for each independent distribution as described in Section 2.2. With usage of the optimal basis corresponding to each of the random variable types, we can exploit basis orthogonality under expectation [e.g., Eq. (16)] without requiring a transformation of variables, thereby avoiding inducing additional nonlinearity that could slow convergence.
  2. Askey basis: For non-Askey types, perform a nonlinear variable transformation from a given input distribution to the most similar Askey basis. For example, lognormal distributions might employ a Hermite basis in a transformed standard normal space and loguniform, triangular, and histogram distributions might employ a Legendre basis in a transformed standard uniform space. All distributions then employ the Askey orthogonal polynomials and their associated Gauss points/weights.
  3. Wiener basis: For non-normal distributions, employ a nonlinear variable transformation to standard normal distributions. All distributions then employ the Hermite orthogonal polynomials and their associated Gauss points/weights.

For dependent distributions, we must first perform a nonlinear variable transformation to uncorrelated standard normal distributions due to the independence of decorrelated Gaussians. This involves the Nataf transformation, described in Section 3.3.1. We then have the following choices:

  1. Single transformation: Following the Nataf transformation to independent standard normal distributions, employ the Wiener basis in the transformed space.
  2. Double transformation: From independent standard normal space, transform back to either the original marginal distributions or the desired Askey marginal distributions and employ an extended or Askey basis, respectively, in the transformed space. Independence is maintained, but the nonlinearity of the Nataf transformation is at least partially mitigated.

The results in Section 6 all employ a single transformation for dependent variables in combination with an Askey basis for independent variables.

3.3.1 Nataf Transformation

The transformation from correlated non-normal distributions to uncorrelated standard normal distributions is denoted as ξ = T(x) with the reverse transformation denoted as x = T^{-1}(ξ). These transformations are nonlinear, in general, and possible approaches include the Rosenblatt [21], Nataf [16], and Box-Cox [22] transformations. The results in this paper employ the Nataf transformation, which is suitable for the common case when marginal distributions and a correlation matrix are provided, but full joint distributions are not known. The Nataf transformation occurs in the following two steps. To transform between the original correlated x-space variables and correlated standard normals (“z-space”), a cumulative distribution function (CDF) matching condition is applied for each of the marginal distributions,

(14)

where Φ() is the standard normal cumulative distribution function and F() is the cumulative distribution function of the original probability distribution. Then, to transform between correlated z-space variables and uncorrelated ξ-space variables, the Cholesky factor L of a modified correlation matrix is used,

(15)

where the original correlation matrix for non-normals in x-space has been modified to represent the corresponding “warped” correlation in z-space [16].

4. Non-Intrusive Methods for Expansion Formation

The major practical difference between PCE and SC is that, in PCE, one must estimate the coefficients for known basis functions, whereas in SC, one must form the interpolants for known coefficients. PCE estimates its coefficients using any of the following approaches: random sampling, tensor-product quadrature, Smolyak sparse grids, cubature, or linear regression. In SC, the multidimensional interpolants need to be formed over structured data sets, such as point sets from quadrature or sparse grids; approaches based on random sampling may not be used.

4.1 Spectral Projection

The spectral projection approach projects the response against each basis function using inner products and employs the polynomial orthogonality properties to extract each coefficient. Similar to a Galerkin projection, the residual error from the approximation is rendered orthogonal to the selected basis. From Eq. (8), taking the inner product of both sides with respect to Ψj and enforcing orthogonality yields

(16)

where each inner product involves a multidimensional integral over the support range of the weighting function. In particular, Ω = Ω_1 ⊗ ··· ⊗ Ω_n, with possibly unbounded intervals Ω_j, and the joint probability density (weight) function has the tensor-product form ρ(ξ) = ∏_{i=1}^{n} ρ_i(ξ_i). The denominator in Eq. (16) is the norm squared of the multivariate orthogonal polynomial, which can be computed analytically using the product of univariate norms squared

(17)

where the univariate inner products have simple closed-form expressions for each polynomial in the Askey scheme [14] and are readily computed as part of the numerically-generated solution procedures described in Section 2.2. Thus, the primary computational effort resides in evaluating the numerator, which is evaluated numerically using sampling, quadrature, cubature, or sparse grid approaches (and this numerical approximation leads to use of the term “pseudo-spectral” by some investigators).

4.1.1 Sampling

In the sampling approach, the integral evaluation is equivalent to computing the expectation (mean) of the response-basis function product [the numerator in Eq. (16)] for each term in the expansion when sampling within the density of the weighting function. This approach is only valid for PCE, and because sampling does not provide any particular monomial coverage guarantee, it is common to combine this coefficient estimation approach with a traditional total-order chaos expansion. In computational practice, coefficient estimations based on sampling benefit from first estimating the response mean (the first PCE coefficient) and then removing the mean from the expectation evaluations for all subsequent coefficients.
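A hedged sketch of sampling-based projection follows, using a one-dimensional Hermite expansion of a simple cubic response; the sample size, the response function, and the use of numpy's HermiteE utilities are assumptions made for illustration.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)

def R(xi):                      # example response, R(xi) = xi**3 + xi
    return xi ** 3 + xi

p = 3                           # expansion order
xi = rng.standard_normal(200000)

# He_j norms-squared under the standard normal density: <He_j^2> = j!
norms_sq = np.array([factorial(j) for j in range(p + 1)])

# alpha_j = E[R(xi) He_j(xi)] / <He_j^2>, estimated by Monte Carlo
r = R(xi)
alpha = np.empty(p + 1)
alpha[0] = r.mean()             # first coefficient is the response mean
for j in range(1, p + 1):
    e = np.zeros(j + 1); e[j] = 1.0      # coefficients selecting He_j
    psi_j = hermeval(xi, e)
    # subtract the estimated mean first, as suggested in the text
    alpha[j] = np.mean((r - alpha[0]) * psi_j) / norms_sq[j]

print(alpha)   # approx [0, 4, 0, 1]: xi**3 + xi = He_3(xi) + 4*He_1(xi)
```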

4.1.2 Tensor Product Quadrature

In quadrature-based approaches, the simplest general technique for approximating multidimensional integrals, as in Eq. (16), is to employ a tensor product of one-dimensional quadrature rules. Because there is little benefit to the use of nested quadrature rules in the tensor-product case, we choose Gaussian abscissas [i.e., the zeros of polynomials that are orthogonal with respect to a density function weighting (e.g., Gauss-Hermite, Gauss-Legendre, Gauss-Laguerre, generalized Gauss-Laguerre, Gauss-Jacobi, or numerically generated Gauss rules)].

We first introduce an index i ∈ ℕ₊, i ≥ 1. Then, for each value of i, let {ξ_1^i, ..., ξ_{m_i}^i} ⊂ Ω_i be a sequence of abscissas for quadrature on Ω_i. For f ∈ C⁰(Ω_i) and n = 1, we introduce a sequence of one-dimensional quadrature operators

(18)

with m_i given. When utilizing Gaussian quadrature, Eq. (18) integrates exactly all polynomials of degree less than or equal to 2m_i − 1, for each i = 1, ..., n. Given an expansion order p, the highest-order coefficient evaluations [Eq. (16)] can be assumed to involve integrands of at least polynomial order 2p (Ψ of order p and R modeled to order p) in each dimension, such that a minimal Gaussian quadrature order of p + 1 will be required to obtain good accuracy in these coefficients.

Now, in the multivariate case n > 1, for each f ∈ C⁰(Ω) and multi-index i = (i_1, ..., i_n) ∈ ℕ₊ⁿ, we define the full tensor-product quadrature formulas

(19)

Clearly, the above product requires ∏_{j=1}^{n} m_{i_j} function evaluations. Therefore, when the number of input random variables is small, full tensor-product quadrature is a very effective numerical tool. On the other hand, approximations based on tensor-product grids suffer from the curse of dimensionality, because the number of collocation points in a tensor grid grows exponentially fast with the number of input random variables. For example, if Eq. (19) employs the same order for all random dimensions, m_{i_j} = m, then Eq. (19) requires m^n function evaluations.

In [23], it is demonstrated that close synchronization of expansion form with the monomial resolution of a particular numerical integration technique can result in significant performance improvements. In particular, the traditional approach of employing a total-order PCE neglects a significant portion of the monomial coverage for a tensor-product quadrature approach, and one should rather employ a tensor-product PCE [Eq. (12)] to provide improved synchronization and more effective usage of the Gauss point evaluations. When the quadrature points are standard Gauss rules (i.e., no Clenshaw-Curtis, Gauss-Patterson, or Genz-Keister nested rules), it has been shown that PCE and SC result in identical polynomial forms [7], completely eliminating a performance gap that exists between total-order PCE and SC [23].
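The sketch below illustrates spectral projection with tensor-product Gauss-Hermite quadrature for a two-dimensional tensor PCE of per-dimension order p = 2, using a Gauss order of p + 1 per dimension as discussed above; the response function and helper names are assumptions, and the hermegauss weights are normalized so that they represent the standard normal density.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite_e import hermegauss, hermeval

def R(x1, x2):                       # example response
    return x1 ** 2 * x2 + 3.0 * x1

p = 2                                # per-dimension expansion order
m = p + 1                            # Gauss order p + 1 (see Section 4.1.2)
x, w = hermegauss(m)
w = w / sqrt(2.0 * pi)               # weights of the N(0,1) density (sum to 1)

def He(j, x):                        # probabilists' Hermite He_j
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(x, c)

# Tensor-product PCE coefficients a[j1, j2] via the projection of Eq. (16)
a = np.zeros((m, m))
for j1 in range(m):
    for j2 in range(m):
        num = sum(w[k1] * w[k2] * R(x[k1], x[k2]) * He(j1, x[k1]) * He(j2, x[k2])
                  for k1 in range(m) for k2 in range(m))
        a[j1, j2] = num / (factorial(j1) * factorial(j2))   # norm squared = j1! j2!

print(np.round(a, 6))   # nonzero entries: a[1,0] = 3, a[0,1] = 1, a[2,1] = 1
```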

4.1.3 Smolyak Sparse Grids

If the number of random variables is moderately large, then one should rather consider sparse tensor product spaces as first proposed by Smolyak [2] and further investigated by [6, 24-27] that reduce dramatically the number of collocation points, while preserving a high level of accuracy.

Here we follow the notation in [6] to describe the Smolyak isotropic formulas A(w, n), where w is a level that is not directly dependent on dimension. The Smolyak formulas are just linear combinations of the product formulas in Eq. (19) with the following key property: only products with a relatively small number of points are used. With U⁰ = 0 and for i ≥ 1 define

(20)

and we set |i| = i_1 + ··· + i_n. Then the isotropic Smolyak quadrature formula is given by

(21)

Equivalently, formula Eq. (21) can be written as [28]

(22)

For each index set i of levels, linear or nonlinear growth rules are used to define the corresponding one-dimensional quadrature orders. The following growth rules are employed for indices i ≥ 1:

(23)

(24)

(25)

where the nonlinear growth rules for Clenshaw-Curtis and Gauss-Patterson take full advantage of the point nesting in these rules, and the linear growth rules for Gaussian quadrature take advantage of, at most, “weak” nesting (e.g., reuse of the center point).
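The following sketch enumerates the isotropic Smolyak index sets and combinatorial coefficients in the combined form commonly associated with Eq. (22) [28], together with a Clenshaw-Curtis-style nonlinear growth rule consistent with Eq. (23); the exact coefficient convention and growth rule are stated here as assumptions.

```python
from itertools import product
from math import comb

def smolyak_terms(w, n):
    """Index sets i and coefficients for the isotropic Smolyak formula A(w, n),
    written as a linear combination of tensor-product quadrature rules with
    w + 1 <= |i| <= w + n and coefficient (-1)^(w+n-|i|) * C(n-1, w+n-|i|)."""
    terms = []
    for i in product(range(1, w + n + 1), repeat=n):
        norm = sum(i)
        if w + 1 <= norm <= w + n:
            coeff = (-1) ** (w + n - norm) * comb(n - 1, w + n - norm)
            terms.append((i, coeff))
    return terms

def cc_growth(level):
    """Nonlinear (Clenshaw-Curtis-style) growth rule: m = 1, 3, 5, 9, 17, ..."""
    return 1 if level == 1 else 2 ** (level - 1) + 1

w, n = 5, 2
terms = smolyak_terms(w, n)
# Upper bound on sparse-grid size (ignores point nesting between levels)
max_pts = sum(cc_growth(i1) * cc_growth(i2) for (i1, i2), c in terms if c != 0)
print(len(terms), max_pts)
# Compare with the full tensor grid of Fig. 1: (2**5 + 1)**2 = 1089 points
```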

Examples of isotropic sparse grids, constructed from the fully nested Clenshaw-Curtis abscissas and the weakly nested Gauss-Legendre abscissas, are shown in Fig. 1, where Ω = [-1,1]² and both Clenshaw-Curtis and Gauss-Legendre employ nonlinear growth from Eqs. (23) and (24), respectively. There, we consider a two-dimensional parameter space and a maximum level w = 5 [sparse grid A(5, 2)]. To see the reduction in function evaluations with respect to full tensor-product grids, we also include a plot of the corresponding Clenshaw-Curtis isotropic full tensor grid having the same maximum number of points in each direction, namely 2^5 + 1 = 33.

FIG. 1: Two-dimensional isotropic sparse grids A(5, 2) based on fully nested Clenshaw-Curtis abscissas and weakly nested Gauss-Legendre abscissas, together with the corresponding Clenshaw-Curtis isotropic full tensor-product grid.

In [23], it is demonstrated that the synchronization of total-order PCE with the monomial resolution of a sparse grid is imperfect, and that sparse grid SC consistently outperforms sparse grid PCE when employing the sparse grid to directly evaluate the integrals in Eq. (16). In this paper, we depart from the use of sparse integration of total-order expansions and instead employ a linear combination of tensor expansions [8]. That is, we compute separate tensor polynomial chaos expansions for each of the underlying tensor quadrature grids (for which there is no synchronization issue) and then sum them using the Smolyak combinatorial coefficient [from Eq. (22) in the isotropic case]. This improves accuracy, preserves the PCE/SC consistency property described in Section 4.1.2, and also simplifies PCE for the case of anisotropic sparse grids described next.

For anisotropic Smolyak sparse grids, a dimension preference vector is used to emphasize important stochastic dimensions. Given a mechanism for defining anisotropy, we can extend the definition of the sparse grid from that of Eq. (22) to weight the contributions of different index set components. First, the sparse grid index set constraint becomes

(26)

where γ is the minimum of the dimension weights γk, k = 1 to n. The dimension weighting vector γ amplifies the contribution of a particular dimension index within the constraint and is therefore inversely related to the dimension preference (higher weighting produces lower index set levels). For the isotropic case of all γk = 1, it is evident that we reproduce the isotropic index constraint w + 1 ≤|i|≤ w + n (note the change from < to ≤). Second, the combinatorial coefficient for adding the contribution from each of these index sets is modified as described in [29].

4.1.4 Cubature

Cubature rules [3, 30] are specifically optimized for multidimensional integration and are distinct from tensor products and sparse grids in that they are not based on combinations of one-dimensional Gauss quadrature rules. They have the advantage of improved scalability to large numbers of random variables, but are restricted in integrand order and require homogeneous random variable sets (achieved via transformation). For example, optimal rules for integrands of order 2, 3, and 5 and either Gaussian or uniform densities allow low-order polynomial chaos expansions (p = 1 or 2) that are useful for global sensitivity analysis including main effects and, for p = 2, all two-way interactions.

4.1.5 Linear Regression

The linear regression approach uses a single linear least-squares solution of the form

(27)

to solve for the complete set of PCE coefficients α that best match a set of response values R. The set of response values is obtained either by performing a design of computer experiments within the density function of ξ (point collocation [4, 31]) or from a subset of tensor quadrature points with highest product weight (probabilistic collocation [5]). In either case, each row of the matrix Ψ contains the Nt multivariate polynomial terms Ψj evaluated at a particular ξ sample. An oversampling is recommended in the case of random samples ([31] recommends 2Nt samples), resulting in a least-squares solution for the overdetermined system. As for sampling-based coefficient estimation, this approach is only valid for PCE and does not require synchronization with monomial coverage; thus, it is common to combine this coefficient estimation approach with a traditional total-order chaos expansion in order to keep sampling requirements low. In this case, simulation requirements for this approach scale as [r(n + p)!]/(n!p!) (r is an oversampling factor with typical values 1 ≤ r ≤ 2), which can be significantly more affordable than isotropic tensor-product quadrature [which scales as (p + 1)^n for standard Gauss rules] for larger problems. Finally, additional regression equations can be obtained through the use of derivative information (gradients and Hessians) from each collocation point, which can aid in scaling with respect to the number of random variables, particularly for adjoint-based derivative approaches.
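A minimal point-collocation sketch is given below for a total-order expansion in two standard normal variables with an oversampling factor of r = 2; the response function and the random design of experiments are illustrative assumptions.

```python
import numpy as np
from itertools import product
from math import comb
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)

def R(x):                               # example response
    return x[:, 0] ** 2 + 2.0 * x[:, 0] * x[:, 1]

n, p, r = 2, 2, 2                       # dimensions, total order, oversampling
multi = [m for m in product(range(p + 1), repeat=n) if sum(m) <= p]
Nt = comb(n + p, p)                     # = len(multi) = 6

xi = rng.standard_normal((r * Nt, n))   # design of computer experiments

def He(j, x):                           # probabilists' Hermite He_j
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(x, c)

# Psi: one row per sample, one column per multivariate basis term
Psi = np.column_stack([np.prod([He(mj, xi[:, d]) for d, mj in enumerate(m)], axis=0)
                       for m in multi])

alpha, *_ = np.linalg.lstsq(Psi, R(xi), rcond=None)   # least-squares solution
print(dict(zip(multi, np.round(alpha, 6))))
# x1**2 + 2 x1 x2 = He_2(x1) + 1 + 2 He_1(x1) He_1(x2):
# expect approximately alpha[(0,0)] = 1, alpha[(2,0)] = 1, alpha[(1,1)] = 2
```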

5. Design Under Uncertainty Using Stochastic Expansions

5.1 Stochastic Sensitivity Analysis

Stochastic expansion methods have a number of convenient analytic features that make them attractive for use within analyses that extend beyond traditional probabilistic UQ, such as local and global sensitivity analysis (SA), mixed aleatory/epistemic UQ, and design under uncertainty algorithms. First, moments of the response expansion are available analytically. Second, the response expansions are readily differentiated with respect to their expansion variables (local SA), and terms may be reorganized to provide Sobol' sensitivities from a variance-based decomposition [9, 32] (global SA). Finally, response moment expressions may be differentiated with respect to auxiliary nonprobabilistic variables, enabling gradient-based design under uncertainty (the subject of this paper) or gradient-based interval estimation for epistemic UQ [33]. For application to design under uncertainty, analytic moments and their design sensitivities are described in Sections 5.1.1-5.1.4.

5.1.1 Analytic Moments

Mean and covariance of polynomial chaos expansions are available in simple closed form,

(28)

(29)

where the norm squared of each multivariate polynomial is computed from Eq. (17). These expressions provide exact moments of the expansions, which converge under refinement to moments of the true response functions.
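Since Eqs. (28) and (29) are not reproduced above, the following sketch records the standard closed forms for the mean and covariance of a PCE that are consistent with the orthogonality and norm relations of Eqs. (16) and (17); the indexing convention (responses R_i, R_j and P retained expansion terms) is assumed.

```latex
\mu_{R_i} = \langle R_i \rangle \approx \alpha_{0_i},
\qquad
\Sigma_{ij} = \langle (R_i - \mu_{R_i})(R_j - \mu_{R_j}) \rangle
\approx \sum_{k=1}^{P} \alpha_{k_i}\,\alpha_{k_j}\,\langle \Psi_k^2 \rangle
```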

Similar expressions can be derived for stochastic collocation,

(30)

(31)

where we have simplified the expectation of Lagrange polynomials constructed at Gauss points and then integrated at these same Gauss points. For tensor grids and sparse grids with fully nested rules, these expectations leave only the weight corresponding to the point for which the interpolation value is 1, such that the final equalities in Eqs. (30) and (31) hold precisely. For sparse grids with non-nested rules, however, interpolation error exists at the collocation points, such that these final equalities hold only approximately. In this case, we have the choice of computing the moments based on sparse numerical integration or based on the moments of the (imperfect) sparse interpolant, where small differences may exist prior to numerical convergence. In this paper, we employ the former approach; i.e., the right-most expressions in Eqs. (30) and (31) are employed for all tensor and sparse cases regardless of nesting. Subsequent sensitivity derivations are also based on this choice.

5.1.2 Local Sensitivity Analysis: First-Order Probabilistic Expansions

With the introduction of nonprobabilistic variables s (for example, design variables or epistemic uncertain variables), a polynomial chaos expansion only over the random variables ξ has the functional relationship

(32)

For computing design sensitivities of response mean and variance, the ij indices may be dropped from Eqs. (28) and (29), simplifying to

(33)

Sensitivities of Eq. (33) with respect to the nonprobabilistic variables are as follows, where independence of s and ξ is assumed:

(34)

(35)

where

(36)

has been used. Because of independence, the coefficients calculated in Eq. (36) may be interpreted as either the derivatives of the expectations or the expectations of the derivatives, or more precisely, the nonprobabilistic sensitivities of the chaos coefficients for the response expansion or the chaos coefficients of an expansion for the nonprobabilistic sensitivities of the response. The evaluation of integrals involving dR/ds extends the data requirements for the PCE approach to include response sensitivities at each of the sampled points for the quadrature, sparse grid, sampling, or point collocation coefficient estimation approaches. The resulting expansions are valid only for a particular set of nonprobabilistic variables and must be recalculated each time the nonprobabilistic variables are modified.
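For the PCE case, the following hedged sketch shows how such sensitivity expressions follow from differentiating μ_R = α_0 and σ_R² = Σ_j α_j² ⟨Ψ_j²⟩ with respect to s under the stated independence assumption; it is offered as a consistency check on Eqs. (34)-(36) rather than a reproduction of them.

```latex
\frac{d\mu_R}{ds} = \frac{d\alpha_0}{ds} = \Big\langle \frac{dR}{ds} \Big\rangle,
\qquad
\frac{d\sigma_R^2}{ds}
= \sum_{j=1}^{P} 2\,\alpha_j \langle \Psi_j^2 \rangle \frac{d\alpha_j}{ds}
= \sum_{j=1}^{P} 2\,\alpha_j \Big\langle \frac{dR}{ds},\,\Psi_j \Big\rangle,
\qquad
\frac{d\alpha_j}{ds}
= \frac{\big\langle \frac{dR}{ds},\,\Psi_j \big\rangle}{\langle \Psi_j^2 \rangle}
```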

Similarly for stochastic collocation,

(37)

leads to

(38)

(39)

(40)

5.1.3 Local Sensitivity Analysis: Zeroth-Order Combined Expansions

Alternatively, a stochastic expansion can be formed over both ξ and s. Assuming a bounded domain s_L ≤ s ≤ s_U (with no implied probability content) for the nonprobabilistic variables, a Legendre chaos basis would be appropriate for each of the dimensions in s within a polynomial chaos expansion,

(41)

In this case, sensitivities for the mean and variance do not require response sensitivity data, but this comes at the cost of forming the PCE over additional dimensions. For this combined variable expansion, the mean and variance are evaluated by performing the expectations over only the probabilistic expansion variables, which eliminates the polynomial dependence on ξ, leaving behind the desired polynomial dependence of the moments on s,

(42)

(43)

The remaining polynomials may then be differentiated with respect to s. In this approach, the combined PCE is valid for the full nonprobabilistic variable range (s_L ≤ s ≤ s_U) and does not need to be updated for each change in nonprobabilistic variables, although adaptive localization techniques (i.e., trust region model management approaches) can be employed when improved local accuracy of the sensitivities is required [34].

Similarly for stochastic collocation,

(44)

leads to

(45)

(46)

where the remaining polynomials not eliminated by the expectation over ξ are again differentiated with respect to s.

5.1.4 Inputs and Outputs

There are two types of nonprobabilistic variables for which sensitivities must be calculated: “augmented,” where the nonprobabilistic variables are separate from and augment the probabilistic variables, and “inserted,” where the nonprobabilistic variables define distribution parameters for the probabilistic variables. Any inserted nonprobabilistic variable sensitivities must be handled using Eqs. (34), (35), (39) and (40), where dR/ds is calculated as (dR/dx)(dx/ds) and dx/ds is the Jacobian of the variable transformation x = T-1(ξ) with respect to the inserted nonprobabilistic variables. In addition, parameterized polynomials (generalized Gauss-Laguerre, Jacobi, and numerically generated polynomials) may introduce a dΨ/ds or dL/ds dependence for inserted s that will introduce additional terms in the sensitivity expressions.

Although moment sensitivities directly enable robust design optimization and interval estimation formulations that seek to control or bound response variance, control or bounding of reliability requires sensitivities of tail statistics. In this work, sensitivities of simple moment-based approximations [10] to CDF and complementary cumulative distribution function (CCDF) mappings are employed for this purpose,

(47)

(48)

such that it is straightforward to form approximate design sensitivities of the reliability index β (forward reliability mapping z̄ → β) or the response level z (inverse reliability mapping β̄ → z) from the moment design sensitivities and the specified levels z̄ or β̄. Extending beyond these simple approaches to support probability and generalized reliability metrics is a subject of current work [35].
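Since Eqs. (47) and (48) are not reproduced above, the following sketch records one common moment-based convention for these mappings; the sign convention is an assumption, chosen to be consistent with the Rosenbrock results reported in Section 6.1.

```latex
\beta_{\mathrm{CDF}}(\bar{z}) = \frac{\mu_R - \bar{z}}{\sigma_R}, \quad
\beta_{\mathrm{CCDF}}(\bar{z}) = \frac{\bar{z} - \mu_R}{\sigma_R}, \qquad
\bar{z}_{\mathrm{CDF}}(\bar{\beta}) = \mu_R - \sigma_R\,\bar{\beta}, \quad
\bar{z}_{\mathrm{CCDF}}(\bar{\beta}) = \mu_R + \sigma_R\,\bar{\beta}
```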

5.2 Optimization Formulations

Given the capability to compute analytic statistics of the response along with design sensitivities of these statistics, bilevel, sequential, and multifidelity approaches for design under uncertainty are pursued, with application to common formulations for reliability-based design and robust design. The bilevel approach directly optimizes statistical results from uncertainty analyses, whereas the sequential and multifidelity approaches seek to reduce the expense resulting from nested iteration by applying surrogate modeling indirections (data fits and multifidelity modeling) to the uncertainty analysis results. These indirections then require the application of trust region model management to manage the use of surrogates within the sequential and multifidelity optimization processes. In the sections to follow, we will simplify the semantics and refer to the first-order probabilistic expansions of Section 5.1.2 as “uncertain expansions” and the zeroth-order combined expansions of Section 5.1.3 as “combined expansions.”

5.2.1 Bilevel

The simplest and most direct approach is to employ the analytic statistics and design derivatives from Section 5.1 directly within an optimization loop. This is known as a bilevel approach, because there is an inner-level uncertainty analysis nested within an outer-level optimization.

Consider the common reliability-based design example of a deterministic objective function f (e.g., weight, cost) with a constraint on the reliability index β,

(49)

where β is computed relative to a prescribed threshold response value z̄ (e.g., a failure threshold), is constrained by a prescribed reliability level β̄ (minimum allowable reliability in the design), and is either a CDF or CCDF index, depending on the definition of the failure domain [i.e., defined from whether the associated failure probability is cumulative, p(g ≤ z̄), or complementary cumulative, p(g > z̄)].

Another common example is robust design, in which the constraint enforcing a reliability lower bound has been replaced with a constraint enforcing a variance upper bound σ̄² (maximum allowable variance in the design):

(50)

Solving these problems using a bilevel approach involves computing β and dβ/ds for Eq. (49) or σ² and dσ²/ds for Eq. (50) for each set of design variables s passed from the optimizer. This approach is explored for both uncertain and combined expansions using PCE and SC.

5.2.2 Sequential

An alternative design under uncertainty approach is the sequential approach, in which additional efficiency is sought through breaking the nested relationship of the UQ and optimization loops. The general concept is to iterate between optimization and uncertainty quantification, updating the optimization goals based on the most recent uncertainty assessment results. This approach is common within the reliability methods community, for which the updating strategy may be based on safety factors [36] or other approximations [37].

A particularly effective approach for updating the optimization goals is to use data-fit surrogate models, and in particular, local Taylor series models allow direct insertion of stochastic sensitivity analysis capabilities. Using local reliability methods, first-order Taylor series approximations were explored in [10] and second-order Taylor series approximations were investigated in [11]. In both cases, a trust-region model management framework [34] is used to adaptively manage the extent of the approximations and ensure convergence of the optimization process. Surrogate models are used for both the objective and the constraint functions, although the use of surrogates is only required for the functions containing statistical results; deterministic functions may remain explicit if desired.

In particular, trust-region surrogate-based optimization for reliability-based design employs surrogate models of f and β within a trust region Δ^k centered at s_c,

(51)

and trust-region surrogate-based optimization for robust design employs surrogate models of f and σ² within a trust region Δ^k centered at s_c,

(52)

Second-order local surrogates may also be employed, in which case, the objectives for Eqs. (51) and (52) become

(53)

and the constraints become

(54)

(55)

The Hessians ∇²_s f, ∇²_s β, and ∇²_s σ² are typically approximated from an accumulation of curvature information using quasi-Newton updates, such as Broyden-Fletcher-Goldfarb-Shanno (BFGS) or symmetric rank one (SR1) [38]. The sequential approach will be explored for uncertain expansions using PCE and SC.

5.2.3 Multifidelity

The multifidelity design under uncertainty approach is another trust-region surrogate-based approach. Instead of the surrogate UQ model being a simple data fit (in particular, a first-/second-order Taylor series model) of the truth UQ model results, distinct UQ models of differing fidelity are now employed. This differing UQ fidelity could stem from the fidelity of the underlying simulation model, the fidelity of the UQ algorithm, or both. In this paper, the focus is placed on the fidelity of the UQ algorithm. For reliability-based multifidelity methods, this could entail varying fidelity in approximating assumptions [e.g., mean-value first-order second-moment (MVFOSM) [10] for low fidelity, or second-order reliability method (SORM) [11] for high fidelity], and for stochastic expansion-based multifidelity methods, it could involve differences in selected levels of p and h refinement.

In this paper, UQ fidelity is defined to be pointwise accuracy in the design space and the high-fidelity truth model is taken to be the uncertain expansion PCE/SC model, with validity only at a single design point. The low-fidelity model, whose validity over the design space will be adaptively controlled, will be either the combined expansion PCE/SC model, with validity over a range of design parameters, or the MVFOSM reliability method, with validity only at a single design point. The combined expansion low-fidelity approach will span the current trust region of the design space and will be reconstructed for each new trust region. Trust region adaptation will ensure that the combined expansion approach remains sufficiently accurate for design purposes. By taking advantage of the design space spanning, one can eliminate the cost of multiple low-fidelity UQ analyses within the trust region, with fallback to the greater accuracy and higher expense of the uncertain expansion approach when needed. The MVFOSM low-fidelity approximation must be reformed for each change in design variables, but it only requires a single evaluation of a response function and its derivative to approximate the response mean and variance from the input mean and covariance,

(56)

(57)

from which forward/inverse CDF/CCDF reliability mappings can be generated using Eqs. (47) and (48). This is the least expensive UQ option, but its limited accuracy may dictate the use of small trust regions, resulting in more iterations to convergence. The expense of optimizing a combined expansion, on the other hand, is not significantly less than that of optimizing the high-fidelity UQ model, but its representation of global trends should allow the use of larger trust regions, resulting in fewer iterations to convergence. The design derivatives of each of the PCE/SC expansion models provide the necessary data to correct the low-fidelity model to first-order consistency with the high-fidelity model at the center of each trust region, ensuring convergence of the multifidelity optimization process to the high-fidelity optimum. Design derivatives of the MVFOSM statistics are currently evaluated numerically using forward finite differences.
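For reference, the standard MVFOSM approximations that Eqs. (56) and (57) refer to are sketched below; the notation (input mean μ_x and input covariance Σ, with gradients evaluated at the mean) is assumed.

```latex
\mu_R \approx R(\mu_x),
\qquad
\sigma_R^2 \approx \sum_{i=1}^{n}\sum_{j=1}^{n} \Sigma_{ij}\,
\frac{\partial R}{\partial x_i}\bigg|_{\mu_x}
\frac{\partial R}{\partial x_j}\bigg|_{\mu_x}
```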

Multifidelity optimization for reliability-based design can be formulated as

(58)

and multifidelity optimization for robust design can be formulated as

(59)

where the deterministic objective function is not approximated and β̂ and σ̂² are the approximated high-fidelity UQ results resulting from correction of the low-fidelity UQ results. In the case of an additive correction function,

(60)

(61)

where correction functions δ(s) enforcing first-order consistency [39] will be explored. Quasi-second-order correction functions [39] can also be explored, but care must be taken due to the different rates of curvature accumulation between the low- and high-fidelity models. In particular, because the low-fidelity model is evaluated more frequently than the high-fidelity model, it accumulates curvature information more quickly, such that enforcing quasi-second-order consistency with the high-fidelity model can be detrimental in the initial iterations of the algorithm. Instead, this consistency should only be enforced when sufficient high-fidelity curvature information has been accumulated (e.g., after n rank one updates).

6. Computational Results

Stochastic expansion, stochastic sensitivity analysis, and design under uncertainty capabilities have been implemented in DAKOTA [40], an open-source software framework for design and performance analysis of computational models on high-performance computers. This section presents computational results on the performance of design under uncertainty methods for several algebraic test problems, extending previous results presented in [41]. Algorithmic variations of interest include bilevel, sequential, or multifidelity optimization formulations; PCE or SC expansions; combined or uncertain expansion variables with associated stochastic sensitivity analysis; and tensor, sparse, or regression approaches to expansion calculation.

6.1 Rosenbrock

The two-dimensional Rosenbrock function is a popular test problem for gradient-based optimization algorithms due to its difficulty for first-order methods. It turns out that this is also a challenging problem for certain UQ methods (especially local reliability methods), because a particular response level contour involves a highly nonlinear curve that may encircle the mean point (leading to multiple most probable points of failure in local reliability methods). The function is a fourth-order polynomial of the form,

(62)
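Equation (62) is not reproduced above; the standard two-dimensional Rosenbrock form, consistent with the fourth-order polynomial description and the plots in Fig. 2, is assumed to be

```latex
f(x_1, x_2) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2
```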

A three-dimensional plot of this function is shown in Fig. 2a, where both x1 and x2 range in value from -2 to 2. Figure 2b shows a contour plot where the encircling of a mean value at (0,0) is evident. Variables x1 and x2 are modeled as independent random variables using uniform and normal probability distributions, respectively. A linear variable transformation is used to account for scaling, and Legendre and Hermite orthogonal polynomials (along with linear growth Gauss-Legendre and Gauss-Hermite integration rules) are employed in the transformed space. Although usage of nested Gauss-Patterson rules for x1 could be advantageous in the case of sparse grids, Gauss-Legendre rules are used for greater consistency of results.

FIG. 2: Rosenbrock’s function.

6.1.1 Design under Uncertainty

Because exact results can be readily obtained for Rosenbrock using low-order stochastic expansions, a simple design under uncertainty formulation is used to provide verification for both stochastic sensitivity formulations. Taking x1 to be a design variable with initial value -0.75 and bounds -2 ≤ x1 ≤ 2 and taking x2 to be a standard normal random variable (μ = 0, σ = 1), Table 2 shows the computational results for maximizing β_CDF for z̄ = 10 [see Eq. (47)] with either tensor-product quadrature (TPQ) orders or Smolyak sparse grid (SSG) levels as shown, where the levels have been selected to be the minimum required to exactly resolve the fourth-order polynomial. Within Table 2, the presence of “/” separates PCE-based and SC-based results when aggregated, “{,}” separates low- and high-fidelity settings when applicable, and “(,)” separates function and gradient evaluation counts. The combined expansion approaches form a single two-dimensional expansion (formed once total for bilevel and once per trust region for multifidelity) from function values, for which both moment and moment sensitivity evaluations for all design variable values involve only postprocessing of the expansion, whereas the uncertain expansion approaches form a new one-dimensional expansion from function values and gradients for each new set of design variable values. Sequential results are shown for first-order and quasi-second-order Taylor series approximations (Section 5.2.2), and multifidelity results are shown for first-order additive corrections [Eq. (60)]. Quasi-second-order formulations employ SR1 updates. For each of the multifidelity approaches, the high-fidelity UQ model is the uncertain expansion approach, using the same settings as in its corresponding bilevel approach. The low-fidelity UQ model is either the combined expansion approach, again using the same settings as in its corresponding bilevel approach, or a MVFOSM UQ analysis. Each of the analytic stochastic sensitivity approaches has been verified against finite difference results, and all but the MVFOSM analyses employ these analytic sensitivities. NPSOL’s sequential quadratic programming (SQP) method [42] is used as the optimizer, with a consistent convergence tolerance of 10^-6.

TABLE 2: PCE-based and SC-based design results, Rosenbrock test problem

Design approach | Expansion variables | Integration approach | Evaluations (Fn, Grad) | β_CDF
PCE/SC Bilevel | Uncertain | TPQ m = 5 | (15, 15) | 2.0913
PCE/SC Bilevel | Combined | TPQ m = 5 | (25, 0) | 2.0913
PCE/SC Sequential 1 | Uncertain | TPQ m = 5 | (15, 10) | 2.0913
PCE/SC Sequential Q2 | Uncertain | TPQ m = 5 | (15, 10) | 2.0913
PCE/SC {Comb, Unc} Multifidelity 1 | {Comb, Unc} | TPQ m = 5 | (40, 10) | 2.0913
PCE/SC {MV, Unc} Multifidelity 1 | Uncertain | TPQ m = 5 | (20, 16) | 2.0913
PCE/SC Bilevel | Uncertain | SSG w = 1 | (9, 9) | 2.0913
PCE/SC Bilevel | Combined | SSG w = 2 | (17, 0) | 2.0913
PCE/SC Sequential 1 | Uncertain | SSG w = 1 | (9, 6) | 2.0913
PCE/SC Sequential Q2 | Uncertain | SSG w = 1 | (9, 6) | 2.0913
PCE/SC {Comb, Unc} Multifidelity 1 | {Comb, Unc} | SSG {w = 2, w = 1} | (26, 6) | 2.0913
PCE/SC {MV, Unc} Multifidelity 1 | Uncertain | SSG w = 1 | (14, 12) | 2.0913

For this problem, the functional input/output relationship is captured exactly and all techniques are equally successful in locating the optimum at the lower bound of x1. There are no differences of interest between PCE-based and SC-based results for this problem; thus, these results are all aggregated. Despite the low dimension, the SSG approaches are already slightly less expensive than comparable TPQ approaches. The sequential and multifidelity approaches require only a single trust-region iteration to achieve hard convergence (Karush-Kuhn-Tucker optimality conditions satisfied), such that inclusion of quasi-second-order approximations (which require at least two iterations to accumulate curvature information) provides no benefit. Because this simple problem converges so quickly, the additional overhead of the more complex sequential and multifidelity optimization approaches has little chance to provide dividends. Overall, the sequential approaches employing SSG are the most efficient techniques for this problem (highlighted in blue), followed closely by the bilevel uncertain expansion approaches employing SSG (highlighted in red), although the primary benefit of this test problem is verification; the more challenging test problems to follow will provide greater insight on accuracy and efficiency.

6.2 Short Column

This test problem involves the plastic analysis of a short column with rectangular cross section (width b and depth h) having uncertain material properties (yield stress Y ) and subject to uncertain loads (bending moment M and axial force P ) [43]. The limit state function is defined as

(63)

The distributions for P, M, and Y are normal with (μ, σ) = (500, 100), normal with (μ, σ) = (2000, 400), and lognormal with (μ, σ) = (5, 0.5), respectively, with a correlation coefficient of 0.5 between P and M (uncorrelated otherwise). For P and M, a linear variable transformation is applied and, for Y, a nonlinear variable transformation is applied. In both cases, Hermite orthogonal polynomials and linear growth Gauss-Hermite integration rules are employed in the transformed standard normal space. When b and h are included in combined expansions, linear scaling, Legendre polynomials, and linear growth Gauss-Legendre integration rules are employed.
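The sketch below illustrates the flavor of these variable transformations. It is a simplification: the 0.5 correlation is induced directly through a Cholesky factor, the correlation warping of the full Nataf-type transformation [16] is omitted, and the lognormal parameter conversion is the standard moment matching (all assumptions of the sketch, not statements about this paper's implementation):

```python
# Minimal sketch: map independent standard normals z to physical (P, M, Y).
# P and M use a linear transformation (correlation via a Cholesky factor);
# the lognormal Y uses the nonlinear map exp(mu_ln + sigma_ln * z).
import numpy as np

mu_P, sd_P = 500.0, 100.0
mu_M, sd_M = 2000.0, 400.0
mu_Y, sd_Y = 5.0, 0.5
corr_PM = 0.5

# Lognormal parameters from the (mean, std) of Y.
sigma_ln = np.sqrt(np.log(1.0 + (sd_Y / mu_Y) ** 2))
mu_ln = np.log(mu_Y) - 0.5 * sigma_ln**2

chol = np.linalg.cholesky(np.array([[1.0, corr_PM], [corr_PM, 1.0]]))

def to_physical(z):
    """z: three independent standard normal values -> (P, M, Y)."""
    z_pm = chol @ z[:2]                   # induce the 0.5 correlation
    P = mu_P + sd_P * z_pm[0]             # linear transformation
    M = mu_M + sd_M * z_pm[1]             # linear transformation
    Y = np.exp(mu_ln + sigma_ln * z[2])   # nonlinear transformation
    return P, M, Y
```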

6.2.1 Design under Uncertainty

An objective function of cross-sectional area and a target reliability index of 2.5 (approximated from moments) are used in the design problem,

(64)

The initial design of (b,h) = (5,15) is infeasible and the optimization must add material to obtain the target reliability at the optimal design (b,h) = (8.1147,25.000) with an area of 202.87.

In order to explore scaling for a slightly higher dimensional problem, the PCE-based design studies augment TPQ and SSG approaches with linear regression (“point collocation”) using a factor of 2 oversampling (20 simulations for each second-order uncertain expansion over three variables, and 112 simulations for each third-order combined expansion over five variables). Other settings are the same as those described in Section 6.1.1: NPSOL SQP is the optimizer, SR1 updates are used for quasi-second-order approximations in sequential approaches, and multifidelity approaches employ uncertain expansions as the high-fidelity models and either combined expansions or MVFOSM as the low-fidelity models (with settings mirroring the corresponding bilevel settings). Table 3 shows the computational results, where “/”, “{,}”, and “(,)” indicate the separations described previously. For this problem, the functional input/output relationship is not captured exactly and performance differences are more readily evident. For the combined expansion results (highlighted in red), computational expense is competitive (the PCE Bilevel/Combined/Pt Colloc approach reports the lowest expense among all approaches), but these optima are not as accurate as those computed by the corresponding uncertain expansion approaches (highlighted in green). Under order/level refinement (not shown), the optima from the bilevel combined expansion approaches converge to the optima from the uncertain expansion approaches; however, it is expensive: PCE and SC combined expansions require SSG level = 5 at a cost of 4575 evaluations to achieve accuracy comparable to the uncertain expansion results. Another trend that can be identified is that the TPQ and SSG approaches outperform the regression approach in terms of accuracy, due both to the advantages of tensor and sparse expansions over total-order expansions and to the accuracy of explicit numerical integration over implicit least squares. Because the sequential approaches take more than a single iteration to converge, benefit from quasi-second-order approximations (highlighted in blue) is evident: accumulated curvature information converges the sequential iteration more quickly. In addition, the multifidelity machinery converges to the high-fidelity uncertain expansion results despite the optimizer being interfaced only with the low-fidelity combined expansion or MVFOSM UQ analyses. Among the approaches that converge with sufficient accuracy, MVFOSM-based multifidelity approaches (highlighted in magenta) provide the most efficient techniques for this problem, followed by the quasi-second-order sequential approaches (highlighted in blue), followed by the bilevel uncertain expansion approaches (highlighted in green).
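The quoted oversampled point counts follow directly from the number of terms in a total-order expansion, N = (n + p)!/(n! p!), multiplied by the oversampling factor; a minimal check (hypothetical helper name):

```python
# Minimal sketch: verify the "factor of 2 oversampling" simulation counts
# quoted above from the total-order expansion term count C(n + p, p).
from math import comb

def pt_colloc_samples(n_vars, order, oversample=2):
    return oversample * comb(n_vars + order, order)

print(pt_colloc_samples(3, 2))  # 20 simulations: second-order, 3 variables
print(pt_colloc_samples(5, 3))  # 112 simulations: third-order, 5 variables
```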

TABLE 3: PCE- and SC-based design results, short column test problem

Design approach Expansion variables Integration approach Evaluations (Fn, Grad) Area βCDF
PCE Bilevel Uncertain Pt Colloc p = 2 (300, 300) 202.38 2.5001
PCE Bilevel Combined Pt Colloc p = 3 (112, 0) 206.12 2.5000
PCE Sequential 1 Uncertain Pt Colloc p = 2 (240, 120) 202.39 2.5000
PCE Sequential Q2 Uncertain Pt Colloc p = 2 (220, 120) 202.39 2.5000
PCE {Comb, Unc} Multifidelity 1 {Comb, Unc} Pt Colloc {p = 3, p = 2} (576, 120) 202.39 2.5000
PCE {MV, Unc} Multifidelity 1 Uncertain Pt Colloc p = 2 (204, 144) 202.39 2.5000
PCE/SC Bilevel Uncertain TPQ m = 3 (405, 405) 202.86 2.5001
PCE/SC Bilevel Combined TPQ m = 4 (1024, 0) 202.61 2.5000
PCE/SC Sequential 1 Uncertain TPQ m = 3 (324, 162) 202.86 2.5000
PCE/SC Sequential Q2 Uncertain TPQ m = 3 (297, 162) 202.86 2.5000
PCE/SC {Comb, Unc} Multifidelity 1 {Comb, Unc} TPQ {m = 4, m = 3} (3288, 108) 202.86 2.5000
PCE/SC {MV, Unc} Multifidelity 1 Uncertain TPQ m = 3 (253, 172) 202.86 2.5000
PCE/SC Bilevel Uncertain SSG w = 2 (465, 465) 202.87 2.5001
PCE/SC Bilevel Combined SSG w = 3 (341, 0) 201.67/199.46 2.5000
PCE/SC Sequential 1 Uncertain SSG w = 2 (372, 186) 202.86 2.5000
PCE/SC Sequential Q2 Uncertain SSG w = 2 (341, 186) 202.86 2.5000
PCE/SC {Comb, Unc} Multifidelity 1 {Comb, Unc} SSG {w = 3, w = 2} (992/1333, 155) 202.86 2.5000
PCE/SC {MV, Unc} Multifidelity 1 Uncertain SSG w = 2 (281, 188) 202.86 2.5000

6.2.2 Convergence Rates for Combined Expansions

To investigate the issue of the degraded convergence rates for combined expansions, Fig. 3 shows a comparison of convergence rates for L2 versus L∞ metrics for the short column problem, where all five variables are used in combined expansions and the metrics only reflect differences in expansion postprocessing goals. The L∞ metrics are maximum values for βCDF from applying NPSOL to only this metric (neglecting cross-sectional area) over the range of the design variables, where βCDF is determined from integrating over the three uncertain variables as in Eqs. (42), (43), (45), and (46). For L2 metrics, all five variables are treated as uncertain (b and h are treated as uniform random variables within their ranges) and convergence in the resulting βCDF value [determined from integrating over all five variables as in Eqs. (28)-(31)] is shown. All errors are computed relative to overkill solutions resulting from high-level grid refinements. It is evident that convergence rates for both L2 cases are more rapid than the L∞ maxima, with approximately four orders of magnitude reduction in residuals for L∞ compared to approximately nine orders of magnitude reduction in residuals for L2, despite the fact that both sets of metrics are computed from postprocessing of the same combined expansions. Thus, it can be inferred that it is not just the higher nonlinearity in b and h that slows the combined approaches; rather, the requirement of extrema is also a contributing factor. In particular, the pointwise accuracy required for L∞ is more demanding than the integrated convergence required for L2, leading to the observation that distinguishing stochastic dimensions undergoing integration from those undergoing optimization can be important.

FIG. 3: Convergence rates for combined expansions in the short column test problem.

6.3 Cantilever Beam

The next test problem involves the simple uniform cantilever beam [36, 44] shown in Fig. 4. Random variables in the problem include the yield stress R and Young's modulus E of the beam material and the horizontal and vertical loads, X and Y, which are modeled with independent normal distributions using (40000, 2000), (2.9E7, 1.45E6), (500, 100), and (1000, 100), respectively. Problem constants include L = 100 in. and D0 = 2.2535 in. The beam response metrics have the following analytic form for stress S and displacement D:

(65)

(66)

where they are placed in standard form using

(67)

(68)

such that negative g values indicate safe regions of the parameter space. For polynomial approximation of gS and gD, a linear variable transformation is used, and Hermite orthogonal polynomials and linear growth Gauss-Hermite integration rules are employed in the transformed standard normal space. When w and t are included in combined expansions, linear scaling, Legendre polynomials, and linear growth Gauss-Legendre integration rules are employed for these variables. It is worth noting that (i) gS is linear and gD is only mildly nonlinear in the uncertain variables, but both are highly nonlinear in the design variables w and t; and (ii) gD contains a singularity due to the infinite tails of E and therefore does not have finite variance; however, this singularity occurs at 20 standard deviations and, given that Young's modulus cannot physically be negative, its presence is not of practical or numerical interest.
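For readers wishing to reproduce these studies, the cantilever stress and displacement responses are commonly written as follows in the literature for this benchmark [36, 44]; the constants and the standard-form scaling in this sketch are assumptions to be checked against Eqs. (65)-(68), not a restatement of them:

```python
# Minimal sketch of the cantilever responses as they commonly appear in the
# literature for this benchmark (assumed form; verify against Eqs. (65)-(68)).
# Negative g values indicate the safe region, matching the text above.
import numpy as np

L_BEAM, D0 = 100.0, 2.2535  # beam length and displacement allowable [in.]

def g_stress(w, t, R, X, Y):
    S = 600.0 * Y / (w * t**2) + 600.0 * X / (w**2 * t)
    return S / R - 1.0

def g_displacement(w, t, E, X, Y):
    D = (4.0 * L_BEAM**3 / (E * w * t)) * np.sqrt((Y / t**2) ** 2 + (X / w**2) ** 2)
    return D / D0 - 1.0
```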

FIG. 4: Cantilever beam test problem.

6.3.1 Design under Uncertainty

The design problem is to minimize the weight (or, equivalently, the cross-sectional area) of the beam subject to the displacement and stress constraints. When seeking three-sigma reliability levels on these constraints [reliability indices are CCDF in orientation and are approximated from moments as in Eq. (47)], the design problem can be summarized as follows:

(69)

For SC, results are presented for TPQ and SSG, and for PCE, results are shown for TPQ, SSG, and point collocation with a factor of two oversampling (30 simulations for each second-order uncertain expansion over four variables and 420 simulations for each fourth-order combined expansion over six variables). Again, NPSOL SQP is the optimizer, SR1 updates enable quasi-second-order approximations in sequential approaches, and multifidelity approaches employ uncertain expansions as the high-fidelity models and either combined expansions or MVFOSM as the low-fidelity models (with settings mirroring the corresponding bilevel settings). Table 4 shows the computational results starting from the infeasible initial guess (w,t) = (2.5, 2.5), where the fully converged optimal solution is (w,t) = (2.4460, 3.8922) with area = 9.5202, βCCDFS = 3.0000, and βCCDFD = 3.2770. For this problem, the relationship of stress and displacement with respect to the uncertain variables is captured accurately enough by low-order uncertain expansions (Pt Colloc p = 2, TPQ m = 3, and SSG w = 2) to converge to the correct solution. The accuracy of the combined expansion results, however, is poor even using higher order expansions (highlighted in red). Each of the first-order sequential approaches also fails to converge accurately, whereas accumulation of curvature information in the quasi-second-order sequential approaches (highlighted in blue) mitigates this problem in each of these cases. The multifidelity approaches are successful in forcing the low-fidelity results toward the high-fidelity optima, with more accurate results obtained using MVFOSM as the low-fidelity model than when using the poorly converged combined expansions. Overall, the MVFOSM-based multifidelity approaches (highlighted in magenta) provide the most efficient techniques for this problem, followed by the bilevel uncertain expansion approaches (highlighted in green), followed by the quasi-second-order sequential approaches (highlighted in blue).
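As a reminder of what the MVFOSM low-fidelity model provides, a minimal sketch of a mean value first-order second moment estimate is given below; the finite-difference gradients and the restriction to independent inputs are assumptions of the sketch, not of the paper:

```python
# Minimal sketch of MVFOSM moments: expand g to first order about the input
# means, giving mean_g ~= g(mu) and var_g ~= sum_i (dg/dx_i * sigma_i)^2 for
# independent inputs. A moment-based reliability index then follows from the
# approximation used elsewhere in this paper, e.g., beta_CCDF ~= (zbar - mean_g)/std_g.
import numpy as np

def mvfosm_moments(g, mu, sigma, h=1e-6):
    """mu, sigma: arrays of input means and standard deviations."""
    mu = np.asarray(mu, dtype=float)
    g0 = g(mu)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        step = h * max(1.0, abs(mu[i]))
        x = mu.copy()
        x[i] += step
        grad[i] = (g(x) - g0) / step
    var = np.sum((grad * np.asarray(sigma, dtype=float)) ** 2)
    return g0, np.sqrt(var)
```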

TABLE 4: PCE- and SC-based design results, cantilever beam test problem

Design approach Expansion variables Integration approach Evaluations (Fn, Grad) Area βCCDFS βCCDFD
PCE Bilevel Uncertain Pt Colloc p = 2 (330, 330) 9.5202 3.0000 3.2639
PCE Bilevel Combined Pt Colloc p = 4 (420, 0) 17.230 3.1728 1.0435
PCE Sequential 1 Uncertain Pt Colloc p = 2 (300, 150) 9.2802 2.6127 2.3473
PCE Sequential Q2 Uncertain Pt Colloc p = 2 (570, 300) 9.5202 3.0000 3.2637
PCE {Comb, Unc} Multifidelity 1 {Comb, Unc} Pt Colloc {p = 4, p = 2} (1890, 90) 9.2586 2.5110 3.3574
PCE {MV, Unc} Multifidelity 1 Uncertain Pt Colloc p = 2 (225, 165) 9.5202 3.0000 3.2639
PCE/SC Bilevel Uncertain TPQ m = 3 (891, 891) 9.5202 3.0000 3.2770
PCE/SC Bilevel Combined TPQ m = 3 (729, 0) 6.5432 6.8168 3.0000
PCE/SC Sequential 1 Uncertain TPQ m = 3 (810, 405) 9.2656 2.5877 2.2745
PCE/SC Sequential Q2 Uncertain TPQ m = 3 (1458, 729) 9.5202 3.0000 3.2770
PCE/SC {Comb, Unc} Multifidelity 1 {Comb, Unc} TPQ m = 3 (4374, 324) 9.2458 2.5164 3.2623
PCE/SC {MV, Unc} Multifidelity 1 Uncertain TPQ m = 3 (478, 318) 9.5202 3.0000 3.2770
PCE/SC Bilevel Uncertain SSG w = 2 (539, 539) 9.5202 3.0000 3.2770
PCE/SC Bilevel Combined SSG w = 4 (2381, 0) 9.1988/9.0785 3.0000 5.3265/6.0161
PCE/SC Sequential 1 Uncertain SSG w = 2 (490, 245) 9.2658 2.5882/2.5883 2.2765/2.2766
PCE/SC Sequential Q2 Uncertain SSG w = 2 (882, 441) 9.5202 3.0000 3.2769
PCE/SC {Comb, Unc} Multifidelity 1 {Comb, Unc} SSG {w = 4, w = 2} (10063/12346, 294/196) 9.5202/9.5250 3.0000/3.0053 3.2768/3.5034
PCE/SC {MV, Unc} Multifidelity 1 Uncertain SSG w = 2 (318, 222) 9.5202 3.0000 3.2770

6.3.2 Convergence Rates for Combined Expansions

The convergence rates for PCE/SC combined expansions have been severely degraded in this problem. To again explore relative convergence behavior for L2 versus L∞ metrics computed from combined expansions, high-order sparse grids are needed. By observing that the stress and displacement metrics have much greater nonlinearity with respect to the design variables than with respect to the uncertain variables, anisotropy in these metrics can be exploited. In particular, an anisotropic SSG [see Eq. (26)] is used with dimension preference of {4,4,1,1,1,1} for {w,t,R,E,X,Y}, allowing the exploration of SSG levels up to w = 24 in six dimensions. Figure 5 shows the convergence behavior for stress reliability metrics, where the L∞ metrics are the βCCDFS maxima (neglecting cross-sectional area and βCCDFD) from applying NPSOL over the range of the design variables [β computed from integrating over the four uncertain variables as in Eqs. (42), (43), (45), and (46)], and the L2 metrics are βCCDFS values from treating all six variables as uncertain [w and t treated as uniform random variables within their ranges, and β computed from integrating over all six variables as in Eqs. (28)-(31)]. All errors are again computed relative to overkill reference solutions. Similar to Fig. 3, significant differences in convergence rates are evident despite the use of the same combined expansions in the metric postprocessing. Convergence rates for the L2 statistics are more rapid than the L∞ maxima, with approximately nine orders of magnitude reduction in residuals for L2 (prior to saturation) compared to approximately four orders of magnitude reduction in residuals for L∞ over the same span. From this, it can again be inferred that L∞ metrics are more computationally demanding than L2 metrics, and the poor convergence of the combined expansions in this problem results from more than the higher degrees of nonlinearity in the design dimensions.
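To illustrate how a dimension preference shapes an anisotropic sparse grid, the sketch below enumerates an admissible multi-index set under one common weighted Smolyak rule in the spirit of [6]; the mapping of the preference {4,4,1,1,1,1} onto the weights is an assumption and may differ from the precise form of Eq. (26):

```python
# Minimal sketch (hedged): accept a multi-index i (i_k >= 0) when
# sum_k gamma_k * i_k <= gamma_min * w, with smaller gamma_k granting
# dimension k deeper resolution. Here gamma is taken inversely proportional
# to the dimension preference, which is an assumed convention.
import itertools

def admissible_indices(w, preference):
    gamma = [max(preference) / p for p in preference]   # smaller = favored
    gmin = min(gamma)
    bound = [int(gmin * w / g) for g in gamma]           # per-dimension cap
    return [i for i in itertools.product(*(range(b + 1) for b in bound))
            if sum(g * ik for g, ik in zip(gamma, i)) <= gmin * w + 1e-12]

idx = admissible_indices(4, [4, 4, 1, 1, 1, 1])
# Maximum admissible level per dimension: the design dimensions (w, t) are
# refined four times deeper than the uncertain dimensions.
print([max(i[k] for i in idx) for k in range(6)])  # [4, 4, 1, 1, 1, 1]
```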

FIG. 5: Convergence rates for combined expansions in the cantilever stress test function.

6.4 Steel Column

The final algebraic test problem involves the trade-off between cost and reliability for a steel column [43]. The cost is defined as

(70)

where b, d, and h are the means of the flange breadth, flange thickness, and profile height, respectively. This problem demonstrates scaling to larger dimensional UQ problems as well as design variable insertion. Nine uncorrelated random variables are used in the problem to define the yield stress Fs (lognormal with μ/σ = 400/35 MPa), dead weight load P1 (normal with μ/σ = 500,000/50,000 N), variable load P2 (Gumbel with μ/σ = 600,000/90,000 N), variable load P3 (Gumbel with μ/σ = 600,000/90,000 N), flange breadth B (lognormal with μ/σ = b/3 mm), flange thickness D (lognormal with μ/σ = d/2 mm), profile height H (lognormal with μ/σ = h/5 mm), initial deflection F0 (normal with μ/σ = 30/10 mm), and Young's modulus E (Weibull with μ/σ = 21,000/4200 MPa). The limit state has the following analytic form:

(71)

where

(72)

(73)

and the column length L is 7500 mm. For P1 and F0, a linear variable transformation is applied and, for the other seven random variables, a nonlinear variable transformation is applied; in all cases, Hermite orthogonal polynomials and linear growth Gauss-Hermite integration rules are employed in the transformed standard normal space.
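The nonlinear transformations referred to here are CDF matchings of the form x = F⁻¹(Φ(z)); a minimal sketch for one of the Gumbel loads is shown below, where the moment-to-parameter conversion is a standard assumption rather than a detail taken from this paper:

```python
# Minimal sketch: map a standard normal z to a Gumbel-distributed load via
# CDF matching, x = F^{-1}(Phi(z)). Gumbel location/scale are derived from
# the (mean, std) quoted above using the standard moment relations.
import numpy as np
from scipy.stats import norm, gumbel_r

mean, std = 600000.0, 90000.0
euler_gamma = 0.5772156649015329
scale = std * np.sqrt(6.0) / np.pi        # Gumbel scale from the std dev
loc = mean - euler_gamma * scale          # Gumbel location from the mean

def gumbel_from_std_normal(z):
    return gumbel_r.ppf(norm.cdf(z), loc=loc, scale=scale)
```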

As shown in [15], this problem has a singularity in the limit state out in the (heavy) tails of the input distributions due to subtractive cancellation in the denominator of Eq. (71). Unlike the singularity described in Section 6.3, this singularity is of numerical importance as variance will quickly diverge under expansion order refinement. Therefore, UQ convergence studies on moments or moment-based reliability indices are not meaningful as no reference solution exists. However, for a fixed resolution in the stochastic expansions, convergence of the design under uncertainty process is still meaningful.

6.4.1 Design under Uncertainty

This design problem demonstrates design variable insertion into random variable distribution parameters through the design of the mean flange breadth, flange thickness, and profile height. Because there are no augmented design variables in this problem, there is no combined expansion option and the number of potential formulations is reduced. The following design formulation maximizes the reliability subject to a cost constraint:

(74)

For this larger dimensional problem, the TPQ approach is not viable and results are reported for SC using SSG and for PCE using either SSG or point collocation with an oversampling ratio of 2. As for each of the previous test problems, NPSOL SQP is the optimizer, sequential approaches with quasi-second-order approximations employ SR1 updates, and multifidelity approaches employ uncertain expansions as the high-fidelity models (mirroring the corresponding bilevel settings). However, only the MVFOSM option is available for use as the low-fidelity UQ model. Table 5 shows the computational results. For this problem, all approaches converge accurately to the optimal design point at (b,d,h) = (200.0, 17.5, 100.0). Differences in the optimum βCDF value are evident between the point collocation and SSG solutions, with more accurate resolution (at higher expense) generally manifesting as higher variance and a lower reliability index (βCDF = 3.2362 for SSG as opposed to βCDF = 3.2566 for point collocation). The quasi-second-order sequential approaches again show improvement over the first-order sequential approaches, and the multifidelity approach again succeeds in finding the high-fidelity optimum despite the optimizer being interfaced only with the low-fidelity MVFOSM UQ analysis. Overall, the quasi-second-order sequential approaches (highlighted in blue) are the most efficient approaches, followed by the MVFOSM-based multifidelity approaches (highlighted in magenta).
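As a small consistency check on the reported optimum, the cost function as it is commonly stated for this benchmark [43], C = b d + 5h (an assumed form, to be checked against Eq. (70)), reproduces the tabulated cost:

```python
# Minimal sketch: evaluate the commonly cited cost C = b*d + 5*h at the
# reported optimum (assumed form of Eq. (70)).
b, d, h = 200.0, 17.5, 100.0
print(b * d + 5.0 * h)  # 4000.0, matching the Cost column of Table 5
```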

TABLE 5: PCE- and SC-based design results, steel column test problem

Design approach Expansion variables Integration approach Evaluations (Fn, Grad) βCDF Cost
PCE Bilevel Uncertain Pt Colloc p = 2 (1320, 1320) 3.2566 4000.0
PCE Sequential 1 Uncertain Pt Colloc p = 2 (990, 550) 3.2566 4000.0
PCE Sequential Q2 Uncertain Pt Colloc p = 2 (660, 330) 3.2566 4000.0
PCE {MV, Unc} Multifidelity 1 Uncertain Pt Colloc p = 2 (821, 491) 3.2566 4000.0
PCE/SC Bilevel Uncertain SSG w = 2 (2388, 2388) 3.2362 4000.0
PCE/SC Sequential 1 Uncertain SSG w = 2 (1791/995, 995/597) 3.2362 4000.0
PCE/SC Sequential Q2 Uncertain SSG w = 2 (1194, 597) 3.2362 4000.0
PCE/SC {MV, Unc} Multifidelity 1 Uncertain SSG w = 2 (1376/1369, 779/772) 3.2362 4000.0

7. Conclusions

This paper has investigated the usage of stochastic expansion methods, particularly the nonintrusive polynomial chaos expansion and Lagrange interpolation-based stochastic collocation, for computing statistics and design derivatives of statistics for several algebraic benchmark problems with known solutions. The primary distinction between these two stochastic methods is that PCE must estimate coefficients for a known basis of orthogonal polynomials (using sampling, linear regression, tensor-product quadrature, cubature, or Smolyak sparse grids) whereas SC must form an interpolant for known coefficients (using quadrature or sparse grids).

These UQ approaches are used in design under uncertainty studies employing two stochastic sensitivity approaches, one based on expansions of response functions and their design sensitivities over the uncertain variables and another based on combined expansions of response functions over both the design and uncertain variables. Although it is shown that both approaches are capable of exact results, computational experiments indicate that the former approach may be preferable for general usage. In two test problems employing rational functions, convergence rates for L2 integrated metrics were shown to be more than twice as fast as those for L∞ metrics when postprocessing the same combined expansions, indicating the need to distinguish dimensions undergoing integration from dimensions undergoing optimization. In particular, this implies restriction of stochastic expansion approximation to dimensions requiring L2 metrics (mean, variance, probability) and handling of dimensions requiring L∞ metrics (minima and maxima) through other means (i.e., direct optimization without stochastic expansion approximation). For infinitely differentiable smooth problems, related work [33] has shown that L2 and L∞ convergence rates are indistinguishable and that combined expansions can reduce computational expense; however, this level of smoothness is too strong an assumption in most applications.

The ability to efficiently compute moments and moment design sensitivities provides the foundation for exploration of bilevel, sequential, and multifidelity formulations to design under uncertainty. Quasi-second-order approaches are shown to be preferred to first-order approaches within sequential formulations, both in terms of computational efficiency and algorithmic robustness in locating the optimal design. Multifidelity approaches are shown to be capable of coercing the low-fidelity optimization to converge to the high-fidelity optimum, and an inexpensive but still representative low-fidelity UQ model is shown to be critically important to the overall efficiency of the process. The MVFOSM-based low-fidelity UQ model is highly successful in this regard. For the four test problems presented, the most efficient and accurate approach is either the MVFOSM-based multifidelity approach (short column and cantilever beam) or the quasi-second-order sequential approach (Rosenbrock and steel column). Thus, while the bilevel approach with SQP is highly effective and is itself based on solving an approximate second-order subproblem using quasi-Newton updates, benefit has been demonstrated in moving past these simpler bilevel approaches.

Areas for future work include improved support for reliability metrics through efficient tail probability estimation [35] (replacing the simple moment-based approximations used in this paper) and improved stochastic scalability through the use of adjoint derivative enhancement and adaptive stochastic refinement schemes.

ACKNOWLEDGMENTS

The author thanks Paul Constantine of Sandia for his insight related to tensor and sparse PCE, Clayton Webster of Florida Power & Light and John Burkardt of Virginia Tech for analysis and development of isotropic and anisotropic sparse grid capabilities used in this work, and Prof. Kurt Maute of the University of Colorado at Boulder for his insight on sensitivity analysis.

REFERENCES

1. Xiu, D. and Karniadakis, G. M., The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24(2):619-644, 2002.

2. Smolyak, S., Quadrature and interpolation formulas for tensor products of certain classes of functions, Dokl. Akad. Nauk SSSR, 4:240-243, 1963.

3. Stroud, A., Approximate Calculation of Multiple Integrals, Prentice Hall, Englewood Cliffs, NJ, 1971.

4. Walters, R. W., Towards stochastic fluid mechanics via polynomial chaos, Proceedings of 41st AIAA Aerospace Sciences Meeting and Exhibit, Paper No. AIAA-2003-0413, Reno, January 6-9, 2003.

5. Tatang, M., Direct incorporation of uncertainty in chemical and environmental engineering systems, PhD thesis, MIT, 1995.

6. Nobile, F., Tempone, R., and Webster, C. G., An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal., 46(5):2411-2442, 2008.

7. Constantine, P. G., Gleich, D. F., and Iaccarino, G., Spectral methods for parameterized matrix equations, SIAM J. Matrix Anal. Appl., 31(5):2681-2699, 2010.

8. Constantine, P. G. and Eldred, M. S., Sparse polynomial chaos expansions, Int. J. Uncert. Quantif., in preparation.

9. Tang, G., Iaccarino, G., and Eldred, M. S., Global sensitivity analysis for stochastic collocation expansion, Proceedings of 12th AIAA Non-Deterministic Approaches Conference, Paper No. AIAA-2010-2922, Orlando, April 12-15, 2010.

10. Eldred, M. S., Agarwal, H., Perez, V. M., Wojtkiewicz, Jr., S. F., and Renaud, J. E., Investigation of reliability method formulations in DAKOTA/UQ, Struct. Infrastruct. Eng.: Maint., Man., Life-Cycle Des. Perform., 3(3):199-213, 2007.

11. Eldred, M. S. and Bichon, B. J., Second-order reliability formulations in DAKOTA/UQ, Proceedings of 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Paper No. AIAA-2006-1828, Newport, RI, May 1-4, 2006.

12. Askey, R. and Wilson, J., Some basic hypergeometric polynomials that generalize Jacobi polynomials, Mem. Am. Math. Soc. 319, Providence, RI, 1985.

13. Wiener, N., The homogeneous chaos, Am. J. Math., 60:897-936, 1938.

14. Abramowitz, M. and Stegun, I. A., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York, 1965.

15. Eldred, M. S., Webster, C. G., and Constantine, P., Evaluation of non-intrusive approaches for Wiener-Askey generalized polynomial chaos, Proceedings of 10th AIAA Nondeterministic Approaches Conference, Paper No. AIAA-2008-1892, Schaumburg, IL, Apr. 7-10, 2008.

16. Der Kiureghian, A. and Liu, P. L., Structural reliability under incomplete probability information, J. Eng. Mech., ASCE, 112(1):85-104, 1986.

17. Simpson, I., Numerical integration over a semi-infinite interval, using the lognormal distribution, Numer. Math., 31:71-76, 1978.

18. Gautschi, W., Orthogonal Polynomials: Computation and Approximation, Oxford University Press, New York, 2004.

19. Witteveen, J. A. S. and Bijl, H., Modeling arbitrary uncertainties using Gram-Schmidt polynomial chaos, Proceedings of 44th AIAA Aerospace Sciences Meeting and Exhibit, Paper No. AIAA-2006-0896, Reno, Jan. 9-12, 2006.

20. Golub, G. H. and Welsch, J. H., Calculation of Gauss quadrature rules, Math. Comput., 23(106):221-230, 1969.

21. Rosenblatt, M., Remarks on a multivariate transformation, Ann. Math. Stat., 23(3):470-472, 1952.

22. Box, G. E. P. and Cox, D. R., An analysis of transformations, J. Royal Stat. Soc., 26:211-252, 1964.

23. Eldred, M. S. and Burkardt, J., Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification, Proceedings of 47th AIAA Aerospace Sciences Meeting and Exhibit, Paper No. AIAA-2009-0976, Orlando, Jan. 5-8, 2009.

24. Gerstner, T. and Griebel, M., Numerical integration using sparse grids, Numer. Algorithms, 18(3-4):209-232, 1998.

25. Barthelmann, V., Novak, E., and Ritter, K., High dimensional polynomial interpolation on sparse grids, Adv. Comput. Math., 12(4):273-288, 2000.

26. Frauenfelder, P., Schwab, C., and Todor, R. A., Finite elements for elliptic problems with stochastic coefficients, Comput. Methods Appl. Mech. Eng., 194(2-5):205-228, 2005.

27. Xiu, D. and Hesthaven, J., High-order collocation methods for differential equations with random inputs, SIAM J. Sci. Comput., 27(3):1118-1139, 2005.

28. Wasilkowski, G. W. and Woźniakowski, H., Explicit cost bounds of algorithms for multivariate tensor product problems, J. Complex., 11:1-56, 1995.

29. Burkardt, J., The “combining coefficient” for anisotropic sparse grids, Tech. Report, Virginia Tech., Blacksburg, 2009.

30. Xiu, D., Numerical integration formulas of degree two, Appl. Numer. Math., 58:1515-1520, 2008.

31. Hosder, S., Walters, R. W., and Balch, M., Efficient sampling for non-intrusive polynomial chaos applications with multiple uncertain input variables, Proceedings of 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Paper No. AIAA-2007-1939, Honolulu, Apr. 23-26, 2007.

32. Sudret, B., Global sensitivity analysis using polynomial chaos expansions, Reliab. Eng. Syst. Safety, 93:964-979, 2008.

33. Eldred, M. S., Swiler, L. P., and Tang, G., Mixed aleatory-epistemic uncertainty quantification with stochastic expansions and optimization-based interval estimation, Reliab. Eng. Syst. Safety, to appear.

34. Eldred, M. S. and Dunlavy, D. M., Formulations for surrogate-based optimization with data fit, multifidelity, and reduced-order models, Proceedings of 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Paper No. AIAA-2006-7117, Portsmouth, Sept. 6-8, 2006.

35. Eldred, M. S. and Swiler, L. P., Towards goal-oriented stochastic design employing adaptive collocation methods, Proceedings of 13th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Paper No. AIAA-2010-9125, Fort Worth, Sept. 13-15, 2010.

36. Wu, Y.-T., Shin, Y., Sues, R., and Cesare, M., Safety-factor based approach for probability-based design optimization, Proceedings of 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Paper No. AIAA-2001-1522, Seattle, Apr. 16-19, 2001.

37. Du, X. and Chen, W., Sequential optimization and reliability assessment method for efficient probabilistic design, J. Mech. Design, 126:225-233, 2004.

38. Nocedal, J. and Wright, S. J., Numerical Optimization, Springer, New York, 1999.

39. Eldred, M. S., Giunta, A. A., and Collis, S. S., Second-order corrections for surrogate-based optimization with model hierarchies, Proceedings of 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Paper No. AIAA-2004-4457, Albany, Aug. 30-Sept. 1, 2004.

40. Eldred, M. S., Adams, B. M., Haskell, K., Bohnhoff, W. J., Eddy, J. P., Gay, D. M., Hart, W. E., Hough, P. D., Kolda, T. G., Swiler, L. P., and Watson, J.-P., DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: Version 4.2 users manual, Tech. Report SAND2006-6337, Sandia National Laboratories, Albuquerque, 2008.

41. Eldred, M. S., Webster, C. G., and Constantine, P., Design under uncertainty employing stochastic expansion methods, Proceedings of 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Paper No. AIAA-2008-6001, Victoria, British Columbia, Sept. 10-12, 2008.

42. Gill, P. E., Murray, W., Saunders, M. A., and Wright, M. H., User’s guide for NPSOL 5.0: A Fortran package for nonlinear programming, Tech. Report No. SOL 86-1, System Optimization Laboratory, Stanford University, Stanford (revised), 1998.

43. Kuschel, N. and Rackwitz, R., Two basic problems in reliability-based structural optimization, Math. Method Oper. Res., 46:309-333, 1997.

44. Sues, R., Aminpour, M., and Shin, Y., Reliability-based multidisciplinary optimization for aerospace systems, Proceedings of 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Paper No. AIAA-2001-1521, Seattle, Apr. 16-19, 2001.


1 Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under Contract DE-AC04-94AL85000.

2 Orthogonal polynomial selections also exist for discrete probability distributions, but are not explored here.

3 Identical support range; weight differs by at most a constant factor.

4 For tensor interpolants and sparse interpolants based on fully nested rules (e.g., Clenshaw-Curtis, Gauss-Patterson, Genz-Keister); sparse interpolants based on non-nested rules will exhibit some interpolation error at the collocation points.

5 If joint distributions are known, then the Rosenblatt transformation is preferred.

6 Other common formulations use a level q, where q ≥ n. We use w = q − n, where w ≥ 0 for all n.

7 We prefer linear growth for Gauss-Legendre, but employ nonlinear growth here for purposes of comparison.

8 MVFOSM is exact for linear functions with Gaussian inputs, but quickly degrades for nonlinear functions and/or non-Gaussian inputs.

9 Analytic and numerical Hessians, when available, are instantaneous with no accumulation rate concerns.
