RT Journal Article
ID 718c817303f13640
A1 Huan, Xun
A1 Marzouk, Youssef
T1 Gradient-Based Stochastic Optimization Methods in Bayesian Experimental Design
JF International Journal for Uncertainty Quantification
JO IJUQ
YR 2014
FD 2014-10-17
VO 4
IS 6
SP 479
OP 510
K1 stochastic approximation
K1 sample average approximation
K1 polynomial chaos
K1 infinitesimal perturbation analysis
K1 optimal experimental design
K1 mutual information
K1 Bayesian inference
AB Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In
practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this context is the expected information gain in model parameters, which in general can only be estimated using Monte Carlo methods. Maximizing this objective thus becomes a stochastic optimization problem. This paper develops gradient-based stochastic optimization methods for the design of experiments on a continuous parameter space. Given a Monte Carlo estimator of expected information gain, we use infinitesimal perturbation analysis to derive gradients of this estimator. We are then able to formulate two gradient-based stochastic optimization approaches: (i) Robbins-Monro stochastic approximation, and (ii) sample average approximation combined with a deterministic quasi-Newton method. A polynomial chaos approximation of the forward model
accelerates objective and gradient evaluations in both cases. We discuss the implementation of these optimization methods, then conduct an empirical comparison of their performance. To demonstrate design in a nonlinear setting with partial differential equation forward models, we use the problem of sensor placement for source inversion. Numerical results yield useful guidelines on the choice of algorithm and sample sizes, assess the impact of estimator bias, and quantify tradeoffs of computational cost versus solution quality and robustness.
PB Begell House
LK http://dl.begellhouse.com/journals/52034eb04b657aea,21fe10c229b8ad74,718c817303f13640.html