%0 Journal Article
%A Huan, Xun
%A Marzouk, Youssef
%D 2014
%I Begell House
%K stochastic approximation, sample average approximation, polynomial chaos, infinitesimal perturbation analysis, optimal experimental design, mutual information, Bayesian inference
%N 6
%P 479-510
%R 10.1615/Int.J.UncertaintyQuantification.2014006730
%T Gradient-Based Stochastic Optimization Methods in Bayesian Experimental Design
%U http://dl.begellhouse.com/journals/52034eb04b657aea,21fe10c229b8ad74,718c817303f13640.html
%V 4
%X Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In
practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this context is the expected information gain in model parameters, which in general can only be estimated using Monte Carlo methods. Maximizing this objective thus becomes a stochastic optimization problem. This paper develops gradient-based stochastic optimization methods for the design of experiments on a continuous parameter space. Given a Monte Carlo estimator of expected information gain, we use infinitesimal perturbation analysis to derive gradients of this estimator. We are then able to formulate two gradient-based stochastic optimization approaches: (i) Robbins-Monro stochastic approximation, and (ii) sample average approximation combined with a deterministic quasi-Newton method. A polynomial chaos approximation of the forward model
accelerates objective and gradient evaluations in both cases. We discuss the implementation of these optimization methods, then conduct an empirical comparison of their performance. To demonstrate design in a nonlinear setting with partial differential equation forward models, we use the problem of sensor placement for source inversion. Numerical results yield useful guidelines on the choice of algorithm and sample sizes, assess the impact of estimator bias, and quantify tradeoffs of computational cost versus solution quality and robustness.
%8 2014-10-17