%0 Journal Article
%A Jasra, Ajay
%A Kamatani, Kengo
%A Law, Kody J.H.
%A Zhou, Yan
%D 2018
%I Begell House
%K multi-index Monte Carlo, Markov chain Monte Carlo, stochastic partial differential equations
%N 1
%P 61-73
%R 10.1615/Int.J.UncertaintyQuantification.2018021551
%T A Multi-Index Markov Chain Monte Carlo Method
%U http://dl.begellhouse.com/journals/52034eb04b657aea,215dae1860d04f34,33d934f151bc0eb0.html
%V 8
%X In this paper, we consider computing expectations with respect to probability laws associated with a certain class of stochastic systems. To achieve such a task, one must resort not only to numerical approximation of the expectation but also to a biased discretization of the associated probability law. We are concerned with the situation in which the discretization is required in multiple dimensions, for instance in space-time. In such contexts, it is known that the multi-index Monte Carlo (MIMC) method of Haji-Ali, Nobile, and Tempone (Numer. Math., 132, pp. 767-806, 2016) can improve on independent identically distributed (i.i.d.) sampling from the most accurate approximation of the probability law. Through a nontrivial modification of the multilevel Monte Carlo (MLMC) method, MIMC can reduce the work required to obtain a given level of error, relative to i.i.d. sampling and even to MLMC. In this paper, we consider the case in which such probability laws are too complex to be sampled independently, for example a Bayesian inverse problem where evaluation of the likelihood requires the solution of a partial differential equation model, which needs to be approximated at finite resolution. We develop a modification of the MIMC method that allows one to use standard Markov chain Monte Carlo (MCMC) algorithms to replace independent and coupled sampling in certain contexts. We prove a variance theorem for a simplified estimator, which shows that using our MIMCMC method is preferable, in the sense above, to i.i.d. sampling from the most accurate approximation, under appropriate assumptions. The method is numerically illustrated on a Bayesian inverse problem associated with a stochastic partial differential equation, where the path measure is conditioned on some observations.
%8 2018-02-15