International Journal for Uncertainty Quantification

Impact factor: 1.000

ISSN Print: 2152-5080
ISSN Online: 2152-5099

Open Access

DOI: 10.1615/Int.J.UncertaintyQuantification.2018021551
pages 61-73

A MULTI-INDEX MARKOV CHAIN MONTE CARLO METHOD

Ajay Jasra
Department of Statistics & Applied Probability, National University of Singapore, Singapore
Kengo Kamatani
Graduate School of Engineering Science, Osaka University, Osaka 565-0871, Japan
Kody J. H. Law
School of Mathematics, University of Manchester, Manchester, UK, M13 9PL
Yan Zhou
Department of Statistics & Applied Probability, National University of Singapore, Singapore

ABSTRACT

In this paper, we consider computing expectations with respect to probability laws associated with a certain class of stochastic systems. To achieve this, one must resort not only to numerical approximation of the expectation but also to a biased discretization of the associated probability law. We are concerned with the situation in which the discretization is required in multiple dimensions, for instance in space and time. In such contexts, it is known that the multi-index Monte Carlo (MIMC) method of Haji-Ali, Nobile, and Tempone (Numer. Math., 132, pp. 767–806, 2016) can improve on independent and identically distributed (i.i.d.) sampling from the most accurate approximation of the probability law. Through a nontrivial modification of the multilevel Monte Carlo (MLMC) method, MIMC can reduce the work required to obtain a given level of error, relative to i.i.d. sampling and even to MLMC. In this paper, we consider the case in which such probability laws are too complex to be sampled independently, for example a Bayesian inverse problem in which evaluating the likelihood requires the solution of a partial differential equation model that must be approximated at finite resolution. We develop a modification of the MIMC method that, in certain contexts, allows standard Markov chain Monte Carlo (MCMC) algorithms to replace independent and coupled sampling. We prove a variance theorem for a simplified estimator showing that, under appropriate assumptions, our MIMCMC method is preferable, in the sense above, to i.i.d. sampling from the most accurate approximation. The method is numerically illustrated on a Bayesian inverse problem associated with a stochastic partial differential equation, in which the path measure is conditioned on observations.
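
To make the multi-index telescoping idea described above concrete, the following is a minimal Python sketch, not the authors' algorithm. Everything in it is an assumption for illustration: a toy standard-normal target, a hypothetical discretized functional g_alpha whose bias decays geometrically in each of two resolution indices, a tensor-product index set, a fixed sample size per index, and a plain random-walk Metropolis kernel standing in for the coupled samplers analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)


def g_alpha(x, a1, a2):
    # Stand-in for a functional of the state computed at resolution (a1, a2);
    # the 2^{-a} terms mimic discretization bias that shrinks as each index grows.
    return x**2 + 2.0**(-a1) + 2.0**(-a2)


def mixed_difference(x, a1, a2):
    # First-order mixed difference of g_alpha over the two indices, with the
    # convention g_{-1,.} = g_{.,-1} = 0.
    def g(i, j):
        return g_alpha(x, i, j) if (i >= 0 and j >= 0) else 0.0
    return g(a1, a2) - g(a1 - 1, a2) - g(a1, a2 - 1) + g(a1 - 1, a2 - 1)


def rwm_chain(n, step=1.0):
    # Random-walk Metropolis targeting a standard normal; a toy stand-in for a
    # posterior that cannot be sampled independently.
    samples = np.empty(n)
    cur, log_p = 0.0, 0.0
    for i in range(n):
        prop = cur + step * rng.standard_normal()
        log_p_prop = -0.5 * prop**2
        if np.log(rng.uniform()) < log_p_prop - log_p:
            cur, log_p = prop, log_p_prop
        samples[i] = cur
    return samples


# Sum per-index MCMC estimates of the mixed differences over the tensor-product
# index set {0,...,L}^2; the sum telescopes to E[g_{L,L}], so the remaining
# discretization bias is of order 2^{-L}.
L, n_samples = 6, 20000
estimate = 0.0
for a1 in range(L + 1):
    for a2 in range(L + 1):
        chain = rwm_chain(n_samples)
        estimate += np.mean(mixed_difference(chain, a1, a2))

print(f"estimate: {estimate:.3f}  (target E[X^2] = 1, residual bias ~ {2 * 2.0**-L:.3f})")

In the actual MIMCMC method, each multi-index involves discretizations at different resolutions with appropriately coupled chains, and the number of samples is allocated across indices to balance variance and cost; the sketch only illustrates how per-index MCMC estimates of mixed differences are combined into a single estimator.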