International Journal for Uncertainty Quantification
Impact Factor: 4.911 5-Year Impact Factor: 3.179 SJR: 1.008 SNIP: 0.983 CiteScore™: 5.2

Print ISSN: 2152-5080
Online ISSN: 2152-5099

Open Access


DOI: 10.1615/Int.J.UncertaintyQuantification.2014007658
Pages 349-364

OPTIMIZATION-BASED SAMPLING IN ENSEMBLE KALMAN FILTERING

Antti Solonen
Lappeenranta University of Technology, Laboratory of Applied Mathematics
Alexander Bibov
Lappeenranta University of Technology, Laboratory of Applied Mathematics
Johnathan M. Bardsley
Department of Mathematical Sciences, The University of Montana, Missoula, Montana 59812-0864, USA
Heikki Haario
Department of Mathematics and Physics, Lappeenranta University of Technology; Finnish Meteorological Institute, Helsinki, Finland

ABSTRACT

In the ensemble Kalman filter (EnKF), uncertainty in the state of a dynamical model is represented by samples of the state vector. The samples are propagated forward using the evolution model, and the forecast (prior) mean and covariance matrix are estimated from the ensemble. Data assimilation is carried out by using these estimates in the Kalman filter formulas. The prior is confined to the subspace spanned by the propagated ensemble, whose size is typically much smaller than the dimension of the state space. The resulting rank deficiency of the covariance matrices is problematic: unrealistic correlations often appear between spatially distant points, for instance, and localization or covariance tapering methods are needed to make the approach feasible in practice. In this paper, we present a novel way to implement ensemble Kalman filtering using optimization-based sampling, in which the forecast error covariance has full rank and the need for localization is diminished. The method is based on the randomize then optimize (RTO) technique, in which a sample from a Gaussian distribution is computed by perturbing the data and the prior and solving a quadratic optimization problem. We test our method on two benchmark problems: the 40-dimensional Lorenz '96 model and the 1600-dimensional two-layer quasi-geostrophic model. The results show that the method performs significantly better than the standard EnKF, especially with small ensemble sizes, for which the rank deficiency problems of the EnKF are most pronounced.
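
To make the sampling step concrete, the following is a minimal Python sketch of the randomize then optimize (RTO) idea for a linear-Gaussian update: the data and the prior mean are perturbed with noise drawn from their respective covariances, and the minimizer of the resulting quadratic cost is one sample from the Gaussian posterior. This is an illustrative sketch only; the variable names (xb, B, y, H, R), the dense-matrix solve, and the toy dimensions are our assumptions, not the implementation used in the paper.

import numpy as np

def rto_sample(xb, B, y, H, R, rng):
    # Perturb the prior mean and the observations with draws from their covariances.
    xb_pert = xb + rng.multivariate_normal(np.zeros(xb.size), B)
    y_pert = y + rng.multivariate_normal(np.zeros(y.size), R)
    # Minimize 0.5*(x - xb_pert)' B^-1 (x - xb_pert) + 0.5*(y_pert - H x)' R^-1 (y_pert - H x)
    # via its normal equations; for a linear H the minimizer is an exact Gaussian posterior sample.
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    prec = Binv + H.T @ Rinv @ H                # Hessian of the quadratic cost (posterior precision)
    rhs = Binv @ xb_pert + H.T @ Rinv @ y_pert  # stationarity condition
    return np.linalg.solve(prec, rhs)

# Toy usage: draw a 10-member analysis ensemble for a 40-dimensional state
# with the first 20 components observed (all quantities here are made up).
rng = np.random.default_rng(0)
n, m, N = 40, 20, 10
B, R = np.eye(n), 0.5 * np.eye(m)
H = np.eye(m, n)
xb, y = np.zeros(n), rng.standard_normal(m)
ensemble = np.stack([rto_sample(xb, B, y, H, R, rng) for _ in range(N)])

In a filtering context, one such draw per ensemble member produces the analysis ensemble; the abstract's key point is that the forecast error covariance entering this optimization can be given full rank, which reduces the spurious long-range correlations that a rank-deficient ensemble covariance produces and hence the need for localization.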


Articles with similar content:

STOCHASTIC MULTIOBJECTIVE OPTIMIZATION ON A BUDGET: APPLICATION TO MULTIPASS WIRE DRAWING WITH QUANTIFIED UNCERTAINTIES
International Journal for Uncertainty Quantification, Vol.8, 2018, issue 3
Pramod Zagade, Jitesh Panchal, B. P. Gautham, Ilias Bilionis, Piyush Pandita, Amol Joshi
CLUSTERING-BASED COLLOCATION FOR UNCERTAINTY PROPAGATION WITH MULTIVARIATE DEPENDENT INPUTS
International Journal for Uncertainty Quantification, Vol.8, 2018, issue 1
D. T. Crommelin, Anne W. Eggels, J. A. S. Witteveen
ASYMPTOTICALLY INDEPENDENT MARKOV SAMPLING: A NEW MARKOV CHAIN MONTE CARLO SCHEME FOR BAYESIAN INFERENCE
International Journal for Uncertainty Quantification, Vol.3, 2013, issue 5
James L. Beck, Konstantin M. Zuev
UNCERTAINTIES ASSESSMENT IN GLOBAL SENSITIVITY INDICES ESTIMATION FROM METAMODELS
International Journal for Uncertainty Quantification, Vol.4, 2014, issue 1
Maelle Nodet, Alexandre Janon, Clementine Prieur
GRADIENT-BASED STOCHASTIC OPTIMIZATION METHODS IN BAYESIAN EXPERIMENTAL DESIGN
International Journal for Uncertainty Quantification, Vol.4, 2014, issue 6
Youssef Marzouk, Xun Huan