International Journal for Uncertainty Quantification

Published 6 issues per year

ISSN Print: 2152-5080

ISSN Online: 2152-5099

Impact Factor (2017 Journal Citation Reports, Clarivate Analytics, 2018): 1.7
The Impact Factor measures the average number of citations received in a particular year by papers published in the journal during the two preceding years.

5-Year Impact Factor (2017 Journal Citation Reports, Clarivate Analytics, 2018): 1.9
To calculate the five-year Impact Factor, citations received in 2017 to papers published in the previous five years are divided by the number of source items published in those five years.

Immediacy Index: 0.5
The Immediacy Index is the average number of times an article is cited in the year it is published, indicating how quickly articles in the journal are cited.

Eigenfactor: 0.0007
The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, rates the total importance of a scientific journal: journals are scored by the number of incoming citations, with citations from highly ranked journals weighted to contribute more than those from poorly ranked journals.

Journal Citation Indicator (JCI): 0.5
The JCI is a single, normalized, cross-disciplinary measurement of the field-normalized citation impact of journals in the Web of Science Core Collection.

SJR: 0.584
SNIP: 0.676
CiteScore™: 3
H-Index: 25


SURROGATE PREPOSTERIOR ANALYSES FOR PREDICTING AND ENHANCING IDENTIFIABILITY IN MODEL CALIBRATION

Volume 5, Issue 4, 2015, pp. 341-359
DOI: 10.1615/Int.J.UncertaintyQuantification.2015012627

ABSTRACT

In physics-based engineering modeling and uncertainty quantification, distinguishing the effects of two main sources of uncertainty (calibration parameter uncertainty and model discrepancy) is challenging. Previous research has shown that identifiability, which is quantified by the posterior covariance of the calibration parameters, can sometimes be improved by experimentally measuring multiple responses of the system that share a mutual dependence on a common set of calibration parameters. In this paper, we address the issue of how to select the most appropriate subset of responses to measure experimentally, to best enhance identifiability. We use a preposterior analysis approach that, prior to conducting physical experiments but after conducting computer simulations, can predict the degree of identifiability that will result using different subsets of responses to measure experimentally. It predicts identifiability via the preposterior covariance from a modular Bayesian Monte Carlo analysis of a multi-response spatial random process (SRP) model. Furthermore, to handle the computational challenge in preposterior analysis, we propose a surrogate preposterior analysis based on Fisher information of the calibration parameters. The proposed methods are applied to a simply supported beam example to select two out of six responses to best improve identifiability. The estimated preposterior covariance is compared to the actual posterior covariance to demonstrate the effectiveness of the methods.
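The idea of ranking response subsets by Fisher information can be illustrated with a minimal sketch. The sensitivity matrix, noise variance, and D-optimality (determinant) criterion below are illustrative assumptions, not the paper's actual beam model or SRP formulation: under a linearized Gaussian model, a subset's Fisher information matrix is J'J/σ², and a larger determinant suggests a smaller (pre)posterior covariance, i.e., better identifiability.

```python
import itertools
import numpy as np

# Hypothetical sensitivity matrix: rows are the 6 candidate responses,
# columns are the 2 calibration parameters; entry J[i, j] approximates
# d(response_i)/d(theta_j), e.g., estimated from computer-simulation surrogates.
rng = np.random.default_rng(0)
J = rng.normal(size=(6, 2))
noise_var = 0.1  # assumed common observation-noise variance

def fisher_information(subset):
    """Fisher information of the calibration parameters when only the
    responses in `subset` are measured (linearized Gaussian model)."""
    Js = J[list(subset), :]
    return Js.T @ Js / noise_var

# Rank all 2-of-6 response subsets by det(FIM), largest first.
ranked = sorted(
    itertools.combinations(range(6), 2),
    key=lambda s: np.linalg.det(fisher_information(s)),
    reverse=True,
)
best_subset = ranked[0]
```

In this toy version the surrogate criterion replaces the full preposterior Monte Carlo analysis: each subset costs one small matrix product and determinant, rather than a Bayesian calibration run.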

CITED BY
  1. Cao Fang, Ba Shan, Brenneman William A., Joseph V. Roshan, Model Calibration With Censored Data, Technometrics, 60, 2, 2018. Crossref

  2. Jiang Zhen, Arendt Paul D., Apley Daniel W., Chen Wei, Multi-response Approach to Improving Identifiability in Model Calibration, in Handbook of Uncertainty Quantification, 2017. Crossref

  3. Jiang Zhen, Arendt Paul D., Apley Daniel W., Chen Wei, Multi-response Approach to Improving Identifiability in Model Calibration, in Handbook of Uncertainty Quantification, 2015. Crossref

  4. Wu Xu, Shirvan Koroush, Kozlowski Tomasz, Demonstration of the relationship between sensitivity and identifiability for inverse uncertainty quantification, Journal of Computational Physics, 396, 2019. Crossref

  5. van Beek Anton, Tao Siyu, Plumlee Matthew, Apley Daniel W., Chen Wei, Integration of Normative Decision-Making and Batch Sampling for Global Metamodeling, Journal of Mechanical Design, 142, 3, 2020. Crossref

  6. van Beek Anton, Ghumman Umar Farooq, Munshi Joydeep, Tao Siyu, Chien TeYu, Balasubramanian Ganesh, Plumlee Matthew, Apley Daniel, Chen Wei, Scalable Adaptive Batch Sampling in Simulation-Based Design With Heteroscedastic Noise, Journal of Mechanical Design, 143, 3, 2021. Crossref

  7. Giuntoli Andrea, Hansoge Nitin K., van Beek Anton, Meng Zhaoxu, Chen Wei, Keten Sinan, Systematic coarse-graining of epoxy resins with machine learning-informed energy renormalization, npj Computational Materials, 7, 1, 2021. Crossref
