International Journal for Uncertainty Quantification

Published 6 issues per year

ISSN Print: 2152-5080

ISSN Online: 2152-5099

Journal metrics (2017 Journal Citation Reports, Clarivate Analytics, 2018):
IF: 1.7
5-Year IF: 1.9
Immediacy Index: 0.5
Eigenfactor: 0.0007
JCI: 0.5
SJR: 0.584
SNIP: 0.676
CiteScore™: 3
H-Index: 25


KERNEL OPTIMIZATION FOR LOW-RANK MULTIFIDELITY ALGORITHMS

Volume 11, Issue 1, 2021, pp. 31-54
DOI: 10.1615/Int.J.UncertaintyQuantification.2020033212

ABSTRACT

One of the major challenges for low-rank multifidelity (MF) approaches is the assumption that low-fidelity (LF) and high-fidelity (HF) models admit "similar" low-rank kernel representations. Low-rank MF methods have traditionally attempted to exploit low-rank representations of linear kernels, which are kernel functions of the form K(u, v) = v^T u for vectors u and v. However, such linear kernels may not be able to capture low-rank behavior, and they may admit LF and HF kernels that are not similar. Such a situation renders a naive approach to low-rank MF procedures ineffective. In this paper, we propose a novel approach for selecting a near-optimal kernel function for use in low-rank MF methods. The proposed framework is a two-step strategy wherein (1) hyperparameters of a library of kernel functions are optimized, and (2) a particular combination of the optimized kernels is selected, either through a convex mixture (additive kernel approach) or through a data-driven optimization (adaptive kernel approach). Both resulting methods for this generalized framework use only the available inexpensive low-fidelity data, so no evaluation of the high-fidelity simulation model is needed until a kernel is chosen. These proposed approaches are tested on five nontrivial real-world problems: multifidelity surrogate modeling for one- and two-species molecular systems, the gravitational many-body problem, associating polymer networks, plasmonic nanoparticle arrays, and incompressible flow in channels with stenosis. The results of these numerical experiments demonstrate the numerical stability and efficiency of both proposed kernel selection procedures, as well as the high accuracy of the resulting predictive models for estimating quantities of interest. Comparisons against standard linear kernel procedures also demonstrate the increased accuracy of the optimized kernel approaches.
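The two-step strategy described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the kernel library (an RBF kernel plus the linear kernel K(u, v) = v^T u), the grid-search optimizer, the holdout fitting criterion, and the synthetic LF response are all assumptions made for demonstration. Note that only LF data is used in both steps, mirroring the paper's claim that no HF evaluations are needed until a kernel is chosen.

```python
import numpy as np

def linear_kernel(U, V):
    # Traditional low-rank MF choice: K(u, v) = v^T u.
    return U @ V.T

def rbf_kernel(U, V, gamma):
    # Gaussian (RBF) kernel with bandwidth hyperparameter gamma.
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def holdout_error(kernel, Xtr, ytr, Xte, yte, reg=1e-6):
    # Fit a regularized kernel interpolant on LF training samples and
    # measure its prediction error on held-out LF samples.
    K = kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + reg * np.eye(len(Xtr)), ytr)
    return np.linalg.norm(kernel(Xte, Xtr) @ alpha - yte)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))                 # LF parameter samples
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])        # stand-in LF response
Xtr, ytr, Xte, yte = X[:40], y[:40], X[40:], y[40:]

# Step 1: optimize hyperparameters of each kernel in the library
# (simple grid search here; any optimizer could be substituted).
gammas = [0.1, 0.5, 1.0, 5.0, 10.0]
best_gamma = min(gammas, key=lambda g: holdout_error(
    lambda A, B: rbf_kernel(A, B, g), Xtr, ytr, Xte, yte))

# Step 2 (additive variant): choose the convex mixture weight w of the
# optimized RBF and linear kernels that best explains the LF data.
def mixed(w):
    return lambda A, B: (w * rbf_kernel(A, B, best_gamma)
                         + (1 - w) * linear_kernel(A, B))

w_star = min(np.linspace(0, 1, 21),
             key=lambda w: holdout_error(mixed(w), Xtr, ytr, Xte, yte))
print("gamma* =", best_gamma, " w* =", w_star)
```

Because w = 0 (pure linear kernel) is in the search grid, the selected mixture can never do worse than the linear kernel on the LF criterion, which is the intuition behind the paper's reported accuracy gains over standard linear-kernel procedures.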

REFERENCES
  1. Razi, M., Narayan, A., Kirby, R.M., and Bedrov, D., Fast Predictive Models based on Multi-Fidelity Sampling of Properties in Molecular Dynamics Simulations, Comput. Mater. Sci., 152(C):125-133, 2018.

  2. Narayan, A., Gittelson, C., and Xiu, D., A Stochastic Collocation Algorithm with Multifidelity Models, SIAM J. Sci. Comput., 36(2):A495-A521, 2014.

  3. Zhu, X., Narayan, A., and Xiu, D., Computational Aspects of Stochastic Collocation with Multifidelity Models, SIAM/ASA J. Uncertainty Quantif., 2(1):444-463, 2014.

  4. Hampton, J., Fairbanks, H.R., Narayan, A., and Doostan, A., Practical Error Bounds for a Non-Intrusive Bi-Fidelity Approach to Parametric/Stochastic Model Reduction, J. Comput. Phys., 368:315-332, 2018.

  5. Skinner, R., Doostan, A., Peters, E., Evans, J., and Jansen, K.E., An Evaluation of Bi-Fidelity Modeling Efficiency on a General Family of NACA Airfoils, in Proc. of 35th AIAA Applied Aerodynamics Conf., p. 3260, 2017.

  6. Jofre, L., Geraci, G., Fairbanks, H., Doostan, A., and Iaccarino, G., Multi-Fidelity Uncertainty Quantification of Irradiated Particle-Laden Turbulence, Comput. Phys., arXiv:1801.06062, 2018.

  7. Allaire, D. and Willcox, K., A Mathematical and Computational Framework for Multifidelity Design and Analysis with Computer Models, Int. J. Uncertainty Quantif., 4(1):1-20, 2014.

  8. Lam, R., Allaire, D.L., and Willcox, K.E., Multifidelity Optimization Using Statistical Surrogate Modeling for Non-Hierarchical Information Sources, in Proc. of 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conf., p. 0143, 2015.

  9. Razi, M., Narayan, A., and Kirby, R.M., Fast Predictive Multi-Fidelity Prediction with Models of Quantized Fidelity Levels, J. Comput. Phys., 376:992-1008, 2019.

  10. Giles, M.B., Multilevel Monte Carlo Methods, Acta Numer., 24:259-328, 2015.

  11. Fernández-Godino, M.G., Park, C., Kim, N.H., and Haftka, R.T., Review of Multi-Fidelity Models, Stat. Appl., arXiv:1609.07196, 2016.

  12. Peherstorfer, B., Willcox, K., and Gunzburger, M., Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization, SIAM Rev., 60(3):550-591, 2018.

  13. Zhu, X., Linebarger, E.M., and Xiu, D., Multi-Fidelity Stochastic Collocation Method for Computation of Statistical Moments, J. Comput. Phys., 341:386-396, 2017.

  14. Keshavarzzadeh, V., Kirby, R., and Narayan, A., Convergence Acceleration for Time-Dependent Parametric Multifidelity Models, SIAM J. Numer. Anal., 57(3):1344-1368, 2019.

  15. Perdikaris, P., Venturi, D., Royset, J., and Karniadakis, G., Multi-Fidelity Modelling via Recursive Co-Kriging and Gaussian-Markov Random Fields, Proc. R. Soc. A, 471(2179):20150018, 2015.

  16. Perdikaris, P., Venturi, D., and Karniadakis, G.E., Multifidelity Information Fusion Algorithms for High-Dimensional Systems and Massive Data Sets, SIAM J. Sci. Comput., 38(4):B521-B538, 2016.

  17. Mehmani, A., Chowdhury, S., Meinrenken, C., and Messac, A., Concurrent Surrogate Model Selection (COSMOS): Optimizing Model Type, Kernel Function, and Hyper-Parameters, Struct. Multidisc. Optim., 57(3):1093-1114, 2018.

  18. Anderson, D. and Gu, M., An Efficient, Sparsity-Preserving, Online Algorithm for Low-Rank Approximation, in Proc. of 34th Int. Conf. on Machine Learning, pp. 156-165, 2017.

  19. Perry, D.J. and Whitaker, R.T., Augmented Leverage Score Sampling with Bounds, in Proc. of Joint European Conf. on Machine Learning and Knowledge Discovery in Databases, New York: Springer, pp. 543-558, 2016.

  20. Lozano, A., Swirszcz, G., and Abe, N., Group Orthogonal Matching Pursuit for Logistic Regression, in Proc. of 14th International Conf. on Artificial Intelligence and Statistics, pp. 452-460, 2011.

  21. Perry, D., Kirby, R., Narayan, A., and Whitaker, R., Allocation Strategies for High Fidelity Models in the Multifidelity Regime, SIAM/ASA J. Uncertainty Quantif., 7(1):203-231, 2019.

  22. Fasshauer, G.F., Meshfree Approximation Methods with Matlab, Singapore: World Scientific Publishing Company, 2007.

  23. Rasmussen, C.E., Gaussian Processes in Machine Learning, in Advanced Lectures on Machine Learning, Berlin: Springer, pp. 63-71, 2004.

  24. Bergstra, J. and Bengio, Y., Random Search for Hyper-Parameter Optimization, J. Mach. Learn. Res., 13(1):281-305, 2012.

  25. Klein, A., Falkner, S., Bartels, S., Hennig, P., and Hutter, F., Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets, in Proc. of 20th Int. Conf. on Artificial Intelligence and Statistics, pp. 528-536, 2017.

  26. Kennedy, J., Particle Swarm Optimization, in Encyclopedia of Machine Learning, Berlin: Springer, pp. 760-766, 2011.

  27. Clerc, M., Particle Swarm Optimization, Vol. 93, Hoboken, NJ: John Wiley & Sons, 2010.

  28. Deepa, S. and Sugumaran, G., Model Order Formulation of a Multivariable Discrete System Using a Modified Particle Swarm Optimization Approach, Swarm Evol. Comput., 1(4):204-212, 2011.

  29. Shahzad, F., Baig, A.R., Masood, S., Kamran, M., and Naveed, N., Opposition-Based Particle Swarm Optimization with Velocity Clamping (OVCPSO), in Advances in Computational Intelligence, Berlin: Springer, pp. 339-348, 2009.

  30. Angeline, P.J., Using Selection to Improve Particle Swarm Optimization, in Proc. of 1998 IEEE World Congress on Computational Intelligence, IEEE, pp. 84-89, 1998.

  31. Angeline, P.J., Evolutionary Optimization versus Particle Swarm Optimization: Philosophy and Performance Differences, in Proc. of International Conf. on Evolutionary Programming, Springer, pp. 601-610, 1998.

  32. Dittmann, I. and Maug, E.G., Biases and Error Measures: How to Compare Valuation Methods, ERIM Report Series, Ref. No. ERS-2006-011-F&A, p. 2006-07, 2008.

  33. Lee, J.G., Computational Materials Science: An Introduction, Boca Raton, FL: CRC Press, 2016.

  34. Trenti, M. and Hut, P., Gravitational N-Body Simulations, Astrophys., arXiv:0806.3950, 2008.

  35. Jacobs, P., List, P., Ludin, M., Weeden, A., and Panoff, R.M., The Blue Waters Student Internship Program: Promoting Competence and Confidence for Next Generation Researchers in High-Performance Computing, in Proc. of Workshop on Education for High-Performance Computing, IEEE Press, pp. 49-55, 2014.

  36. Billen, J., Wilson, M., Rabinovitch, A., and Baljon, A.R., Topological Changes at the Gel Transition of a Reversible Polymeric Network, Europhys. Lett., 87(6):68003, 2009.

  37. Christensen, R., Theory of Viscoelasticity: An Introduction, Cambridge: Academic Press, 1982.

  38. Spielman, D.A. and Srivastava, N., Graph Sparsification by Effective Resistances, SIAM J. Comput., 40(6):1913-1926, 2011.

  39. Guerin, C.A., Mallet, P., and Sentenac, A., Effective-Medium Theory for Finite-Size Aggregates, J. Opt. Soc. Am. A, 23(2):349-358, 2006.

  40. Christofi, A.C., Pinheiro, F.A., and Dal Negro, L., Probing Scattering Resonances of Vogel's Spirals with the Green's Matrix Spectral Method, Opt. Lett., 41(9):1933-1936, 2016.

  41. Razi, M., Wang, R., He, Y., Kirby, R.M., and Dal Negro, L., Optimization of Large-Scale Vogel Spiral Arrays of Plasmonic Nanoparticles, Plasmonics, 14(1):253-261, 2019.

  42. Anderson, D., Tannehill, J.C., and Pletcher, R.H., Computational Fluid Mechanics and Heat Transfer, Boca Raton, FL: CRC Press, 2016.

  43. Cantwell, C.D., Moxey, D., Comerford, A., Bolis, A., Rocco, G., Mengaldo, G., De Grazia, D., Yakovlev, S., Lombard, J.E., and Ekelschot, D., Nektar++: An Open-Source Spectral/hp Element Framework, Comput. Phys. Commun., 192:205-219, 2015.

CITED BY
  1. Penwarden, M., Zhe, S., Narayan, A., and Kirby, R.M., Multifidelity Modeling for Physics-Informed Neural Networks (PINNs), J. Comput. Phys., 451, 2022.
