International Journal for Uncertainty Quantification

Publishes 6 issues per year

ISSN Print: 2152-5080

ISSN Online: 2152-5099

The Impact Factor measures the average number of citations received in a particular year by papers published in the journal during the two preceding years. 2017 Journal Citation Reports (Clarivate Analytics, 2018) IF: 1.7

To calculate the five-year Impact Factor, citations received in 2017 to papers published in the previous five years are counted and divided by the number of source items published in those five years. 2017 Journal Citation Reports (Clarivate Analytics, 2018) 5-Year IF: 1.9

The Immediacy Index is the average number of times an article is cited in the year it is published; it indicates how quickly articles in a journal are cited. Immediacy Index: 0.5

The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, rates the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to contribute more to the Eigenfactor than those from poorly ranked journals. Eigenfactor: 0.0007

The Journal Citation Indicator (JCI) is a single measurement of the field-normalized citation impact of journals in the Web of Science Core Collection across disciplines; the key point is that the metric is normalized and cross-disciplinary. JCI: 0.5

SJR: 0.584  SNIP: 0.676  CiteScore™: 3  H-Index: 25
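As a compact restatement of the two-year Impact Factor calculation described above (a sketch of the standard definition, not Clarivate's exact accounting rules):

\[
\mathrm{IF}_{2017} \;=\; \frac{\text{citations received in 2017 by items published in 2015 and 2016}}{\text{number of citable items published in 2015 and 2016}}
\]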

A MULTI-FIDELITY NEURAL NETWORK SURROGATE SAMPLING METHOD FOR UNCERTAINTY QUANTIFICATION

Volume 10, Issue 4, 2020, pp. 315-332
DOI: 10.1615/Int.J.UncertaintyQuantification.2020031957

ABSTRACT

We propose a multi-fidelity neural network surrogate sampling method for the uncertainty quantification of physical/biological systems described by ordinary or partial differential equations. We first generate a set of low/high-fidelity data by low/high-fidelity computational models, e.g., using coarser/finer discretizations of the governing differential equations. We then construct a two-level neural network, where a large set of low-fidelity data is utilized to accelerate the construction of a high-fidelity surrogate model built from a small set of high-fidelity data. The constructed high-fidelity surrogate model is then embedded in a Monte Carlo sampling framework. The proposed algorithm combines the approximation power of neural networks with the advantages of Monte Carlo sampling within a multi-fidelity framework. We present two numerical examples to demonstrate the accuracy and efficiency of the proposed method, and show that dramatic savings in computational cost may be achieved when output predictions are required to be accurate within small tolerances.
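For readers who want to experiment with the idea, the following is a minimal sketch of the two-level construction outlined in the abstract, written with Keras [41]. The toy low/high-fidelity models, network sizes, and sample counts are illustrative assumptions rather than the authors' setup: a low-fidelity network is trained on many cheap samples, a second network learns the map from (input, low-fidelity prediction) to the high-fidelity output from a few expensive samples, and the composite surrogate is then sampled by plain Monte Carlo.

```python
# Hedged sketch of a two-level multi-fidelity neural network surrogate
# used inside a Monte Carlo sampler. The toy low/high-fidelity "solvers"
# and all hyperparameters below are illustrative assumptions.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)

def low_fidelity(x):   # cheap, coarse model (assumed for illustration)
    return np.sin(2 * np.pi * x)

def high_fidelity(x):  # expensive, fine model (assumed for illustration)
    return np.sin(2 * np.pi * x) + 0.3 * x**2

# Level 1: many low-fidelity samples -> low-fidelity surrogate.
x_lf = rng.uniform(0, 1, size=(2000, 1))
y_lf = low_fidelity(x_lf)
nn_lf = keras.Sequential([
    keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dense(1),
])
nn_lf.compile(optimizer="adam", loss="mse")
nn_lf.fit(x_lf, y_lf, epochs=200, verbose=0)

# Level 2: few high-fidelity samples; learn the map
# (input, low-fidelity prediction) -> high-fidelity output.
x_hf = rng.uniform(0, 1, size=(30, 1))
y_hf = high_fidelity(x_hf)
feat_hf = np.hstack([x_hf, nn_lf.predict(x_hf, verbose=0)])
nn_hf = keras.Sequential([
    keras.layers.Dense(16, activation="tanh", input_shape=(2,)),
    keras.layers.Dense(1),
])
nn_hf.compile(optimizer="adam", loss="mse")
nn_hf.fit(feat_hf, y_hf, epochs=500, verbose=0)

# Monte Carlo sampling on the cheap composite surrogate.
x_mc = rng.uniform(0, 1, size=(100_000, 1))
y_mc = nn_hf.predict(np.hstack([x_mc, nn_lf.predict(x_mc, verbose=0)]), verbose=0)
print("estimated mean:", y_mc.mean(), "estimated std:", y_mc.std())
```

In the paper's setting, the low- and high-fidelity data would come from coarse and fine discretizations of the governing differential equations, and the Monte Carlo loop would estimate statistics of the quantity of interest from the trained surrogate.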

References
  1. Sullivan, T.J., Introduction to Uncertainty Quantification, New York: Springer, 2015.

  2. Ghanem, R.G. and Spanos, P.D., Stochastic Finite Elements: A Spectral Approach, New York: Springer, 1991.

  3. Nobile, F., Tempone, R., and Webster, C.G., A Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data, SIAM J. Numer. Anal., 46:2309-2345, 2008.

  4. Xiu, D. and Hesthaven, J.S., High-Order Collocation Methods for Differential Equations with Random Inputs, SIAM J. Sci. Comput., 27:1118-1139, 2005.

  5. Motamed, M., Nobile, F., and Tempone, R., A Stochastic Collocation Method for the Second Order Wave Equation with a Discontinuous Random Speed, Numer. Math., 123:493-536, 2013.

  6. Motamed, M., Nobile, F., and Tempone, R., Analysis and Computation of the Elastic Wave Equation with Random Coefficients, Comput. Math. Appl., 70:2454-2473, 2015.

  7. Fishman, G.S., Monte Carlo: Concepts, Algorithms, and Applications, New York: Springer-Verlag, 1996.

  8. Giles, M.B., Multilevel Monte Carlo Path Simulation, Oper. Res., 56:607-617, 2008.

  9. Cliffe, K.A., Giles, M.B., Scheichl, R., and Teckentrup, A.L., Multilevel Monte Carlo Methods and Applications to Elliptic PDEs with Random Coefficients, Comput. Visual Sci., 14:3-15, 2011.

  10. Motamed, M. and Appelo, D., A Multi-Order Discontinuous Galerkin Monte Carlo Method for Hyperbolic Problems with Stochastic Parameters, SIAM J. Numer. Anal., 56:448-468, 2018.

  11. Haji-Ali, A.L., Nobile, F., and Tempone, R., Multi-Index Monte Carlo: When Sparsity Meets Sampling, Numer. Math., 132:767-806, 2016.

  12. Hou, T.Y. and Wu, X., Quasi-Monte Carlo Methods for Elliptic PDEs with Random Coefficients and Applications, J. Comput. Phys., 230:3668-3694, 2011.

  13. Kuo, F.Y., Schwab, C., and Sloan, I.H., Multi-Level Quasi-Monte Carlo Finite Element Methods for a Class of Elliptic PDEs with Random Coefficients, Found. Comput. Math., 15:411-449, 2015.

  14. Speight, A., A Multilevel Approach to Control Variates, J. Comput. Finance, 12:3-27, 2009.

  15. Nobile, F. and Tesei, F., A Multi Level Monte Carlo Method with Control Variate for Elliptic PDEs with Log-Normal Coefficients, Stochastic PDEs: Anal. Comput., 3:398-444, 2015.

  16. Gorodetsky, A.A., Geraci, G., Eldred, M., and Jakeman, J.D., A Generalized Approximate Control Variate Framework for Multifidelity Uncertainty Quantification, Stat. Comput., arXiv:1811.04988, 2019.

  17. Schmidhuber, J., Deep Learning in Neural Networks: An Overview, Neural Networks, 61:85-117, 2015.

  18. Fernandez-Godino, M.G., Park, C., Kim, N.H., and Haftka, R.T., Review of Multi-Fidelity Models, Stat. Appl., arXiv:1609.07196, 2016.

  19. Jakeman, J.D., Eldred, M., Geraci, G., and Gorodetsky, A., Adaptive Multi-Index Collocation for Uncertainty Quantification and Sensitivity Analysis, Math. Numer. Anal., arXiv:1909.13845, 2019.

  20. Aydin, R.C., Braeu, F.A., and Cyron, C.J., General Multi-Fidelity Framework for Training Artificial Neural Networks with Computational Models, Front. Mater., 6:1-14, 2019.

  21. Liu, D. and Wang, Y., Multi-Fidelity Physics-Constrained Neural Network and Its Application in Materials Modeling, J. Mech. Des., 141:121403, 2019.

  22. Meng, X. and Karniadakis, G.E., A Composite Neural Network That Learns from Multi-Fidelity Data: Application to Function Approximation and Inverse PDE Problems, J. Comput. Phys., 401:109020, 2020.

  23. Perdikaris, P., Raissi, M., Damianou, A., Lawrence, N., and Karniadakis, G.E., Nonlinear Information Fusion Algorithms for Data-Efficient Multi-Fidelity Modelling, Proc. R. Soc. A, 473:20160751, 2017.

  24. Robbins, H. and Monro, S., A Stochastic Approximation Method, Ann. Math. Stat., 22:400-407, 1951.

  25. Kiefer, J. and Wolfowitz, J., Stochastic Estimation of the Maximum of a Regression Function, Ann. Math. Stat., 23:462-466, 1952.

  26. Kingma, D.P. and Ba, J., Adam: A Method for Stochastic Optimization, Comput. Sci. Mach. Learn., arXiv:1412.6980v9, 2017.

  27. Rumelhart, D.E., Hinton, G.E., and Williams, R.J., Learning Representations by Back-Propagating Errors, Nature, 323:533-536, 1986.

  28. Bottou, L., Curtis, F.E., and Nocedal, J., Optimization Methods for Large-Scale Machine Learning, SIAM Rev., 60:223-311, 2018.

  29. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R., Dropout: A Simple Way to Prevent Neural Networks from Overfitting, J. Mach. Learn. Res., 15:1929-1958, 2014.

  30. Bengio, Y., Practical Recommendations for Gradient-Based Training of Deep Architectures, in Neural Networks: Tricks of the Trade, G. Montavon, G.B. Orr, and K.-R. Muller, Eds., Berlin: Springer, pp. 437-478, 2012.

  31. Goodfellow, I.J., Bengio, Y., and Courville, A., Deep Learning, Cambridge: The MIT Press, 2016.

  32. Hornik, K., Stinchcombe, M., and White, H., Multilayer Feedforward Networks Are Universal Approximators, Neural Networks, 2:359-366, 1989.

  33. Mhaskar, H.N. and Poggio, T., Deep vs. Shallow Networks: An Approximation Theory Perspective, Anal. Appl., 14:829-848, 2016.

  34. Yarotsky, D., Error Bounds for Approximations with Deep ReLU Networks, Neural Networks, 94:103-114, 2017.

  35. Petersen, P. and Voigtlaender, F., Optimal Approximation of Piecewise Smooth Functions Using Deep ReLU Neural Networks, Neural Networks, 108:296-330, 2018.

  36. Montanelli, H. and Du, Q., New Error Bounds for Deep ReLU Networks Using Sparse Grids, SIAM J. Math. Data Sci., 1:78-92, 2019.

  37. Schwab, C. and Zech, J., Deep Learning in High Dimension: Neural Network Expression Rates for Generalized Polynomial Chaos Expansions in UQ, Anal. Appl., 17:19-55, 2019.

  38. Bolcskei, H., Grohs, P., Kutyniok, G., and Petersen, P., Optimal Approximation with Sparsely Connected Deep Neural Networks, SIAM J. Math. Data Sci., 1:8-45, 2019.

  39. Guhring, I., Kutyniok, G., and Petersen, P., Error Bounds for Approximations with Deep ReLU Neural Networks in W^{s,p} Norms, Math. Funct. Anal., arXiv:1902.07896, 2019.

  40. Akter, M.A., A Deep Learning Approach to Uncertainty Quantification, Master's thesis, University of New Mexico, Albuquerque, NM, 2019.

  41. Chollet, F., Keras, accessed from https://keras.io, 2015.

CITED BY
  1. Chen Jie, Gao Yi, Liu Yongming, Convolutional Neural Networks for Multi-fidelity Data Aggregation, AIAA SCITECH 2022 Forum, 2022. Crossref

  2. Guo Mengwu, Manzoni Andrea, Amendt Maurice, Conti Paolo, Hesthaven Jan S., Multi-fidelity regression using artificial neural networks: Efficient approximation of parameter-dependent output quantities, Computer Methods in Applied Mechanics and Engineering, 389, 2022. Crossref

  3. Ramu Palaniappan, Thananjayan Pugazhenthi, Acar Erdem, Bayrak Gamze, Park Jeong Woo, Lee Ikjin, A survey of machine learning techniques in structural and multidisciplinary optimization, Structural and Multidisciplinary Optimization, 65, 9, 2022. Crossref

  4. Chen Jie, Meng Changyu, Gao Yi, Liu Yongming, Multi-fidelity neural optimization machine for Digital Twins, Structural and Multidisciplinary Optimization, 65, 12, 2022. Crossref
