International Journal for Uncertainty Quantification

Print ISSN: 2152-5080
Online ISSN: 2152-5099

Open Access

DOI: 10.1615/Int.J.UncertaintyQuantification.2020031935
Pages: 55-82

EXTENDING CLASSICAL SURROGATE MODELING TO HIGH DIMENSIONS THROUGH SUPERVISED DIMENSIONALITY REDUCTION: A DATA-DRIVEN APPROACH

Christos Lataniotis
Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland
Stefano Marelli
Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland
Bruno Sudret
Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland

ABSTRACT

Thanks to their versatility, ease of deployment, and high performance, surrogate models have become staple tools in the arsenal of uncertainty quantification (UQ). From local interpolants to global spectral decompositions, surrogates are characterized by their ability to efficiently emulate complex computational models based on a small set of model runs used for training. An inherent limitation of many surrogate models is their susceptibility to the curse of dimensionality, which traditionally limits their applicability to a maximum of O(10²) input dimensions. We present a novel approach to high-dimensional surrogate modeling that is model-, dimensionality reduction-, and surrogate model-agnostic (black box), and can enable the solution of high-dimensional [i.e., up to O(10⁴)] problems. After introducing the general algorithm, we demonstrate its performance by combining Kriging and polynomial chaos expansion surrogates with kernel principal component analysis. In particular, we compare the generalization performance achieved by the resulting surrogates to that of the classical sequential application of dimensionality reduction followed by surrogate modeling, on several benchmark applications comprising an analytical function and two engineering applications of increasing dimensionality and complexity.
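To make the comparison concrete, the sequential baseline mentioned in the abstract can be sketched in a few lines. The following is a minimal illustration using scikit-learn, not the authors' proposed coupled algorithm: kernel PCA compresses the inputs without seeing the model outputs, and a Kriging (Gaussian process) surrogate is then trained in the reduced space. The toy model function, dimensions, sample sizes, and kernel settings are all arbitrary placeholders, not values from the paper.

```python
# Sketch of the classical *sequential* approach (unsupervised dimensionality
# reduction, then surrogate modeling) that the paper uses as a baseline.
# All problem settings below are illustrative placeholders.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

def model(X):
    # Hypothetical high-dimensional model whose output depends on the
    # inputs only through a few directions (a low effective dimension).
    return np.sin(X[:, :10].sum(axis=1)) + 0.1 * X[:, 10:20].sum(axis=1)

d, n_train, n_test = 1000, 200, 1000          # input dimension, sample sizes
X_train = rng.standard_normal((n_train, d))   # small experimental design
y_train = model(X_train)                      # the model runs used for training
X_test = rng.standard_normal((n_test, d))     # independent validation set
y_test = model(X_test)

m = 10  # number of retained latent components (placeholder choice)
surrogate = Pipeline([
    # Step 1: unsupervised compression to m latent variables via kernel PCA;
    # note the kernel hyperparameters are fixed without looking at y.
    ("kpca", KernelPCA(n_components=m, kernel="rbf", gamma=1.0 / d)),
    # Step 2: Kriging (Gaussian process) surrogate trained in the reduced space.
    ("gp", GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=np.ones(m)),
        normalize_y=True)),
])
surrogate.fit(X_train, y_train)

# Relative generalization error on the validation set.
y_pred = surrogate.predict(X_test)
rel_err = np.mean((y_test - y_pred) ** 2) / np.var(y_test)
print(f"relative generalization error: {rel_err:.3f}")
```

In this sequential scheme, the kernel PCA hyperparameters are selected with no reference to the surrogate's accuracy; as the paper's title indicates, the proposed supervised approach instead ties the dimensionality reduction to the generalization performance of the resulting surrogate.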
