International Journal for Uncertainty Quantification

Publishes 6 issues per year

ISSN Print: 2152-5080

ISSN On-line: 2152-5099

The Impact Factor measures the average number of citations received in a particular year by papers published in the journal during the two preceding years. 2017 Journal Citation Reports (Clarivate Analytics, 2018) IF: 1.7

To calculate the five-year Impact Factor, citations received in 2017 by papers published in the previous five years are counted and divided by the number of source items published in those five years. 2017 Journal Citation Reports (Clarivate Analytics, 2018) 5-Year IF: 1.9

The Immediacy Index is the average number of times an article is cited in the year it is published; it indicates how quickly articles in a journal are cited. Immediacy Index: 0.5

The Eigenfactor score, developed by Jevin West and Carl Bergstrom at the University of Washington, is a rating of the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to contribute more to the Eigenfactor than those from poorly ranked journals. Eigenfactor: 0.0007

The Journal Citation Indicator (JCI) is a single measurement of the field-normalized citation impact of journals in the Web of Science Core Collection across disciplines; the metric is both normalized and cross-disciplinary. JCI: 0.5

SJR: 0.584 | SNIP: 0.676 | CiteScore™: 3 | H-Index: 25
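Restated as a formula (our restatement of the definition above, not a formula given on the page), the two-year Impact Factor for report year 2017 is

\[
\mathrm{IF}_{2017} \;=\; \frac{C_{2017}(2015) + C_{2017}(2016)}{N_{2015} + N_{2016}},
\]

where \(C_{2017}(y)\) is the number of citations received in 2017 by items the journal published in year \(y\), and \(N_y\) is the number of citable items published in year \(y\); the five-year variant uses the years 2012–2016 in place of 2015–2016.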

MULTIVARIATE ANALYSIS OF EXTRAPOLATING TIME-INVARIANT DATA WITH UNCERTAINTY

Volume 9, Issue 6, 2019, pp. 569-587
DOI: 10.1615/Int.J.UncertaintyQuantification.2019028125

ABSTRACT

Data analysis deciphers phenomena and system behaviors from large numbers of experimental realizations. Transforming these massive quantities of raw data into knowledge is made possible by continuously improving computing techniques. In science and engineering, particular interest lies in surrogate models for system behavior prediction and data extrapolation. These models can, however, be under- or over-fitted when confronted with a complex dataset or one embedded with uncertainty. In this paper, we suggest an approach for treating experimental data under uncertainty prior to building its surrogate model. We focus especially on extrapolation as an attempt to estimate the true underlying phenomenon. We quantify the amount of uncertainty through eigenvalues, capture the behavior of the data through its covariance matrix, and reproduce an almost identical dataset whose particularity is perfectly correlated inputs and output. This new dataset is then used as the basis for creating a surrogate model. Our approach shows consistency and a clear opportunity to obtain better predictions under uncertainty, as it focuses on the overall dataset's behavior while staying faithful to each data point.
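The abstract does not spell out the algorithm; the following NumPy sketch shows one plausible reading of the pipeline it describes (covariance of the joint input–output data, eigendecomposition, a reconstructed dataset with perfectly correlated input and output, then a surrogate fit on that dataset). The toy one-dimensional dataset, the rank-1 reconstruction, and the linear surrogate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical 1-D example: noisy samples of an underlying trend.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.shape)

# Stack input and output, centre the data, and form the covariance matrix.
data = np.column_stack([x, y])
mean = data.mean(axis=0)
centered = data - mean
cov = np.cov(centered, rowvar=False)

# Eigendecomposition: the eigenvalues indicate how much variance (read here
# as uncertainty) each direction carries; the leading eigenvector captures
# the dominant joint behaviour of input and output.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Rank-1 reconstruction: project every sample onto the leading eigenvector.
# The reconstructed dataset has perfectly correlated input and output.
scores = centered @ eigvecs[:, [0]]
reconstructed = scores @ eigvecs[:, [0]].T + mean
x_new, y_new = reconstructed[:, 0], reconstructed[:, 1]

# Fit a simple surrogate on the reconstructed data and extrapolate with it.
coeffs = np.polyfit(x_new, y_new, deg=1)
surrogate = np.poly1d(coeffs)
print("prediction at x = 12:", surrogate(12.0))
```

Projecting onto the leading eigenvector is what makes the reconstructed input and output perfectly correlated in this sketch; a higher-order or different surrogate could be substituted for the linear fit without changing the preceding treatment step.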

