Measuring the Stability of Results from Supervised Statistical Learning

Stability is a major requirement for drawing reliable conclusions when
interpreting results from supervised statistical learning. In this
paper, we present a general framework for assessing and comparing the
stability of results that can be used in real-world statistical
learning applications or in benchmark studies. We use the framework to
show that stability is a property of both the algorithm and the
data-generating process. In particular, we demonstrate that unstable
algorithms (such as recursive partitioning) can produce stable results
when the functional form of the relationship between the predictors
and the response matches the algorithm. Typical uses of the framework
in practice would be to compare the stability of results generated by
different candidate algorithms for a data set at hand, or to assess the
stability of algorithms in a benchmark study. Code to perform the
stability analyses is provided in the form of an R package.

Series: Research Report Series, Department of Statistics and Mathematics
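The core idea of the abstract can be sketched in a few lines. The following is a hypothetical illustration, not the authors' R package or their exact stability measure: it refits a model on bootstrap resamples and scores instability as the average pointwise standard deviation of the refitted predictions on a fixed grid. A depth-1 regression tree (stump) stands in for recursive partitioning; the data-generating processes (a step function versus a linear trend) are assumptions chosen to mirror the abstract's point that a tree can yield stable results when the functional form matches it.

```python
import numpy as np

# Hypothetical sketch (not the authors' R package): measure the
# stability of a fitted model's results under bootstrap resampling.

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Fit a depth-1 regression tree: choose the split point that
    minimises squared error and predict the mean on each side."""
    best = None
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, xs[i - 1], left.mean(), right.mean())
    _, split, lmean, rmean = best
    return lambda g: np.where(g <= split, lmean, rmean)

def instability(x, y, grid, n_boot=200):
    """Mean pointwise std. dev. of predictions across bootstrap refits:
    lower values indicate more stable results on `grid`."""
    preds = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))       # bootstrap resample
        preds[b] = fit_stump(x[idx], y[idx])(grid)  # refit and predict
    return preds.std(axis=0).mean()

n, grid = 200, np.linspace(0, 1, 101)
x = rng.uniform(0, 1, n)
y_step = (x > 0.5).astype(float) + rng.normal(0, 0.1, n)  # tree-friendly form
y_smooth = x + rng.normal(0, 0.1, n)                      # linear trend

print(instability(x, y_step, grid))    # low: functional form matches the tree
print(instability(x, y_smooth, grid))  # higher: split point varies between refits
```

On the step-function data the refitted stumps agree on a split near 0.5, so the instability score is small; on the linear data the split point drifts between resamples and the score grows, which is the abstract's claim in miniature.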

Identifier: oai:union.ndltd.org:VIENNA/oai:epub.wu-wien.ac.at:5398
Date: 17 January 2017
Creators: Philipp, Michel; Rusch, Thomas; Hornik, Kurt; Strobl, Carolin
Publisher: WU Vienna University of Economics and Business
Source Sets: Wirtschaftsuniversität Wien
Language: English
Detected Language: English
Type: Paper, NonPeerReviewed
Format: application/pdf
Relation: http://statmath.wu.ac.at/, http://epub.wu.ac.at/5398/