
The design and analysis of benchmark experiments

The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms for a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures that take the variability of those point estimates into account have been suggested. Most of the recently proposed inference procedures are based on special variance estimators for the cross-validated performance. We introduce a theoretical framework for inference problems in benchmark experiments and show that standard statistical test procedures can be used to test for differences in the performances. The theory is based on well-defined distributions of performance measures which can be compared with established tests. To demonstrate the usefulness in practice, the theoretical results are applied to benchmark studies in a supervised learning situation based on artificial and real-world data.

Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
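
As a rough illustration of the setting sketched in the abstract (not code from the paper itself), the Python snippet below draws bootstrap learning samples, estimates the misclassification error of two candidate learners on the out-of-bag observations, and compares the resulting performance distributions with a paired t-test. The synthetic data set, the two learners, and the choice of test are assumptions made purely for this example.

    # Illustrative sketch only: benchmark two learners on resampled learning samples
    # and test for a difference in their performance distributions.
    import numpy as np
    from scipy import stats
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, random_state=0)  # assumed toy data
    n, B = len(y), 100                      # B bootstrap learning samples
    err_a, err_b = [], []

    for _ in range(B):
        idx = rng.integers(0, n, n)                 # bootstrap learning sample
        oob = np.setdiff1d(np.arange(n), idx)       # out-of-bag test observations
        for model, errs in ((LogisticRegression(max_iter=1000), err_a),
                            (DecisionTreeClassifier(random_state=0), err_b)):
            model.fit(X[idx], y[idx])
            errs.append(np.mean(model.predict(X[oob]) != y[oob]))

    # Standard paired test on the B performance estimates of the two learners
    t, p = stats.ttest_rel(err_a, err_b)
    print(f"mean error A = {np.mean(err_a):.3f}, "
          f"mean error B = {np.mean(err_b):.3f}, p = {p:.4f}")

The point of the sketch is only that, once performance measures are evaluated on independently drawn learning samples, their empirical distributions can be handed to an established test procedure; the paper develops the theoretical justification for doing so.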

Identifier: oai:union.ndltd.org:VIENNA/oai:epub.wu-wien.ac.at:epub-wu-01_59a
Date: January 2003
Creators: Hothorn, Torsten; Leisch, Friedrich; Zeileis, Achim; Hornik, Kurt
Publisher: SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business
Source Sets: Wirtschaftsuniversität Wien
Language: English
Detected Language: English
Type: Paper, NonPeerReviewed
Format: application/pdf
Relation: http://epub.wu.ac.at/758/
