1. Data measures that characterise classification problems. Van der Walt, Christiaan Maarten (29 August 2008)
We have a wide range of classifiers today that are employed in numerous applications, from credit scoring to speech processing, with great technical and commercial success. No classifier, however, exists that will outperform all other classifiers on all classification tasks, and the process of classifier selection is still mainly one of trial and error. The optimal classifier for a classification task is determined by the characteristics of the data set employed; understanding the relationship between data characteristics and the performance of classifiers is therefore crucial to the process of classifier selection. Empirical and theoretical approaches have been employed in the literature to define this relationship, but none has been very successful in accurately predicting or explaining classifier performance on real-world data. We use theoretical properties of classifiers to identify data characteristics that influence classifier performance; these data properties guide us in the development of measures that describe the relationship between data characteristics and classifier performance. We employ these data measures on real-world and artificial data to construct a meta-classification system. The purpose of this meta-classifier is twofold: (1) to predict the classification performance of real-world classification tasks, and (2) to explain these predictions in order to gain insight into the properties of real-world data. We show that these data measures can be employed successfully to predict the classification performance of real-world data sets; these predictions are accurate in some instances, but unpredictable behaviour remains in others. We illustrate that these data measures give valuable insight into the properties and data structures of real-world data; these insights are especially valuable for high-dimensional classification problems. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
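The core idea of this abstract, computing measurable properties of a data set and using them to predict which classifier will perform well, can be illustrated with a small sketch. The measures below (sample and feature counts, class imbalance, mean absolute feature correlation), the candidate classifiers, and the example data sets are generic stand-ins for illustration only, not the specific measures or meta-classification system developed in the dissertation.

```python
# Illustrative sketch of a meta-classification setup: compute simple data
# measures per task and record which base classifier performs best on it.
import numpy as np
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def data_measures(X, y):
    """Compute simple meta-features that characterise a classification task."""
    n, d = X.shape
    class_counts = np.bincount(y)
    corr = np.corrcoef(X, rowvar=False)
    mean_abs_corr = np.mean(np.abs(corr[np.triu_indices(d, k=1)]))
    return {
        "n_samples": n,
        "n_features": d,
        "n_classes": len(class_counts),
        "class_imbalance": class_counts.max() / class_counts.min(),
        "mean_abs_feature_corr": round(float(mean_abs_corr), 3),
    }

base_classifiers = {
    "1-NN": KNeighborsClassifier(n_neighbors=1),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

# One row of data measures per task, labelled with the best-performing base
# classifier (estimated by 5-fold cross-validation).
for name, loader in [("iris", load_iris), ("wine", load_wine), ("cancer", load_breast_cancer)]:
    X, y = loader(return_X_y=True)
    measures = data_measures(X, y)
    scores = {clf_name: cross_val_score(clf, X, y, cv=5).mean()
              for clf_name, clf in base_classifiers.items()}
    best = max(scores, key=scores.get)
    print(name, measures, "->", best)
```

In a full meta-classification system, rows like these, collected over many real-world and artificial tasks, would form the training set for a meta-classifier that predicts classifier performance from the data measures alone.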
2. Development and Evaluation of a Flexible Framework for the Design of Autonomous Classifier Systems. Ganapathy, Priya (22 December 2009)
No description available.
3. Boundary uncertainty-based classifier evaluation. Ha, David (20 September 2019)
We propose a general method for accurately evaluating any classifier model on realistic tasks, one that is theoretically sound despite the finiteness of the available data and practical in terms of computational cost. The difficulty in classifier evaluation arises from the bias of classification-error estimates based only on finite data. We bypass this difficulty by proposing a new evaluation measure called "boundary uncertainty", whose finite-data estimate can be considered a reliable representative of its infinite-data expectation, and we demonstrate the potential of the approach on three classifier models and thirteen datasets. / Doctor of Philosophy in Engineering / Doshisha University
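The abstract does not spell out how boundary uncertainty is computed, so the sketch below only illustrates the problem it addresses: an error estimate computed on the same finite sample that trained the classifier is optimistically biased relative to the error under effectively unlimited data. The two-Gaussian task, the 1-NN classifier, and the sample sizes are illustrative choices, not taken from the thesis.

```python
# Minimal simulation of the finite-data bias problem motivating the work.
# This does NOT implement the proposed boundary-uncertainty measure.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def sample(n):
    """Two overlapping Gaussian classes in 2-D (true Bayes error is non-zero)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 1.0, scale=1.0, size=(n, 2))
    return X, y

resub, true_err = [], []
X_big, y_big = sample(100_000)                  # proxy for the infinite-data error
for _ in range(50):
    X, y = sample(50)                           # small finite training set
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    resub.append(1 - clf.score(X, y))           # resubstitution estimate
    true_err.append(1 - clf.score(X_big, y_big))  # near-true generalisation error

print(f"mean resubstitution error: {np.mean(resub):.3f}")   # close to 0 (biased)
print(f"mean true error:           {np.mean(true_err):.3f}") # noticeably higher
```

The gap between the two printed numbers is the finite-data estimation bias that the abstract refers to.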