21 |
Characterization of Wood Features Using Color, Shape, and Density Parameters / Bond, Brian H. 27 July 1998 (has links)
Automated defect detection methods allow the forest products industry to better utilize its resources by improving yield, reducing labor costs, and allowing minimum lumber grades to be utilized more intelligently. While many sensors and processing methods have been proposed for detecting and classifying wood features, there is a lack of understanding of which parameters best differentiate wood features.
The goal of this research is to demonstrate that an in-depth knowledge of how wood features are represented by color, shape, and density parameters allows more accurate classification methods to be developed. This goal was achieved by describing wood features using parameters derived from color and x-ray images; characterizing the variability and interrelationships of these parameters; determining the effect of resolution and species on these relationships; and determining the importance and contribution of each parameter for differentiating between wood features using a statistical prediction model relating feature types to the parameters. Knots, bark pockets, stain and mineral streak, and clearwood were selected as features from red oak (Quercus rubra), hard maple (Acer saccharum), and Eastern white pine (Pinus strobus). Color (RGB and HSI), shape (eccentricity and roundness), and density (gray-scale values) parameters were measured.
Parameters were measured for each wood feature from images, and parameter differences between feature types were tested using analysis of variance (ANOVA) and Tukey's pairwise comparisons with α = 0.05. Discriminant classifiers were then developed to demonstrate that an in-depth knowledge of how parameters relate between feature types could be used to develop the best possible classification methods. Classifiers developed using this knowledge of parameter relationships were found to provide higher classification accuracies, for all features and species, than those that used all parameters or that relied on variable selection procedures.
It was determined that differences exist between all feature types and that the features can be characterized and classified based on two color means, one color standard deviation, the mean density, and a shape parameter. A reduction in image resolution was determined not to affect the relationships among the parameters. Across species, the intensity of features was found to be related to the intensity of clearwood. The ability to explain classification errors using the knowledge gained about feature parameters was demonstrated; this knowledge could be used to reduce future classification errors.
It was determined that combining parameters collected using multiple sensors increases the classification accuracy of wood features. Shape and density were found not to provide good classification variables when used separately, but were found to contribute to classification when used with other parameters. The ability to differentiate between the feature types examined in this research was found to be equal when using the RGB or HSI colorspace. / Ph. D.
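The classifier-building step described above can be sketched in a few lines. The example below is only an illustration of a linear discriminant classifier trained on color, shape, and density parameters; the parameter names, class means, noise levels, and synthetic data are assumptions, not the thesis's measurements.

# A minimal sketch (not the author's implementation): classify wood features
# from color, shape, and density parameters with a linear discriminant model.
# All parameter values below are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class = 50
classes = ["knot", "bark_pocket", "stain", "clearwood"]

# Hypothetical parameter vector: [R_mean, G_mean, B_mean, R_std, density_mean, roundness]
def simulate(center):
    spread = [8.0, 8.0, 8.0, 3.0, 0.05, 0.08]   # per-parameter noise (assumed)
    return rng.normal(center, spread, size=(n_per_class, len(center)))

X = np.vstack([
    simulate([90, 60, 40, 12, 0.75, 0.80]),   # knots: dark, dense, round
    simulate([110, 80, 60, 15, 0.65, 0.45]),  # bark pockets: elongated
    simulate([140, 110, 90, 18, 0.55, 0.30]), # stain / mineral streak
    simulate([180, 150, 120, 6, 0.50, 0.20]), # clearwood: light, uniform
])
y = np.repeat(classes, n_per_class)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

Restricting X to the parameters found most discriminative (two color means, one color standard deviation, mean density, and a shape parameter) is the kind of comparison the thesis uses to show that knowledge of parameter relationships beats using all parameters.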
|
22 |
Analysis of categorical data with misclassification errors. January 1988 (has links)
by Chun-nam Lau. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1988. / Bibliography: leaves 85-89.
|
23 |
Incremental document clustering for web page classification. January 2000 (has links)
by Wong, Wai-Chiu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 89-94). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgments --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Document Clustering --- p.2 / Chapter 1.2 --- DC-tree --- p.4 / Chapter 1.3 --- Feature Extraction --- p.5 / Chapter 1.4 --- Outline of the Thesis --- p.5 / Chapter 2 --- Related Work --- p.8 / Chapter 2.1 --- Clustering Algorithms --- p.8 / Chapter 2.1.1 --- Partitional Clustering Algorithms --- p.8 / Chapter 2.1.2 --- Hierarchical Clustering Algorithms --- p.10 / Chapter 2.2 --- Document Classification by Examples --- p.11 / Chapter 2.2.1 --- k-NN algorithm - Expert Network (ExpNet) --- p.11 / Chapter 2.2.2 --- Learning Linear Text Classifier --- p.12 / Chapter 2.2.3 --- Generalized Instance Set (GIS) algorithm --- p.12 / Chapter 2.3 --- Document Clustering --- p.13 / Chapter 2.3.1 --- B+-tree-based Document Clustering --- p.13 / Chapter 2.3.2 --- Suffix Tree Clustering --- p.14 / Chapter 2.3.3 --- Association Rule Hypergraph Partitioning Algorithm --- p.15 / Chapter 2.3.4 --- Principal Component Divisive Partitioning --- p.17 / Chapter 2.4 --- Projections for Efficient Document Clustering --- p.18 / Chapter 3 --- Background --- p.21 / Chapter 3.1 --- Document Preprocessing --- p.21 / Chapter 3.1.1 --- Elimination of Stopwords --- p.22 / Chapter 3.1.2 --- Stemming Technique --- p.22 / Chapter 3.2 --- Problem Modeling --- p.23 / Chapter 3.2.1 --- Basic Concepts --- p.23 / Chapter 3.2.2 --- Vector Model --- p.24 / Chapter 3.3 --- Feature Selection Scheme --- p.25 / Chapter 3.4 --- Similarity Model --- p.27 / Chapter 3.5 --- Evaluation Techniques --- p.29 / Chapter 4 --- Feature Extraction and Weighting --- p.31 / Chapter 4.1 --- Statistical Analysis of the Words in the Web Domain --- p.31 / Chapter 4.2 --- Zipf's Law --- p.33 / Chapter 4.3 --- Traditional Methods --- p.36 / Chapter 4.4 --- The Proposed Method --- p.38 / Chapter 4.5 --- Experimental Results --- p.40 / Chapter 4.5.1 --- Synthetic Data Generation --- p.40 / Chapter 4.5.2 --- Real Data Source --- p.41 / Chapter 4.5.3 --- Coverage --- p.41 / Chapter 4.5.4 --- Clustering Quality --- p.43 / Chapter 4.5.5 --- Binary Weight vs Numerical Weight --- p.45 / Chapter 5 --- Web Document Clustering Using DC-tree --- p.48 / Chapter 5.1 --- Document Representation --- p.48 / Chapter 5.2 --- Document Cluster (DC) --- p.49 / Chapter 5.3 --- DC-tree --- p.52 / Chapter 5.3.1 --- Tree Definition --- p.52 / Chapter 5.3.2 --- Insertion --- p.54 / Chapter 5.3.3 --- Node Splitting --- p.55 / Chapter 5.3.4 --- Deletion and Node Merging --- p.56 / Chapter 5.4 --- The Overall Strategy --- p.57 / Chapter 5.4.1 --- Preprocessing --- p.57 / Chapter 5.4.2 --- Building DC-tree --- p.59 / Chapter 5.4.3 --- Identifying the Interesting Clusters --- p.60 / Chapter 5.5 --- Experimental Results --- p.61 / Chapter 5.5.1 --- Alternative Similarity Measurement : Synthetic Data --- p.61 / Chapter 5.5.2 --- DC-tree Characteristics : Synthetic Data --- p.63 / Chapter 5.5.3 --- Compare DC-tree and B+-tree: Synthetic Data --- p.64 / Chapter 5.5.4 --- Compare DC-tree and B+-tree: Real Data --- p.66 / Chapter 5.5.5 --- Varying the Number of Features : Synthetic Data --- p.67 / Chapter 5.5.6 --- Non-Correlated Topic Web Page Collection: Real Data --- p.69 / Chapter 5.5.7 --- Correlated Topic Web Page Collection: Real Data --- p.71 / Chapter 5.5.8 --- Incremental updates on Real Data Set --- p.72 / Chapter 
5.5.9 --- Comparison with the other clustering algorithms --- p.73 / Chapter 6 --- Conclusion --- p.75 / Appendix --- p.77 / Chapter A --- Stopword List --- p.77 / Chapter B --- Porter's Stemming Algorithm --- p.81 / Chapter C --- Insertion Algorithm --- p.83 / Chapter D --- Node Splitting Algorithm --- p.85 / Chapter E --- Features Extracted in Experiment 4.53 --- p.87 / Bibliography --- p.88
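As a rough illustration of the incremental clustering idea outlined in Chapters 3 and 5 (stopword removal, a vector model, and a similarity threshold), the sketch below assigns each arriving document to its most similar cluster centroid or opens a new cluster. It is not the DC-tree itself; the hashing vectorizer, threshold value, and example documents are assumptions.

# A much-simplified sketch of incremental document clustering (not the DC-tree):
# documents arrive one at a time, are vectorized, and are merged into the most
# similar existing cluster, or start a new cluster when similarity is too low.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(n_features=2**12, stop_words="english", norm="l2")

clusters = []          # each cluster: {"centroid": vector, "docs": [...]}
SIM_THRESHOLD = 0.25   # assumed value; the thesis tunes similarity empirically

def add_document(text):
    v = vectorizer.transform([text]).toarray().ravel()
    best, best_sim = None, -1.0
    for c in clusters:
        sim = float(v @ c["centroid"]) / (np.linalg.norm(c["centroid"]) + 1e-12)
        if sim > best_sim:
            best, best_sim = c, sim
    if best is not None and best_sim >= SIM_THRESHOLD:
        best["docs"].append(text)
        n = len(best["docs"])
        best["centroid"] = (best["centroid"] * (n - 1) + v) / n   # running mean
    else:
        clusters.append({"centroid": v, "docs": [text]})

for doc in ["web page clustering with trees",
            "incremental clustering of web documents",
            "chromatography of medicinal herbs"]:
    add_document(doc)
print([len(c["docs"]) for c in clusters])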
|
24 |
Chemical pattern recognition of the traditional Chinese medicinal herb, Epimedium. January 1998 (has links)
by Kwan Yee Ting, Chris. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 44-48). / Abstract also in Chinese. / Acknowledgements --- p.i / Abstract --- p.ii / Table of Contents --- p.v / List of Figures --- p.ix / List of Tables --- p.x / Chapter Part 1. --- Introduction --- p.1 / Chapter 1.1 --- Identification of TCM --- p.1 / Chapter 1.2 --- Chemical Pattern Recognition --- p.2 / Chapter 1.3 --- Discriminant Analysis --- p.3 / Chapter 1.4 --- Epimedium --- p.5 / Chapter 1.5 --- High Performance Liquid Chromatography --- p.6 / Chapter 1.6 --- Objectives of this work --- p.8 / Chapter Part 2. --- Chemical Analysis --- p.9 / Chapter 2.1 --- Sources of Epimedium samples --- p.9 / Chapter 2.2 --- Extraction --- p.9 / Chapter 2.2.1 --- Sample Pre-treatment --- p.9 / Chapter 2.2.2 --- Extraction Procedure --- p.9 / Chapter 2.2.3 --- Extraction Recovery --- p.11 / Chapter 2.3 --- Instrumental Analysis --- p.11 / Chapter 2.3.1 --- Chromatographic Operating Conditions --- p.12 / Chapter 2.3.2 --- Preparation of Calibration Graph --- p.12 / Chapter 2.3.3 --- Sample injection --- p.13 / Chapter 2.4 --- Results and Discussion --- p.13 / Chapter 2.4.1 --- Linearity of the Calibration Graph --- p.13 / Chapter 2.4.2 --- Development of Analysis Procedure --- p.15 / Chapter 2.4.2.1 --- Sample Pre-treatment --- p.15 / Chapter 2.4.2.2 --- Extractant --- p.15 / Chapter 2.4.2.3 --- Purification of Extract --- p.15 / Chapter 2.4.2.4 --- Extraction Time --- p.17 / Chapter 2.4.2.5 --- Solvent Gradient --- p.18 / Chapter 2.4.2.6 --- Detection --- p.19 / Chapter 2.4.3 --- Quantitative Analysis --- p.19 / Chapter 2.4.3.1 --- Extraction Recovery --- p.19 / Chapter 2.4.3.2 --- Icariin Content --- p.20 / Chapter 2.5 --- Conclusions --- p.22 / Chapter Part 3. 
--- Chemical Pattern Recognition --- p.24 / Chapter 3.1 --- Materials and Methods --- p.24 / Chapter 3.1.1 --- Chromatographic Results --- p.24 / Chapter 3.1.2 --- Patterns of Epimedium Samples --- p.24 / Chapter 3.1.3 --- Computer Program --- p.25 / Chapter 3.1.4 --- Variable Extraction --- p.25 / Chapter 3.1.4.1 --- Variable Extraction Parameters --- p.25 / Chapter 3.1.4.2 --- Variable Extraction Methods --- p.26 / Chapter 3.1.4.3 --- Transformation of Variables --- p.27 / Chapter 3.1.5 --- Variable Selection --- p.27 / Chapter 3.1.6 --- Predictive Power of the Recognition Model --- p.28 / Chapter 3.2 --- Results --- p.28 / Chapter 3.2.1 --- Accuracy of the Recognition Models --- p.28 / Chapter 3.2.2 --- Classification Functions --- p.29 / Chapter 3.2.3 --- Casewise Results of Recognition Model IV --- p.31 / Chapter 3.2.4 --- Plotting of the Best Two Canonical Discriminant Functions --- p.33 / Chapter 3.3 --- Discussion --- p.33 / Chapter 3.3.1 --- Meaning of Extracted Variables --- p.33 / Chapter 3.3.2 --- Limitations of Variable Extraction Methods --- p.34 / Chapter 3.3.3 --- Importance of the Variable Extraction Methods --- p.34 / Chapter 3.3.4 --- "Reasons for the Poor Performance in Recognition Models I, II and III" --- p.35 / Chapter 3.3.5 --- Selected Variables in Model IV --- p.35 / Chapter 3.3.6 --- Misclassified Samples --- p.36 / Chapter 3.3.7 --- Quality Assessment --- p.38 / Chapter 3.3.8 --- Comparison with Another Chemical Pattern Recognition Method for the Identification of Epimedium --- p.39 / Chapter 3.3.9 --- Potential Usage of the Pattern Recognition Method --- p.42 / Chapter 3.3.10 --- Advantage of the Pattern Recognition Method --- p.42 / Chapter 3.3.11 --- Disadvantage of Discriminant Analysis --- p.42 / Chapter 3.4 --- Conclusions --- p.43 / References --- p.44 / Appendix I Epimedium Species in China --- p.49 / Appendix II --- p.50 / Chapter II.1 --- Chromatograms of Samples of Epimedium sagittatum --- p.50 / Chapter II.2 --- Chromatograms of Samples of Epimedium pubescens --- p.57 / Chapter II.3 --- Chromatograms of Samples of Epimedium koreanum --- p.61 / Chapter II.4 --- Chromatograms of Samples of Epimedium leptorrhizum --- p.67 / Chapter II.5 --- Chromatograms of Samples of Epimedium wnshanese --- p.69 / Chapter II.6 --- Chromatograms of Samples of Epimedium brevicornum --- p.72 / Appendix III Log-transformed Values of Variables --- p.75
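The calibration-graph step in Sections 2.3.2 and 2.4.1 reduces to a linear fit of detector response against standard concentration, from which the icariin content of a sample extract can be read back. The sketch below uses invented concentrations and peak areas purely for illustration.

# A minimal sketch of an HPLC calibration graph and quantitation step.
# The concentrations and peak areas below are invented for illustration.
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # standard conc., ug/mL (assumed)
area = np.array([1.1e4, 2.3e4, 4.4e4, 9.0e4, 1.79e5])   # peak areas (assumed)

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print("calibration: area = %.1f * conc + %.1f  (r = %.4f)" % (slope, intercept, r))

sample_area = 6.2e4
sample_conc = (sample_area - intercept) / slope
print("estimated icariin concentration: %.1f ug/mL" % sample_conc)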
|
25 |
Discriminant feature pursuit: from statistical learning to informative learning. January 2006 (has links)
Lin Dahua. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 233-250). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Problem We are Facing --- p.1 / Chapter 1.2 --- Generative vs. Discriminative Models --- p.2 / Chapter 1.3 --- Statistical Feature Extraction: Success and Challenge --- p.3 / Chapter 1.4 --- Overview of Our Works --- p.5 / Chapter 1.4.1 --- New Linear Discriminant Methods: Generalized LDA Formulation and Performance-Driven Sub space Learning --- p.5 / Chapter 1.4.2 --- Coupled Learning Models: Coupled Space Learning and Inter Modality Recognition --- p.6 / Chapter 1.4.3 --- Informative Learning Approaches: Conditional Infomax Learning and Information Chan- nel Model --- p.6 / Chapter 1.5 --- Organization of the Thesis --- p.8 / Chapter I --- History and Background --- p.10 / Chapter 2 --- Statistical Pattern Recognition --- p.11 / Chapter 2.1 --- Patterns and Classifiers --- p.11 / Chapter 2.2 --- Bayes Theory --- p.12 / Chapter 2.3 --- Statistical Modeling --- p.14 / Chapter 2.3.1 --- Maximum Likelihood Estimation --- p.14 / Chapter 2.3.2 --- Gaussian Model --- p.15 / Chapter 2.3.3 --- Expectation-Maximization --- p.17 / Chapter 2.3.4 --- Finite Mixture Model --- p.18 / Chapter 2.3.5 --- A Nonparametric Technique: Parzen Windows --- p.21 / Chapter 3 --- Statistical Learning Theory --- p.24 / Chapter 3.1 --- Formulation of Learning Model --- p.24 / Chapter 3.1.1 --- Learning: Functional Estimation Model --- p.24 / Chapter 3.1.2 --- Representative Learning Problems --- p.25 / Chapter 3.1.3 --- Empirical Risk Minimization --- p.26 / Chapter 3.2 --- Consistency and Convergence of Learning --- p.27 / Chapter 3.2.1 --- Concept of Consistency --- p.27 / Chapter 3.2.2 --- The Key Theorem of Learning Theory --- p.28 / Chapter 3.2.3 --- VC Entropy --- p.29 / Chapter 3.2.4 --- Bounds on Convergence --- p.30 / Chapter 3.2.5 --- VC Dimension --- p.35 / Chapter 4 --- History of Statistical Feature Extraction --- p.38 / Chapter 4.1 --- Linear Feature Extraction --- p.38 / Chapter 4.1.1 --- Principal Component Analysis (PCA) --- p.38 / Chapter 4.1.2 --- Linear Discriminant Analysis (LDA) --- p.41 / Chapter 4.1.3 --- Other Linear Feature Extraction Methods --- p.46 / Chapter 4.1.4 --- Comparison of Different Methods --- p.48 / Chapter 4.2 --- Enhanced Models --- p.49 / Chapter 4.2.1 --- Stochastic Discrimination and Random Subspace --- p.49 / Chapter 4.2.2 --- Hierarchical Feature Extraction --- p.51 / Chapter 4.2.3 --- Multilinear Analysis and Tensor-based Representation --- p.52 / Chapter 4.3 --- Nonlinear Feature Extraction --- p.54 / Chapter 4.3.1 --- Kernelization --- p.54 / Chapter 4.3.2 --- Dimension reduction by Manifold Embedding --- p.56 / Chapter 5 --- Related Works in Feature Extraction --- p.59 / Chapter 5.1 --- Dimension Reduction --- p.59 / Chapter 5.1.1 --- Feature Selection --- p.60 / Chapter 5.1.2 --- Feature Extraction --- p.60 / Chapter 5.2 --- Kernel Learning --- p.61 / Chapter 5.2.1 --- Basic Concepts of Kernel --- p.61 / Chapter 5.2.2 --- The Reproducing Kernel Map --- p.62 / Chapter 5.2.3 --- The Mercer Kernel Map --- p.64 / Chapter 5.2.4 --- The Empirical Kernel Map --- p.65 / Chapter 5.2.5 --- Kernel Trick and Kernelized Feature Extraction --- p.66 / Chapter 5.3 --- Subspace Analysis --- p.68 / Chapter 5.3.1 --- Basis and Subspace --- p.68 / Chapter 5.3.2 --- Orthogonal Projection --- p.69 / Chapter 5.3.3 --- 
Orthonormal Basis --- p.70 / Chapter 5.3.4 --- Subspace Decomposition --- p.70 / Chapter 5.4 --- Principal Component Analysis --- p.73 / Chapter 5.4.1 --- PCA Formulation --- p.73 / Chapter 5.4.2 --- Solution to PCA --- p.75 / Chapter 5.4.3 --- Energy Structure of PCA --- p.76 / Chapter 5.4.4 --- Probabilistic Principal Component Analysis --- p.78 / Chapter 5.4.5 --- Kernel Principal Component Analysis --- p.81 / Chapter 5.5 --- Independent Component Analysis --- p.83 / Chapter 5.5.1 --- ICA Formulation --- p.83 / Chapter 5.5.2 --- Measurement of Statistical Independence --- p.84 / Chapter 5.6 --- Linear Discriminant Analysis --- p.85 / Chapter 5.6.1 --- Fisher's Linear Discriminant Analysis --- p.85 / Chapter 5.6.2 --- Improved Algorithms for Small Sample Size Problem . --- p.89 / Chapter 5.6.3 --- Kernel Discriminant Analysis --- p.92 / Chapter II --- Improvement in Linear Discriminant Analysis --- p.100 / Chapter 6 --- Generalized LDA --- p.101 / Chapter 6.1 --- Regularized LDA --- p.101 / Chapter 6.1.1 --- Generalized LDA Implementation Procedure --- p.101 / Chapter 6.1.2 --- Optimal Nonsingular Approximation --- p.103 / Chapter 6.1.3 --- Regularized LDA algorithm --- p.104 / Chapter 6.2 --- A Statistical View: When is LDA optimal? --- p.105 / Chapter 6.2.1 --- Two-class Gaussian Case --- p.106 / Chapter 6.2.2 --- Multi-class Cases --- p.107 / Chapter 6.3 --- Generalized LDA Formulation --- p.108 / Chapter 6.3.1 --- Mathematical Preparation --- p.108 / Chapter 6.3.2 --- Generalized Formulation --- p.110 / Chapter 7 --- Dynamic Feedback Generalized LDA --- p.112 / Chapter 7.1 --- Basic Principle --- p.112 / Chapter 7.2 --- Dynamic Feedback Framework --- p.113 / Chapter 7.2.1 --- Initialization: K-Nearest Construction --- p.113 / Chapter 7.2.2 --- Dynamic Procedure --- p.115 / Chapter 7.3 --- Experiments --- p.115 / Chapter 7.3.1 --- Performance in Training Stage --- p.116 / Chapter 7.3.2 --- Performance on Testing set --- p.118 / Chapter 8 --- Performance-Driven Subspace Learning --- p.119 / Chapter 8.1 --- Motivation and Principle --- p.119 / Chapter 8.2 --- Performance-Based Criteria --- p.121 / Chapter 8.2.1 --- The Verification Problem and Generalized Average Margin --- p.122 / Chapter 8.2.2 --- Performance Driven Criteria based on Generalized Average Margin --- p.123 / Chapter 8.3 --- Optimal Subspace Pursuit --- p.125 / Chapter 8.3.1 --- Optimal threshold --- p.125 / Chapter 8.3.2 --- Optimal projection matrix --- p.125 / Chapter 8.3.3 --- Overall procedure --- p.129 / Chapter 8.3.4 --- Discussion of the Algorithm --- p.129 / Chapter 8.4 --- Optimal Classifier Fusion --- p.130 / Chapter 8.5 --- Experiments --- p.131 / Chapter 8.5.1 --- Performance Measurement --- p.131 / Chapter 8.5.2 --- Experiment Setting --- p.131 / Chapter 8.5.3 --- Experiment Results --- p.133 / Chapter 8.5.4 --- Discussion --- p.139 / Chapter III --- Coupled Learning of Feature Transforms --- p.140 / Chapter 9 --- Coupled Space Learning --- p.141 / Chapter 9.1 --- Introduction --- p.142 / Chapter 9.1.1 --- What is Image Style Transform --- p.142 / Chapter 9.1.2 --- Overview of our Framework --- p.143 / Chapter 9.2 --- Coupled Space Learning --- p.143 / Chapter 9.2.1 --- Framework of Coupled Modelling --- p.143 / Chapter 9.2.2 --- Correlative Component Analysis --- p.145 / Chapter 9.2.3 --- Coupled Bidirectional Transform --- p.148 / Chapter 9.2.4 --- Procedure of Coupled Space Learning --- p.151 / Chapter 9.3 --- Generalization to Mixture Model --- p.152 / Chapter 9.3.1 --- Coupled Gaussian Mixture Model --- 
p.152 / Chapter 9.3.2 --- Optimization by EM Algorithm --- p.152 / Chapter 9.4 --- Integrated Framework for Image Style Transform --- p.154 / Chapter 9.5 --- Experiments --- p.156 / Chapter 9.5.1 --- Face Super-resolution --- p.156 / Chapter 9.5.2 --- Portrait Style Transforms --- p.157 / Chapter 10 --- Inter-Modality Recognition --- p.162 / Chapter 10.1 --- Introduction to the Inter-Modality Recognition Problem . . . --- p.163 / Chapter 10.1.1 --- What is Inter-Modality Recognition --- p.163 / Chapter 10.1.2 --- Overview of Our Feature Extraction Framework . . . . --- p.163 / Chapter 10.2 --- Common Discriminant Feature Extraction --- p.165 / Chapter 10.2.1 --- Formulation of the Learning Problem --- p.165 / Chapter 10.2.2 --- Matrix-Form of the Objective --- p.168 / Chapter 10.2.3 --- Solving the Linear Transforms --- p.169 / Chapter 10.3 --- Kernelized Common Discriminant Feature Extraction --- p.170 / Chapter 10.4 --- Multi-Mode Framework --- p.172 / Chapter 10.4.1 --- Multi-Mode Formulation --- p.172 / Chapter 10.4.2 --- Optimization Scheme --- p.174 / Chapter 10.5 --- Experiments --- p.176 / Chapter 10.5.1 --- Experiment Settings --- p.176 / Chapter 10.5.2 --- Experiment Results --- p.177 / Chapter IV --- A New Perspective: Informative Learning --- p.180 / Chapter 11 --- Toward Information Theory --- p.181 / Chapter 11.1 --- Entropy and Mutual Information --- p.181 / Chapter 11.1.1 --- Entropy --- p.182 / Chapter 11.1.2 --- Relative Entropy (Kullback Leibler Divergence) --- p.184 / Chapter 11.2 --- Mutual Information --- p.184 / Chapter 11.2.1 --- Definition of Mutual Information --- p.184 / Chapter 11.2.2 --- Chain rules --- p.186 / Chapter 11.2.3 --- Information in Data Processing --- p.188 / Chapter 11.3 --- Differential Entropy --- p.189 / Chapter 11.3.1 --- Differential Entropy of Continuous Random Variable . --- p.189 / Chapter 11.3.2 --- Mutual Information of Continuous Random Variable . 
--- p.190 / Chapter 12 --- Conditional Infomax Learning --- p.191 / Chapter 12.1 --- An Overview --- p.192 / Chapter 12.2 --- Conditional Informative Feature Extraction --- p.193 / Chapter 12.2.1 --- Problem Formulation and Features --- p.193 / Chapter 12.2.2 --- The Information Maximization Principle --- p.194 / Chapter 12.2.3 --- The Information Decomposition and the Conditional Objective --- p.195 / Chapter 12.3 --- The Efficient Optimization --- p.197 / Chapter 12.3.1 --- Discrete Approximation Based on AEP --- p.197 / Chapter 12.3.2 --- Analysis of Terms and Their Derivatives --- p.198 / Chapter 12.3.3 --- Local Active Region Method --- p.200 / Chapter 12.4 --- Bayesian Feature Fusion with Sparse Prior --- p.201 / Chapter 12.5 --- The Integrated Framework for Feature Learning --- p.202 / Chapter 12.6 --- Experiments --- p.203 / Chapter 12.6.1 --- A Toy Problem --- p.203 / Chapter 12.6.2 --- Face Recognition --- p.204 / Chapter 13 --- Channel-based Maximum Effective Information --- p.209 / Chapter 13.1 --- Motivation and Overview --- p.209 / Chapter 13.2 --- Maximizing Effective Information --- p.211 / Chapter 13.2.1 --- Relation between Mutual Information and Classification --- p.211 / Chapter 13.2.2 --- Linear Projection and Metric --- p.212 / Chapter 13.2.3 --- Channel Model and Effective Information --- p.213 / Chapter 13.2.4 --- Parzen Window Approximation --- p.216 / Chapter 13.3 --- Parameter Optimization on Grassmann Manifold --- p.217 / Chapter 13.3.1 --- Grassmann Manifold --- p.217 / Chapter 13.3.2 --- Conjugate Gradient Optimization on Grassmann Manifold --- p.219 / Chapter 13.3.3 --- Computation of Gradient --- p.221 / Chapter 13.4 --- Experiments --- p.222 / Chapter 13.4.1 --- A Toy Problem --- p.222 / Chapter 13.4.2 --- Face Recognition --- p.223 / Chapter 14 --- Conclusion --- p.230
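Fisher's linear discriminant analysis (Section 5.6.1) is solved by a generalized eigenproblem on the between-class and within-class scatter matrices, and adding a small ridge to the within-class scatter echoes the regularized LDA of Chapter 6. The sketch below uses synthetic data and an assumed ridge size; it is a generic illustration, not the thesis's algorithms.

# A compact sketch of Fisher's LDA: project onto the leading eigenvectors of
# the generalized eigenproblem Sb * w = lambda * Sw * w, with a small ridge on
# Sw to keep it nonsingular. Data and the ridge size are assumptions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(40, 5)) for m in ([0]*5, [2]*5, [4]*5)])
y = np.repeat([0, 1, 2], 40)

mu = X.mean(axis=0)
Sw = np.zeros((5, 5))
Sb = np.zeros((5, 5))
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += len(Xc) * np.outer(mc - mu, mc - mu)

eps = 1e-3 * np.trace(Sw) / Sw.shape[0]        # ridge regularizer (assumed size)
vals, vecs = eigh(Sb, Sw + eps * np.eye(5))    # generalized eigenproblem
W = vecs[:, np.argsort(vals)[::-1][:2]]        # at most (classes - 1) useful directions
Z = X @ W
print("projected shape:", Z.shape)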
|
26 |
Analysis of ordinal square table with misclassified data. January 2007 (has links)
Tam, Hiu Wah. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 41). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Estimation with Known Misclassification Probabilities --- p.5 / Chapter 2.1 --- Model --- p.5 / Chapter 2.2 --- Maximum Likelihood Estimation --- p.7 / Chapter 2.3 --- Examples --- p.9 / Chapter 2.3.1 --- Example 1: A Real data set analysis --- p.9 / Chapter 2.3.2 --- Example 2: An Artificial Data for 3x3 Table --- p.11 / Chapter 3 --- Estimation by Double Sampling --- p.12 / Chapter 3.1 --- Estimation --- p.13 / Chapter 3.2 --- Example --- p.14 / Chapter 3.2.1 --- Example 3: An Artificial Data Example for 3x3 Table --- p.14 / Chapter 4 --- Simulation --- p.15 / Chapter 5 --- Conclusion --- p.17 / Table --- p.19 / Appendix --- p.27 / Bibliography --- p.41
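A much-simplified analogue of the estimation problem in Chapter 2 is a single categorical variable observed through a known misclassification matrix; the true category probabilities can then be estimated by maximizing the multinomial likelihood of the observed counts. The matrix and counts in the sketch below are invented, and the thesis itself treats a full ordinal square table rather than a single margin.

# A simplified sketch: maximum likelihood estimation of true category
# probabilities when the misclassification matrix M is known, where M[i, j]
# is P(observe category j | true category i). Values are assumed.
import numpy as np
from scipy.optimize import minimize

M = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
observed = np.array([120, 150, 130])                   # observed multinomial counts

def neg_loglik(theta):
    p_true = np.append(theta, 1.0 - theta.sum())       # probabilities on the simplex
    p_obs = p_true @ M                                  # implied observed-category probs
    return -np.sum(observed * np.log(np.clip(p_obs, 1e-12, None)))

cons = ({"type": "ineq", "fun": lambda t: 1.0 - t.sum()},)
res = minimize(neg_loglik, x0=np.array([1/3, 1/3]),
               bounds=[(0.0, 1.0)] * 2, constraints=cons)
p_hat = np.append(res.x, 1.0 - res.x.sum())
print("estimated true probabilities:", np.round(p_hat, 3))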
|
27 |
Regularized Discriminant Analysis: A Large Dimensional Study / Yang, Xiaoke 28 April 2018 (has links)
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime in which the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the data statistical parameters and dimensions. This result not only establishes mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
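A Friedman-style RDA classifier illustrates the family of regularization options described above: each class covariance is shrunk toward the pooled covariance and toward a scaled identity, with one end of the shrinkage path behaving like RLDA and the other like RQDA. The sketch below uses assumed parameters and synthetic Gaussian data; it is a generic illustration, not the thesis's exact formulation or its asymptotic analysis.

# A numpy sketch of a Friedman-style RDA classifier: class covariances are
# shrunk toward the pooled covariance (lam) and toward a scaled identity
# (gamma); lam = 1 behaves like RLDA, lam = 0 like RQDA. Parameters assumed.
import numpy as np

def fit_rda(X, y, lam=0.5, gamma=0.1):
    classes = np.unique(y)
    p = X.shape[1]
    pooled = np.zeros((p, p))
    params = {}
    for c in classes:
        Xc = X[y == c]
        params[c] = {"mu": Xc.mean(axis=0), "S": np.cov(Xc, rowvar=False),
                     "prior": len(Xc) / len(X)}
        pooled += (len(Xc) - 1) * params[c]["S"]
    pooled /= (len(X) - len(classes))
    for c in classes:
        S = (1 - lam) * params[c]["S"] + lam * pooled
        S = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
        params[c]["inv"] = np.linalg.inv(S)
        params[c]["logdet"] = np.linalg.slogdet(S)[1]
    return params

def predict_rda(params, X):
    scores = []
    for c, pr in params.items():
        d = X - pr["mu"]
        maha = np.einsum("ij,jk,ik->i", d, pr["inv"], d)      # Mahalanobis distances
        scores.append(-0.5 * maha - 0.5 * pr["logdet"] + np.log(pr["prior"]))
    return np.array(list(params.keys()))[np.argmax(np.vstack(scores), axis=0)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)), rng.normal(0.7, 1.3, (100, 20))])
y = np.repeat([0, 1], 100)
model = fit_rda(X, y, lam=0.5, gamma=0.1)
print("training accuracy:", np.mean(predict_rda(model, X) == y))

In practice such a classifier would be tuned by searching over (lam, gamma); the thesis instead characterizes the error analytically under the double asymptotic regime and selects the optimal parameters from that expression.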
|
28 |
Wavelet-based methods for estimation and discrimination / Davis, J. Wade, January 2003 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2003. / Typescript. Vita. Includes bibliographical references (leaves 107-111). Also available on the Internet.
|
29 |
Relationship-based clustering and cluster ensembles for high-dimensional data mining / Strehl, Alexander, January 2002 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2002. / Vita. Includes bibliographical references (leaves 191-214). Available also in a digital version from Dissertation Abstracts.
|
30 |
Wavelet-based methods for estimation and discrimination / Davis, J. Wade, January 2003 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2003. / Typescript. Vita. Includes bibliographical references (leaves 107-111). Also available on the Internet.
|