21

Sex Estimation from the Clavicle: A Discriminant Function Analysis

Cleary, Megan Kathleen 01 May 2012 (has links)
AN ABSTRACT OF THE THESIS OF MEGAN K. CLEARY, for the Master of Arts degree in ANTHROPOLOGY, presented on MARCH 28th at 8am, at Southern Illinois University Carbondale TITLE: SEX ESTIMATION FROM THE CLAVICLE: A DISCRIMINANT FUNCTION ANALYSIS MAJOR PROFESSOR: Dr. Gretchen R. Dabbs
The development of methods for sex estimation using postcranial remains other than the os coxae is imperative for physical anthropology to improve the reliability of biological profile estimates in cases of incomplete and/or fragmentary skeletal remains. As the last skeletal element to complete fusion, the clavicle has the longest period of time to develop sexually dimorphic features, making it an ideal skeletal element for use in sex estimation. Sexual dimorphism in the clavicle was assessed using 18 measurements of the left clavicle of 265 individuals (129 females; 136 males) from the Hamann-Todd Collection. Independent-samples t-tests with Bonferroni correction show that males and females differ at a statistically significant level for all 18 variables at a significance level of 0.0028. Discriminant function analyses using the stepwise method (0.05 to enter, 0.10 to exit) produced a four-variable model with a cross-validated accuracy of 89.8%. A holdout sample from the Hamann-Todd Collection (n=30), similar in demographic character to the calibration sample, was tested using the four-variable model; its accuracy on the holdout sample was 90.0%. Additionally, four single-variable models developed to accommodate fragmentary remains also have high predictive power (75.1-82.3% cross-validated on the calibration sample; 60.0-86.7% on the holdout sample).
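The stepwise discriminant workflow this abstract describes can be illustrated with a short sketch. Everything below is a synthetic stand-in: the measurements, group means, and sample split are invented, and scikit-learn's forward sequential selection is substituted for the SPSS-style F-to-enter/F-to-exit stepwise rule named in the abstract.

```python
# Minimal sketch: forward feature selection feeding a linear discriminant
# function, scored by cross-validated accuracy. Data and variable names are
# synthetic; sequential forward selection stands in for the thesis's
# stepwise (0.05 to enter, 0.10 to exit) procedure.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_f, n_m, n_vars = 129, 136, 18                          # sizes mirror the abstract
X = np.vstack([rng.normal(0.0, 1.0, (n_f, n_vars)),      # "female" measurements
               rng.normal(0.6, 1.0, (n_m, n_vars))])     # "male" measurements, shifted
y = np.array([0] * n_f + [1] * n_m)

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(lda, n_features_to_select=4,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)                            # the four retained variables

cv_acc = cross_val_score(lda, X_sel, y, cv=5).mean()
print("selected columns:", np.flatnonzero(selector.get_support()))
print(f"cross-validated accuracy: {cv_acc:.3f}")
```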
22

Investor Risk Tolerance: Testing The Efficacy Of Demographics As Differentiating and Classifying Factors

Grable, John E. 29 October 1997 (has links)
This study was designed to determine whether the variables gender, age, marital status, occupation, self-employment, income, race, and education could be used individually or in combination both to differentiate among levels of investor risk tolerance and to classify individuals into risk-tolerance categories. The Leimberg, Satinsky, LeClair, and Doyle (1993) financial management model was used as the theoretical basis for this study. The model explains the process by which investment managers develop plans to allocate a client's scarce investment resources to meet financial objectives. An empirical model for categorizing investors into risk-tolerance categories using demographic factors was developed and tested using data from the 1992 Survey of Consumer Finances (SCF) (N = 2,626). The average respondent was affluent and best represented the profile of an investment management client. Based on findings from a multiple discriminant analysis test, respondent demographic characteristics were significant in differentiating among levels of risk tolerance at the p < .0001 level (i.e., gender, married, single but previously married, professional occupational status, self-employment status, income, White, Black, and Hispanic racial background, and educational level), while three demographic characteristics were found to be statistically insignificant (i.e., age, Asian racial background, and never married). Multiple discriminant analysis also revealed that the demographic variables examined in this study explained approximately 20% of the variance among the three levels of investor risk tolerance. Classification equations were generated. The classification procedure offered only a 20% improvement over chance, which was determined to be a low proportional reduction in error. The classification procedure also generated unacceptable levels of false-positive classifications, which led to over-classification of respondents into the high and no risk-tolerance categories, while under-classifying respondents into the average risk-tolerance category. Two demographic characteristics were determined to be the most effective in differentiating among and classifying respondents into risk-tolerance categories: classes of risk tolerance differed most widely on respondents' educational level and gender, with educational level the most significant optimizing factor. It also was concluded that demographic characteristics provide only a starting point in assessing investor risk tolerance. Understanding risk tolerance is a complicated process that goes beyond the exclusive use of demographic characteristics. More research is needed to determine which additional factors investment managers can use to increase the explained variance in risk-tolerance differences. / Ph. D.
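A minimal sketch of the classification step this abstract describes, assuming synthetic stand-ins for the SCF demographics: a discriminant classifier assigns respondents to three risk-tolerance categories, and the "improvement over chance" is computed as a proportional reduction in error (the tau statistic against the proportional-chance criterion). The predictors, coefficients, and category proportions below are invented.

```python
# Sketch only: LDA on dummy-coded demographic stand-ins predicting three
# risk-tolerance categories, plus improvement-over-chance (tau). Synthetic data,
# not the Survey of Consumer Finances.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 2626                                          # sample size from the abstract
X = rng.normal(size=(n, 8))                       # stand-ins for the 8 demographic variables
beta = rng.normal(size=(8, 3))
y = np.argmax(X @ beta + rng.normal(size=(n, 3)), axis=1)   # 3 risk categories

clf = LinearDiscriminantAnalysis().fit(X, y)
n_correct = (clf.predict(X) == y).sum()

# proportional-chance criterion: expected hits if assignment followed class priors
counts = np.bincount(y)
expected_by_chance = np.sum((counts / n) * counts)
tau = (n_correct - expected_by_chance) / (n - expected_by_chance)
print(f"hit rate: {n_correct / n:.2%}, improvement over chance (tau): {tau:.2%}")
```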
23

Indices in number fields and their applications

Seddik, Mohammed 20 June 2018 (has links)
In the first part, we consider the simplest quartic number fields K_m, defined by the irreducible quartic polynomials x^4 - mx^3 - 6x^2 + mx + 1, where m runs over the positive integers such that the odd part of m^2 + 16 is squarefree. We study the index I(K_m) and determine the explicit prime ideal factorization of rational primes in the extension K_m/Q. We then establish an asymptotic formula for the number of simplest quartic fields with a given index and discriminant less than or equal to x. In the second part, we study the integer introduced by Gunji and McQuillan, i(K) = lcm{ i(theta) }, where i(theta) = gcd{ F(x) : x in Z } and F is the characteristic polynomial of theta. Our main results for this part are: 1. If p is a prime number less than or equal to n, then there exists a number field K of degree n such that p divides i(K). 2. We compute i(K) for cubic fields, and we determine I(K) and i(K) for families of number fields of degree at most 6. 3. Let p be a prime number. We prove that the p-adic valuation v_p(i(K)) is not determined by the splitting type of p in O_K alone: we give examples of number fields K_1 and K_2 of degree 6 in which the prime 2 has the same splitting type P_1P_2 but v_2(i(K_1)) differs from v_2(i(K_2)). 4. We answer two questions posed by several authors and discuss a related conjecture. In the last part, we apply the results on the index of cubic number fields to solve the cubic Thue equations ax^3 + bx^2y + cxy^2 + dy^3 = k, with applications to parametric families of cubic Thue equations, homogeneous cubic forms, and twisted elliptic curves.
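The two indices named in this abstract, restated in display form. Note that letting theta range over the integral generators of K follows the usual Gunji-McQuillan convention; the abstract itself only writes the lcm over i(theta).

```latex
\[
  i(\theta) \;=\; \gcd\bigl\{\, F_\theta(x) : x \in \mathbb{Z} \,\bigr\},
  \qquad
  i(K) \;=\; \operatorname{lcm}\bigl\{\, i(\theta) : \theta \in \mathcal{O}_K,\ K = \mathbb{Q}(\theta) \,\bigr\},
\]
where $F_\theta$ is the characteristic polynomial of $\theta$. The simplest quartic
fields $K_m$ are defined by $x^4 - m x^3 - 6x^2 + m x + 1$, and the cubic Thue
equations solved in the last part have the form
\[
  a x^3 + b x^2 y + c x y^2 + d y^3 \;=\; k .
\]
```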
24

Characterization of Wood Features Using Color, Shape, and Density Parameters

Bond, Brian H. 27 July 1998 (has links)
Automated defect detection methods allow the forest products industry to better utilize its resources by improving yield, reducing labor costs, and allowing minimum lumber grades to be utilized more intelligently. While many methods have been proposed regarding which sensors and processing methods should be used to detect and classify wood features, there is a lack of understanding of which parameters best differentiate wood features. The goal of this research is to demonstrate that, with an in-depth knowledge of how wood features are represented by color, shape, and density parameters, more accurate classification methods can be developed. This goal was achieved by describing wood features using parameters derived from color and x-ray images; characterizing the variability and interrelationships of these parameters; determining the effect of resolution and species on these relationships; and determining the importance and contribution of each parameter for differentiating between wood features using a statistical prediction model relating feature types to the parameters. Knots, bark pockets, stain and mineral streak, and clearwood were selected as features from red oak (Quercus rubra), hard maple (Acer saccharum), and Eastern white pine (Pinus strobus). Color (RGB and HSI), shape (eccentricity and roundness), and density (gray-scale values) parameters were measured. Parameters were measured for each wood feature from images, and parameter differences between feature types were tested using analysis of variance (ANOVA) and Tukey's pairwise comparisons with α = 0.05. Discriminant classifiers were then developed to demonstrate that an in-depth knowledge of how parameters relate between feature types could be used to develop the best possible classification methods. Classifiers developed using the knowledge of parameter relationships were found to provide higher classification accuracies for all features and species than those that used all parameters or relied on variable selection procedures. It was determined that differences exist between all feature types and can be characterized and classified based on two color means, one color standard deviation, the mean density, and a shape parameter. A reduction in image resolution was determined not to affect the relationships among parameters. Across species, the intensity of features was found to be related to the intensity of clearwood. The ability to explain classification errors using the knowledge gained about feature parameters was demonstrated; this knowledge could be used to reduce future classification errors. It was determined that combining parameters collected using multiple sensors increases the classification accuracy of wood features. Shape and density were found not to provide good classification variables for features when used separately, but were found to contribute to classification when used with other parameters. The ability to differentiate between the feature types examined in this research was found to be equal when using the RGB or HSI colorspace. / Ph. D.
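A brief sketch of the two analysis stages described above, on synthetic parameter values: Tukey pairwise comparisons of one parameter across feature types at α = 0.05, followed by a discriminant classifier on a five-parameter set echoing the "two color means, one color standard deviation, mean density, and a shape parameter" result. The column contents, group means, and feature-type labels are illustrative assumptions, not the thesis's measurements.

```python
# Sketch: ANOVA-style Tukey comparisons across feature types, then an LDA
# classifier on a reduced parameter set. All values are synthetic.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
feature_types = ["knot", "bark_pocket", "stain", "clearwood"]
n_per = 50
labels = np.repeat(feature_types, n_per)

# one measured parameter per feature type (e.g. mean red-channel value)
red_mean = np.concatenate([rng.normal(mu, 8.0, n_per) for mu in (90, 110, 140, 170)])
print(pairwise_tukeyhsd(red_mean, labels, alpha=0.05))

# five-parameter matrix: two color means, one color std, mean density, roundness
X = np.column_stack([red_mean,
                     red_mean * 0.8 + rng.normal(0, 5, red_mean.size),
                     rng.normal(12, 3, red_mean.size),
                     rng.normal(120, 10, red_mean.size),
                     rng.normal(0.7, 0.1, red_mean.size)])
acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2%}")
```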
25

Analysis of categorical data with misclassification errors.

January 1988 (has links)
by Chun-nam Lau. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1988. / Bibliography: leaves 85-89.
26

Incremental document clustering for web page classification.

January 2000 (has links)
by Wong, Wai-Chiu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 89-94). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgments --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Document Clustering --- p.2 / Chapter 1.2 --- DC-tree --- p.4 / Chapter 1.3 --- Feature Extraction --- p.5 / Chapter 1.4 --- Outline of the Thesis --- p.5 / Chapter 2 --- Related Work --- p.8 / Chapter 2.1 --- Clustering Algorithms --- p.8 / Chapter 2.1.1 --- Partitional Clustering Algorithms --- p.8 / Chapter 2.1.2 --- Hierarchical Clustering Algorithms --- p.10 / Chapter 2.2 --- Document Classification by Examples --- p.11 / Chapter 2.2.1 --- k-NN algorithm - Expert Network (ExpNet) --- p.11 / Chapter 2.2.2 --- Learning Linear Text Classifier --- p.12 / Chapter 2.2.3 --- Generalized Instance Set (GIS) algorithm --- p.12 / Chapter 2.3 --- Document Clustering --- p.13 / Chapter 2.3.1 --- B+-tree-based Document Clustering --- p.13 / Chapter 2.3.2 --- Suffix Tree Clustering --- p.14 / Chapter 2.3.3 --- Association Rule Hypergraph Partitioning Algorithm --- p.15 / Chapter 2.3.4 --- Principal Component Divisive Partitioning --- p.17 / Chapter 2.4 --- Projections for Efficient Document Clustering --- p.18 / Chapter 3 --- Background --- p.21 / Chapter 3.1 --- Document Preprocessing --- p.21 / Chapter 3.1.1 --- Elimination of Stopwords --- p.22 / Chapter 3.1.2 --- Stemming Technique --- p.22 / Chapter 3.2 --- Problem Modeling --- p.23 / Chapter 3.2.1 --- Basic Concepts --- p.23 / Chapter 3.2.2 --- Vector Model --- p.24 / Chapter 3.3 --- Feature Selection Scheme --- p.25 / Chapter 3.4 --- Similarity Model --- p.27 / Chapter 3.5 --- Evaluation Techniques --- p.29 / Chapter 4 --- Feature Extraction and Weighting --- p.31 / Chapter 4.1 --- Statistical Analysis of the Words in the Web Domain --- p.31 / Chapter 4.2 --- Zipf's Law --- p.33 / Chapter 4.3 --- Traditional Methods --- p.36 / Chapter 4.4 --- The Proposed Method --- p.38 / Chapter 4.5 --- Experimental Results --- p.40 / Chapter 4.5.1 --- Synthetic Data Generation --- p.40 / Chapter 4.5.2 --- Real Data Source --- p.41 / Chapter 4.5.3 --- Coverage --- p.41 / Chapter 4.5.4 --- Clustering Quality --- p.43 / Chapter 4.5.5 --- Binary Weight vs Numerical Weight --- p.45 / Chapter 5 --- Web Document Clustering Using DC-tree --- p.48 / Chapter 5.1 --- Document Representation --- p.48 / Chapter 5.2 --- Document Cluster (DC) --- p.49 / Chapter 5.3 --- DC-tree --- p.52 / Chapter 5.3.1 --- Tree Definition --- p.52 / Chapter 5.3.2 --- Insertion --- p.54 / Chapter 5.3.3 --- Node Splitting --- p.55 / Chapter 5.3.4 --- Deletion and Node Merging --- p.56 / Chapter 5.4 --- The Overall Strategy --- p.57 / Chapter 5.4.1 --- Preprocessing --- p.57 / Chapter 5.4.2 --- Building DC-tree --- p.59 / Chapter 5.4.3 --- Identifying the Interesting Clusters --- p.60 / Chapter 5.5 --- Experimental Results --- p.61 / Chapter 5.5.1 --- Alternative Similarity Measurement : Synthetic Data --- p.61 / Chapter 5.5.2 --- DC-tree Characteristics : Synthetic Data --- p.63 / Chapter 5.5.3 --- Compare DC-tree and B+-tree: Synthetic Data --- p.64 / Chapter 5.5.4 --- Compare DC-tree and B+-tree: Real Data --- p.66 / Chapter 5.5.5 --- Varying the Number of Features : Synthetic Data --- p.67 / Chapter 5.5.6 --- Non-Correlated Topic Web Page Collection: Real Data --- p.69 / Chapter 5.5.7 --- Correlated Topic Web Page Collection: Real Data --- p.71 / Chapter 5.5.8 --- Incremental updates on Real Data Set --- p.72 / Chapter 
5.5.9 --- Comparison with the other clustering algorithms --- p.73 / Chapter 6 --- Conclusion --- p.75 / Appendix --- p.77 / Chapter A --- Stopword List --- p.77 / Chapter B --- Porter's Stemming Algorithm --- p.81 / Chapter C --- Insertion Algorithm --- p.83 / Chapter D --- Node Splitting Algorithm --- p.85 / Chapter E --- Features Extracted in Experiment 4.53 --- p.87 / Bibliography --- p.88
27

Chemical pattern recognition of the traditional Chinese medicinal herb, epimedium.

January 1998 (has links)
by Kwan Yee Ting, Chris. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 44-48). / Abstract also in Chinese. / Acknowledgements --- p.i / Abstract --- p.ii / Table of Contents --- p.v / List of Figures --- p.ix / List of Tables --- p.x / Chapter Part 1. --- Introduction --- p.1 / Chapter 1.1 --- Identification of TCM --- p.1 / Chapter 1.2 --- Chemical Pattern Recognition --- p.2 / Chapter 1.3 --- Discriminant Analysis --- p.3 / Chapter 1.4 --- Epimedium --- p.5 / Chapter 1.5 --- High Performance Liquid Chromatography --- p.6 / Chapter 1.6 --- Objectives of this work --- p.8 / Chapter Part 2. --- Chemical Analysis --- p.9 / Chapter 2.1 --- Sources of Epimedium samples --- p.9 / Chapter 2.2 --- Extraction --- p.9 / Chapter 2.2.1 --- Sample Pre-treatment --- p.9 / Chapter 2.2.2 --- Extraction Procedure --- p.9 / Chapter 2.2.3 --- Extraction Recovery --- p.11 / Chapter 2.3 --- Instrumental Analysis --- p.11 / Chapter 2.3.1 --- Chromatographic Operating Conditions --- p.12 / Chapter 2.3.2 --- Preparation of Calibration Graph --- p.12 / Chapter 2.3.3 --- Sample injection --- p.13 / Chapter 2.4 --- Results and Discussion --- p.13 / Chapter 2.4.1 --- Linearity of the Calibration Graph --- p.13 / Chapter 2.4.2 --- Development of Analysis Procedure --- p.15 / Chapter 2.4.2.1 --- Sample Pre-treatment --- p.15 / Chapter 2.4.2.2 --- Extractant --- p.15 / Chapter 2.4.2.3 --- Purification of Extract --- p.15 / Chapter 2.4.2.4 --- Extraction Time --- p.17 / Chapter 2.4.2.5 --- Solvent Gradient --- p.18 / Chapter 2.4.2.6 --- Detection --- p.19 / Chapter 2.4.3 --- Quantitative Analysis --- p.19 / Chapter 2.4.3.1 --- Extraction Recovery --- p.19 / Chapter 2.4.3.2 --- Icariin Content --- p.20 / Chapter 2.5 --- Conclusions --- p.22 / Chapter Part 3. 
--- Chemical Pattern Recognition --- p.24 / Chapter 3.1 --- Materials and Methods --- p.24 / Chapter 3.1.1 --- Chromatographic Results --- p.24 / Chapter 3.1.2 --- Patterns of Epimedium Samples --- p.24 / Chapter 3.1.3 --- Computer Program --- p.25 / Chapter 3.1.4 --- Variable Extraction --- p.25 / Chapter 3.1.4.1 --- Variable Extraction Parameters --- p.25 / Chapter 3.1.4.2 --- Variable Extraction Methods --- p.26 / Chapter 3.1.4.3 --- Transformation of Variables --- p.27 / Chapter 3.1.5 --- Variable Selection --- p.27 / Chapter 3.1.6 --- Predictive Power of the Recognition Model --- p.28 / Chapter 3.2 --- Results --- p.28 / Chapter 3.2.1 --- Accuracy of the Recognition Models --- p.28 / Chapter 3.2.2 --- Classification Functions --- p.29 / Chapter 3.2.3 --- Casewise Results of Recognition Model IV --- p.31 / Chapter 3.2.4 --- Plotting of the Best Two Canonical Discriminant Functions --- p.33 / Chapter 3.3 --- Discussion --- p.33 / Chapter 3.3.1 --- Meaning of Extracted Variables --- p.33 / Chapter 3.3.2 --- Limitations of Variable Extraction Methods --- p.34 / Chapter 3.3.3 --- Importance of the Variable Extraction Methods --- p.34 / Chapter 3.3.4 --- "Reasons for the Poor Performance in Recognition Models I, II and III" --- p.35 / Chapter 3.3.5 --- Selected Variables in Model IV --- p.35 / Chapter 3.3.6 --- Misclassified Samples --- p.36 / Chapter 3.3.7 --- Quality Assessment --- p.38 / Chapter 3.3.8 --- Comparison with Another Chemical Pattern Recognition Method for the Identification of Epimedium --- p.39 / Chapter 3.3.9 --- Potential Usage of the Pattern Recognition Method --- p.42 / Chapter 3.3.10 --- Advantage of the Pattern Recognition Method --- p.42 / Chapter 3.3.11 --- Disadvantage of Discriminant Analysis --- p.42 / Chapter 3.4 --- Conclusions --- p.43 / References --- p.44 / Appendix I Epimedium Species in China --- p.49 / Appendix II --- p.50 / Chapter II.1 --- Chromatograms of Samples of Epimedium sagittatum --- p.50 / Chapter II.2 --- Chromatograms of Samples of Epimedium pubescens --- p.57 / Chapter II.3 --- Chromatograms of Samples of Epimedium koreanum --- p.61 / Chapter II.4 --- Chromatograms of Samples of Epimedium leptorrhizum --- p.67 / Chapter II.5 --- Chromatograms of Samples of Epimedium wnshanese --- p.69 / Chapter II.6 --- Chromatograms of Samples of Epimedium brevicornum --- p.72 / Appendix III Log-transformed Values of Variables --- p.75
28

Discriminant feature pursuit: from statistical learning to informative learning.

January 2006 (has links)
Lin Dahua. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 233-250). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Problem We are Facing --- p.1 / Chapter 1.2 --- Generative vs. Discriminative Models --- p.2 / Chapter 1.3 --- Statistical Feature Extraction: Success and Challenge --- p.3 / Chapter 1.4 --- Overview of Our Works --- p.5 / Chapter 1.4.1 --- New Linear Discriminant Methods: Generalized LDA Formulation and Performance-Driven Sub space Learning --- p.5 / Chapter 1.4.2 --- Coupled Learning Models: Coupled Space Learning and Inter Modality Recognition --- p.6 / Chapter 1.4.3 --- Informative Learning Approaches: Conditional Infomax Learning and Information Chan- nel Model --- p.6 / Chapter 1.5 --- Organization of the Thesis --- p.8 / Chapter I --- History and Background --- p.10 / Chapter 2 --- Statistical Pattern Recognition --- p.11 / Chapter 2.1 --- Patterns and Classifiers --- p.11 / Chapter 2.2 --- Bayes Theory --- p.12 / Chapter 2.3 --- Statistical Modeling --- p.14 / Chapter 2.3.1 --- Maximum Likelihood Estimation --- p.14 / Chapter 2.3.2 --- Gaussian Model --- p.15 / Chapter 2.3.3 --- Expectation-Maximization --- p.17 / Chapter 2.3.4 --- Finite Mixture Model --- p.18 / Chapter 2.3.5 --- A Nonparametric Technique: Parzen Windows --- p.21 / Chapter 3 --- Statistical Learning Theory --- p.24 / Chapter 3.1 --- Formulation of Learning Model --- p.24 / Chapter 3.1.1 --- Learning: Functional Estimation Model --- p.24 / Chapter 3.1.2 --- Representative Learning Problems --- p.25 / Chapter 3.1.3 --- Empirical Risk Minimization --- p.26 / Chapter 3.2 --- Consistency and Convergence of Learning --- p.27 / Chapter 3.2.1 --- Concept of Consistency --- p.27 / Chapter 3.2.2 --- The Key Theorem of Learning Theory --- p.28 / Chapter 3.2.3 --- VC Entropy --- p.29 / Chapter 3.2.4 --- Bounds on Convergence --- p.30 / Chapter 3.2.5 --- VC Dimension --- p.35 / Chapter 4 --- History of Statistical Feature Extraction --- p.38 / Chapter 4.1 --- Linear Feature Extraction --- p.38 / Chapter 4.1.1 --- Principal Component Analysis (PCA) --- p.38 / Chapter 4.1.2 --- Linear Discriminant Analysis (LDA) --- p.41 / Chapter 4.1.3 --- Other Linear Feature Extraction Methods --- p.46 / Chapter 4.1.4 --- Comparison of Different Methods --- p.48 / Chapter 4.2 --- Enhanced Models --- p.49 / Chapter 4.2.1 --- Stochastic Discrimination and Random Subspace --- p.49 / Chapter 4.2.2 --- Hierarchical Feature Extraction --- p.51 / Chapter 4.2.3 --- Multilinear Analysis and Tensor-based Representation --- p.52 / Chapter 4.3 --- Nonlinear Feature Extraction --- p.54 / Chapter 4.3.1 --- Kernelization --- p.54 / Chapter 4.3.2 --- Dimension reduction by Manifold Embedding --- p.56 / Chapter 5 --- Related Works in Feature Extraction --- p.59 / Chapter 5.1 --- Dimension Reduction --- p.59 / Chapter 5.1.1 --- Feature Selection --- p.60 / Chapter 5.1.2 --- Feature Extraction --- p.60 / Chapter 5.2 --- Kernel Learning --- p.61 / Chapter 5.2.1 --- Basic Concepts of Kernel --- p.61 / Chapter 5.2.2 --- The Reproducing Kernel Map --- p.62 / Chapter 5.2.3 --- The Mercer Kernel Map --- p.64 / Chapter 5.2.4 --- The Empirical Kernel Map --- p.65 / Chapter 5.2.5 --- Kernel Trick and Kernelized Feature Extraction --- p.66 / Chapter 5.3 --- Subspace Analysis --- p.68 / Chapter 5.3.1 --- Basis and Subspace --- p.68 / Chapter 5.3.2 --- Orthogonal Projection --- p.69 / Chapter 5.3.3 --- 
Orthonormal Basis --- p.70 / Chapter 5.3.4 --- Subspace Decomposition --- p.70 / Chapter 5.4 --- Principal Component Analysis --- p.73 / Chapter 5.4.1 --- PCA Formulation --- p.73 / Chapter 5.4.2 --- Solution to PCA --- p.75 / Chapter 5.4.3 --- Energy Structure of PCA --- p.76 / Chapter 5.4.4 --- Probabilistic Principal Component Analysis --- p.78 / Chapter 5.4.5 --- Kernel Principal Component Analysis --- p.81 / Chapter 5.5 --- Independent Component Analysis --- p.83 / Chapter 5.5.1 --- ICA Formulation --- p.83 / Chapter 5.5.2 --- Measurement of Statistical Independence --- p.84 / Chapter 5.6 --- Linear Discriminant Analysis --- p.85 / Chapter 5.6.1 --- Fisher's Linear Discriminant Analysis --- p.85 / Chapter 5.6.2 --- Improved Algorithms for Small Sample Size Problem . --- p.89 / Chapter 5.6.3 --- Kernel Discriminant Analysis --- p.92 / Chapter II --- Improvement in Linear Discriminant Analysis --- p.100 / Chapter 6 --- Generalized LDA --- p.101 / Chapter 6.1 --- Regularized LDA --- p.101 / Chapter 6.1.1 --- Generalized LDA Implementation Procedure --- p.101 / Chapter 6.1.2 --- Optimal Nonsingular Approximation --- p.103 / Chapter 6.1.3 --- Regularized LDA algorithm --- p.104 / Chapter 6.2 --- A Statistical View: When is LDA optimal? --- p.105 / Chapter 6.2.1 --- Two-class Gaussian Case --- p.106 / Chapter 6.2.2 --- Multi-class Cases --- p.107 / Chapter 6.3 --- Generalized LDA Formulation --- p.108 / Chapter 6.3.1 --- Mathematical Preparation --- p.108 / Chapter 6.3.2 --- Generalized Formulation --- p.110 / Chapter 7 --- Dynamic Feedback Generalized LDA --- p.112 / Chapter 7.1 --- Basic Principle --- p.112 / Chapter 7.2 --- Dynamic Feedback Framework --- p.113 / Chapter 7.2.1 --- Initialization: K-Nearest Construction --- p.113 / Chapter 7.2.2 --- Dynamic Procedure --- p.115 / Chapter 7.3 --- Experiments --- p.115 / Chapter 7.3.1 --- Performance in Training Stage --- p.116 / Chapter 7.3.2 --- Performance on Testing set --- p.118 / Chapter 8 --- Performance-Driven Subspace Learning --- p.119 / Chapter 8.1 --- Motivation and Principle --- p.119 / Chapter 8.2 --- Performance-Based Criteria --- p.121 / Chapter 8.2.1 --- The Verification Problem and Generalized Average Margin --- p.122 / Chapter 8.2.2 --- Performance Driven Criteria based on Generalized Average Margin --- p.123 / Chapter 8.3 --- Optimal Subspace Pursuit --- p.125 / Chapter 8.3.1 --- Optimal threshold --- p.125 / Chapter 8.3.2 --- Optimal projection matrix --- p.125 / Chapter 8.3.3 --- Overall procedure --- p.129 / Chapter 8.3.4 --- Discussion of the Algorithm --- p.129 / Chapter 8.4 --- Optimal Classifier Fusion --- p.130 / Chapter 8.5 --- Experiments --- p.131 / Chapter 8.5.1 --- Performance Measurement --- p.131 / Chapter 8.5.2 --- Experiment Setting --- p.131 / Chapter 8.5.3 --- Experiment Results --- p.133 / Chapter 8.5.4 --- Discussion --- p.139 / Chapter III --- Coupled Learning of Feature Transforms --- p.140 / Chapter 9 --- Coupled Space Learning --- p.141 / Chapter 9.1 --- Introduction --- p.142 / Chapter 9.1.1 --- What is Image Style Transform --- p.142 / Chapter 9.1.2 --- Overview of our Framework --- p.143 / Chapter 9.2 --- Coupled Space Learning --- p.143 / Chapter 9.2.1 --- Framework of Coupled Modelling --- p.143 / Chapter 9.2.2 --- Correlative Component Analysis --- p.145 / Chapter 9.2.3 --- Coupled Bidirectional Transform --- p.148 / Chapter 9.2.4 --- Procedure of Coupled Space Learning --- p.151 / Chapter 9.3 --- Generalization to Mixture Model --- p.152 / Chapter 9.3.1 --- Coupled Gaussian Mixture Model --- 
p.152 / Chapter 9.3.2 --- Optimization by EM Algorithm --- p.152 / Chapter 9.4 --- Integrated Framework for Image Style Transform --- p.154 / Chapter 9.5 --- Experiments --- p.156 / Chapter 9.5.1 --- Face Super-resolution --- p.156 / Chapter 9.5.2 --- Portrait Style Transforms --- p.157 / Chapter 10 --- Inter-Modality Recognition --- p.162 / Chapter 10.1 --- Introduction to the Inter-Modality Recognition Problem . . . --- p.163 / Chapter 10.1.1 --- What is Inter-Modality Recognition --- p.163 / Chapter 10.1.2 --- Overview of Our Feature Extraction Framework . . . . --- p.163 / Chapter 10.2 --- Common Discriminant Feature Extraction --- p.165 / Chapter 10.2.1 --- Formulation of the Learning Problem --- p.165 / Chapter 10.2.2 --- Matrix-Form of the Objective --- p.168 / Chapter 10.2.3 --- Solving the Linear Transforms --- p.169 / Chapter 10.3 --- Kernelized Common Discriminant Feature Extraction --- p.170 / Chapter 10.4 --- Multi-Mode Framework --- p.172 / Chapter 10.4.1 --- Multi-Mode Formulation --- p.172 / Chapter 10.4.2 --- Optimization Scheme --- p.174 / Chapter 10.5 --- Experiments --- p.176 / Chapter 10.5.1 --- Experiment Settings --- p.176 / Chapter 10.5.2 --- Experiment Results --- p.177 / Chapter IV --- A New Perspective: Informative Learning --- p.180 / Chapter 11 --- Toward Information Theory --- p.181 / Chapter 11.1 --- Entropy and Mutual Information --- p.181 / Chapter 11.1.1 --- Entropy --- p.182 / Chapter 11.1.2 --- Relative Entropy (Kullback Leibler Divergence) --- p.184 / Chapter 11.2 --- Mutual Information --- p.184 / Chapter 11.2.1 --- Definition of Mutual Information --- p.184 / Chapter 11.2.2 --- Chain rules --- p.186 / Chapter 11.2.3 --- Information in Data Processing --- p.188 / Chapter 11.3 --- Differential Entropy --- p.189 / Chapter 11.3.1 --- Differential Entropy of Continuous Random Variable . --- p.189 / Chapter 11.3.2 --- Mutual Information of Continuous Random Variable . 
--- p.190 / Chapter 12 --- Conditional Infomax Learning --- p.191 / Chapter 12.1 --- An Overview --- p.192 / Chapter 12.2 --- Conditional Informative Feature Extraction --- p.193 / Chapter 12.2.1 --- Problem Formulation and Features --- p.193 / Chapter 12.2.2 --- The Information Maximization Principle --- p.194 / Chapter 12.2.3 --- The Information Decomposition and the Conditional Objective --- p.195 / Chapter 12.3 --- The Efficient Optimization --- p.197 / Chapter 12.3.1 --- Discrete Approximation Based on AEP --- p.197 / Chapter 12.3.2 --- Analysis of Terms and Their Derivatives --- p.198 / Chapter 12.3.3 --- Local Active Region Method --- p.200 / Chapter 12.4 --- Bayesian Feature Fusion with Sparse Prior --- p.201 / Chapter 12.5 --- The Integrated Framework for Feature Learning --- p.202 / Chapter 12.6 --- Experiments --- p.203 / Chapter 12.6.1 --- A Toy Problem --- p.203 / Chapter 12.6.2 --- Face Recognition --- p.204 / Chapter 13 --- Channel-based Maximum Effective Information --- p.209 / Chapter 13.1 --- Motivation and Overview --- p.209 / Chapter 13.2 --- Maximizing Effective Information --- p.211 / Chapter 13.2.1 --- Relation between Mutual Information and Classification --- p.211 / Chapter 13.2.2 --- Linear Projection and Metric --- p.212 / Chapter 13.2.3 --- Channel Model and Effective Information --- p.213 / Chapter 13.2.4 --- Parzen Window Approximation --- p.216 / Chapter 13.3 --- Parameter Optimization on Grassmann Manifold --- p.217 / Chapter 13.3.1 --- Grassmann Manifold --- p.217 / Chapter 13.3.2 --- Conjugate Gradient Optimization on Grassmann Manifold --- p.219 / Chapter 13.3.3 --- Computation of Gradient --- p.221 / Chapter 13.4 --- Experiments --- p.222 / Chapter 13.4.1 --- A Toy Problem --- p.222 / Chapter 13.4.2 --- Face Recognition --- p.223 / Chapter 14 --- Conclusion --- p.230
29

Analysis of ordinal square table with misclassified data.

January 2007 (has links)
Tam, Hiu Wah. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 41). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Estimation with Known Misclassification Probabilities --- p.5 / Chapter 2.1 --- Model --- p.5 / Chapter 2.2 --- Maximum Likelihood Estimation --- p.7 / Chapter 2.3 --- Examples --- p.9 / Chapter 2.3.1 --- Example 1: A Real data set analysis --- p.9 / Chapter 2.3.2 --- Example 2: An Artificial Data for 3x3 Table --- p.11 / Chapter 3 --- Estimation by Double Sampling --- p.12 / Chapter 3.1 --- Estimation --- p.13 / Chapter 3.2 --- Example --- p.14 / Chapter 3.2.1 --- Example 3: An Artificial Data Example for 3x3 Table --- p.14 / Chapter 4 --- Simulation --- p.15 / Chapter 5 --- Conclusion --- p.17 / Table --- p.19 / Appendix --- p.27 / Bibliography --- p.41
30

Regularized Discriminant Analysis: A Large Dimensional Study

Yang, Xiaoke 28 April 2018 (has links)
In this thesis, we study the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime in which the data dimension and the training size grow proportionally. This double asymptotic regime allows the application of fundamental results from random matrix theory. Under this regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show good agreement with our theoretical findings. We also construct a general consistent estimator that approximates the true classification error when the underlying statistics are unknown. We benchmark our proposed consistent estimator against the classical estimator on synthetic data; the observations demonstrate that the general estimator outperforms the classical one in terms of mean squared error (MSE).
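As a finite-sample illustration of the RDA family analyzed here, the sketch below implements a Friedman-style regularized discriminant rule: each class covariance is shrunk toward the pooled within-class covariance and then toward a spherical target, interpolating between QDA-like and ridge-like behavior. The (lam, gamma) parametrization and the two-class synthetic data are assumptions of this sketch, not the thesis's asymptotic analysis.

```python
# Sketch of a regularized discriminant classifier (Friedman-style RDA):
# covariance shrinkage toward the pooled estimate, then toward a sphere,
# followed by Gaussian discriminant scoring. Synthetic data only.
import numpy as np

def rda_fit(X, y, lam=0.5, gamma=0.1):
    classes = np.unique(y)
    p = X.shape[1]
    covs = {c: np.cov(X[y == c].T, bias=False) for c in classes}
    # pooled within-class covariance, weighted by class degrees of freedom
    pooled = sum((np.sum(y == c) - 1) * covs[c] for c in classes) / (len(X) - len(classes))
    params = {}
    for c in classes:
        Xc = X[y == c]
        S = (1 - lam) * covs[c] + lam * pooled                        # toward pooled
        S = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)   # toward sphere
        params[c] = (Xc.mean(axis=0), np.linalg.inv(S),
                     np.linalg.slogdet(S)[1], np.log(len(Xc) / len(X)))
    return params

def rda_predict(params, X):
    scores = []
    for mu, Sinv, logdet, logprior in params.values():
        d = X - mu
        scores.append(-0.5 * np.einsum("ij,jk,ik->i", d, Sinv, d)
                      - 0.5 * logdet + logprior)
    return np.array(list(params)).take(np.argmax(scores, axis=0))

rng = np.random.default_rng(3)
X0 = rng.multivariate_normal(np.zeros(5), np.eye(5), 200)
X1 = rng.multivariate_normal(np.full(5, 1.0), 1.5 * np.eye(5), 200)
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 200)
model = rda_fit(X, y, lam=0.5, gamma=0.1)
print("training error:", np.mean(rda_predict(model, X) != y))
```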
