  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Modelling Distance Functions Induced by Face Recognition Algorithms

Chaudhari, Soumee 09 November 2004 (has links)
Face recognition has in the past few years become a very active area of research in computer vision, image processing, and cognitive psychology, spawning algorithms of varying complexity. Principal component analysis (PCA) is a popular face recognition algorithm and has often been used to benchmark other face recognition algorithms in identification and verification scenarios. In this thesis, however, we analyze face recognition algorithms at a deeper level. The objective is to model the distances output by any face recognition algorithm as a function of the input images. We achieve this by creating an affine eigen space from the PCA space such that it approximates the results of the face recognition algorithm under consideration as closely as possible. Holistic template-matching algorithms, namely the Linear Discriminant Analysis (LDA) algorithm and the Bayesian Intrapersonal/Extrapersonal Classifier (BIC), as well as the local feature-based Elastic Bunch Graph Matching (EBGM) algorithm and a commercial face recognition algorithm, are selected for our experiments. We experiment on two data sets: the FERET data set, whose images of subjects vary in both time and expression, and the Notre Dame data set, whose images vary in time. We train our affine approximation algorithm on 25 subjects and test with 300 subjects from the FERET data set and 415 subjects from the Notre Dame data set. We also analyze how the distance metric used by the face recognition algorithm affects the accuracy of the approximation. We study the quality of the approximation in the context of recognition for the identification and verification scenarios, characterized by cumulative match score (CMC) curves and receiver operating characteristic (ROC) curves, respectively. Our studies indicate that both the holistic template-matching algorithms and the feature-based algorithms can be approximated well, and that the affine approximation training generalizes across covariates. For the data with time variation, the rank order of approximation performance is BIC, LDA, EBGM, then commercial; for the data with expression variation, it is LDA, BIC, commercial, then EBGM. Experiments approximating PCA with distance measures other than Euclidean also performed very well: PCA+Euclidean is approximated best, followed by PCA+MahL1, PCA+MahCosine, and PCA+Covariance.
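A minimal sketch of the core idea above, under assumptions not taken from the thesis (synthetic PCA coordinates, a random stand-in for the black-box recognizer, and an L-BFGS fit): learn a linear map A on PCA coordinates so that Euclidean distances in the mapped space reproduce the distance matrix produced by an arbitrary face recognition algorithm. A translation term cancels in pairwise distances, so only A is fitted.

```python
# Hedged sketch (synthetic data, illustrative names): fit a linear map A on
# PCA coordinates X so that Euclidean distances in the mapped space reproduce
# the distance matrix D output by a black-box face recognition algorithm.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k, m = 40, 10, 10                      # images, PCA dimensions, mapped dimensions
X = rng.normal(size=(n, k))               # stand-in for PCA projections of n face images
A_true = rng.normal(size=(m, k))          # hidden map playing the role of the black box
Y_true = X @ A_true.T
D = np.linalg.norm(Y_true[:, None] - Y_true[None, :], axis=-1)   # target distances

pairs = np.triu_indices(n, 1)             # use each pair of images once

def loss(a_flat):
    Y = X @ a_flat.reshape(m, k).T
    d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    return np.mean((d[pairs] - D[pairs]) ** 2)

result = minimize(loss, rng.normal(scale=0.1, size=m * k), method="L-BFGS-B")
print("mean squared distance error:", loss(result.x))
```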
22

Gait-Based Recognition at a Distance: Performance, Covariate Impact and Solutions

Liu, Zongyi 10 November 2004 (has links)
It has long been noticed that humans can identify others from a distance based on their biological movement. However, it is only recently that computer vision based gait biometrics has received much attention. In this dissertation, we perform a thorough study of gait recognition from a computer vision perspective. We first present a parameterless baseline recognition algorithm, which bases similarity on spatio-temporal correlation that emphasizes gait dynamics as well as gait shape. Our experiments are performed with three popular gait databases: the USF/NIST HumanID Gait Challenge outdoor database with 122 subjects, the UMD outdoor database with 55 subjects, and the CMU Mobo indoor database with 25 subjects. Despite its simplicity, the baseline algorithm shows strong recognition power. On the other hand, the outcomes suggest that changes in surface and time have a strong impact on recognition, with significant drops in performance. To gain insight into the effect of image segmentation on recognition -- a possible cause of the performance degradation -- we propose a silhouette reconstruction method based on a Population Hidden Markov Model (pHMM), which models gait over one cycle, coupled with an Eigen-stance model based on Principal Component Analysis (PCA) of the silhouette shapes. Both models are built from a set of manually created silhouettes of 71 subjects. Given a sequence of machine-segmented silhouettes, each frame is matched to a stance by the pHMM using the Viterbi algorithm, and is then projected into and reconstructed by the Eigen-stance model. We demonstrate that the system dramatically improves silhouette quality. Nonetheless, it does little to help recognition, indicating that segmentation is not the key factor behind the covariate impacts. To improve performance, we look into other aspects. Toward this end, we propose three recognition algorithms: (i) an averaged-silhouette based algorithm that deemphasizes gait dynamics, which substantially reduces computation time while achieving recognition power similar to the baseline algorithm; (ii) an algorithm that normalizes gait dynamics using the pHMM and then uses Euclidean distances between corresponding selected stances, which improves recognition across surface and time; and (iii) an algorithm that also normalizes gait dynamics using the pHMM but, instead of Euclidean distances, uses distances in a shape space based on Linear Discriminant Analysis (LDA) together with measures that are invariant to morphological deformation of the silhouettes. This third algorithm statistically improves recognition over all covariates. Compared with the best algorithm reported to date, it improves the top-rank identification rate (gallery size: 122 subjects) for comparisons across the hard covariates -- briefcase, surface type, and time -- by 22%, 14%, and 12%, respectively. In addition to better gait algorithms, we also study multi-biometric combination to improve outdoor biometric performance, specifically fusion with face data. We choose outdoor face recognition, a "known" hard problem in face biometrics, and test four combination schemes: score sum, Bayesian rule, confidence score sum, and rank sum. We find that the recognition power after combination is significantly stronger even though the individual biometrics are weak, suggesting another effective approach to improving biometric recognition.

The fundamental contributions of this work include (i) establishing the "hard" problems for gait recognition, involving comparisons across time, surface, and briefcase-carrying conditions; (ii) revealing that their impact cannot be explained by silhouette segmentation; (iii) demonstrating that gait shape is more important than gait dynamics for recognition; and (iv) proposing a novel gait algorithm that outperforms other gait algorithms to date.
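A minimal sketch, on synthetic silhouettes, of the averaged-silhouette idea mentioned above: collapse each sequence to its per-pixel mean over a gait cycle and rank gallery sequences by Euclidean distance to the probe. Frame size, cycle length, and data are placeholders, not the HumanID Gait Challenge setup.

```python
# Hedged sketch of an averaged-silhouette gait comparison (synthetic data).
import numpy as np

def averaged_silhouette(frames):
    """frames: (T, H, W) binary silhouettes covering one gait cycle."""
    return frames.astype(float).mean(axis=0)

def gait_distance(seq_a, seq_b):
    # Similarity that deemphasizes dynamics: compare only the averaged shapes.
    return np.linalg.norm(averaged_silhouette(seq_a) - averaged_silhouette(seq_b))

rng = np.random.default_rng(1)
probe = rng.integers(0, 2, size=(30, 64, 44))                 # 30 frames of 64x44 pixels
gallery = [rng.integers(0, 2, size=(30, 64, 44)) for _ in range(5)]
scores = [gait_distance(probe, g) for g in gallery]
print("closest gallery sequence:", int(np.argmin(scores)))
```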
23

An Indepth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential for benchmarking progress. In the past, face recognition algorithms based on Principal Component Analysis (PCA) have often been used as a baseline. The objective of this thesis is to develop a strategy for estimating the best affine transformation which, when applied to the eigen space of the PCA face recognition algorithm, approximates the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of distances between a given set of faces generated by any given face recognition algorithm, and thus helps in assessing how close a face recognition algorithm is to the PCA-based one. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian intrapersonal/extrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were reached by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in the first, both the face recognition algorithm and the affine approximation algorithm were trained on the same data set; in the second, different data sets were used to train the two algorithms. Gross error measures such as the average RMS error and the Stress-1 error were used to compare the raw similarity scores directly, and the histogram of the differences between the similarity matrices showed that the error spread of the affine approximation is small. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and by the affine approximation strategy is not statistically significant. For the first training scenario the differences were insignificant from rank 1; for the second they became insignificant only at higher ranks, a gap that can be attributed to the different training sets used in the second scenario.
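A small sketch of the two gross error measures named above, computed between a reference distance matrix and a perturbed stand-in for its affine approximation (both synthetic); Stress-1 is taken here in Kruskal's usual normalized form, which may differ in detail from the thesis' definition.

```python
# Hedged sketch: RMS error and Stress-1 between two distance matrices.
import numpy as np

def rms_error(D, D_hat):
    iu = np.triu_indices_from(D, 1)                 # each pair counted once
    return np.sqrt(np.mean((D[iu] - D_hat[iu]) ** 2))

def stress_1(D, D_hat):
    # Kruskal's Stress-1: residual between the distance sets, normalized.
    iu = np.triu_indices_from(D, 1)
    return np.sqrt(np.sum((D[iu] - D_hat[iu]) ** 2) / np.sum(D[iu] ** 2))

rng = np.random.default_rng(2)
P = rng.normal(size=(50, 5))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)     # reference distances
noise = rng.normal(scale=0.05, size=D.shape)
D_hat = D + (noise + noise.T) / 2                        # stand-in approximation
print("RMS:", rms_error(D, D_hat), "Stress-1:", stress_1(D, D_hat))
```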
24

On Computing Multiple Solutions of Nonlinear PDEs Without Variational Structure

Wang, Changchun May 2012 (has links)
Variational structure plays an important role in critical point theory and methods. However, many differential problems are non-variational, i.e., they are not the Euler-Lagrange equations of any variational functional, which makes the traditional critical point approach inapplicable. In this thesis, two types of non-variational problems, a nonlinear eigen solution problem and a non-variational semi-linear elliptic system, are studied. By considering nonlinear eigen problems on their variational energy profiles and using the implicit function theorem, an implicit minimax method is developed for numerically finding eigen solutions of focusing nonlinear Schrödinger equations subject to zero Dirichlet/Neumann boundary conditions in the order of their eigenvalues. Its mathematical justification and some related properties, such as solution intensity preservation and bifurcation identification, are established, showing significant advantages of the new method over the usual ones in the literature. A new orthogonal subspace minimization method is also developed for finding multiple (eigen) solutions to defocusing nonlinear Schrödinger equations with certain symmetries, and numerical results are presented to illustrate these methods. Finally, a new joint local min orthogonal method is developed for finding multiple solutions of a non-variational semi-linear elliptic system; its mathematical justification and convergence are discussed, and a modified non-variational Gross-Pitaevskii system is used in numerical experiments to test the method.
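The following is only a rough illustration of what a nonlinear eigen solution is, not the implicit minimax or orthogonal subspace minimization methods of the thesis: a naive damped self-consistent iteration for the ground-state eigenpair of the defocusing-type problem -u'' + u^3 = λu on (0, 1) with zero Dirichlet boundary conditions at a prescribed intensity, discretized by finite differences.

```python
# Rough, hedged illustration only (not the thesis' method): damped
# self-consistent iteration for the smallest nonlinear eigenpair of
# -u'' + u^3 = lambda * u on (0, 1), zero Dirichlet BCs, fixed intensity.
import numpy as np

N, intensity = 200, 4.0
h = 1.0 / (N + 1)
L = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2                 # finite-difference -d^2/dx^2

u = np.ones(N)
u *= np.sqrt(intensity / (h * np.sum(u**2)))               # enforce h * sum(u^2) = intensity
lam = None
for _ in range(200):
    H = L + np.diag(u**2)                                  # linearize the cubic term at u
    vals, vecs = np.linalg.eigh(H)
    lam, v = float(vals[0]), vecs[:, 0]                    # smallest linearized eigenpair
    v *= np.sign(v @ u) * np.sqrt(intensity / (h * np.sum(v**2)))
    if np.sqrt(h) * np.linalg.norm(v - u) < 1e-10:
        u = v
        break
    u = 0.5 * u + 0.5 * v                                  # damping for stability
    u *= np.sqrt(intensity / (h * np.sum(u**2)))
print("nonlinear eigenvalue estimate:", lam)
```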
25

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference

Edelman, Alan, Rao, N. Raj 01 1900 (has links)
Random matrix theory is now a large subject with applications in many disciplines of science, engineering, and finance. This talk is a survey oriented specifically toward the needs and interests of a computationally inclined audience. We include the important mathematics (free probability) that permits the characterization of a large class of random matrices, and we discuss how computational software is transforming this theory into practice by highlighting its use in the context of a stochastic eigen-inference application. / Singapore-MIT Alliance (SMA)
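As a hedged, self-contained taste of the "theory into practice" theme above (dimensions are arbitrary choices, not from the talk): the empirical eigenvalue density of a white-noise sample covariance matrix compared against the Marchenko-Pastur density predicted by random matrix theory.

```python
# Hedged sketch: sample covariance eigenvalues vs. the Marchenko-Pastur law.
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 500                          # samples, dimensions; aspect ratio c = p/n
X = rng.normal(size=(n, p))
S = X.T @ X / n                           # sample covariance (true covariance = I)
eigs = np.linalg.eigvalsh(S)

c = p / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2     # support of the MP density

def mp_density(x):
    x = np.asarray(x, float)
    out = np.zeros_like(x)
    inside = (x > lo) & (x < hi)
    out[inside] = np.sqrt((hi - x[inside]) * (x[inside] - lo)) / (2 * np.pi * c * x[inside])
    return out

hist, edges = np.histogram(eigs, bins=40, range=(lo, hi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |empirical - MP| over bins:", float(np.max(np.abs(hist - mp_density(centers)))))
```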
26

[en] NUMERICAL SOLUTIONS FOR EIGENPROBLEMS ASSOCIATED TO SYMMETRIC OPERATORS / [pt] SOLUÇÃO NUMÉRICA DE AUTO-PROBLEMAS ASSOCIADOS A OPERADORES SIMÉTRICOS

PAULO ROBERTO GARDEL KURKA 29 August 2012 (has links)
A vector iterative technique is developed for the extraction of eigenpairs related to the solution of finite element problems. The algorithm uses the inverse iteration and conjugate gradient methods to obtain the solution vector associated with the smallest eigenvalue. Eigensolutions are calculated sequentially by modifying the coefficient matrix of the problem's equilibrium equations through a deflation technique. Extensive use of this technique introduces multiple eigenvalues into the coefficient matrix, requiring a procedure that combines the two methods. A study is also performed to find appropriate starting vectors for the methods. The algorithm has been implemented, and results for some example problems are given to illustrate its performance.
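In the spirit of the abstract above, though not the thesis' exact scheme or finite-element matrices: a small sketch of inverse iteration with a conjugate-gradient inner solve and deflation of previously found eigenvectors, extracting the smallest eigenpairs of a symmetric positive-definite matrix one at a time.

```python
# Hedged sketch: inverse iteration + CG inner solves + deflation (dense toy matrix).
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(4)
n = 100
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T          # SPD stand-in for a stiffness matrix
A_op = LinearOperator((n, n), matvec=lambda v: A @ v)

def smallest_eigenpairs(k=3, outer_iters=200):
    found = []                                            # (eigenvalue, eigenvector) pairs
    for _ in range(k):
        x = rng.normal(size=n)
        for _ in range(outer_iters):
            for _, v in found:                            # deflation: remove known directions
                x -= (v @ x) * v
            x, _ = cg(A_op, x, maxiter=500)               # inverse-iteration step: solve A y = x
            x /= np.linalg.norm(x)
        found.append((float(x @ A @ x), x))               # Rayleigh quotient and eigenvector
    return found

for lam, _ in smallest_eigenpairs():
    print("eigenvalue:", round(lam, 4))
```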
27

Building and generating facial textures using Eigen faces

Krogh, Robert January 2016 (has links)
With the evolution of the game industry and other virtual environments, the demands on what ships with an application are higher than ever before. This leads many companies to procedurally generate content in order to save storage space and obtain a wider variety of content. It has become essential to build immersion into such applications, and some companies have even gone as far as letting players recreate themselves as the hero or heroine of the game. Even so, many AAA companies refrain from using face segmentation software, since it lets end users add game content, which increases the risk of offensive content that violates company standards and policy entering their application. By combining procedural generation with face segmentation and a Principal Component Analysis (PCA) based texturization model, we allow for controlled yet functional face texturization in a run-time virtual environment. In this project we use MatLab to create a controlled Eigen space, infuse it into an application built in Unity 3D using UMA, and let small recreation vectors, spanning a few kilobytes at most, create textures at run time. In doing so, we can project faces onto the Eigen space and obtain fully functioning, texturized characters that can use ready-made animations and controllers of the developer's choice. These Eigen spaces may cost more storage space and loading time up to a limit, but can in turn generate a seemingly endless variation of textural content dynamically. To see what potential users prioritize in applications like these, we conducted a survey in which respondents saw variations of this technique and expressed their views on the attributes expected from a "good" (from their point of view) application. In the end we have a UMA-ready set of scripts and a one-time-use system for creating the Eigen spaces the applications use. We worked closely with Högström's Selfie to Avatar face segmentation software and proved the concept in Unity 3D applications.
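A stripped-down sketch of the Eigen-space mechanism described above (random arrays stand in for face textures; the MatLab/Unity/UMA pipeline is not reproduced): build a PCA basis from training textures, store each new face as a small recreation vector, and reconstruct the full texture from it at run time.

```python
# Hedged sketch: PCA ("Eigen face") texture compression and reconstruction.
import numpy as np

rng = np.random.default_rng(5)
h, w, n_train, n_comp = 64, 64, 120, 30
train = rng.random((n_train, h * w))            # flattened training face textures

mean_face = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean_face, full_matrices=False)
eigenfaces = Vt[:n_comp]                        # basis spanning the Eigen space

def encode(texture):                            # texture -> small recreation vector
    return eigenfaces @ (texture.ravel() - mean_face)

def decode(coeffs):                             # recreation vector -> full texture
    return (mean_face + eigenfaces.T @ coeffs).reshape(h, w)

new_face = rng.random(h * w)
coeffs = encode(new_face)
print("stored floats per face:", coeffs.size)   # e.g. 30 instead of 4096 pixels
print("reconstruction error:", float(np.linalg.norm(decode(coeffs).ravel() - new_face)))
```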
28

A Multi-Variate Analysis of SMTP Paths and Relays to Restrict Spam and Phishing Attacks in Emails

Palla, Srikanth 12 1900 (has links)
The classifier discussed in this thesis considers the path traversed by an email (instead of its content) and the reputation of its relays, features inaccessible to spammers. Groups of spammers and the individual behaviors of spammers in a given domain were analyzed to yield association patterns, which were then used to identify similar spammers. Using these analysis results, unsolicited and phishing emails were successfully isolated from legitimate emails. Spammers and phishers are further categorized into serial spammers/phishers, recent spammers/phishers, prospective spammers/phishers, and suspects. Legitimate emails and trusted domains are classified into socially close (family members, friends), socially distant (strangers, etc.), and opt-outs (resolved false positives and false negatives). Overall this classifier produced far fewer false positives than current filters like SpamAssassin, achieving 98.65% precision, comparable to the precision achieved by SPF and DNSRBL blacklists.
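A hedged sketch of the content-independent signal this classifier relies on: recovering the relay path an email traversed from its Received headers. The message and the regular expression are illustrative only; real-world Received headers are far less uniform than this.

```python
# Hedged sketch: extract the relay chain from an email's Received headers.
import email
import re

raw = (
    b"Received: from mx2.example.org (mx2.example.org [192.0.2.20])\r\n"
    b"\tby mail.dest.example; Mon, 1 Jan 2024 10:00:00 +0000\r\n"
    b"Received: from relay1.example.net (relay1.example.net [198.51.100.7])\r\n"
    b"\tby mx2.example.org; Mon, 1 Jan 2024 09:59:58 +0000\r\n"
    b"From: someone@example.net\r\n"
    b"Subject: hello\r\n"
    b"\r\n"
    b"body\r\n"
)

msg = email.message_from_bytes(raw)
hops = []
for header in msg.get_all("Received", []):
    # Naive pattern: "from <helo> (<reverse-dns> [<ip>])"
    m = re.search(r"from\s+(\S+)\s+\(([^)\s]+)\s*\[([0-9.]+)\]\)", header)
    if m:
        hops.append({"helo": m.group(1), "rdns": m.group(2), "ip": m.group(3)})

# Headers are most-recent-first; reverse so the list reads origin -> destination.
print(list(reversed(hops)))
```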
29

Automated Detection of Hematological Abnormalities through Classification of Flow Cytometric Data Patterns

Rossman, Mark A 24 March 2011 (has links)
Flow Cytometry analyzers have become trusted companions due to their ability to perform fast and accurate analyses of human blood. The aim of these analyses is to determine the possible existence of abnormalities in the blood that have been correlated with serious disease states, such as infectious mononucleosis, leukemia, and various cancers. Though these analyzers provide important feedback, it is always desired to improve the accuracy of the results. This is evidenced by the occurrences of misclassifications reported by some users of these devices. It is advantageous to provide a pattern interpretation framework that is able to provide better classification ability than is currently available. Toward this end, the purpose of this dissertation was to establish a feature extraction and pattern classification framework capable of providing improved accuracy for detecting specific hematological abnormalities in flow cytometric blood data. This involved extracting a unique and powerful set of shift-invariant statistical features from the multi-dimensional flow cytometry data and then using these features as inputs to a pattern classification engine composed of an artificial neural network (ANN). The contribution of this method consisted of developing a descriptor matrix that can be used to reliably assess if a donor’s blood pattern exhibits a clinically abnormal level of variant lymphocytes, which are blood cells that are potentially indicative of disorders such as leukemia and infectious mononucleosis. This study showed that the set of shift-and-rotation-invariant statistical features extracted from the eigensystem of the flow cytometric data pattern performs better than other commonly-used features in this type of disease detection, exhibiting an accuracy of 80.7%, a sensitivity of 72.3%, and a specificity of 89.2%. This performance represents a major improvement for this type of hematological classifier, which has historically been plagued by poor performance, with accuracies as low as 60% in some cases.
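A hedged sketch of the general idea, not the dissertation's descriptor matrix or network: shift-invariant features taken from the eigensystem of each sample's channel covariance (centering removes per-channel shifts), fed to a small artificial neural network. Events, class structure, and the abnormality model are entirely synthetic.

```python
# Hedged sketch: eigensystem features of flow cytometry events + a small ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)

def eigen_features(events):
    """events: (n_cells, n_channels) measurements for one donor sample."""
    cov = np.cov(events, rowvar=False)            # centering makes this shift-invariant
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def synthetic_sample(abnormal):
    # Toy model: an "abnormal" population inflates the spread of one channel.
    scale = np.array([1.0, 1.0, 3.0 if abnormal else 1.0, 1.0, 1.0])
    return rng.normal(size=(5000, 5)) * scale + rng.normal(size=5)   # random shift

labels = [0] * 60 + [1] * 60
X = np.array([eigen_features(synthetic_sample(a)) for a in labels])
y = np.array(labels)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```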
30

Iris Biometric Identification Using Artificial Neural Networks

Haskett, Kevin Joseph 01 August 2018 (has links)
A biometric method is a more secure way of personal identification than passwords. This thesis examines the iris as a personal identifier with the use of neural networks as the classifier. A comparison of different feature extraction methods that include the Fourier transform, discrete cosine transform, the eigen analysis method, and the wavelet transform, is performed. The robustness of each method, with respect to distortion and noise, is also studied.
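A compact sketch of the comparison this thesis describes, run on synthetic stand-ins for iris images: four feature extractors (Fourier, DCT, an eigen/covariance analysis, and a Haar wavelet approximation via PyWavelets) feeding the same small neural network classifier. Image sizes, subject counts, and the noise model are arbitrary assumptions, not the thesis' datasets.

```python
# Hedged sketch: comparing feature extraction methods with one MLP classifier.
import numpy as np
import pywt
from scipy.fft import dct, fft2
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

def features(img, method):
    if method == "fourier":
        return np.abs(fft2(img))[:8, :8].ravel()
    if method == "dct":
        return dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")[:8, :8].ravel()
    if method == "eigen":
        return np.linalg.eigvalsh(np.cov(img))[-8:]            # top covariance eigenvalues
    if method == "wavelet":
        return pywt.dwt2(img, "haar")[0][:8, :8].ravel()       # Haar approximation band
    raise ValueError(method)

# Synthetic "irises": each subject is a fixed template plus per-capture noise.
templates = rng.random((10, 32, 32))
X_img = np.array([t + 0.05 * rng.normal(size=(32, 32)) for t in templates for _ in range(8)])
y = np.repeat(np.arange(10), 8)

for method in ["fourier", "dct", "eigen", "wavelet"]:
    X = np.array([features(img, method) for img in X_img])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
    clf.fit(X[::2], y[::2])
    print(method, "accuracy:", clf.score(X[1::2], y[1::2]))
```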
