
Modelling Distance Functions Induced by Face Recognition Algorithms

Chaudhari, Soumee 09 November 2004 (has links)
Face recognition has in the past few years become a very active area of research in the fields of computer vision, image processing, and cognitive psychology, spawning algorithms of varying complexity. Principal component analysis (PCA) is a popular face recognition algorithm and has often been used to benchmark other face recognition algorithms in identification and verification scenarios. In this thesis, however, we analyze face recognition algorithms at a deeper level. The objective is to model the distances output by any face recognition algorithm as a function of the input images. We achieve this by creating an affine eigenspace from the PCA space such that it approximates the results of the face recognition algorithm under consideration as closely as possible. Holistic template-matching algorithms, namely Linear Discriminant Analysis (LDA) and the Bayesian Intrapersonal/Extrapersonal Classifier (BIC), as well as a local feature-based algorithm, Elastic Bunch Graph Matching (EBGM), and a commercial face recognition algorithm are selected for our experiments. We experiment on two data sets: the FERET data set, whose subject images vary in both time and expression, and the Notre Dame data set, whose subject images vary in time. We train our affine approximation algorithm on 25 subjects and test with 300 subjects from the FERET data set and 415 subjects from the Notre Dame data set. We also analyze how the distance metric used by the face recognition algorithm affects the accuracy of the approximation. We study the quality of the approximation in the context of recognition for the identification and verification scenarios, characterized by cumulative match score (CMC) curves and receiver operating characteristic (ROC) curves, respectively. 
Our studies indicate that both the holistic template-matching algorithms and the feature-based algorithm can be well approximated. We also find that affine approximation training generalizes across covariates. For the data with time variation, the rank order of approximation performance is BIC, LDA, EBGM, and commercial. For the data with expression variation, the rank order is LDA, BIC, commercial, and EBGM. Experiments approximating PCA with distance measures other than Euclidean also performed very well: PCA+Euclidean distance is best approximated, followed by PCA+MahL1, PCA+MahCosine, and PCA+Covariance.
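The affine-approximation idea above can be sketched in code. Assuming the target algorithm's distances behave like a metric of the form ||A(x_i - x_j)|| on PCA features, the matrix M = AᵀA is linear in the squared distances and can be fit by least squares over all training pairs, then factored to recover A. The function names below are illustrative stand-ins, not the thesis's actual implementation:

```python
import numpy as np

def fit_affine_metric(X, D):
    """Fit a PSD matrix M so that (xi - xj)^T M (xi - xj) ~ D[i, j]**2 for
    all training pairs; the transform A with A^T A = M then reproduces the
    target algorithm's distances inside the (PCA-like) feature space X."""
    n, d = X.shape
    rows, targets = [], []
    for i in range(n):
        for j in range(i + 1, n):
            delta = X[i] - X[j]
            rows.append(np.outer(delta, delta).ravel())  # linear in M's entries
            targets.append(D[i, j] ** 2)
    m, _, _, _ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    M = m.reshape(d, d)
    M = (M + M.T) / 2                       # keep the symmetric part
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0, None)                 # project onto the PSD cone
    A = (V * np.sqrt(w)) @ V.T              # symmetric square root: A^T A = M
    return A

def approx_distance(A, x, y):
    """Distance predicted by the fitted affine approximation."""
    return np.linalg.norm(A @ (x - y))
```

With enough training pairs relative to the feature dimension, the fit recovers such a metric exactly; real face recognition distances are only approximately of this form, which is exactly what the thesis quantifies.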

Gait-Based Recognition at a Distance: Performance, Covariate Impact and Solutions

Liu, Zongyi 10 November 2004 (has links)
It has long been noticed that humans can identify others from a distance based on their biological movement. Only recently, however, has computer vision based gait biometrics received much attention. In this dissertation, we perform a thorough study of gait recognition from a computer vision perspective. We first present a parameterless baseline recognition algorithm that bases similarity on spatio-temporal correlation, emphasizing gait dynamics as well as gait shape. Our experiments are performed with three popular gait databases: the USF/NIST HumanID Gait Challenge outdoor database with 122 subjects, the UMD outdoor database with 55 subjects, and the CMU MoBo indoor database with 25 subjects. Despite its simplicity, the baseline algorithm shows strong recognition power. On the other hand, the outcome suggests that changes in surface and time have a strong impact on recognition, with a significant drop in performance. To gain insight into the effect of image segmentation on recognition, a possible cause of the performance degradation, we propose a silhouette reconstruction method based on a Population Hidden Markov Model (pHMM), which models gait over one cycle, coupled with an eigen-stance model based on Principal Component Analysis (PCA) of the silhouette shapes. Both models are built from a set of manually created silhouettes of 71 subjects. Given a sequence of machine-segmented silhouettes, each frame is matched to a stance by the pHMM using the Viterbi algorithm, and then projected into and reconstructed by the eigen-stance model. We demonstrate that the system dramatically improves silhouette quality. Nonetheless, it helps recognition little, indicating that segmentation is not the key factor behind the covariate impacts. To improve performance, we look into other aspects. 
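The eigen-stance reconstruction step can be illustrated with a small PCA sketch; this is a generic stand-in for the idea (mean shape plus top-k eigen-stances, project in and out), not the dissertation's pHMM pipeline, and all names are hypothetical:

```python
import numpy as np

def fit_eigen_stances(silhouettes, k=3):
    """PCA over vectorized training silhouettes (rows of `silhouettes`):
    returns the mean shape and the top-k eigen-stances."""
    X = np.asarray(silhouettes, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(frame, mean, components):
    """Project a (possibly noisy, machine-segmented) vectorized silhouette
    into the eigen-stance space and back; components of the shape that lie
    outside the learned space are discarded."""
    coeffs = components @ (frame - mean)
    return mean + components.T @ coeffs
```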
Toward this end, we propose three recognition algorithms: (i) an averaged-silhouette based algorithm that deemphasizes gait dynamics, which substantially reduces computation time while achieving recognition power similar to the baseline algorithm; (ii) an algorithm that normalizes gait dynamics using the pHMM and then uses Euclidean distance between corresponding selected stances, which improves recognition over surface and time; and (iii) an algorithm that also normalizes gait dynamics using the pHMM but, instead of Euclidean distances, considers distances in a shape space based on Linear Discriminant Analysis (LDA) and measures that are invariant to morphological deformation of silhouettes. This algorithm statistically improves recognition over all covariates. Compared with the best algorithm reported to date, it improves the top-rank identification rate (gallery size: 122 subjects) for comparisons across the hard covariates of briefcase, surface type, and time by 22%, 14%, and 12%, respectively. In addition to better gait algorithms, we also study multi-biometric combination to improve outdoor biometric performance, specifically fusion with face data. We choose outdoor face recognition, a known hard problem in face biometrics, and test four combination schemes: score sum, Bayesian rule, confidence score sum, and rank sum. We find that the recognition power after combination is significantly stronger even though the individual biometrics are weak, suggesting another effective approach to improving biometric recognition. 
The fundamental contributions of this work include (i) establishing the "hard" problems for gait recognition involving comparison across time, surface, and briefcase carrying conditions, (ii) revealing that their impacts cannot be explained by silhouette segmentation, (iii) demonstrating that gait shape is more important than gait dynamics in recognition, and (iv) proposing a novel gait algorithm that outperforms other gait algorithms to date.
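Algorithm (i), the averaged-silhouette idea, is simple enough to sketch directly: averaging the binary silhouettes over a gait cycle collapses the dynamics and keeps only the average shape, which two sequences can then compare by Euclidean distance. The helper names are illustrative:

```python
import numpy as np

def averaged_silhouette(frames):
    """Mean of the binary silhouette frames over one gait cycle; this
    deliberately discards gait dynamics and keeps only average shape."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def gait_distance(frames_a, frames_b):
    """Euclidean distance between the averaged silhouettes of two cycles."""
    return float(np.linalg.norm(
        averaged_silhouette(frames_a) - averaged_silhouette(frames_b)))
```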

Silhouette-based Gait Recognition: Research Resource and Limits

Malavé, Laura Helena 11 July 2003 (has links)
As is seen from the work on gait recognition, there is a de facto consensus that the silhouette of a person is the low-level representation of choice. It has been hypothesized that the performance degradation observed when comparing sequences taken on different surfaces, hence against different backgrounds, or when considering outdoor sequences, is due to low silhouette quality and its variation: if only one could get better silhouettes, gait recognition performance would be high. This thesis challenges that hypothesis. In the context of the HumanID Gait Challenge problem, we constructed a set of ground-truth silhouettes over one gait cycle for 71 subjects, to test recognition across two conditions, shoe and surface. Using these, we show that performance with ground-truth silhouettes is as good as that obtained with silhouettes from a basic background subtraction algorithm. Therefore, further research into ways to enhance silhouette extraction does not appear to be the most productive way to advance gait recognition. We also show, using the manually specified part-level silhouettes, that most of the gait recognition power lies in the legs and the arms. The recognition power of various static gait factors extracted from a single-view image, such as gait period, cadence, body size, height, leg size, and torso length, does not seem to be adequate. Using cumulative silhouette error images, we also suggest that gait actually changes when one changes walking surface; in particular, the swing phase of the gait is affected the most.
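A minimal version of the kind of basic background subtraction that the ground-truth silhouettes were compared against might look like the following; the threshold value is an arbitrary placeholder, not one taken from the thesis:

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Basic background subtraction: pixels whose absolute difference from
    a static background model exceeds a threshold form the binary silhouette."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > threshold).astype(np.uint8)
```

Real extraction pipelines add shadow suppression and morphological cleanup, which is precisely the kind of refinement the thesis argues has limited payoff for recognition.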

An In-depth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
To foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential for benchmarking progress. In the past, face recognition algorithms based on Principal Component Analysis (PCA) have often served as the baseline. The objective of this thesis is to develop a strategy for estimating the best affine transformation which, when applied to the eigenspace of the PCA face recognition algorithm, approximates the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of distances between a given set of faces generated by any given face recognition algorithm, and thus helps in judging how close a face recognition algorithm is to the PCA-based one. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the strategy: a Linear Discriminant Analysis (LDA) based face recognition algorithm and a Bayesian intrapersonal/extrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were reached by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in the first, both the face recognition algorithm and the affine approximation algorithm were trained on the same data set; in the second, different data sets were used to train the two algorithms. Gross error measures such as the average RMS error and the Stress-1 error were used to compare the raw similarity scores directly. 
The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and by the affine approximation strategy is not statistically significant. The differences were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second scenario.
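The Stress-1 error used above to compare raw similarity scores is the standard Kruskal Stress-1 from multidimensional scaling, and can be computed from two distance matrices like so:

```python
import numpy as np

def stress1(D_true, D_approx):
    """Kruskal's Stress-1: sqrt( sum (d_ij - dhat_ij)^2 / sum d_ij^2 ),
    taken over the upper triangle of the two distance matrices."""
    iu = np.triu_indices_from(np.asarray(D_true), k=1)
    num = np.sum((D_true[iu] - D_approx[iu]) ** 2)
    den = np.sum(D_true[iu] ** 2)
    return float(np.sqrt(num / den))
```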

Identity Verification using Keyboard Statistics

Mroczkowski, Piotr January 2004 (has links)
In the age of the networking revolution, when the Internet has changed not only the way we see computing but also society as a whole, we constantly face new challenges in the area of user verification. It is often the case that the login-ID/password pair does not provide a sufficient level of security, so other, more sophisticated techniques are used: one-time passwords, smart cards, or biometric identity verification. The biometric approach is considered one of the most secure ways of authentication.

On the other hand, many biometric methods require additional hardware to sample the corresponding biometric feature, which increases the cost and complexity of implementation. There is, however, one biometric technique that does not demand any additional hardware: user identification based on keyboard statistics. This thesis focuses on this way of authentication.

The keyboard statistics approach is based on the user's unique typing rhythm: not only what the user types, but also how he/she types it, is important. This report describes the statistical analysis of typing samples collected from 20 volunteers, as well as the implementation and testing of an identity verification system that uses the characteristics examined in the experimental stage.
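A sketch of the typing-rhythm features such systems typically use, namely dwell times (how long each key is held) and flight times (the gap between releasing one key and pressing the next), paired with a deliberately naive mean-deviation verifier. The 0.05 s tolerance and all names are invented placeholders, not values from the thesis:

```python
import numpy as np

def typing_features(events):
    """events: list of (key, press_time, release_time) tuples in press order.
    Returns a feature vector of dwell times followed by flight times."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array(dwell + flight)

def verify(profile, sample, tolerance=0.05):
    """Accept the sample if its features stay within `tolerance` seconds of
    the enrolled mean profile on average (a crude stand-in for the
    statistical tests a real system would apply)."""
    return float(np.mean(np.abs(profile - sample))) <= tolerance
```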

Changeable and Privacy Preserving Face Recognition

Wang, Yongjin 23 February 2011 (has links)
Traditional methods of identity recognition are based on knowledge of a password or a PIN, or on possession factors such as tokens and ID cards. Such strategies usually afford a low level of security and cannot meet the requirements of applications with high security demands. Biometrics refers to the technology of recognizing or validating the identity of an individual based on his/her physiological and/or behavioral characteristics. It is superior to conventional methods in both security and convenience, since biometric traits cannot be lost, forgotten, or stolen as easily, and are relatively difficult to circumvent. However, although biometrics-based solutions provide various advantages, the technology has some inherent concerns. In the first place, biometrics cannot easily be changed or reissued if compromised, owing to the limited number of biometric traits humans possess. Secondly, since biometric data reflect the user's physiological or behavioral characteristics, privacy issues arise if the stored biometric templates are obtained by an adversary. To that end, changeability and privacy protection of biometric templates are two important issues that must be addressed for widespread deployment of biometric technology. This dissertation systematically investigates random transformation based methods for addressing the challenging problems of changeability and privacy protection in biometrics-enabled recognition systems. A random projection based approach is introduced first. We present a detailed mathematical analysis of the similarity and privacy preserving properties of random projection, and introduce a vector translation technique to achieve strong changeability. To further enhance privacy protection as well as to improve recognition accuracy, a sorted index number (SIN) approach is proposed in which only the index numbers of the sorted feature vectors are stored as templates. 
The SIN framework is then evaluated in conjunction with random additive transform, random multiplicative transform, and random projection, for producing reissuable and privacy preserving biometric templates. The feasibility of the introduced solutions is well supported by detailed theoretical analyses. Extensive experimentation on a face based biometric recognition problem demonstrates the effectiveness of the proposed methods.
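The random projection and SIN ideas can be sketched as follows, with the caveat that this is a generic illustration rather than the dissertation's exact construction: a user-specific key seeds the random projection matrix (re-issuing a compromised template simply means picking a new key), and a SIN-style template stores only the rank order of the projected coordinates rather than their values:

```python
import numpy as np

def make_template(feature, key, dim=None):
    """Project the biometric feature vector through a random Gaussian matrix
    seeded by the user-specific `key`; changeability comes from re-keying."""
    dim = dim or len(feature)
    rng = np.random.default_rng(key)
    R = rng.normal(size=(dim, len(feature))) / np.sqrt(dim)
    return R @ feature

def sorted_index_template(projected):
    """SIN-style template: keep only the index order of the sorted projected
    coordinates, which discloses less about the underlying feature values."""
    return np.argsort(projected)
```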

Hardware accelerators for embedded fingerprint-based personal recognition systems

Fons Lluís, Mariano 29 May 2012 (has links)
The development of automatic biometrics-based personal recognition systems is a reality in the current technological age. Not only operations demanding stringent security levels but also many daily-use consumer applications require computational platforms capable of recognizing the identity of an individual based on the analysis of his/her physiological and/or behavioural characteristics. The state of the art points out two main open problems in the implementation of such applications: on the one hand, the need to improve reliability in terms of recognition accuracy, overall security, and real-time performance; and on the other hand, the need to reduce the cost of the physical platforms in charge of the processing. This work aims at finding a system architecture able to address those limitations of current personal recognition applications. Embedded system solutions based on hardware-software co-design techniques and programmable (and run-time reconfigurable) logic devices such as FPGAs or SOPCs are shown to be an efficient alternative to existing multiprocessor systems based on HPCs, GPUs, or PC platforms for developing this kind of high-performance application at low cost.

Palmprint Identification Based on Generalization of IrisCode

Kong, Adams 22 January 2007 (has links)
The development of accurate and reliable security systems is a matter of wide interest, and in this context biometrics is seen as a highly effective automatic mechanism for personal identification. Among biometric technologies, IrisCode, developed by Daugman in 1993, is regarded as a highly accurate approach, able to support real-time personal identification against large databases. Since 1993, different coding methods built on top of IrisCode have been proposed for iris and fingerprint identification. In this research, I extend and generalize IrisCode for real-time, secure palmprint identification. PalmCode, the first coding method for palmprint identification, which I developed in 2002, directly applied IrisCode to extract phase information of palmprints as features. However, I observe that PalmCodes from different palms are similar, having many 45° streaks. Such structural similarities in the PalmCodes of different palms reduce the individuality of PalmCodes and the performance of palmprint identification systems. To reduce the correlation between PalmCodes, in this thesis I employ multiple elliptical Gabor filters with different orientations to compute different PalmCodes and merge them to produce a single feature, called the Fusion Code. Experimental results demonstrate that the Fusion Code performs better than PalmCode. Based on these results, I further identify the orientation fields of palmprints as powerful features. Consequently, the Competitive Code, which uses the real parts of six Gabor filters to estimate the orientation fields, is developed. To embed the properties of IrisCode, such as high-speed matching, in the Competitive Code, a novel coding scheme and a bitwise angular distance are proposed. Experimental results demonstrate that the Competitive Code is much more effective than other palmprint algorithms. 
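A toy version of the Competitive Code idea, assuming the six real Gabor responses per pixel have already been computed (the most negative response marks the dominant line orientation), with the bitwise angular distance taken as the shorter arc between orientation indices on the circle. Names are illustrative, and the thesis's actual bit-level coding scheme is more involved:

```python
import numpy as np

def competitive_code(responses):
    """responses: array of shape (6, H, W) holding the real parts of six
    oriented Gabor filter responses. The code stores, per pixel, the index
    of the filter with the minimal response, i.e. the winning orientation."""
    return np.argmin(responses, axis=0)

def angular_distance(code_a, code_b, n_orient=6):
    """Angular distance between two codes: orientation indices live on a
    circle, so each pixel contributes the shorter arc between its winners."""
    d = np.abs(code_a - code_b)
    return int(np.sum(np.minimum(d, n_orient - d)))
```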
Although many coding methods have been developed based on IrisCode for iris and palmprint identification, a detailed analysis of IrisCode has been lacking. One aim of this research is to provide such an analysis as a way of better understanding IrisCode, extending its coarse phase representation to a precise phase representation, and uncovering the relationship between IrisCode and other coding methods. This analysis demonstrates that IrisCode is a clustering process with four prototypes; that the locus of a Gabor function is a two-dimensional ellipse with respect to a phase parameter; and that the bitwise Hamming distance can be regarded as a bitwise angular distance. I also point out that the theoretical evidence for the impostor binomial distribution of IrisCode is incomplete. I use this analysis to develop a precise phase representation that can enhance iris recognition accuracy, and to relate IrisCode to other coding methods. Using this analysis together with principal component analysis and simulated annealing, near-optimal filters for palmprint identification are sought; they perform better than the Competitive Code in terms of the d′ index. Identical twins, having the closest genetics-based relationship, are expected to show maximum similarity in their biometrics, and classifying identical twins is a challenging problem for some automatic biometric systems. Although palmprints have been studied for personal identification for many years, genetically identical palmprints have not. I systematically examine the Competitive Code on genetically identical palmprints for automatic personal identification and to uncover genetically related palmprint features. The experimental results show that the three principal lines and some portions of weak lines are genetically related features, but our palms still contain rich genetically unrelated features for classifying identical twins. 
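The coarse phase quantization at the heart of IrisCode, the four-prototype clustering view above, can be sketched as mapping each complex Gabor response to its phase quadrant (two bits: the signs of the real and imaginary parts) and comparing codes by normalized Hamming distance; this is a schematic illustration, not Daugman's full pipeline:

```python
import numpy as np

def iris_code(responses):
    """Quantize complex Gabor responses to phase quadrants: bit 0 is the
    sign of the real part, bit 1 the sign of the imaginary part, so each
    response falls into one of four prototypes (quadrants)."""
    z = np.asarray(responses)
    return np.stack([np.real(z) >= 0, np.imag(z) >= 0]).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance: fraction of disagreeing bits."""
    return float(np.mean(code_a != code_b))
```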
Since biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before the systems are massively deployed in security applications. I propose a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system based on the Competitive Code. The proposed model indicates that breaking into the palmprint system by brute force is computationally infeasible. In addition to brute-force attacks, I address three other security issues: template re-issuance, also called cancellable biometrics; replay attacks; and database attacks. A random orientation filter bank (ROFB) is used to generate cancellable Competitive Codes for template re-issuance, and secret messages are hidden in templates to prevent replay and database attacks; this technique can be regarded as template watermarking. A series of analyses is provided to evaluate the security levels of these measures.

Fingerprint Recognition

Dimitrov, Emanuil January 2009 (has links)
Nowadays, biometric identification is used in a variety of applications: administration, business, and even the home. Although there are many biometric identifiers, fingerprints are the most widespread due to their acceptance by users and the low price of the hardware. Fingerprint recognition is a complex image recognition problem and includes algorithms and procedures for image enhancement and binarization, feature extraction and matching, and sometimes classification. In this work, the main approaches in the research area are discussed, demonstrated, and tested in a sample application. The demonstration software was developed using the VeriFinger SDK and the Microsoft Visual Studio platform; the fingerprint sensor used for testing was the AuthenTec AES2501.
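The feature matching stage mentioned above usually pairs minutiae (ridge endings and bifurcations). A deliberately simplified sketch, not how VeriFinger works internally, and assuming the two prints are already aligned: two minutiae (x, y, theta) match if they fall within a distance and angle tolerance, and the score is the fraction of paired minutiae. The tolerances are invented placeholders:

```python
import numpy as np

def match_minutiae(set_a, set_b, dist_tol=10.0, angle_tol=0.3):
    """Greedy one-to-one minutiae pairing between two pre-aligned prints.
    Each minutia is (x, y, theta); the score is pairs / max set size."""
    used = set()
    pairs = 0
    for (xa, ya, ta) in set_a:
        for j, (xb, yb, tb) in enumerate(set_b):
            if j in used:
                continue
            if np.hypot(xa - xb, ya - yb) <= dist_tol and abs(ta - tb) <= angle_tol:
                used.add(j)
                pairs += 1
                break
    return pairs / max(len(set_a), len(set_b))
```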
