181 |
Silhouette based gait recognition [electronic resource] : research resource and limits / Malavé, Laura Helena. January 2003 (has links)
Title from PDF of title page. / Document formatted into pages; contains 115 pages. / Thesis (M.S.C.S.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: As is seen from the work on gait recognition, there is a de facto consensus that the silhouette of a person is the low-level representation of choice. It has been hypothesized that the performance degradation observed when one compares sequences taken on different surfaces, hence against different backgrounds, or when one considers outdoor sequences, is due to low silhouette quality and its variation; if only one could get better silhouettes, gait recognition performance would be high. This thesis challenges that hypothesis. In the context of the HumanID Gait Challenge problem, we constructed a set of ground truth silhouettes over one gait cycle for 71 subjects to test recognition across two conditions, shoe and surface. Using these, we show that the performance with ground truth silhouettes is as good as that obtained with silhouettes produced by a basic background subtraction algorithm. / ABSTRACT: Therefore, further research into ways to enhance silhouette extraction does not appear to be the most productive way to advance gait recognition. We also show, using manually specified part-level silhouettes, that most of the gait recognition power lies in the legs and the arms. The recognition power of various static gait factors extracted from a single-view image, such as gait period, cadence, body size, height, leg size, and torso length, does not seem to be adequate. Using cumulative silhouette error images, we also suggest that gait actually changes when one changes walking surface; in particular, the swing phase of the gait is affected the most. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
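The comparison above rests on two ingredients: a basic background subtraction step that extracts a silhouette, and a per-pixel error measure against a ground-truth silhouette. A minimal sketch of both (the fixed grayscale threshold and nested-list frame format are illustrative assumptions, not the HumanID baseline's actual algorithm):

```python
def extract_silhouette(frame, background, threshold=30):
    """Label a pixel as foreground (1) when its grayscale value differs
    from the background model by more than a fixed threshold."""
    return [
        [1 if abs(p - b) > threshold else 0
         for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

def silhouette_error(sil, truth):
    """Fraction of pixels where an extracted silhouette disagrees with
    the ground-truth silhouette; accumulating these per-frame maps over
    a gait cycle gives a cumulative silhouette error image."""
    total = sum(len(row) for row in truth)
    wrong = sum(s != t
                for srow, trow in zip(sil, truth)
                for s, t in zip(srow, trow))
    return wrong / total
```

For example, a single bright pixel against a flat background is marked as foreground, and a silhouette matching the ground truth yields zero error.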
|
182 |
Enabling telemedicine with smartphones / Gokhale, Vaidehee Padgaonkar. 24 February 2012 (has links)
As smartphone technology continues to mature, one of the many areas it can help enhance is telemedicine: the concept of using telecommunications to provide health information from a distance. A new medical condition or disease can require frequent visits to the doctor for simple biometric monitoring. These frequent visits are time-consuming and can be extremely inconvenient for the patient. This report describes how a smartphone can be the optimal platform to communicate critical biometric measurements to one's physician, reduce in-person hospital visits, and still allow the patient to receive feedback from the doctor. A proof-of-concept infrastructure for enabling telemedicine is demonstrated by interfacing a glucose meter with an Android device that uploads the data to the cloud to be viewed by the doctor.
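The upload step in such an infrastructure amounts to packaging a meter reading for a cloud endpoint the physician's dashboard can poll. A hedged sketch of that packaging (the field names, the plausibility range, and the JSON shape are assumptions for illustration, not the report's actual interface):

```python
import json
from datetime import datetime, timezone

def build_glucose_payload(patient_id, mg_dl, meter_id="demo-meter"):
    """Package one glucose reading as JSON for upload to a hypothetical
    cloud endpoint. Rejects physiologically implausible values so bad
    sensor reads are caught before they reach the physician."""
    if not 10 <= mg_dl <= 600:
        raise ValueError("glucose reading out of plausible range")
    return json.dumps({
        "patient_id": patient_id,
        "meter_id": meter_id,
        "glucose_mg_dl": mg_dl,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })
```

The resulting string could then be POSTed over HTTPS by the Android client; the timestamp lets the doctor reconstruct the reading history without trusting upload order.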
|
183 |
A new algorithm for minutiae extraction and matching in fingerprint / Noor, Azad. January 2012 (has links)
A novel algorithm for fingerprint template formation and matching in automatic fingerprint recognition has been developed. At present, the fingerprint is considered the dominant biometric trait among all biometrics due to its wide range of applications in security and access control. Most commercially established systems use a singularity point (SP), or 'core' point, for fingerprint indexing and template formation. The efficiency of these systems relies heavily on the detection of the core and on the quality of the image itself. Multiple SPs, or the absence of a 'core' on the image, can cause anomalies in the formation of the template and may result in a high False Acceptance Rate (FAR) or False Rejection Rate (FRR). The loss of actual minutiae, or the appearance of new or spurious minutiae in the scanned image, can also contribute to error in the matching process. A more sophisticated algorithm is therefore necessary in the formation and matching of templates in order to achieve low FAR and FRR and to make the identification more accurate. The novel algorithm presented here does not rely on any 'core' or SP, making the template invariant with respect to global rotation and translation. Moreover, it does not require the orientations of the minutiae points, on which most established algorithms are based. The matching methodology is based on the local features of each minutia point, such as the distances to its nearest neighbours and their internal angle. Using a publicly available fingerprint database, the algorithm has been evaluated and compared with other benchmark algorithms. It has been found that the algorithm performs better than the others and achieves an equal error rate of 3.5%.
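The local-feature idea described above, characterizing each minutia by the distances to its nearest neighbours and the internal angle between them, can be sketched as follows. Both quantities are unchanged by any global rotation or translation of the print, which is what makes the descriptor core-free (the choice of k = 2 neighbours and the exact descriptor layout are illustrative assumptions, not the thesis's exact formulation):

```python
import math

def local_descriptor(minutiae, i, k=2):
    """Describe minutia i by the distances to its k nearest neighbours
    and the internal angle at i between the rays to the two closest
    ones. minutiae is a list of (x, y) coordinates."""
    px, py = minutiae[i]
    others = [(math.hypot(x - px, y - py), (x, y))
              for j, (x, y) in enumerate(minutiae) if j != i]
    others.sort(key=lambda t: t[0])
    dists = [d for d, _ in others[:k]]
    (x1, y1), (x2, y2) = others[0][1], others[1][1]
    # internal angle at i between the two nearest neighbours,
    # folded into [0, pi] so it is direction-independent
    a1 = math.atan2(y1 - py, x1 - px)
    a2 = math.atan2(y2 - py, x2 - px)
    angle = abs(a1 - a2) % (2 * math.pi)
    angle = min(angle, 2 * math.pi - angle)
    return dists, angle
```

Matching would then compare these per-minutia descriptors between two prints directly, with no need to locate a core point first or to estimate minutiae orientations.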
|
184 |
New approaches to automatic 3-D and 2-D 3-D face recognition / Jahanbin, Sina. 01 June 2011 (has links)
Automatic face recognition has attracted the attention of many research institutes, commercial industries, and government agencies in the past few years, mainly due to the emergence of numerous applications such as surveillance, access control to secure facilities, and airport screening. Almost all of the early research on face recognition focused on 2-D (intensity/portrait) images of the face. While several sophisticated 2-D solutions have been proposed, unbiased evaluation studies show that their collective performance remains unsatisfactory and degrades significantly with variations in lighting conditions, face position, makeup, or the presence of non-neutral facial expressions. Recent developments in 3-D imaging technology have made cheaper, quicker, and more reliable acquisition of 3-D facial models a reality. These 3-D facial models contain information about the anatomical structure of the face that remains constant under variable lighting conditions, facial makeup, and pose variations. Thus, researchers are considering utilizing the 3-D structure of the face, alone or in combination with 2-D information, to alleviate the inherent limitations of 2-D images and attain better performance.
Published 3-D face recognition algorithms have demonstrated promising results, confirming the effectiveness of 3-D facial models in dealing with the above-mentioned factors contributing to the failure of 2-D face recognition systems. However, the majority of these 3-D algorithms are extensions of conventional 2-D approaches, where intensity images are simply replaced by 3-D models rendered as range images. These algorithms are not specifically tailored to exploit the abundant geometric and anthropometric clues available in 3-D facial models.
In this dissertation we introduce innovative 3-D and 2-D+3-D facial measurements (features) that effectively describe the geometric characteristics of the corresponding faces. Some of the features described in this dissertation, as well as many features proposed in the literature, are defined around or between meaningful facial landmarks (fiducial points). In order to reach our goal of designing an accurate automatic face recognition system, we also propose a novel algorithm combining 3-D (range) and 2-D (portrait) Gabor clues to pinpoint a number of points with meaningful anthropometric definitions, with significantly better accuracy than is achievable using a single modality alone.
This dissertation is organized as follows. In Chapter 1, various biometric modalities are introduced and the advantages of facial biometrics over other modalities are discussed. The discussion in Chapter 1 continues with an introduction to the modes of operation of face recognition, followed by some current and potential future applications. The problem statement of this dissertation is also included in this chapter. In Chapter 2, an extensive review of successful 2-D, 3-D, and 2-D+3-D face recognition algorithms is provided. Chapter 3 presents the details of our innovative 3-D and 2-D+3-D face features, as well as our accurate fiducial point detection algorithm. Conclusions and directions for future extensions are presented in Chapter 4.
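The multimodal landmark idea, combining evidence from the range (3-D) and portrait (2-D) channels so a candidate fiducial point wins only where both modalities agree, can be illustrated with a simple score-fusion sketch (the min-max normalization and equal weighting are assumptions for illustration; the dissertation's actual detector is built on Gabor features, which are omitted here):

```python
def normalize(scores):
    """Min-max scale a per-pixel landmark score map to [0, 1] so that
    the two modalities are on a comparable scale before fusion."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_and_locate(range_scores, portrait_scores, w=0.5):
    """Fuse per-pixel landmark scores from the range and portrait
    channels by a weighted sum and return the index of the best
    candidate fiducial point."""
    fused = [w * r + (1 - w) * p
             for r, p in zip(normalize(range_scores),
                             normalize(portrait_scores))]
    return max(range(len(fused)), key=fused.__getitem__)
```

A pixel that scores moderately in both channels can beat one that scores highly in only one, which is the intuition behind the accuracy gain over a single modality.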
|
185 |
Multilinear Subspace Learning for Face and Gait Recognition / Lu, Haiping. 19 January 2009 (has links)
Face and gait recognition problems are challenging due to largely varying appearances, highly complex pattern distributions, and insufficient training samples. This dissertation focuses on multilinear subspace learning for face and gait recognition, where low-dimensional representations are learned directly from tensorial face or gait objects.
This research introduces a unifying multilinear subspace learning framework for systematic treatment of the multilinear subspace learning problem. Three multilinear projections are categorized according to the input-output space mapping as: vector-to-vector projection, tensor-to-tensor projection, and tensor-to-vector projection. Techniques for subspace learning from tensorial data are then proposed and analyzed. Multilinear principal component analysis (MPCA) seeks a tensor-to-tensor projection that maximizes the variation captured in the projected space, and it is further combined with linear discriminant analysis and boosting for better recognition performance. Uncorrelated MPCA (UMPCA) solves for a tensor-to-vector projection that maximizes the captured variation in the projected space while enforcing the zero-correlation constraint. Uncorrelated multilinear discriminant analysis (UMLDA) aims to produce uncorrelated features through a tensor-to-vector projection that maximizes a ratio of the between-class scatter over the within-class scatter defined in the projected space. Regularization and aggregation are incorporated in the UMLDA solution for enhanced performance.
Experimental studies and comparative evaluations are presented and analyzed on the PIE and FERET face databases and the USF gait database. The results indicate that the MPCA-based solution achieves the best overall performance across the various learning scenarios, the UMLDA-based solution produces the most stable and competitive results with the same parameter setting, and the UMPCA algorithm is effective for unsupervised learning in low-dimensional subspaces. Besides advancing the state of the art in multilinear subspace learning for face and gait recognition, this dissertation also has potential impact both on the development of new multilinear subspace learning algorithms and on other applications involving tensor objects.
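The tensor-to-tensor projection at the heart of MPCA multiplies the data tensor by a learned projection matrix along each mode, shrinking each mode independently rather than vectorizing the object first. A minimal sketch for the 2-way case (an image X projected as Y = U1ᵀ X U2), with hypothetical fixed projection matrices standing in for the ones MPCA would learn by maximizing captured variation:

```python
def matmul(A, B):
    """Plain matrix multiplication on nested lists."""
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def project_2way(X, U1, U2):
    """Tensor-to-tensor projection of a 2-way tensor (an image X):
    Y = U1^T X U2. Each mode of X is reduced by its own projection
    matrix, so an m-by-n image maps to a p-by-q core without ever
    being flattened into an m*n vector."""
    return matmul(matmul(transpose(U1), X), U2)
```

A tensor-to-vector projection, as in UMPCA and UMLDA, would instead reduce every mode down to size one per elementary projection, stacking the resulting scalars into a feature vector.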
|
187 |
MERGING OF FINGERPRINT SCANS OBTAINED FROM MULTIPLE CAMERAS IN 3D FINGERPRINT SCANNER SYSTEM / Boyanapally, Deepthi. 01 January 2008 (has links)
Fingerprints are the most accurate and widely used biometric for human identification due to their uniqueness and their rapid, easy means of acquisition. Contact-based techniques of fingerprint acquisition, such as traditional ink and live-scan methods, are not user friendly, reduce the capture area, and cause deformation of fingerprint features. Improper skin conditions and worn friction ridges also lead to poor-quality fingerprints. A non-contact, high-resolution, high-speed scanning system has been developed to acquire a 3D scan of a finger using a structured light illumination technique. The 3D scanner system consists of three cameras and a projector, with each camera producing a 3D scan of the finger. By merging the 3D scans obtained from the three cameras, a nail-to-nail fingerprint scan is obtained. However, the scans from the cameras do not merge perfectly. The main objective of this thesis is to calibrate the system so that the 3D scans obtained from the three cameras merge, or align, automatically. The merging error is reduced by compensating for the radial distortion present in the projector of the scanner system, and the error remaining after radial distortion correction is then measured using the projector coordinates of the scanner system.
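The radial distortion compensation mentioned above is commonly written with the standard polynomial model, where a point at radius r from the distortion center is displaced by a factor 1 + k1·r² + k2·r⁴. A sketch of the forward model and its inversion by fixed-point iteration (the coefficient values are illustrative; real values come from calibrating the projector):

```python
def distort(x, y, k1, k2):
    """Forward radial distortion model on normalized coordinates:
    (x, y) are undistorted, the return value is where the projector
    actually places the point."""
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(x, y, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the distortion factor evaluated at
    the current undistorted estimate. Converges quickly for the mild
    distortion typical of calibrated projectors."""
    xu, yu = x, y
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / f, y / f
    return xu, yu
```

Applying the correction in projector coordinates before triangulation is what lets the three per-camera 3D scans line up when merged.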
|
188 |
Security monitoring through human computer interaction devices / Ahmed, Ahmed Awad El Sayed. 14 June 2010 (has links)
In this work we introduce a new form of behavioral biometrics based on mouse dynamics, which can be used in different security applications. We develop a technique that models behavioral characteristics from the captured data using artificial neural networks. In addition, we present an architecture and implementation for the detector, which covers all phases of the biometric data flow, including the detection process. We also introduce a new technique for keystroke biometrics analysis that supports free-text detection, allowing passive, dynamic, real-time monitoring of users. The enrollment process can also be done passively, without requiring the user to enter a specific text. Experimental data illustrating the experiments conducted to evaluate the accuracy of the proposed detection techniques are presented and analyzed. We take the study a step further and target the general field of Continuous Authentication (CA). CA systems depart from the traditional (static) authentication scheme by repeating the authentication process dynamically, several times, throughout the entire login session. The main objectives are to detect masqueraders, ensure session security, and combat insider threats. Mouse and keystroke dynamics are good candidates for CA. CA is an emerging field that we believe will play an important role in the overall security strategies of many organizations in the future. Thus, as the technology gains in maturity and becomes more diverse, it is essential to develop common and meaningful evaluation metrics that can be used to compare and contrast existing and future schemes. So far, all the CA systems proposed in the literature have been evaluated using the same accuracy metrics used for static authentication systems and, in some cases, using a simplified form of the Time-To-Alarm (TTA) metric. As an alternative, we propose in this work dynamic accuracy metrics that better capture the continuous nature of CA activity. Furthermore, we introduce and study more diverse and complex forms of the TTA metric. We study and illustrate the proposed metrics and models empirically, using a combination of synthetic and real data samples.
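A Time-To-Alarm style measurement can be sketched as follows: count how many monitored actions pass before the detector raises an alarm on an impostor session. The "alarm after n consecutive rejections" decision rule here is an illustrative assumption, not the thesis's actual TTA definition:

```python
def time_to_alarm(scores, threshold, n_consecutive=3):
    """Return the number of monitored actions observed when a CA
    detector would raise an alarm: the first time n consecutive
    behavioural match scores fall below the acceptance threshold.
    Returns None if the session never triggers an alarm (for a
    legitimate user, None is the desired outcome)."""
    streak = 0
    for i, s in enumerate(scores):
        streak = streak + 1 if s < threshold else 0
        if streak == n_consecutive:
            return i + 1
    return None
```

Averaging this quantity over many impostor sessions gives a dynamic accuracy figure that a static FAR/FRR pair cannot express: not just whether a masquerader is caught, but how much of the session they controlled first.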
|
189 |
An investigation into the relationship between static and dynamic gait features : a biometrics perspective / Alawar, Hamad Mansoor Mohd Aqil. January 2014 (has links)
A biometric is a unique physical or behavioral characteristic of a person. Such a unique attribute, for example fingerprints or gait, can be used for identification or verification purposes. Gait is an emerging biometric with great potential. Gait recognition is based on recognizing a person by the manner in which they walk. Its potential lies in the fact that it can be captured at a distance and does not require the cooperation of the subject. This advantage makes it a very attractive tool for forensic cases and applications, where it can assist in identifying a suspect when other evidence, such as DNA, fingerprints, or a face image, is not attainable. Gait can be used for recognition in a direct manner when the two samples are captured at similar camera resolutions, positions, and conditions. Yet in some cases the only sample available is of an incomplete gait cycle, low resolution, a low frame rate, a partially visible subject, or a single static image. Most of these conditions have one thing in common: static measurements. A gait signature is usually formed from a number of dynamic and static features. Static features are physical measurements of height, length, or build, while dynamic features are representations of joint rotations or trajectories. The aim of this thesis is to study the potential of predicting dynamic features from static features. In this thesis, we created a database that uses a 3D laser scanner to capture the accurate shape and volume of a person, and a motion capture system to accurately record motion data. The first analysis focused on the correlation between twenty-one 2D static features and eight dynamic features. Eleven pairs of features were regarded as significant under the criterion of a P-value less than 0.05. Other features also showed a strong correlation, indicating potential predictive power. The second analysis focused on 3D static and dynamic features.
Through the correlation analysis, 1196 pairs of features were found to be significantly correlated. Based on these results, linear regression was used to predict a dynamic gait signature. The predictors were chosen using two adaptive methods developed in this thesis: the "top-x" method and the "mixed" method. The predictions were assessed both for their accuracy and for their classification potential for gait recognition. The top results produced a 59.21% mean matching percentile. This result will act as a baseline for future research in predicting a dynamic gait signature from static features. The results of this thesis bear potential for applications in biomechanics, biometrics, forensics, and 3D animation.
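The statistical machinery underlying this pipeline, Pearson correlation to screen static-dynamic feature pairs and least-squares regression to predict a dynamic feature from a static one, can be sketched for a single predictor (pure illustration: the thesis selects multiple predictors via its "top-x" and "mixed" methods, and significance testing of r against P < 0.05 is omitted):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two feature columns,
    e.g. a static measurement and a dynamic gait feature across
    subjects."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fit_line(xs, ys):
    """Least-squares slope and intercept for predicting a dynamic
    feature y (e.g. a joint rotation range) from a static feature x
    (e.g. a leg-length measurement)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```

Feature pairs with high |r| are the ones worth feeding into the regression; the fitted line then produces the predicted component of a dynamic gait signature for an unseen subject.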
|
190 |
Reconhecimento facial com projeções ortogonais preservadoras de localidade customizadas para maximizar margens suaves / Face recognition using customized orthogonal locality preserving projections with soft margin maximization / Soldera, John. January 2015 (has links)
Nowadays, face recognition by automatic techniques is still a challenging task, since face images may be affected by changes in the scene, such as in the illumination, head pose, or facial expression. Also, face feature representations often require several dimensions, which poses additional challenges for face recognition. In this thesis a novel face recognition method is proposed, designed to be robust to many of the issues that can affect face features in practice; it is based on projections of high-dimensional face image representations into lower-dimensionality, highly discriminative spaces.
This is achieved by a modified Orthogonal Locality Preserving Projections (OLPP) method that uses a supervised locality definition scheme designed to preserve the face class (individual) structure in the obtained lower-dimensionality face feature space, unlike the typical OLPP method, which preserves the face data structure. Besides, a new kernel equation is proposed to calculate affinities among face samples, presenting better class structure preservation than the heat kernel used by the typical OLPP method. The proposed method can work with sparse and dense face image representations (i.e., it can use subsets of the face image pixels, or all of them), and sparse and dense feature extraction methods are proposed that preserve the color information during the feature extraction process, improving on the typical OLPP method, which uses grayscale low-resolution face images. New test face images are classified in the obtained lower-dimensionality feature space using a trained soft-margin Support Vector Machine (SVM), which performs better than the nearest-neighbour rule used in the typical OLPP method. A set of experiments was designed to evaluate the proposed method under various conditions found in practice (such as changes in head pose, facial expression, and illumination, and the presence of occlusion artifacts). The experimental results were obtained using five challenging public face databases (namely, PUT, FEI, FERET, Yale, and ORL).
These experiments confirm that the proposed feature extraction methods, integrated with the proposed transformation to a discriminative lower-dimensionality space and the alternative classification scheme using SVM with soft margins, obtain higher recognition rates than the OLPP method itself and than methods representative of the state of the art, both when color (RGB) face images in high resolution are used (PUT, FEI, and FERET face databases) and when grayscale face images in low resolution are used (Yale and ORL face databases).
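The supervised locality definition at the core of the method, giving affinity weight only to neighbours of the same class so that individuals (not merely the data manifold) stay compact after projection, can be sketched as an affinity-matrix construction. The binary weighting below is a simplification for illustration; the thesis also proposes a new kernel for these affinities, and the subsequent eigenproblem that yields the orthogonal projection is omitted:

```python
def supervised_affinity(samples, labels, dist, k=2):
    """Build a supervised locality matrix W: W[i][j] = 1 only when j
    is among the k nearest neighbours of sample i AND shares i's
    class label; 0 otherwise. An OLPP-style projection would then be
    found by minimizing sum_ij W[i][j] * ||y_i - y_j||^2 in the
    projected space, pulling same-class neighbours together."""
    n = len(samples)
    W = [[0] * n for _ in range(n)]
    for i in range(n):
        neighbours = sorted((j for j in range(n) if j != i),
                            key=lambda j: dist(samples[i], samples[j]))
        for j in neighbours[:k]:
            if labels[j] == labels[i]:
                W[i][j] = 1
    return W
```

With the unsupervised (typical OLPP) definition, a nearby sample of a different individual would also receive weight and be pulled close, which is exactly the behavior the supervised scheme suppresses.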
|