101

A decompositional investigation of 3D face recognition

Cook, James Allen January 2007 (has links)
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks; Face Verification is the process of verifying a subject's claimed identity while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, along with extensive testing against established methods. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to achieve superior performance to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
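As a rough illustration of the sum- and product-rule fusion described above, the sketch below combines hypothetical similarity scores from a spatial-domain and a frequency-domain classifier. The scores and weights are invented for illustration; the dissertation's actual weighting functions are learned, not fixed.

```python
import numpy as np

# Hypothetical similarity scores from two classifiers (spatial and
# frequency decompositions) for one probe against four gallery subjects.
spatial_scores = np.array([0.62, 0.15, 0.91, 0.40])
frequency_scores = np.array([0.55, 0.22, 0.87, 0.35])

# Learned weighting functions reduce here to fixed weights (assumed values).
w_spatial, w_freq = 0.6, 0.4

# Sum rule: weighted average of the individual scores.
sum_fused = w_spatial * spatial_scores + w_freq * frequency_scores

# Product rule: weighted geometric combination of the scores.
prod_fused = spatial_scores**w_spatial * frequency_scores**w_freq

print("Sum-rule identity:", int(np.argmax(sum_fused)))
print("Product-rule identity:", int(np.argmax(prod_fused)))
```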
102

Recuperação de imagens digitais com base na distribuição de características de baixo nível em partições do domínio utilizando índice invertido / Retrieval of digital images based on the distribution of low-level features in domain partitions using an inverted index

Proença, Patrícia Aparecida 29 March 2010 (has links)
Fundação de Amparo a Pesquisa do Estado de Minas Gerais / The main goal of an image retrieval system is to obtain images from a collection that meet a need of the user. To achieve this goal, image retrieval systems generally compute the similarity between the user's need, represented by a query, and representations of the images in the collection. This goal is difficult to achieve due to the subjectivity of the concept of similarity between images, since the same image can be interpreted in different ways by different people. In an attempt to solve this problem, content-based image retrieval systems exploit the low-level features of color, shape and texture when computing the similarity between images. A problem with this approach is that in most systems the similarity computation compares the query image against every image in the collection, making processing difficult and slow. By indexing low-level features of digital image partitions mapped to an inverted index, this work seeks improvements in query processing performance and gains in precision over the set of images retrieved from large databases. We use an approach based on an inverted index, here adapted to partitioned images. In this approach the concept of a term from textual retrieval, the main element of indexing, is used as a feature of image partitions for indexing. Experiments show gains in precision quality using two collections of digital images. / Master's in Computer Science
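A minimal sketch of the inverted-index idea described above, with quantized partition features playing the role of textual terms. The "visual terms" and the overlap score are illustrative assumptions, not the thesis's actual descriptors or ranking function.

```python
from collections import defaultdict

# Toy "visual terms": quantized low-level features (here, dominant color)
# of fixed image partitions. Real systems would derive these from color,
# shape and texture descriptors; the values are illustrative.
image_terms = {
    "img1": ["p0:red", "p1:blue", "p2:red"],
    "img2": ["p0:red", "p1:green", "p2:red"],
    "img3": ["p0:blue", "p1:blue", "p2:green"],
}

# Build the inverted index: term -> set of images containing it.
index = defaultdict(set)
for img, terms in image_terms.items():
    for term in terms:
        index[term].add(img)

def query(terms):
    """Rank images by how many query terms they share (a crude score)."""
    scores = defaultdict(int)
    for term in terms:
        for img in index.get(term, ()):  # only touch matching postings
            scores[img] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(query(["p0:red", "p2:red"]))  # img1 and img2 rank highest
```

The key efficiency point from the abstract is visible here: a query touches only the posting lists of its own terms rather than comparing against every image in the collection.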
103

Um sistema para detecção e reconhecimento de face em vídeo utilizando a transformada cosseno discreta / A system for face detection and recognition in video using the discrete cosine transform

Omaia, Derzu 27 August 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The human face has a very complex and variable pattern, which makes face detection and recognition a challenging problem. The scope of these operations is quite broad, mainly involving security applications such as authorization for physical and logical access, people tracking, and real-time authentication. Beyond security applications, face detection and recognition can also be associated with other applications, such as human-computer interaction and virtual reality. Several face detection and recognition studies have been proposed and developed by researchers pursuing greater precision and efficiency. Face detectors and recognizers with accuracy exceeding 95% are currently available, and commercial systems are available as well. This work presents a study of several face detection and recognition methods. The possibility of developing a new face detection method using Prediction by Partial Match (PPM), entropy and the Discrete Cosine Transform (DCT) is also discussed. A new face recognition method based on the DCT is further proposed. Finally, an architecture for a face detection and recognition system in video is proposed. To validate the architecture, the proposed system was implemented using one of the best detectors in the literature together with the recognizer produced in this work. Several experiments were performed, and both the face detector used and the recognizer developed proved effective, achieving success rates comparable to the most current methods.
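A minimal sketch of DCT-based face recognition in the spirit of the recognizer described above: the low-frequency block of the 2D DCT serves as a feature vector, and a probe is matched by nearest neighbour. The block size, random stand-in "faces" and distance measure are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from scipy.fft import dctn

def dct_face_features(face, k=8):
    """Feature vector: the k x k low-frequency block of the 2D DCT."""
    coeffs = dctn(face.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

rng = np.random.default_rng(0)
gallery = {f"person{i}": rng.random((64, 64)) for i in range(3)}
features = {name: dct_face_features(img) for name, img in gallery.items()}

# Identify a (noisy) probe by nearest neighbour in DCT feature space.
probe = gallery["person1"] + 0.05 * rng.random((64, 64))
probe_feat = dct_face_features(probe)
best = min(features, key=lambda n: np.linalg.norm(features[n] - probe_feat))
print("Identified as:", best)
```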
104

Characterization of the Voice Source by the DCT for Speaker Information

Abhiram, B January 2014 (has links) (PDF)
Extracting speaker-specific information from speech is of great interest to both researchers and developers alike, since speaker recognition technology finds application in a wide range of areas, primary among them being forensics and biometric security systems. Several models and techniques have been employed to extract speaker information from the speech signal. Speech production is generally modeled as an excitation source followed by a filter. Physiologically, the source corresponds to the vocal fold vibrations and the filter corresponds to the spectrum-shaping vocal tract. Vocal tract-based features like the mel-frequency cepstral coefficients (MFCCs) and linear prediction cepstral coefficients have been shown to contain speaker information. However, high speed videos of the larynx show that the vocal folds of different individuals vibrate differently. Voice source (VS)-based features have also been shown to perform well in speaker recognition tasks, thereby revealing that the VS does contain speaker information. Moreover, a combination of the vocal tract and VS-based features has been shown to give an improved performance, showing that the latter contains supplementary speaker information. In this study, the focus is on extracting speaker information from the VS. The existing techniques for the same are reviewed, and it is observed that the features which are obtained by fitting a time-domain model on the VS perform worse than those obtained by simple transformations of the VS. Here, an attempt is made to propose an alternate way of characterizing the VS to extract speaker information, and to study the merits and shortcomings of the proposed speaker-specific features. The VS cannot be measured directly. Thus, to characterize the VS, we first need an estimate of the VS, and the integrated linear prediction residual (ILPR) extracted from the speech signal is used as the VS estimate in this study. The voice source linear prediction model, which was proposed in an earlier study to obtain the ILPR, is used in this work. It is hypothesized here that a speaker's voice may be characterized by the relative proportions of the harmonics present in the VS. The pitch-synchronous discrete cosine transform (DCT) is shown to capture these, along with the gross shape of the ILPR, in a few coefficients. The ILPR and hence its DCT coefficients are visually observed to distinguish between speakers. However, it is also observed that they do have intra-speaker variability, and thus it is hypothesized that the distribution of the DCT coefficients may capture speaker information, and this distribution is modeled by a Gaussian mixture model (GMM). The DCT coefficients of the ILPR (termed the DCTILPR) are directly used as a feature vector in speaker identification (SID) tasks. Issues related to the GMM, like the type of covariance matrix, are studied, and it is found that diagonal covariance matrices perform better than full covariance matrices. Thus, mixtures of Gaussians having diagonal covariances are used as speaker models, and by conducting SID experiments on three standard databases, it is found that the proposed DCTILPR features fare comparably with the existing VS-based features. It is also found that the gross shape of the VS contains most of the speaker information, and the very fine structure of the VS does not help in distinguishing speakers, and instead leads to more confusion between speakers.
The major drawbacks of the DCTILPR are session and handset variability, but these are also present in existing state-of-the-art speaker-specific VS-based features and the MFCCs, and hence seem to be common problems. There are techniques to compensate for these variabilities, which need to be used when systems using these features are deployed in an actual application. The DCTILPR is found to improve the SID accuracy of a system trained with MFCC features by 12%, indicating that the DCTILPR features capture speaker information which is missed by the MFCCs. It is also found that a combination of MFCC and DCTILPR features on a speaker verification task gives significant performance improvement in the case of short test utterances. Thus, on the whole, this study proposes an alternate way of extracting speaker information from the VS, and adds to the evidence for speaker information present in the VS.
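The modeling step described above, one diagonal-covariance GMM per speaker over DCT coefficient vectors, can be sketched as follows. The synthetic 20-dimensional features stand in for DCTILPR vectors, and the number of mixture components and the data are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Stand-ins for DCTILPR vectors: per-speaker clouds of DCT coefficients
# of pitch-synchronous ILPR cycles (synthetic, 20-dimensional).
train = {f"spk{i}": rng.normal(loc=i, scale=1.0, size=(200, 20))
         for i in range(3)}

# One diagonal-covariance GMM per speaker, as in the study.
models = {name: GaussianMixture(n_components=4, covariance_type="diag",
                                random_state=0).fit(X)
          for name, X in train.items()}

# Score a test utterance (a bag of feature frames) against every model;
# the speaker whose GMM gives the highest likelihood wins.
test = rng.normal(loc=1, scale=1.0, size=(50, 20))  # drawn like spk1
scores = {name: m.score(test) for name, m in models.items()}
print("Identified speaker:", max(scores, key=scores.get))
```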
105

An Efficient Classification Model for Analyzing Skewed Data to Detect Frauds in the Financial Sector / Un modèle de classification efficace pour l'analyse des données déséquilibrées pour détecter les fraudes dans le secteur financier

Makki, Sara 16 December 2019 (has links)
There are different types of risk in the financial domain, such as terrorist financing, money laundering, credit card fraud and insurance fraud, that may result in catastrophic consequences for entities such as banks or insurance companies. These financial risks are usually detected using classification algorithms. In classification problems, the skewed distribution of classes, also known as class imbalance, is a very common challenge in financial fraud detection, where special data mining approaches are used along with traditional classification algorithms to tackle this issue. The class imbalance problem occurs when one of the classes has many more instances than the other, and it is more acute in a big data context. The datasets used to build and train the models contain an extremely small portion of the minority group, known as positives, in comparison to the majority class, known as negatives. In most cases it is more delicate and crucial to correctly classify the minority group than the other group, as in fraud detection or disease diagnosis. In these examples, the fraud and the disease are the minority groups, and it is more delicate to detect a fraud record, because of its dangerous consequences, than a normal one. These class proportions make it very difficult for a machine learning classifier to learn the characteristics and patterns of the minority group. Classifiers will be biased towards the majority group because of its many examples in the dataset and will learn to classify them much faster than the other group. After conducting a thorough study of the challenges faced in class imbalance cases, we found that we still cannot reach an acceptable sensitivity (i.e. good classification of the minority group) without a significant decrease in accuracy. This leads to another challenge, which is the choice of performance measures used to evaluate models. In these cases this choice is not straightforward; accuracy or sensitivity alone are misleading. We use other measures, like the precision-recall curve or the F1-score, to evaluate this trade-off between accuracy and sensitivity. Our objective is to build an imbalanced classification model that considers extreme class imbalance and false alarms, in a big data framework. We developed two approaches: a Cost-Sensitive Cosine Similarity K-Nearest Neighbor (CoSKNN) as a single classifier, and a K-modes Imbalanced Classification Hybrid Approach (K-MICHA) as an ensemble learning methodology. In CoSKNN, our aim was to tackle the imbalance problem by using cosine similarity as a distance metric and by introducing a cost-sensitive score for classification using the KNN algorithm. We conducted a comparative validation experiment in which we proved the effectiveness of CoSKNN in terms of accuracy and fraud detection. The aim of K-MICHA, on the other hand, is to cluster data points that are similar in terms of the classifiers' outputs, and then to calculate the fraud probabilities in the obtained clusters in order to use them for detecting fraud in new transactions. This approach can be used for the detection of any type of financial fraud where labelled data are available. Finally, we applied K-MICHA to credit card, mobile payment and auto insurance fraud data sets. In all three case studies, we compared K-MICHA with stacking using voting, weighted voting, logistic regression and CART. We also compared it with AdaBoost and random forest, and we prove the efficiency of K-MICHA based on these experiments. We also applied K-MICHA in a big data framework using H2O and R, which allowed us to process and analyse larger datasets in very little time.
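A minimal sketch of the CoSKNN idea, cosine similarity as the neighbourhood measure with a cost-sensitive score favouring the minority class. The cost value, data and decision rule details are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def cosknn_predict(X_train, y_train, x, k=5, cost=3.0):
    """Cost-sensitive cosine-similarity KNN (sketch of the CoSKNN idea).

    Neighbours vote with their cosine similarity; votes for the minority
    (fraud) class are multiplied by `cost` to counter class imbalance.
    The cost value here is an assumed illustrative setting.
    """
    sims = (X_train @ x) / (
        np.linalg.norm(X_train, axis=1) * np.linalg.norm(x) + 1e-12)
    nearest = np.argsort(-sims)[:k]
    fraud_score = sum(sims[i] * cost for i in nearest if y_train[i] == 1)
    legit_score = sum(sims[i] for i in nearest if y_train[i] == 0)
    return int(fraud_score > legit_score)

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = (rng.random(1000) < 0.02).astype(int)     # ~2% minority class
X[y == 1] += 2.0                              # shift the fraud cluster
print(cosknn_predict(X, y, X[y == 1][0]))     # expect 1 (fraud)
```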
106

Modèles géométriques avec défauts pour la fabrication additive / Skin Model Shapes for Additive Manufacturing

Zhu, Zuowei 10 July 2019 (has links)
The intricate error sources within different stages of the Additive Manufacturing (AM) process have brought about major issues regarding the dimensional and geometrical accuracy of the manufactured product. Therefore, effective modeling of the geometric deviations is critical for AM. The Skin Model Shapes (SMS) paradigm offers a comprehensive framework aiming at addressing the deviation modeling problem at different stages of the product lifecycle, and is thus a promising solution for deviation modeling in AM. In this thesis, considering the layer-wise characteristic of AM, a new SMS framework is proposed which characterizes the deviations in AM from in-plane and out-of-plane perspectives. The modeling of in-plane deviation aims at capturing the variability of the 2D shape of each layer. A shape transformation perspective is proposed which maps the variational effects of deviation sources into affine transformations of the nominal shape. With this assumption, a parametric deviation model is established based on the Polar Coordinate System which manages to capture deviation patterns regardless of the shape complexity. This model is further enhanced with a statistical learning capability to simultaneously learn from deviation data of multiple shapes and improve the performance on all shapes. Out-of-plane deviation is defined as the deformation of a layer in the build direction. A layer-level investigation of out-of-plane deviation is conducted with a data-driven method. Based on the deviation data collected from a number of Finite Element simulations, two modal analysis methods, the Discrete Cosine Transform (DCT) and Statistical Shape Analysis (SSA), are adopted to identify the most significant deviation modes in the layer-wise data. The effect of part and process parameters on the identified modes is further characterized with a Gaussian Process (GP) model. The discussed methods are finally used to obtain high-fidelity SMSs of AM products by deforming the nominal layer contours with the predicted deviations and rebuilding the complete non-ideal surface model from the deformed contours. A toolbox is developed in the MATLAB environment to demonstrate the effectiveness of the proposed methods.
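The DCT-based modal analysis of out-of-plane deviation described above can be sketched as follows: each simulated deviation profile is transformed with the DCT, modes are ranked by their average energy, and a profile is reconstructed from the dominant modes only. The synthetic profiles stand in for the Finite Element results used in the thesis.

```python
import numpy as np
from scipy.fft import dct, idct

# Synthetic out-of-plane deviation profiles sampled along a layer contour
# (stand-ins for Finite Element results; 50 simulations, 128 points each).
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 128)
profiles = (0.3 * np.sin(t) + 0.1 * np.sin(3 * t)
            + 0.02 * rng.normal(size=(50, 128)))

# DCT each profile and rank modes by average energy across simulations.
coeffs = dct(profiles, norm="ortho", axis=1)
energy = (coeffs**2).mean(axis=0)
dominant = np.argsort(-energy)[:5]
print("Dominant DCT modes:", dominant)

# Reconstruct one profile from the dominant modes only.
kept = np.zeros(128)
kept[dominant] = coeffs[0, dominant]
approx = idct(kept, norm="ortho")
print("Reconstruction error:", float(np.abs(approx - profiles[0]).max()))
```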
107

Algoritmy pro detekci anomálií v datech z klinických studií a zdravotnických registrů / Algorithms for anomaly detection in data from clinical trials and health registries

Bondarenko, Maxim January 2018 (has links)
This master's thesis deals with the problem of anomaly detection in data from clinical trials and medical registries. The purpose of this work is to carry out a literature review on data quality in clinical trials and to design a custom algorithm, based on machine learning methods, for detecting anomalous records in real clinical data from current or completed clinical trials or medical registries. The practical part describes the implemented detection algorithm, which consists of several parts: importing data from the information system, preprocessing and transforming the imported data records, whose variables have different data types, into numerical vectors, applying well-known statistical methods for detecting outliers, and evaluating the quality and accuracy of the algorithm. The result is a vector of parameters containing anomalies, intended to make the data manager's work easier. The algorithm is designed to extend the set of functions of the information system (CLADE-IS) with automatic monitoring of data quality by detecting anomalous records.
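A minimal sketch of the pipeline described above: mixed-type records encoded as numerical vectors, followed by outlier detection. Isolation Forest is used here only as a stand-in detector (the thesis relies on classical statistical methods), and the toy records are invented.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy clinical-registry records with mixed data types (illustrative only).
df = pd.DataFrame({
    "age":       [34, 51, 46, 39, 240],   # 240 is a deliberate entry error
    "weight":    [70, 82, 65, 77, 80],
    "diagnosis": ["A", "B", "A", "A", "B"],
})

# Transform mixed-type records into numerical vectors.
pre = ColumnTransformer([
    ("num", StandardScaler(), ["age", "weight"]),
    ("cat", OneHotEncoder(), ["diagnosis"]),
])
X = pre.fit_transform(df)

# Flag anomalous records for the data manager to review (-1 = anomaly).
flags = IsolationForest(random_state=0).fit_predict(X)
print(df[flags == -1])
```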
109

Porovnání možností komprese multimediálních signálů / Comparison of Multimedia Signal Compression Possibilities

Špaček, Milan January 2013 (has links)
This thesis deals with the comparison of multimedia signal compression options, focusing on video and advanced codecs. Specifically, it describes the encoding and decoding of video recordings according to the MPEG standard. The theoretical part describes the characteristic properties of the video signal and justifies the need for compression in recording and transmission. Methods for eliminating redundancy and irrelevance in the encoded video signal are also described, as are ways of measuring video signal quality. A separate chapter is devoted to the characteristics of currently used and promising codecs. In the practical part of the thesis, functions were created in the MATLAB environment and integrated into a graphical user interface that simulates the functional blocks of the encoder and decoder. Based on user-specified input parameters, it encodes and decodes a given picture, composed of images in RGB format, and displays the outputs of the individual functional blocks. Algorithms are implemented for the initial processing of the input sequence, including sub-sampling, as well as the DCT, quantization, motion compensation and their inverse operations. Separate chapters are dedicated to the implementation of the codec in the MATLAB environment and to the outputs of the individual processing steps. Comparisons of the compression algorithms and the impact of parameter changes on the final signal are also discussed. The findings are summarized in the conclusion.
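The DCT and quantization blocks of such an encoder can be sketched as follows for a single 8x8 intra block, using the standard JPEG/MPEG-style luminance quantization matrix. This is a simplified illustration of two functional blocks, not the thesis's MATLAB implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard luminance quantization matrix (JPEG/MPEG-style example values).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def encode_block(block):
    """DCT + quantization of one 8x8 block, as in an MPEG intra frame."""
    return np.round(dctn(block - 128.0, norm="ortho") / Q)

def decode_block(q):
    """Dequantization + inverse DCT (lossy reconstruction)."""
    return idctn(q * Q, norm="ortho") + 128.0

rng = np.random.default_rng(4)
block = rng.integers(0, 256, (8, 8)).astype(float)
rec = decode_block(encode_block(block))
print("Max reconstruction error:", float(np.abs(rec - block).max()))
```

The reconstruction error illustrates the irrelevance reduction mentioned in the abstract: high-frequency coefficients are quantized coarsely because their loss is least visible.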
110

Biometrie s využitím snímků duhovky / Biometry based on iris images

Tobiášová, Nela January 2014 (has links)
Biometric techniques are well known and widespread nowadays. In this context, biometry means automated person recognition using anatomic features. This work uses the iris as the anatomic feature. Iris recognition is regarded as the most promising technique of all because of its non-invasiveness and low error rate. The inventor of iris recognition is John G. Daugman, whose work underlies almost all current published work on this technology. This thesis is concerned with biometry based on iris images. The principles of biometric methods based on iris images are described in the first part. The first practical part of this work is aimed at the design and implementation of two methods that localize the inner boundary of the iris. The third part presents the design and implementation of iris image processing for classifying persons. The last chapter focuses on the evaluation of the experimental results and compares our results with several well-known methods.
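One common way to localize the inner (pupil) boundary, thresholding the dark pupil region and fitting a circle with the Hough transform, can be sketched as below. This is a generic approach with assumed parameter values, not necessarily either of the two methods proposed in the thesis.

```python
import cv2
import numpy as np

def locate_pupil(gray):
    """Locate the iris inner boundary (pupil) in a grayscale eye image.

    A simple sketch: the pupil is typically the darkest region, so
    threshold it, smooth, and fit a circle with the Hough transform.
    Parameter values are illustrative and would need tuning per database.
    """
    blurred = cv2.medianBlur(gray, 7)
    # Keep only very dark pixels (assumed pupil intensity range).
    _, mask = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY_INV)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=gray.shape[0],
                               param1=100, param2=20,
                               minRadius=15, maxRadius=80)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return int(x), int(y), int(r)  # centre and radius of inner boundary

# Usage: gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
#        print(locate_pupil(gray))
```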
