101

Métodos lineares e não lineares de análise de séries temporais e sua aplicação no estudo da variabilidade da frequência cardíaca de jovens saudáveis / Linear and nonlinear methods of time series analysis and their application to the study of heart rate variability in healthy young people

Ferreira, Maria Teodora [UNESP] 25 February 2010 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this work we study linear and nonlinear methods used in the analysis of experimental time series and apply these methods to the analysis of the Heart Rate Variability (HRV) of healthy young people. The linear methods are based on statistical analysis, while the nonlinear methods are related to the theory of deterministic dynamical systems, of which chaos theory is an integral part. HRV is the study of the variation in the intervals between heart beats, measured as the RR intervals of the cardiac signal, because these have the greatest potential to be captured. The time series obtained from these measurements are called series of RR intervals (or simply RR series). Based on the methods studied, we performed HRV analysis of 88 volunteers, of whom 86 presented no diagnosed clinical disease and are considered healthy young people, while two are young people with some diagnosed clinical pathology... (Complete abstract: click electronic access below)
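The linear (statistical) side of such an analysis is simple to reproduce. Below is a minimal sketch, assuming an RR series in milliseconds; the function and the synthetic series are illustrative, not taken from the thesis.

```python
import numpy as np

def linear_hrv_measures(rr_ms):
    """Common time-domain (linear) HRV statistics from an RR series.

    rr_ms: 1-D array of RR intervals in milliseconds.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                           # successive differences
    return {
        "mean_rr": rr.mean(),                    # mean RR interval
        "sdnn": rr.std(ddof=1),                  # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),    # short-term variability
        "pnn50": np.mean(np.abs(diff) > 50.0) * 100.0,  # % of diffs > 50 ms
    }

# Example: a synthetic RR series around 800 ms (75 bpm) with small fluctuations.
rng = np.random.default_rng(0)
rr_series = 800 + 30 * rng.standard_normal(300)
print(linear_hrv_measures(rr_series))
```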
102

Unconstrained Periocular Face Recognition: From Reconstructive Dictionary Learning to Generative Deep Learning and Beyond

Juefei-Xu, Felix 01 April 2018 (has links)
Many real-world face recognition tasks take place under unconstrained conditions such as off-angle pose variations, illumination variations, facial occlusion, facial expression, etc. In this work, we focus on real-world scenarios where only the periocular region of a face is visible, such as in the ISIS case. In Part I of the dissertation, we showcase the face recognition capability based on the periocular region, which we call periocular face recognition. We demonstrate that face matching using the periocular region directly is more robust than using the full face in terms of age-tolerant, expression-tolerant, and pose-tolerant face recognition, and that the periocular region contains more cues for determining the gender of a subject. In this dissertation, we study direct periocular matching comprehensively and systematically using both shallow and deep learning methods. Based on this, in Parts II and III of the dissertation, we explore an indirect way of carrying out periocular face recognition: periocular-based full face hallucination. The motivation is to capitalize on powerful commercial face matchers and deep learning-based face recognition engines, which are trained on large-scale full face images. The reproducibility and feasibility of re-training such systems for a particular facial region, such as the periocular region, is relatively low, due to the non-open-source nature of commercial face matchers as well as the amount of training data and computation power required by deep learning-based models. We carry out periocular-based full face hallucination using two proposed reconstructive dictionary learning methods: the dimensionally weighted K-SVD (DW-KSVD) dictionary learning approach and its kernel feature space counterpart, which uses a Fastfood kernel expansion approximation, to reconstruct high-fidelity full face images from the periocular region. We also propose two generative deep learning approaches that build upon deep convolutional generative adversarial networks (DCGAN) to generate the full face from periocular observations: the Gang of GANs (GoGAN) method and the discriminant nonlinear many-to-one generative adversarial networks (DNMM-GAN), with applications such as generative open-set landmark-free frontalization (Golf) for faces and universal face optimization (UFO), which tackle an even broader set of problems than periocular-based full face hallucination. Throughout Parts I-III, we study how to handle challenging real-world scenarios such as unconstrained pose variations, unconstrained illumination conditions, and unconstrained low resolution of the periocular and facial images. Together, these parts aim to achieve unconstrained periocular face recognition through both direct periocular face matching and indirect periocular-based full face hallucination. In the final Part IV of the dissertation, we go beyond and explore several new deep learning methods that are statistically efficient for general-purpose image recognition: the local binary convolutional neural networks (LBCNN), the perturbative neural networks (PNN), and the polynomial convolutional neural networks (PolyCNN).
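As a rough illustration of direct periocular matching (not the dissertation's actual pipeline), one can embed a cropped periocular image with any shallow or deep descriptor and compare embeddings by cosine similarity. Everything here, including the toy pixel "embedding", is a hypothetical sketch.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_periocular(probe_img, gallery, embed, threshold=0.5):
    """Return the best gallery identity for a cropped periocular image.

    embed: any function mapping an image to a fixed-length feature vector
    (a shallow descriptor or a deep embedding network in practice).
    """
    probe_vec = embed(probe_img)
    scores = {name: cosine_similarity(probe_vec, embed(img))
              for name, img in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy usage with a trivial "embedding" (flattened pixels); a real system
# would use a trained descriptor or CNN here.
embed = lambda img: img.ravel().astype(float)
gallery = {"alice": np.random.rand(32, 64), "bob": np.random.rand(32, 64)}
probe = gallery["alice"] + 0.01 * np.random.rand(32, 64)
print(match_periocular(probe, gallery, embed))
```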
103

Métodos lineares e não lineares de análise de séries temporais e sua aplicação no estudo da variabilidade da frequência cardíaca de jovens saudáveis / Linear and nonlinear methods of time series analysis and their application to the study of heart rate variability in healthy young people

Ferreira, Maria Teodora. January 2010 (has links)
Advisor: Marcelo Messias / Committee: Eibert Einstein Nehrer Macau / Committee: José Raimundo de Souza Passos / Abstract: In this work we study linear and nonlinear methods used in the analysis of experimental time series and apply these methods to the analysis of the Heart Rate Variability (HRV) of healthy young people. The linear methods are based on statistical analysis, while the nonlinear methods are related to the theory of deterministic dynamical systems, of which chaos theory is an integral part. HRV is the study of the variation in the intervals between heart beats, measured as the RR intervals of the cardiac signal, because these have the greatest potential to be captured. The time series obtained from these measurements are called series of RR intervals (or simply RR series)... (Complete abstract: click electronic access below) / Master's
104

Solving ill-posed problems with mollification and an application in biometrics

Lindgren, Emma January 2018 (has links)
This is a thesis about how mollification can be used as a regularization method to reduce noise in ill-posed problems in order to make them well-posed. Ill-posed problems are problems where noise gets magnified during the solution process; an example is how measurement errors grow under differentiation. To correct this we use mollification. Mollification is a regularization method that uses integration, or a weighted average, to even out a noisy function. The two types of error that occur when mollifying are the truncation error and the propagated data error. We calculate these errors and examine what affects them. Another thing worth investigating is the ability to differentiate a mollified function even if the function itself cannot be differentiated. An application of mollification is a blood vessel problem in biometrics where the goal is to calculate the elasticity of the blood vessel's wall. To do this, measurements from the blood and the blood vessel are required, as well as equations for the calculations. The model used for the calculations is ill-posed with respect to specific variables, which is why we want to apply mollification. Here we also examine how the noise level and the mollification radius affect the final result.
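As an illustration of the idea (a minimal numerical sketch, not the thesis's construction), the snippet below mollifies noisy samples with a truncated Gaussian kernel and then differentiates; the kernel shape and radius are assumptions. A larger radius reduces the propagated data error but increases the truncation error, and vice versa.

```python
import numpy as np

def mollify(f_noisy, x, radius):
    """Mollify noisy samples f_noisy on a uniform grid x by convolving
    with a truncated Gaussian mollifier of the given radius (weighted
    averaging)."""
    dx = x[1] - x[0]
    t = np.arange(-3 * radius, 3 * radius + dx, dx)
    kernel = np.exp(-(t / radius) ** 2)
    kernel /= kernel.sum()          # normalize so weights sum to 1
    return np.convolve(f_noisy, kernel, mode="same")

# Example: differentiating noisy samples of sin(x). Direct finite
# differences magnify the noise; differentiating the mollified data does not.
x = np.linspace(0, 2 * np.pi, 1000)
noisy = np.sin(x) + 0.05 * np.random.default_rng(1).standard_normal(x.size)
smooth = mollify(noisy, x, radius=0.2)
d_noisy = np.gradient(noisy, x)     # noisy derivative (ill-posed step)
d_smooth = np.gradient(smooth, x)   # regularized derivative
print(np.abs(d_smooth - np.cos(x))[100:-100].max())  # interior error only
```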
105

Tecnologia computacional de apoio a rastreabilidade biométrica de bovinos / Computer technology to support bovines biometric traceability

Walter da Silva Leick 13 October 2016 (has links)
Brazil is one of the largest producers and exporters of beef on the planet and is expected to account for 45% of world consumption, although most of its output is still consumed locally. To keep this position and expand sales in both the domestic and foreign markets, it is important to guarantee product quality. This quality is only achieved when the entire production chain can be managed, allowing all of an animal's data to be recorded along the chain. Both the government, through SISBOV, and large distributors have management systems that provide this control through traceability techniques. Identification of the animal is the key point for traceability, and today it is done with button tags, transponders, and ear tags, among other devices. All of these methods are invasive and susceptible to loss and tampering. This dissertation shows the feasibility of adding biometric identification to traceability systems, whether existing or new, using the bovine muzzle print as an example. To this end, programs were developed to capture information through a mobile phone running the Android operating system which, together with web-based programs, could register and confirm the identity of the animal. Tests showed the Android application's ability to locate and capture the muzzle print. With the collected data it was possible to store the information or confirm the animal's identity through the server's services. The use of this type of identification in new or existing management systems was thus shown to be viable, and the proposed methodology could be useful in commercial applications focused on bovine traceability.
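The register/confirm flow between the capture client and the server-side services might be sketched as below. This is purely illustrative (an in-memory registry and exact-hash comparison); a real muzzle-print system would need a tolerant biometric matcher and a database, and all names here are hypothetical.

```python
import hashlib

# Hypothetical in-memory "server-side" registry mapping animal IDs to
# muzzle-print templates. Exact-hash comparison stands in for matching;
# real biometric templates are noisy and need a similarity-based matcher.
registry = {}

def enroll(animal_id: str, template: bytes) -> None:
    registry[animal_id] = hashlib.sha256(template).hexdigest()

def verify(animal_id: str, template: bytes) -> bool:
    return registry.get(animal_id) == hashlib.sha256(template).hexdigest()

enroll("BR-0001", b"muzzle-feature-vector-bytes")
print(verify("BR-0001", b"muzzle-feature-vector-bytes"))  # True
```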
106

Archeozoologická problematika eneolitu Čech / Archeozoology of the Czech Eneolithic

Kyselý, René January 2010 (has links)
This dissertation is a contribution to the understanding of animal history and the relationship between man and animal during the Eneolithic, i.e. the period spanning ca 4500-2200 BC. The Eneolithic period differs from the Neolithic in several respects. Traditionally the development of (copper) metallurgy is considered the primary cause of socio-economic changes; however, Sherratt's theory of a "secondary products revolution" points to the fundamental relevance of a rapid change from the use of primary animal products (meat, skin, etc.) to the use of secondary products (milk, wool, labour, mainly the yoke) precisely in the period corresponding to the Bohemian Eneolithic. Nevertheless, this theory is still being discussed and criticised and, considering the possibly mosaic nature of the palaeoeconomic situation, it should first be verified at local and regional levels. The author of this thesis analysed in detail ca 49,500 osteological finds from archaeological settlements in Bohemia, of which ca 13,500 could be closely determined zoologically. Further data were adopted from publications on Czech and Moravian sites (ca 22,000 finds, of which 11,000 were determinable). This material was subjected to detailed archaeozoological analysis with a unified methodology and techniques covering taphonomy,...
107

Security, Privacy and Performance Improvements for Fuzzy Extractors

Brien, Renaud 08 June 2020 (has links)
With biometrics becoming commonly used in a variety of applications, keeping those biometrics private and secure is an important issue. Indeed, the convenience of using biometrics for authentication is counteracted by the fact that they cannot easily be modified or changed, which can have dire consequences for a person whose biometrics are leaked. Over the past decades, various techniques have been proposed to solve this problem, ranging from using and storing randomized templates, to homomorphic encryption, to biometric encryption techniques such as fuzzy extractors. Fuzzy extractors are a construction that allows the extraction of cryptographic keys from noisy data like biometrics. The key can then be rebuilt from some helper data and another biometric reading, provided it is similar enough to the biometric used to generate the key. This can be achieved through various approaches, such as a quantizer or an error-correcting code. In this thesis, we consider fuzzy extractors specifically for facial images. The first part of this thesis focuses on improving the security, privacy and performance of the extractor for faces first proposed by Sutcu et al. Our improvements make their construction more resistant to partial and total leaks of secure information, and improve its performance in a biometric authentication setting. The second part looks at using low density lattice codes (LDLC) as the quantizer in the fuzzy extractor, instead of component-based quantization. Although LDLC have been proposed as a quantizer for a general fuzzy extractor, they had yet to be used or tested on continuous biometrics like face images. We present a construction for a fuzzy extractor scheme using LDLC and analyze its performance on a publicly available data set of images. Using an LDLC quantizer on this data set yields lower accuracy than the improved scheme from the first part of this thesis. On the other hand, the LDLC scheme performs better when the inputs have additive white Gaussian noise (AWGN), as we show through simulated data. As such, we expect it to perform well in general on data and biometrics with variance akin to an AWGN channel.
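A minimal sketch of the classic code-offset fuzzy extractor (not the constructions of this thesis): a random key is encoded with an error-correcting code and XOR-masked with the biometric bits to form the helper data. A repetition code stands in for the real code purely for brevity; it would leak too much entropy in practice, where BCH codes or LDLC-style quantizers are used instead.

```python
import hashlib
import secrets

def _repeat_encode(bits, n=7):
    return [b for b in bits for _ in range(n)]

def _repeat_decode(bits, n=7):
    # majority vote per block of n repeated bits
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

def fe_generate(biometric_bits, n=7):
    """Code-offset 'generate': returns (key, public helper data)."""
    k = [secrets.randbelow(2) for _ in range(len(biometric_bits) // n)]
    codeword = _repeat_encode(k, n)
    helper = [c ^ w for c, w in zip(codeword, biometric_bits)]  # offset
    key = hashlib.sha256(bytes(k)).hexdigest()
    return key, helper

def fe_reproduce(noisy_bits, helper, n=7):
    """'Reproduce': recovers the key if noisy_bits is close enough."""
    shifted = [h ^ w for h, w in zip(helper, noisy_bits)]  # codeword + error
    k = _repeat_decode(shifted, n)
    return hashlib.sha256(bytes(k)).hexdigest()

# Example: flip a few bits (sensor noise) and still recover the same key.
w = [secrets.randbelow(2) for _ in range(70)]
key, helper = fe_generate(w)
noisy = list(w); noisy[3] ^= 1; noisy[40] ^= 1
assert fe_reproduce(noisy, helper) == key
```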
108

Biometria no Brasil e o registro de identidade civil: novos rumos para a identificação / Biometrics in Brazil and the civil identity register: new directions for identification

Kanashiro, Marta Mourão 23 September 2011 (has links)
This research focuses on technologies that enable access control, surveillance, monitoring and identification of persons, and that are connected with the construction of databases and profiles of the population. Within this vast universe, biometric identification technology is examined through a case study of Brazil's new biometric identity document: the Civil Identity Register. Drawing on Foucault's concept of the apparatus (dispositif), the research brings out the discourses, institutions, laws, legal debates, measures, decisions, and scientific statements that configure the operation of power today. Within the sciences, biometrics has distanced itself from the anthropometry and identification practices of the nineteenth century. It is tied to an exercise of power that no longer disciplines bodies (Michel Foucault), but manages flows of data, a "body of data". The new technologies in focus point to an exercise of power closer to what Gilles Deleuze called societies of control.
109

Towards an Accurate ECG Biometric Authentication System with Low Acquisition Time

Arteaga Falconi, Juan Sebastian 31 January 2020 (has links)
Biometrics is the study of physical or behavioural traits that establish the identity of a person. Forensics, physical security and cyber security are some of the main fields that use biometrics. Unlike traditional authentication factors such as passwords, biometrics cannot be lost, forgotten or shared, because biometrics establish the identity of a person based on a physiological or behavioural characteristic rather than on what the person possesses or remembers. Biometrics has two modes of operation: identification and authentication. Identification finds the identity of a person within a group of persons; authentication determines whether the claimed identity of a person is truthful. Biometric person authentication is an alternative to passwords or graphical patterns and prevents shoulder-surfing attacks, i.e., people watching from a short distance. Nevertheless, the biometric traits of conventional authentication techniques like fingerprints, face, and to some extent iris, are easy to capture and duplicate. This poses a security risk for modern and future applications, such as digital twins, where an attacker can copy and duplicate a biometric trait in order to spoof a biometric system. Researchers have proposed ECG-based biometric authentication to solve this problem: ECG authentication conceals the biometric trait and reduces the risk of an attack by duplication. However, current ECG authentication solutions require 10 or more seconds of ECG signal to achieve accurate results, because accuracy is directly proportional to the length of the ECG signal used for authentication. This makes ECG authentication inconvenient to implement in an end-user product, because a user cannot be expected to wait 10 or more seconds to gain secure access to their device. This thesis addresses the spoofing problem by proposing an accurate and secure ECG biometric authentication system that uses a relatively short ECG signal. The system consists of ECG acquisition from lead I (two electrodes), signal processing for filtering and R-peak detection, a feature extractor and an authentication process. To evaluate the system, we developed a method to calculate the Equal Error Rate (EER) with non-normally distributed data. For the authentication process, we first propose an approach based on a Support Vector Machine (SVM) and achieve 4.5% EER with 4 seconds of ECG signal. This approach opens the door to a deeper understanding of the signal, so we enhance it with a hybrid approach combining Convolutional Neural Networks (CNN) with SVM: features are automatically detected and extracted by deep learning (the CNN) and then passed to a one-class SVM classifier for authentication, which proved to improve accuracy for one-class ECG classification. This hybrid approach reduces the EER to 2.84% with 4 seconds of ECG signal. Furthermore, we investigated the combination of two different biometric techniques, fusing fingerprint with ECG at the decision level, and improved the accuracy to 0.46% EER while maintaining a short ECG signal length of 4 seconds. Decision-level fusion requires only information that is available from any biometric technique; fusion at other levels, such as feature-level fusion, requires information about features that may be incompatible or hidden.
Fingerprint minutiae are composed of information that differs from ECG peaks and valleys, so fusion at the feature level is not possible unless the fusion algorithm provides a compatible conversion scheme. Moreover, proprietary biometric hardware does not expose information about its features or algorithms, so the features are hidden and not accessible for feature-level fusion, whereas a final decision is always available for decision-level fusion.
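The kind of pipeline described (filtering, R-peak detection, feature extraction, one-class SVM) might look like the following minimal sketch. The filter band, window sizes and thresholds are illustrative assumptions, not the author's implementation, and the CNN stage of the hybrid approach is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from sklearn.svm import OneClassSVM

def rpeak_features(ecg, fs=250):
    """Bandpass-filter a lead-I ECG, detect R peaks, and build simple
    beat-level features (RR interval plus samples around each R peak)."""
    b, a = butter(3, [0.5 / (fs / 2), 40 / (fs / 2)], btype="band")
    filt = filtfilt(b, a, ecg)
    peaks, _ = find_peaks(filt, distance=int(0.4 * fs),
                          height=np.percentile(filt, 90))
    feats, half = [], int(0.1 * fs)
    for prev, cur in zip(peaks, peaks[1:]):
        if cur - half < 0 or cur + half > len(filt):
            continue
        window = filt[cur - half:cur + half]
        feats.append(np.concatenate([[cur - prev], window]))
    return np.array(feats)

# Enrollment: fit a one-class SVM on the genuine user's beats; at login,
# authenticate if enough beats from a ~4 s recording score as inliers.
def enroll(ecg, fs=250):
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
    return clf.fit(rpeak_features(ecg, fs))

def authenticate(clf, ecg, fs=250, accept_ratio=0.6):
    preds = clf.predict(rpeak_features(ecg, fs))  # +1 inlier, -1 outlier
    return np.mean(preds == 1) >= accept_ratio
```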
110

Rozpoznávání podobnosti otisků prstu / Fingerprint identification algorithm

Skoupilová, Alena January 2020 (has links)
The main goal of this thesis is to introduce the meaning of identification, specifically personal identification and above all fingerprint identification. Biometrics is introduced as the key field for fingerprint recognition. The thesis also presents the principles and types of automatic processing and automatic detection of fingerprints. Existing and successful methods are described, and an automatic fingerprint identification algorithm is implemented and documented. The reliability of the algorithm is tested on an experimental database.
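The thesis's algorithm is not specified here, but a toy 1:N identification over minutiae point sets could look like the sketch below; the tolerance and threshold are assumptions, and real matchers also align the prints and compare ridge orientations.

```python
import numpy as np

def minutiae_score(query, candidate, tol=10.0):
    """Fraction of query minutiae (x, y points) that have a candidate
    minutia within `tol` pixels; a crude similarity that ignores the
    alignment and ridge-orientation cues real matchers use."""
    matched = 0
    for q in query:
        d = np.linalg.norm(candidate - q, axis=1)
        if d.min() <= tol:
            matched += 1
    return matched / len(query)

def identify(query, database, threshold=0.6):
    """1:N identification: return the best-matching identity, or None."""
    scores = {pid: minutiae_score(query, m) for pid, m in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

db = {"id1": np.array([[10, 10], [50, 40], [80, 90]]),
      "id2": np.array([[5, 60], [70, 20], [90, 95]])}
query = np.array([[12, 11], [49, 42], [79, 88]])
print(identify(query, db))  # "id1"
```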
