  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Security, Privacy and Performance Improvements for Fuzzy Extractors

Brien, Renaud, 08 June 2020
With biometrics becoming commonly used in a variety of applications, keeping them private and secure is an important issue. Indeed, the convenience of using biometrics for authentication is counteracted by the fact that they cannot easily be modified or changed, which can have dire consequences for a person if their biometrics are leaked. Over the past decades, various techniques have been proposed to address this problem, ranging from storing randomized templates, to homomorphic encryption, to biometric encryption techniques such as fuzzy extractors. Fuzzy extractors are a construction that allows cryptographic keys to be extracted from noisy data such as biometrics. The key can then be rebuilt from some helper data and another biometric reading, provided it is similar enough to the one used to generate the key. This can be achieved through various approaches, such as a quantizer or an error-correcting code. In this thesis, we consider fuzzy extractors specifically for facial images. The first part of this thesis focuses on improving the security, privacy and performance of the extractor for faces first proposed by Sutcu et al. Our improvements make their construction more resistant to partial and total leaks of secure information, and improve its performance in a biometric authentication setting. The second part looks at using low-density lattice codes (LDLC) as the quantizer in the fuzzy extractor, instead of component-based quantization. Although LDLC have been proposed as a quantizer for a general fuzzy extractor, they have not yet been used or tested on continuous biometrics such as face images. We present a fuzzy extractor construction using LDLC and analyze its performance on a publicly available dataset of images. The LDLC quantizer has lower accuracy on this dataset than the improved scheme from the first part of the thesis.
On the other hand, the LDLC scheme performs better when the inputs have additive white Gaussian noise (AWGN), as we show through simulated data. As such, we expect it to perform well on data and biometrics whose variance is akin to an AWGN channel.
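The core idea described in this abstract, deriving a key from quantized noisy features while storing only public helper data, can be sketched in a few lines. This is an illustrative toy using per-component scalar quantization with a hypothetical step size, not the thesis's actual scheme or the LDLC quantizer:

```python
import hashlib

def gen(w, step=8):
    """Gen: quantize each feature to the nearest multiple of `step`,
    derive a key from the lattice point, and publish the offset as
    helper data (the offset reveals nothing about which point was chosen)."""
    centers = [round(x / step) for x in w]
    helper = [x - c * step for x, c in zip(w, centers)]
    key = hashlib.sha256(repr(centers).encode()).hexdigest()
    return key, helper

def rep(w_noisy, helper, step=8):
    """Rep: subtract the helper offset, re-quantize, and re-derive the key.
    Succeeds whenever each noise component is smaller than step/2."""
    centers = [round((x - h) / step) for x, h in zip(w_noisy, helper)]
    return hashlib.sha256(repr(centers).encode()).hexdigest()

enrol = [103.2, 47.9, 88.1]            # hypothetical facial feature vector
key, helper = gen(enrol)
assert rep([101.0, 50.3, 86.5], helper) == key   # small noise: key recovered
assert rep([140.0, 12.0, 60.0], helper) != key   # large noise: different key
```

A real scheme replaces the scalar quantizer with a component-based or LDLC quantizer and adds the leakage protections the thesis studies, but the Gen/Rep interface is the same.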
2

Testing Fuzzy Extractors for Face Biometrics: Generating Deep Datasets

Tambay, Alain Alimou, 11 November 2020
Biometrics can provide an alternative to conventional authentication methods. Much research has been done in the field of biometrics, and efforts have been made to make biometric systems more usable in practice. The initial application for our work is a proof of concept for a system that would expedite some low-risk travellers' arrival into the country while preserving the user's privacy. This thesis focuses on the subset of problems related to the generation of cryptographic keys from noisy data, biometrics in our case. The thesis is built in two parts. In the first, we implemented a key-generating, quantization-based fuzzy extractor scheme for facial-feature biometrics, based on the work of Dodis et al. and of Sutcu, Li, and Memon. The scheme was modified to increase user privacy, address some implementation issues, and incorporate testing-driven changes tailoring it to its expected real-world usage. We show that our implementation does not significantly affect the scheme's performance, while providing additional protection against malicious actors who may gain access to the biometric information stored on a server. The second part consists of a process to automate the generation of deep datasets suitable for testing similar schemes. With minimal work, the process produced a larger dataset than those available for free online, and showed that such datasets can be expanded further with little additional effort. The larger dataset allowed the creation of more representative recognition challenges. We show that our implementation performs similarly to other non-commercial schemes; further refinement will be necessary before it can be compared to commercial applications.
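The abstract's notion of "deepening" a dataset, generating extra per-subject samples at little cost, can be illustrated with simple label-preserving transforms. The transforms and the tiny grayscale "image" below are hypothetical stand-ins, not the actual pipeline built in the thesis:

```python
import random

def augment(image, n_variants=4, seed=0):
    """Expand one grayscale image (a list of pixel rows, values 0-255) into
    several variants via label-preserving transforms: optional horizontal
    flip, a global brightness shift, and light per-pixel noise."""
    rng = random.Random(seed)
    variants = [image]
    for _ in range(n_variants):
        img = [row[:] for row in image]
        if rng.random() < 0.5:                       # horizontal flip
            img = [row[::-1] for row in img]
        shift = rng.randint(-10, 10)                 # brightness shift
        img = [[min(255, max(0, p + shift)) for p in row] for row in img]
        img = [[min(255, max(0, p + rng.randint(-3, 3))) for p in row]
               for row in img]                       # light pixel noise
        variants.append(img)
    return variants

face = [[10, 200, 30],                               # toy 2x3 "face image"
        [40, 50, 60]]
dataset = augment(face)
print(len(dataset))                                  # 5 images from 1 original
```

Applied across every subject in a collection, a loop like this multiplies the number of samples per identity, which is what makes larger and more representative recognition challenges cheap to construct.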
