  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Development of an ISO 26262 ASIL D compliant verification system

Carlsson, Daniel January 2013 (has links)
In 2011 a new functional safety standard for electronic and electrical systems in vehicles was published, called ISO 26262. This standard concerns the whole lifecycle of the safety-critical elements used in cars, including the development process of such elements. As the correctness of the tools used when developing such an element is critical to the safety of the element, the standard includes requirements concerning the software tools used in the development, including verification tools. These requirements mainly specify that a developer of a safety-critical element should provide proof of their confidence in the software tools they are using. One recommended way to gain this confidence is to use tools developed in accordance with a “relevant subset of [ISO 26262]”. This project aims to develop a verification system in accordance with ISO 26262, exploring how and which specifications should be included in this “relevant subset” of ISO 26262 and to what extent these can be included in their current form. The work concludes with the development of a single safety element of the verification system, to give a demonstration of the viability of such a system.
2

Analyse de la qualité des signatures manuscrites en-ligne par la mesure d'entropie / Quality analysis of online signatures based on entropy measure

Houmani, Nesma 13 January 2011 (has links)
This thesis is set in the context of identity verification based on online handwritten signatures. Our work more particularly concerns the search for new measures that quantify the quality of online signatures and establish automatic reliability criteria for verification systems. We proposed three quality measures involving the concept of entropy. We proposed a quality measure at the level of each person, called "Personal Entropy", computed on a set of genuine signatures of that person. The originality of the approach lies in the fact that the entropy of the signature is computed by estimating the probability densities locally, on portions of the signature, by means of a Hidden Markov Model. We show that our measure encompasses the usual criteria used in the literature to quantify the quality of a signature, namely complexity, variability and legibility. This measure also makes it possible to generate, by unsupervised classification, categories of persons, both in terms of signature variability and of complexity of the written trace. Confronting this measure with the performance of standard verification systems on each category of persons, we found that performance degrades significantly (by at least a factor of 2) between persons in the "high Entropy" category (highly variable, not very complex signatures) and those in the "low Entropy" category (the most stable and most complex signatures). We then proposed a quality measure based on relative entropy (the Kullback-Leibler distance), named "Personal Relative Entropy", which quantifies a person's vulnerability to attacks (good forgeries). This is an original concept, studied very little in the literature. The vulnerability associated with each person is computed as the Kullback-Leibler distance between the local probability distributions estimated on the person's genuine signatures and those estimated on the forgeries associated with that person. For this we use two Hidden Markov Models, one trained on the person's genuine signatures and the other on the forgeries associated with that person. The lower the Kullback-Leibler distance, the more vulnerable to attacks the person is considered to be. This measure is better suited to the analysis of biometric systems because, in addition to the three usual criteria from the literature, it also covers vulnerability to forgeries. Finally, we proposed a quality measure for forged signatures, which is completely new in the literature. This quality measure is an extension of the Personal Entropy adapted to the context of forgeries: we exploited the statistical information of the target person to measure how closely the forged signature produced by an impostor fits the probability density function associated with the target person. We thus defined the forgery quality measure as the dissimilarity between the entropy associated with the person being imitated and that associated with the imitation. During the evaluation of verification systems, it makes it possible to quantify the quality of the forgeries and thus to provide information about the systems' resistance to attacks. We also showed the value of our Personal Entropy measure for improving the performance of verification systems in real applications. We showed that the Entropy measure can be used to: improve the enrolment procedure, quantify the degradation in signature quality due to a change of platform, select the best reference signatures, identify outlier signatures, and quantify the relevance of certain parameters for reducing temporal variability. / This thesis is focused on the quality assessment of online signatures and its application to online signature verification systems. Our work aims at introducing new quality measures quantifying the quality of online signatures and thus establishing automatic reliability criteria for verification systems. We proposed three quality measures involving the concept of entropy, widely used in Information Theory. We proposed a novel quality measure per person, called "Personal Entropy", calculated on a set of genuine signatures of such a person. The originality of the approach lies in the fact that the entropy of the genuine signature is computed locally, on portions of such a signature, based on local density estimation by a Hidden Markov Model. We show that our new measure includes the usual criteria of the literature, namely signature complexity, signature variability and signature legibility. Moreover, this measure allows generating, by an unsupervised classification, 3 coherent writer categories in terms of signature variability and complexity. Confronting this measure with the performance of two widely used verification systems (HMM, DTW) on each Entropy-based category, we show that performance degrades significantly (by at least a factor of 2) between persons of the "high Entropy" category, containing the most variable and the least complex signatures, and those of the "low Entropy" category, containing the most stable and the most complex signatures. We then proposed a novel quality measure based on the concept of relative entropy (also called Kullback-Leibler distance), denoted "Personal Relative Entropy", for quantifying a person's vulnerability to attacks (good forgeries). This is an original concept, and few studies in the literature are dedicated to this issue. This new measure computes, for a given writer, the Kullback-Leibler distance between the local probability distributions of his/her genuine signatures and those of his/her skilled forgeries: the higher the distance, the better the writer is protected from attacks. We show that such a measure simultaneously incorporates in a single quantity the usual criteria proposed in the literature for writer categorization, namely signature complexity and signature variability, as our Personal Entropy does, but also the criterion of vulnerability to skilled forgeries. This measure is better suited to biometric systems because it makes a good compromise between the resulting improvement of the FAR and the corresponding degradation of the FRR. We also proposed a novel quality measure aiming at quantifying the quality of skilled forgeries, which is totally new in the literature. Such a measure is based on the extension of our former Personal Entropy measure to the framework of skilled forgeries: we exploit the statistical information of the target writer for measuring to what extent an impostor's hand-drawn forgery sticks to the target probability density function.
In this framework, the quality of a skilled forgery is quantified as the dissimilarity existing between the target writer's own Personal Entropy and the entropy of the skilled forgery sample. Our experiments show that this measure allows an assessment of the quality of skilled forgeries in the main online signature databases available to the scientific community, and thus provides information about systems' resistance to attacks. Finally, we also demonstrated the interest of using our Personal Entropy measure for improving the performance of online signature verification systems in real applications. We show that the Personal Entropy measure can be used to: improve the enrolment process, quantify the quality degradation of signatures due to the change of platforms, select the best reference signatures, identify the outlier signatures, and quantify the relevance of time-function parameters in the context of temporal variability.
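To make the two entropy-based measures above concrete, here is a small, self-contained Python sketch. It is only an illustration of the general idea under simplifying assumptions: simple histogram densities stand in for the HMM-based local density estimation used in the thesis, and the names local_histograms, personal_entropy and personal_relative_entropy are invented for the example.

import numpy as np
from scipy.stats import entropy  # Shannon entropy, and KL divergence when given two distributions

def local_histograms(portions, bins=16):
    # Estimate one probability distribution per signature portion.
    # Each portion is an array of normalised samples (e.g. pen-tip coordinates or velocities);
    # a histogram replaces the HMM-based local density used in the thesis.
    hists = []
    for portion in portions:
        h, _ = np.histogram(portion, bins=bins, range=(0.0, 1.0))
        h = h + 1e-9                       # smoothing to avoid zero probabilities
        hists.append(h / h.sum())
    return hists

def personal_entropy(genuine_portions):
    # "Personal Entropy": average entropy of the local distributions estimated
    # on a writer's genuine signatures (higher = more variable / less complex).
    return float(np.mean([entropy(p) for p in local_histograms(genuine_portions)]))

def personal_relative_entropy(genuine_portions, forgery_portions):
    # "Personal Relative Entropy": average Kullback-Leibler distance between the local
    # distributions of genuine signatures and of their skilled forgeries;
    # a small value suggests the writer is more vulnerable to attacks.
    p_gen = local_histograms(genuine_portions)
    p_forg = local_histograms(forgery_portions)
    return float(np.mean([entropy(p, q) for p, q in zip(p_gen, p_forg)]))

# Toy usage with synthetic trajectory values split into five portions.
rng = np.random.default_rng(0)
genuine = [rng.beta(2.0, 5.0, 200) for _ in range(5)]
forgeries = [rng.beta(2.5, 4.5, 200) for _ in range(5)]
print(personal_entropy(genuine))
print(personal_relative_entropy(genuine, forgeries))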
3

Benevolent and Malevolent Adversaries: A Study of GANs and Face Verification Systems

Nazari, Ehsan 22 November 2023 (has links)
Cybersecurity is rapidly evolving, necessitating inventive solutions for emerging challenges. Deep Learning (DL), having demonstrated remarkable capabilities across various domains, has found a significant role within Cybersecurity. This thesis focuses on benevolent and malevolent adversaries. For the benevolent adversaries, we analyze specific applications of DL in Cybersecurity contributing to the enhancement of DL for downstream tasks. Regarding the malevolent adversaries, we explore how resistant DL is to (cyber) attacks and show vulnerabilities of specific DL-based systems. We begin by focusing on the benevolent adversaries by studying the use of a generative model, the Generative Adversarial Network (GAN), to improve the abilities of DL. In particular, we look at the use of Conditional Generative Adversarial Networks (CGAN) to generate synthetic data and address issues with imbalanced datasets in cybersecurity applications. Imbalanced classes are a significant issue in this field and can seriously degrade classifier performance. We find that CGANs can effectively address this issue, especially in more difficult scenarios. Then, we turn our attention to using CGANs with tabular cybersecurity problems. However, visually assessing the results of a CGAN is not possible when we are dealing with tabular cybersecurity data. To address this issue, we introduce AutoGAN, a method that can train a GAN on both image-based and tabular data, reducing the need for human inspection during GAN training. This opens up new opportunities for using GANs with tabular datasets, including those in cybersecurity that are not image-based. Our experiments show that AutoGAN can achieve results comparable to, or better than, other methods. Finally, we shift our focus to the malevolent adversaries by looking at the robustness of DL models in the context of automatic face recognition. We know from previous research that DL models can be tricked into making incorrect classifications by adding small, almost unnoticeable changes to an image. These deceptive manipulations are known as adversarial attacks. We aim to expose new vulnerabilities in DL-based Face Verification (FV) systems. We introduce a novel attack method on FV systems, called the DodgePersonation Attack, and a system for categorizing these attacks based on their specific targets. We also propose a new algorithm that significantly improves upon a previous method for making such attacks, increasing the success rate by more than 13%.
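As a rough illustration of the class-conditional generation idea used for rebalancing in the first part of this work, the sketch below defines a minimal conditional generator and discriminator for tabular feature vectors in PyTorch and then draws synthetic minority-class rows. It is a generic CGAN skeleton, not the CGAN or AutoGAN architecture of the thesis; NOISE_DIM, N_CLASSES, N_FEATURES and all layer sizes are assumed placeholder values.

import torch
import torch.nn as nn

NOISE_DIM, N_CLASSES, N_FEATURES = 32, 2, 40  # assumed sizes for the example

class Generator(nn.Module):
    # Maps (noise, class label) to a synthetic tabular feature vector.
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, N_FEATURES),
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    # Scores (sample, class label) pairs as real or generated.
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_CLASSES, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x, labels):
        return self.net(torch.cat([x, self.label_emb(labels)], dim=1))

# Oversampling the minority class once a CGAN of this shape has been trained:
gen = Generator()
minority = torch.full((64,), 1, dtype=torch.long)        # label of the rare class
synthetic = gen(torch.randn(64, NOISE_DIM), minority)    # 64 synthetic minority rows

In a tabular setting like this, the quality of the synthetic rows cannot be judged by eye, which is the gap the AutoGAN procedure described above is meant to address.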
4

Automatic Person Verification Using Speech and Face Information

Sanderson, Conrad, conradsand@ieee.org January 2003 (has links)
Identity verification systems are an important part of our everyday life. A typical example is the Automatic Teller Machine (ATM) which employs a simple identity verification scheme: the user is asked to enter their secret password after inserting their ATM card; if the password matches the one assigned to the card, the user is allowed access to their bank account. This scheme suffers from a major drawback: only the validity of the combination of a certain possession (the ATM card) and certain knowledge (the password) is verified. The ATM card can be lost or stolen, and the password can be compromised. Thus new verification methods have emerged, where the password has either been replaced by, or used in addition to, biometrics such as the person’s speech, face image or fingerprints. Apart from the ATM example described above, biometrics can be applied to other areas, such as telephone & internet based banking, airline reservations & check-in, as well as forensic work and law enforcement applications. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance easily degrades in the presence of a mismatch between training and testing conditions. For speech based systems this is usually in the form of channel distortion and/or ambient noise; for face based systems it can be in the form of a change in the illumination direction. A system which uses more than one biometric at the same time is known as a multi-modal verification system; it often comprises several modality experts and a decision stage. Since a multi-modal system uses complementary discriminative information, lower error rates can be achieved; moreover, such a system can also be more robust, since the contribution of the modality affected by environmental conditions can be decreased. This thesis makes several contributions aimed at increasing the robustness of single- and multi-modal verification systems. Some of the major contributions are listed below. The robustness of a speech based system to ambient noise is increased by using Maximum Auto-Correlation Value (MACV) features, which utilize information from the source part of the speech signal. A new facial feature extraction technique is proposed (termed DCT-mod2), which utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients of spatially neighbouring blocks. The DCT-mod2 features are shown to be robust to an illumination direction change as well as being over 80 times quicker to compute than 2D Gabor wavelet derived features. The fragility of Principal Component Analysis (PCA) derived features to an illumination direction change is solved by introducing a pre-processing step utilizing the DCT-mod2 feature extraction. We show that the enhanced PCA technique retains all the positive aspects of traditional PCA (that is, robustness to compression artefacts and white Gaussian noise) while also being robust to the illumination direction change. Several new methods, for use in fusion of speech and face information under noisy conditions, are proposed; these include a weight adjustment procedure, which explicitly measures the quality of the speech signal, and a decision stage comprised of a structurally noise resistant piece-wise linear classifier, which attempts to minimize the effects of noisy conditions via structural constraints on the decision boundary.
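To give a flavour of block-based DCT features of the kind described above, the following Python sketch computes the 2D DCT of non-overlapping image blocks and replaces the lowest-order coefficients of each block with differences taken over its horizontal and vertical neighbours. It is only a loose illustration: the block size, the number of retained coefficients, the coefficient ordering and the exact delta formula are assumptions, not the DCT-mod2 definition from the thesis.

import numpy as np
from scipy.fft import dctn  # 2D Discrete Cosine Transform

BLOCK, N_COEFFS, N_DELTAS = 8, 15, 3  # assumed sizes for the illustration

def block_dct_grid(image):
    # 2D DCT of each non-overlapping BLOCK x BLOCK patch, keeping the first N_COEFFS coefficients.
    rows, cols = image.shape[0] // BLOCK, image.shape[1] // BLOCK
    grid = np.zeros((rows, cols, N_COEFFS))
    for r in range(rows):
        for c in range(cols):
            patch = image[r*BLOCK:(r+1)*BLOCK, c*BLOCK:(c+1)*BLOCK]
            grid[r, c] = dctn(patch, norm='ortho').ravel()[:N_COEFFS]
    return grid

def dct_delta_features(image):
    # For each interior block, replace the first N_DELTAS DCT coefficients with
    # horizontal and vertical differences over the neighbouring blocks,
    # in the spirit of the DCT-mod2 idea (details here are assumptions).
    grid = block_dct_grid(image)
    feats = []
    for r in range(1, grid.shape[0] - 1):
        for c in range(1, grid.shape[1] - 1):
            dh = grid[r, c+1, :N_DELTAS] - grid[r, c-1, :N_DELTAS]  # horizontal deltas
            dv = grid[r+1, c, :N_DELTAS] - grid[r-1, c, :N_DELTAS]  # vertical deltas
            feats.append(np.concatenate([dh, dv, grid[r, c, N_DELTAS:]]))
    return np.array(feats)

face = np.random.rand(64, 64)          # stand-in for a normalised face image
print(dct_delta_features(face).shape)  # one feature vector per interior block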
5

Verificação de locutores independente de texto: uma análise de robustez a ruído / Text-independent speaker verification: an analysis of noise robustness

PINHEIRO, Hector Natan Batista 25 February 2015 (has links)
The process of identifying a given individual is carried out millions of times, every day, by organizations across the most diverse sectors. Questions such as "Who is this individual?" or "Is this person who they claim to be?" are frequently asked by financial organizations, health-care systems, e-commerce systems, telecommunication systems and government institutions. Biometric identification refers to the process of performing this identification from physical or behavioural characteristics. Such characteristics are commonly referred to as biometric characteristics, and some examples are: face, fingerprint, iris, signature and voice. Speaker recognition is a biometric modality that performs personal identification using only the information present in an individual's voice. This work focuses on the development of text-independent speaker verification systems. The main challenge in developing these systems comes from the so-called mismatches that can occur in the acquisition of the speech signals. The techniques proposed to mitigate them are called compensation techniques, and there are three domains in which they can operate: in the feature extraction process, in the construction of the speaker models and in the computation of the system's final score. Besides presenting an extensive review of the literature on the development of text-independent speaker verification systems, this work also presents the main feature, model and score compensation techniques. In the experimental phase, a comparative analysis of the main techniques proposed in the literature is presented. In addition, two compensation techniques are proposed, one in the modelling domain and the other in the score domain. The proposed score compensation technique is based on the Normal cumulative distribution and, in some contexts, yielded results superior to those of the main techniques in the literature. The proposed model compensation technique is based on a technique from the literature that combines two concepts: multi-condition training and Missing Data Theory. The formulation presented by its authors is based on the so-called Posterior Union Models, but it is not completely suitable for text-independent speaker verification. This work presents a formulation appropriate for this context, which combines the two concepts used by those authors with a type of modelling using UBMs (Universal Background Models). The proposed technique showed performance gains when compared with the standard GMM-UBM technique, based on Gaussian Mixture Models (GMMs). / The personal identification of individuals is a task executed millions of times every day by organizations from diverse fields. Questions such as "Who is this individual?" or "Is this person who he or she claims to be?" are constantly asked by organizations in financial services, health care, e-commerce, telecommunication systems and governments. Biometric identification is the process of identifying people using their physiological or behavioral characteristics. These characteristics are generally known as biometrics, and examples of these include face, fingerprint, iris, handwriting and speech. Speaker recognition is a biometric modality which performs personal identification by using speaker-specific information from the speech. This work focuses on the development of text-independent speaker verification systems. In these systems, speech from an individual is used to verify the claimed identity of that individual. Furthermore, the verification must occur independently of the pronounced word or phrase. The main challenge in the development of speaker recognition systems comes from the mismatches which may occur in the acquisition of the speech signals. The techniques proposed to mitigate the mismatch effects are referred to as compensation methods. They may operate in three domains: in the feature extraction process, in the estimation of the speaker models and in the computation of the decision score. Besides presenting a wide description of the main techniques used in the development of text-independent speaker verification systems, this work presents the description of the main feature-, model- and score-based compensation methods. In the experiments, this work shows comprehensive comparisons between the conventional techniques and the alternative compensation methods. Furthermore, two compensation methods are proposed: one operates in the model domain and the other in the score domain. The proposed score-domain compensation method is based on the Normal cumulative distribution function and, in some contexts, outperformed the main score-domain compensation techniques. On the other hand, the model-domain compensation technique proposed in this work is based on a method presented in the literature which combines two concepts: multi-condition training and the Missing Data Theory. The formulation proposed by the authors is based on the Posterior Union models and is not completely appropriate for the text-independent speaker verification task. This work proposes a more appropriate formulation for this context which combines the concepts used by the authors with a type of modeling using Universal Background Models (UBMs). The proposed method outperformed the usual GMM-UBM modeling technique, based on Gaussian Mixture Models (GMMs).
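A minimal Python sketch of the baseline scoring scheme referred to above (a GMM-UBM log-likelihood ratio) together with a cumulative-normal score mapping is given below. It only illustrates the mechanics: the speaker model is trained independently rather than MAP-adapted from the UBM, the toy data and impostor scores are invented, and the exact form of the proposed cumulative-normal compensation is an assumption rather than the dissertation's formulation.

import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ubm_data = rng.normal(0.0, 1.0, size=(2000, 13))   # pooled background features (e.g. MFCCs)
spk_data = rng.normal(0.3, 1.0, size=(300, 13))    # enrolment features of the claimed speaker
test_utt = rng.normal(0.3, 1.0, size=(150, 13))    # features of the test utterance

# Universal Background Model and speaker model (here trained independently;
# in practice the speaker GMM is usually adapted from the UBM).
ubm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(ubm_data)
spk = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(spk_data)

# GMM-UBM verification score: average log-likelihood ratio over the test frames.
llr = spk.score(test_utt) - ubm.score(test_utt)

# Assumed cumulative-normal score compensation: map the raw score through the
# Normal CDF of impostor trial scores so that scores from different conditions are comparable.
impostor_scores = np.array([-0.9, -0.4, -0.6, -0.2, -0.7])   # toy impostor trial scores
mu, sigma = impostor_scores.mean(), impostor_scores.std()
compensated = norm.cdf((llr - mu) / sigma)

print(llr, compensated)   # accept if the compensated score exceeds a tuned threshold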
