  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Bör du v(AR)a rädd för framtiden? : En studie om The Privacy Paradox och potentiella integritetsrisker med Augmented Reality / Should you be sc(AR)ed of the future? : A study about The Privacy Paradox and potential risks with Augmented Reality

Madsen, Angelica, Nymanson, Carl January 2021 (has links)
I en tid där digitaliseringen är mer utbredd än någonsin ökar också mängden data som samlas och delas online. I takt med att nya tekniker utvecklas öppnas det upp för nya utmaningar för integritetsfrågor. En aktiv användare online ägnar sig med största sannolikhet också åt ett eller flera sociala medier, där ändamålen ofta innebär att dela med sig av information till andra. Eftersom tekniken Augmented Reality används mer frekvent i några av de största sociala medieapplikationerna blev studiens syfte att undersöka potentiella integritetsproblem med Augmented Reality. Studiens tillvägagångssätt har bestått av en empirisk datainsamling för att skapa ett teoretiskt ramverk för studien. Utifrån detta har det genomförts en digital enkät samt intervjuer för att närmare undersöka användarens beteende online och The Privacy Paradox. Utifrån undersökningens resultat kunde The Privacy Paradox bekräftas och ge en bättre förståelse för hur användaren agerar genom digitala kanaler. I studien behandlas olika aspekter kring integritetsfrågor såsom användarvillkor, sekretessavtal, datamäklare, framtida konsekvenser och vad tekniken möjliggör. Studien kommer fram till att användare, företaget och dagens teknik tillåter att en känsligare information kan utvinnas genom ett dataintrång. Även om det ännu inte har inträffat ett dataintrång som grundat sig i Augmented Reality före denna studie, finns det en risk att det endast handlar om en tidsfråga innan detta sker. / In a time when digitalization is more widespread than ever, the amount of data collected and shared online is increasing. As new technologies develop, new challenges to privacy arise. An active online user most likely also engages in one or more social media platforms, where the purpose often involves sharing information with others. Since Augmented Reality is supported more and more frequently in some of the biggest social media applications, the purpose of this study was to investigate potential privacy concerns with Augmented Reality. The study's approach consisted of an empirical data collection to create a theoretical framework for the study. Based on this, a digital survey and interviews were conducted to further investigate users' online behavior and The Privacy Paradox. Based on the results of the survey, The Privacy Paradox could be confirmed, and a better understanding of how users act through digital channels was achieved. The study addresses different aspects of privacy concerns such as terms of use, privacy policies, data brokers, future consequences, and what the technology enables. The study reaches the conclusion that users, businesses, and today's technology allow more sensitive information to be collected through a data breach. Although no data breach rooted in Augmented Reality had occurred prior to this study, there is a risk that it is only a matter of time until this happens.
362

A performance measurement of a Speaker Verification system based on a variance in data collection for Gaussian Mixture Model and Universal Background Model

Bekli, Zeid, Ouda, William January 2018 (has links)
Voice recognition has become a more focused and researched field in the last century, and new techniques to identify speech have been introduced. A part of voice recognition is speaker verification, which is divided into a front-end and a back-end. The first component is the front-end, or feature extraction, where techniques such as Mel-Frequency Cepstrum Coefficients (MFCC) are used to extract the speaker-specific features of a speech signal; MFCC is widely used because it is based on the known variation of the human ear's critical frequency bandwidth. The second component is the back-end, which handles speaker modeling. The back-end is based on the Gaussian Mixture Model (GMM) and Gaussian Mixture Model-Universal Background Model (GMM-UBM) methods for enrollment and verification of the specific speaker. In addition, normalization techniques such as Cepstral Mean Subtraction (CMS) and feature warping are used for robustness against noise and distortion. In this paper, we build a speaker verification system, experiment with varying amounts of training data for the true-speaker model, and evaluate the system's performance. To further investigate the security of a speaker verification system, the two methods (GMM and GMM-UBM) are compared to determine which is more secure depending on the amount of training data available. This research therefore contributes to the question of how much data is really necessary for a secure system where the false positive rate is as close to zero as possible, how the amount of training data affects the false negative (FN) rate, and how this differs between GMM and GMM-UBM. The results show that an increase in speaker-specific training data increases the performance of the system. However, too much training data has proven unnecessary, because the performance of the system eventually reaches its highest point (in this case at around 48 minutes of data), and the results also show that the GMM-UBM models trained on 48 to 60 minutes of data outperformed the GMM models.
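The GMM/GMM-UBM scoring idea summarized in the abstract can be sketched with synthetic features; the two-dimensional Gaussians below stand in for real MFCC vectors, and the speaker model is trained directly rather than MAP-adapted from the UBM, so this is an illustration of the likelihood-ratio test only, not the thesis's implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 2-D "MFCC-like" features: the enrolled speaker's features
# cluster away from the general population used to train the UBM.
ubm_feats = rng.normal(0.0, 1.0, size=(2000, 2))      # background population
speaker_feats = rng.normal(2.0, 0.5, size=(300, 2))   # enrollment data

# Universal Background Model: a GMM over the background population.
ubm = GaussianMixture(n_components=4, random_state=0).fit(ubm_feats)

# Speaker model: here simply a GMM trained on the enrollment data
# (a full GMM-UBM system would MAP-adapt the UBM means instead).
spk = GaussianMixture(n_components=4, random_state=0).fit(speaker_feats)

def llr_score(feats):
    """Average log-likelihood ratio of the speaker model vs. the UBM."""
    return spk.score(feats) - ubm.score(feats)

# A genuine trial (same speaker) should score above an impostor trial.
genuine = llr_score(rng.normal(2.0, 0.5, size=(300, 2)))
impostor = llr_score(rng.normal(0.0, 1.0, size=(300, 2)))
```

Accepting when the ratio exceeds a threshold is the verification decision; moving that threshold trades false positives against false negatives, which is exactly the trade-off the study measures.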
363

Detekce živosti prstu na základě změn papilárních linií / Liveness Detection of a Finger Based on Changes of Papillary Lines

Lichvár, Michal January 2008 (has links)
There are several frauds against biometric systems (BSs), and several techniques exist to secure BSs against them. One of these techniques is liveness detection. To fool fingerprint sensors, latent fingerprints, dummy fingers, and wafer-thin layers attached to the finger are used, so liveness detection is also applied when scanning fingerprints. Several characteristics of a live finger can be used to detect liveness, for example sweat or conductivity. In this thesis, a new approach is examined. It is based on the expansion of the finger as an effect of heartbeats/pulsation: as the skin expands, the distances between papillary lines expand as well. The whole finger expands by approximately 4.5 µm, and the distance between two neighboring papillary lines by 0.454 µm. This value collides with the wavelengths of blue and green light. The result of this work is the following: the resolution of the capturing device is not high enough to capture the expansion of the distance between two neighboring papillary lines, and because of the collision with these wavelengths, diffraction is present and the resulting images are affected by this error.
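The scale argument in the abstract (a per-line expansion of 0.454 µm colliding with blue/green wavelengths) can be checked with a line of arithmetic; the visible-light band boundaries below are common approximations, not values from the thesis:

```python
# Figures quoted in the abstract: total finger expansion and the resulting
# expansion of the spacing between two neighboring papillary lines.
finger_expansion_um = 4.5
per_line_expansion_um = 0.454

per_line_expansion_nm = per_line_expansion_um * 1000.0  # micrometres -> nanometres

# Approximate visible-light bands in nanometres.
BLUE = (450.0, 495.0)
GREEN = (495.0, 570.0)

# 454 nm sits inside the blue band, so an optical capture at these
# wavelengths runs into diffraction at exactly the scale being measured.
in_blue = BLUE[0] <= per_line_expansion_nm <= BLUE[1]
```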
364

Rozpoznávání živosti otisků prstů / Fingerprint Liveness Recognition

Lodrová, Dana January 2007 (has links)
This document surveys current software and hardware methods used for fingerprint recognition, with a focus on liveness testing, and then describes my solution. To present the results obtained from a study of the technical literature, we first discuss the important terminology of biometric systems and then show the main principles of fingerprint sensors used in practice. Among the surveyed methods of liveness detection, we highlight one method based on perspiration (researched by the BioSAL laboratory) and one spectroscopic method researched by Lumidigm Corporation. The study of liveness testing methods inspired the design of a new type of fingerprint sensor with a built-in liveness testing method based on two characteristic properties of living human tissue. To test this sensor, we discuss current sensor deception methods. Their analysis suggests that the newly designed sensor should theoretically be resistant to each of them.
365

Fehler von Fingerabdruckerkennungssystemen im Kontext / Begreifbare Vermittlung der Fehler einer biometrischen Kontrolltechnologie

Knaut, Andrea 12 September 2017 (has links)
In dieser Arbeit werden zwei Fragen im Zusammenhang mit Fehlern von Fingerabdruckerkennungssystemen untersucht. Erstens: Welche strukturellen Merkmale und begrifflichen Implikationen hat der spezifische Fehlerdiskurs in diesem Teilgebiet der Biometrie? Zur Beantwortung dieser Frage werden im Rahmen einer diskursanalytischen Betrachtung der Fachtexte des Forschungsfeldes die gängigen Fehlertypologien der Biometrie untersucht. Die Arbeitshypothese der Analyse ist, dass der massenhafte Einsatz von Fingerabdruckerkennungssystemen im Alltag trotz aller ihrer Fehler diskursiv durchsetzungsfähig ist. Und zwar nicht unbedingt, weil die Fehler zu vernachlässigen sind, sondern weil die Angst vor „Identitätsbetrug“, die Idee einer Messbarkeit von Identität und die wirtschaftliche und politische Bedeutung von Sicherheitstechniken in einer für unsicher gehaltenen Welt große Wirkmächtigkeit haben. Es wird diskutiert, inwiefern die Auseinandersetzung mit System- und Überwindungsfehlern in der Informatik zu kurz greift. Daher wird ein erweitertes Fehlermodell vorgeschlagen, das an jüngere transdisziplinäre Fehlerforschung anknüpft und als kritisches Analyseinstrument für die Beurteilung der Wechselwirkung zwischen Informatik(-system) und Gesellschaft genutzt werden kann. Zweitens: Wie lassen sich die diskursanalytische Methode und ein experimentelles Hands-On-Lernen zu einem Lern- und Lehrkonzept verbinden, das eine kritische Vermittlung der Probleme von Fingerabdruckerkennungssystemen ermöglicht? Ausgehend von schulischen Unterrichtskonzepten einer an der Lebenswelt orientierten Informatiklehre sowie der Idee des „be-greifbaren Lernens“ an konkreten Gegenständen wurde ein Lern- und Lehrkonzept für Universität und Schule entwickelt und in drei verschiedenen Institutionen ausprobiert. / In this paper two questions will be addressed relating to deficits in fingerprint recognition systems.
Firstly, what structural features and conceptual implications does the analysis of errors have in the field of biometrics? To answer this question, the common error types in biometrics will be examined, as part of an analytical discourse taking into consideration technical texts from the research field. The working hypothesis of this analysis is that the structure of the discourse surrounding fingerprint recognition systems would present no barriers to their widespread implementation in everyday life despite all their faults – not because their shortcomings are negligible but due to the great potency of the fear of “identity fraud”, the notion that identity can be measured, and the economic and political importance of security technologies in a world deemed unsafe. It will be discussed how the examination of system errors and spoofing attacks in computer science falls short in addressing the whole picture of failing fingerprint recognition systems. Therefore an extended error model will be proposed, one which builds on recent transdisciplinary error research and which can be used as a critical tool for analysing and assessing the interaction between computer systems and society. Secondly, how could the analytical discourse method and experimental hands-on learning be combined into a teaching concept that would enable critical teaching of the problems of fingerprint recognition systems? Starting from the school-based teaching concepts of a theory of computer science based on real life and the idea of “hands-on learning” using concrete objects, a teaching concept for universities and schools has been developed and tested in three different institutions.
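As a side note on the error typologies this abstract refers to, the standard biometric error measures (false acceptance and false rejection rates, and their crossing point, the equal error rate) can be sketched on toy score distributions; everything below is illustrative and not drawn from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy comparison scores: genuine attempts score higher than impostors.
genuine = rng.normal(0.7, 0.1, 5000)
impostor = rng.normal(0.3, 0.1, 5000)

def far_frr(threshold):
    """False acceptance / false rejection rates at a decision threshold."""
    far = float(np.mean(impostor >= threshold))  # impostors wrongly accepted
    frr = float(np.mean(genuine < threshold))    # genuine users wrongly rejected
    return far, frr

# Sweeping the threshold trades false accepts against false rejects;
# the equal error rate (EER) is where the two curves cross.
thresholds = np.linspace(0.0, 1.0, 1001)
rates = [far_frr(t) for t in thresholds]
eer_idx = min(range(len(rates)), key=lambda i: abs(rates[i][0] - rates[i][1]))
eer = sum(rates[eer_idx]) / 2
```

The point the dissertation makes is precisely that such rates never reach zero: any operating threshold leaves some residual error on one side or the other.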
366

Biometric Multi-modal User Authentication System based on Ensemble Classifier

Assaad, Firas Souhail January 2014 (has links)
No description available.
367

Exploring the sensitivity of Biometric Data: A Comparative Analysis of Theoretical and Human Perspectives

Jose, Dayona January 2024 (has links)
Biometric technology, leveraging distinctive physiological or behavioral traits for identification, has transformed authentication methods. This thesis explores biometric data sensitivity from theoretical and human perspectives. Theoretical analysis examines factors like uniqueness, permanence, and potential misuse, while empirical research surveys societal attitudes towards biometric sensitivity. Discrepancies between theoretical constructs and real-world perceptions underscore the complexity of this issue. Privacy, security, and trust emerge as central concerns, emphasizing the need for comprehensive approaches in biometric technology development and policy-making. The discussion interprets survey findings, highlighting implications for stakeholders. Future research could explore cultural influences on biometric perceptions, conduct longitudinal studies, and investigate innovative solutions to privacy and security concerns. Collaboration between academia, industry, and policymakers is crucial for advancing biometric technology ethically and responsibly in an increasingly digital world.
368

La conception d'un système ultrasonore passif couche mince pour l'évaluation de l'état vibratoire des cordes vocales / A speaker recognition system based on vocal cords’ vibrations

Ishak, Dany 19 December 2017 (has links)
Dans ce travail, une approche de reconnaissance de l’orateur en utilisant un microphone de contact est développée et présentée. L'élément passif de contact est construit à partir d'un matériau piézoélectrique. La position du transducteur piézoélectrique sur le cou de l'individu peut affecter grandement la qualité du signal recueilli et par conséquent les informations qui en sont extraites. Ainsi, le milieu multicouche dans lequel les vibrations des cordes vocales se propagent avant d'être détectées par le transducteur est modélisé. Le meilleur emplacement sur le cou de l’individu pour attacher un élément transducteur particulier est déterminé en mettant en œuvre des techniques de simulation Monte Carlo et, par conséquent, les résultats de la simulation sont vérifiés en utilisant des expériences réelles. La reconnaissance est basée sur le signal généré par les vibrations des cordes vocales lorsqu'un individu parle et non sur le signal vocal à la sortie des lèvres qui est influencé par les résonances dans le conduit vocal. Par conséquent, en raison de la nature variable du signal recueilli, l'analyse a été effectuée en appliquant la technique de transformation de Fourier à court terme pour décomposer le signal en ses composantes de fréquence. Ces fréquences représentent les vibrations des cordes vocales (50-1000 Hz). Les caractéristiques en termes d'intervalle de fréquences sont extraites du spectrogramme résultant. Ensuite, un vecteur 1-D est formé à des fins d'identification. L'identification de l’orateur est effectuée en utilisant deux critères d'évaluation qui sont la mesure de la similarité de corrélation et l'analyse en composantes principales (ACP) en conjonction avec la distance euclidienne. Les résultats montrent qu'un pourcentage élevé de reconnaissance est atteint et que la performance est bien meilleure que de nombreuses techniques existantes dans la littérature. 
/ In this work, a speaker recognition approach using a contact microphone is developed and presented. The passive contact element is constructed from a piezoelectric material. In this context, the position of the piezoelectric transducer on the individual's neck may greatly affect the quality of the collected signal and consequently the information extracted from it. Thus, the multilayered medium in which the sound propagates before being detected by the transducer is modeled. The best location on the individual's neck to place a particular transducer element is determined by implementing Monte Carlo simulation techniques, and the simulation results are then verified in real experiments. The recognition is based on the signal generated by the vocal cords' vibrations when an individual is speaking, not on the vocal signal at the output of the lips, which is influenced by the resonances in the vocal tract. Because of the varying nature of the collected signal, the analysis was performed by applying the Short-Term Fourier Transform to decompose the signal into its frequency components. These frequencies represent the vocal folds' vibrations (50-1000 Hz). Features, in terms of frequency intervals, are extracted from the resulting spectrogram and a 1-D vector is formed for identification purposes. The identification of the speaker is performed using two evaluation criteria, namely the correlation similarity measure and Principal Component Analysis (PCA) in conjunction with the Euclidean distance. The results show that a high recognition rate is achieved and that the performance is much better than many existing techniques in the literature.
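The processing chain described above (short-term Fourier transform, restriction to the 50-1000 Hz vocal-fold band, flattening to a 1-D vector, correlation similarity) can be sketched as follows; the sampling rate, frame sizes, and synthetic tones are illustrative stand-ins, not values from the thesis:

```python
import numpy as np

FS = 8000  # sampling rate in Hz (illustrative)

def band_feature(signal, fs=FS, frame=256, hop=128, lo=50.0, hi=1000.0):
    """STFT magnitudes restricted to the 50-1000 Hz band, flattened to 1-D."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop : i * hop + frame] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrogram
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)         # vocal-fold frequency range
    return spec[:, band].ravel()

def correlation_similarity(a, b):
    """Pearson correlation between two feature vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for contact-microphone signals: pure tones whose
# fundamentals play the role of different vocal-fold vibration rates.
t = np.arange(2 * FS) / FS
same_a = np.sin(2 * np.pi * 120 * t)          # "speaker A", capture 1
same_b = np.sin(2 * np.pi * 120 * t + 0.5)    # "speaker A", capture 2
other = np.sin(2 * np.pi * 310 * t)           # "speaker B"

sim_same = correlation_similarity(band_feature(same_a), band_feature(same_b))
sim_other = correlation_similarity(band_feature(same_a), band_feature(other))
```

Because the feature uses spectral magnitudes only, a phase-shifted capture of the same fundamental still matches strongly, while a different fundamental does not.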
369

E-crimes and e-authentication - a legal perspective

Njotini, Mzukisi Niven 27 October 2016 (has links)
E-crimes continue to generate grave challenges to the ICT regulatory agenda. Because e-crimes involve a wrongful appropriation of information online, it is asked whether information is property capable of being stolen. This requires an investigation of the law of property, the basis for which is to establish whether information is property for purposes of the law. Following a study of the Roman-Dutch law approach to property, it is argued that the emergence of an information society makes real rights in information possible, because information is one of the indispensable assets of an information society. Given that information can be the object of property, its position in the law of theft is investigated. This study is followed by an examination of the conventional risks that ICTs generate. For example, there is a risk that ICTs may be used as the object of e-crimes, and a risk that ICTs may become a tool for appropriating information unlawfully. Accordingly, the scale and impact of e-crimes are greater than those of offline crimes such as theft or fraud. The severe challenges that ICTs pose to an information society are likely to continue if clarity is not reached on whether ICTs can be regulated and, if so, how an ICT regulatory framework should be structured. A study of the law and regulation for regulatory purposes reveals that ICTs are spheres where regulations apply or should apply; however, better regulations are appropriate for dealing with the dynamics of these technologies. Smart regulation, meta-regulation or reflexive regulation, self-regulation, and co-regulation are concepts that support better regulations. Better regulations enjoin the regulated parties, for example the state, businesses, and computer users, to be involved in establishing ICT regulations.
These ICT regulations should specifically be in keeping with existing e-authentication measures. Furthermore, the codes-based theory, the Danger or Artificial Immune Systems (AIS) theory, systems theory, and the Good Regulator Theorem ought to inform ICT regulations. The aim of all this should be a holistic approach to e-authentication. This approach must conform to the Precautionary Approach to E-Authentication, or PAEA. PAEA accepts the importance of legal rules in the ICT regulatory agenda but argues that flexible regulations could provide a suitable framework within which ICTs and ICT risks are controlled. In addition, PAEA submits that a state should not be the sole role-player in ICT regulation: social norms, the market, and the nature or architecture of the technology to be regulated are also fundamental to the ICT regulatory agenda. / Jurisprudence / LL. D.
370

Central de confrontos para um sistema automático de identificação biométrica: uma abordagem de implementação escalável / Matching platform for an automatic biometric identification system: a scalable implementation approach

Nishibe, Caio Arce 19 October 2017 (has links)
Com a popularização do uso da biometria, determinar a identidade de um indivíduo é uma atividade cada vez mais comum em diversos contextos: controle de acesso físico e lógico, controle de fronteiras, identificações criminais e forenses, pagamentos. Sendo assim, existe uma demanda crescente por Sistemas Automáticos de Identificação Biométrica (ABIS) cada vez mais rápidos, com elevada acurácia e que possam operar com um grande volume de dados. Este trabalho apresenta uma abordagem de implementação de uma central de confrontos para um ABIS de grande escala utilizando um framework de computação em memória. Foram realizados experimentos em uma base de dados real com mais de 50 milhões de impressões digitais em um cluster com até 16 nós. Os resultados mostraram a escalabilidade da solução proposta e a capacidade de operar em grandes bases de dados. / With the popularization of biometrics, personal identification is an increasingly common activity in several contexts: physical and logical access control, border control, criminal and forensic identification, and payments. Thus, there is a growing demand for faster and more accurate Automatic Biometric Identification Systems (ABIS) capable of handling a large volume of biometric data. This work presents an approach to implementing a scalable cluster-based matching platform for a large-scale ABIS using an in-memory computing framework. We conducted experiments involving a real database of more than 50 million captured fingerprints on a cluster of up to 16 nodes. The results show the scalability of the proposed solution and its capability to handle a large biometric database.
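The partition-and-reduce structure of such a matching platform can be sketched in a few lines; here ThreadPoolExecutor and cosine similarity over fixed-length template vectors stand in for the in-memory cluster framework and the real fingerprint matcher, and all sizes and names are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(2)

# Toy gallery: one fixed-length template vector per enrolled identity.
# (Real minutiae matching is far more involved; unit vectors and cosine
# similarity keep the partition-and-reduce structure visible.)
N_IDENTITIES, DIM, N_SHARDS = 10_000, 64, 8
gallery = rng.normal(size=(N_IDENTITIES, DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Split the gallery into shards, as a cluster would across its nodes.
shards = np.array_split(np.arange(N_IDENTITIES), N_SHARDS)

def match_shard(shard_ids, probe):
    """Score one shard against the probe and return its local best match."""
    scores = gallery[shard_ids] @ probe               # cosine similarities
    best = int(np.argmax(scores))
    return int(shard_ids[best]), float(scores[best])

def identify(probe):
    """Fan the probe out to all shards in parallel, reduce to the global best."""
    with ThreadPoolExecutor(max_workers=N_SHARDS) as pool:
        local = list(pool.map(lambda ids: match_shard(ids, probe), shards))
    return max(local, key=lambda hit: hit[1])

# Probe: a noisy re-capture of identity 4242.
probe = gallery[4242] + rng.normal(scale=0.05, size=DIM)
probe /= np.linalg.norm(probe)
best_id, best_score = identify(probe)
```

Because each shard is scored independently, adding nodes divides the per-node workload, which is the scalability property the experiments measure.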
