1

Continuous Authentication using Stylometry

Brocardo, Marcelo Luiz 30 April 2015 (has links)
Static authentication, where user identity is checked once at login time, can be circumvented no matter how strong the authentication mechanism is. Through attacks such as man-in-the-middle and its variants, an authenticated session can be hijacked after the initial login process has been completed. In the last decade, continuous authentication (CA) using biometrics has emerged as a possible remedy against session hijacking. CA consists of testing the authenticity of the user repeatedly throughout the authenticated session as data becomes available. Because of its repetitive nature, CA is expected to be carried out unobtrusively, which means that the authentication information must be collectible without any active involvement of the user and without any special-purpose hardware devices (e.g. biometric readers). Stylometry analysis, which consists of checking whether or not a target document was written by a specific individual, could potentially be used for CA. Although stylometric techniques can achieve high accuracy rates for long documents, identifying the author of a short document remains challenging, in particular when dealing with large author populations. In this dissertation, we propose a new framework for continuous authentication using authorship verification based on writing style. Authorship can be verified with stylometric techniques through the analysis of the linguistic style and writing characteristics of the authors. Unlike traditional authorship verification, which focuses on long texts, we tackle short messages. A shorter authentication delay (i.e. a smaller data sample) is essential to reduce the window size of the re-authentication period in CA. We validate our method using different block sizes, including 140, 280, and 500 characters, and investigate shallow and deep learning architectures for machine learning classification. Experimental evaluation of the proposed authorship verification approach yields an Equal Error Rate (EER) of 8.21% on the Enron email dataset with 76 authors and an EER of 10.08% on a Twitter dataset with 100 authors. Evaluation of the approach using relatively small forgery samples with 10 authors yields an EER of 5.48%.
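For reference, the Equal Error Rate quoted above is the operating point at which the false-accept and false-reject rates coincide. The sketch below is not code from the dissertation; it is a minimal illustration, assuming verification scores where higher means "more likely the claimed author", of how an EER could be estimated.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER from verification scores (higher = more likely genuine)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
    scores = np.concatenate([genuine, impostor])
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    # The EER lies where the false-accept rate (fpr) and false-reject rate (fnr) cross.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0

# Toy usage with made-up scores for genuine and forged text blocks.
print(equal_error_rate([0.9, 0.8, 0.75, 0.6], [0.4, 0.5, 0.3, 0.65]))
```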
2

Adaptation in a deep network

Ruiz, Vito Manuel 08 July 2011 (has links)
Though adaptational effects are found throughout the visual system, the underlying mechanisms and benefits of this phenomenon are not yet known. In this work, the visual system is modeled as a Deep Belief Network, with a novel “post-training” paradigm (i.e. training the network further on certain stimuli) used to simulate adaptation in vivo. An optional sparse variant of the DBN is used to help bring about meaningful and biologically relevant receptive fields, and to examine the effects of sparsification on adaptation in their own right. While results are inconclusive, there is some evidence of an attractive bias effect in the adapting network, whereby the network’s representations are drawn closer to the adapting stimulus. As a similar attractive bias is documented in human perception as a result of adaptation, there is thus evidence that the statistical properties underlying the adapting DBN also have a role in the adapting visual system, including efficient coding and optimal information transfer given limited resources. These results hold irrespective of sparsification. As adaptation has, to the author’s knowledge, never been tested directly in a neural network, this work sets a precedent for future experiments.
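To make the “post-training” idea concrete, the sketch below continues training a single restricted Boltzmann machine (one DBN building block, not the thesis’s multi-layer model) on an adapting stimulus after ordinary training; the random arrays merely stand in for image patches.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
natural_patches = rng.rand(1000, 64)   # stand-in for the broad pre-training stimulus set
adapting_patches = rng.rand(200, 64)   # stand-in for the adapting stimulus

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(natural_patches)               # ordinary training on the full stimulus ensemble

# "Post-training": keep training on the adapting stimulus alone, analogous to
# prolonged exposure in an adaptation experiment.
for _ in range(10):
    rbm.partial_fit(adapting_patches)
```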
3

Methodology and Techniques for Building Modular Brain-Computer Interfaces

Cummer, Jason 05 January 2015 (has links)
Commodity brain-computer interfaces (BCIs) are beginning to accompany everything from toys and games to sophisticated health care devices. These contemporary interfaces allow for varying levels of interaction with a computer. Not surprisingly, the more intimately a BCI is integrated into the nervous system, the better the control a user can exert over a system. At one end of the spectrum, implanted systems can enable an individual with full body paralysis to operate a robot arm and hold hands with their loved ones [28, 62]. At the other end of the spectrum, the untapped potential of commodity devices supporting electroencephalography (EEG) and electromyography (EMG) technologies requires innovative approaches and further research. This thesis proposes a modularized software architecture designed to build flexible systems based on input from commodity BCI devices. An exploratory study using a commodity EEG provides a concrete assessment of the potential for the modularity of the system to foster innovation and exploration, allowing a variety of algorithms for manipulating data and classifying results to be combined. Specifically, this study analyzes a pipelined architecture for researchers, starting with the collection of spatio-temporal brain data (STBD) from a commodity EEG device and correlating it with intentional behaviour involving keyboard and mouse input. Though classification proves troublesome in the preliminary dataset considered, the architecture demonstrates a unique and flexible combination of a liquid state machine (LSM) and a deep belief network (DBN). Research in methodologies and techniques such as these is required for innovation in BCIs, as commodity devices, processing power, and algorithms continue to improve. Limitations in terms of types of classifiers, their range of expected inputs, discrete versus continuous data, spatial and temporal considerations, and alignment with neural networks are also identified.
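A modular, pipelined architecture of the kind described can be pictured as interchangeable stages chained together; the stage names and transforms in the sketch below are illustrative placeholders, not the components actually used in the thesis.

```python
from typing import Callable, List
import numpy as np

class PipelineStage:
    """One modular processing step: takes an array of samples, returns transformed data."""
    def __init__(self, name: str, transform: Callable[[np.ndarray], np.ndarray]):
        self.name = name
        self.transform = transform

    def __call__(self, data: np.ndarray) -> np.ndarray:
        return self.transform(data)

class BCIPipeline:
    """Chains interchangeable stages (e.g. filtering, feature extraction, classification)."""
    def __init__(self, stages: List[PipelineStage]):
        self.stages = stages

    def run(self, raw_samples: np.ndarray) -> np.ndarray:
        data = raw_samples
        for stage in self.stages:
            data = stage(data)
        return data

# Hypothetical usage: both stages are simple stand-ins for real signal processing.
pipeline = BCIPipeline([
    PipelineStage("detrend", lambda x: x - x.mean(axis=0)),
    PipelineStage("spectrum", lambda x: np.abs(np.fft.rfft(x, axis=0))),
])
features = pipeline.run(np.random.randn(256, 8))  # 256 samples from 8 EEG channels
```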
4

A multimodal deep learning framework using local feature representations for face recognition

Al-Waisy, Alaa S., Qahwaji, Rami S.R., Ipson, Stanley S., Al-Fahdawi, Shumoos 04 September 2017 (has links)
The most recent face recognition systems are mainly dependent on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of the local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
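The fractal dimension used here as a texture descriptor is commonly estimated by box counting. The following generic sketch (not the paper's implementation, and assuming a reasonably large binarised image) illustrates that estimate.

```python
import numpy as np

def box_counting_dimension(image: np.ndarray, threshold: float = 0.5) -> float:
    """Estimate the fractal (box-counting) dimension of a thresholded grayscale image."""
    binary = image > threshold
    side = min(binary.shape)
    sizes = 2 ** np.arange(1, int(np.log2(side)))  # box sides: 2, 4, ..., side/2
    counts = []
    for s in sizes:
        h = binary.shape[0] // s * s
        w = binary.shape[1] // s * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes of side s containing at least one foreground pixel.
        counts.append(max(np.count_nonzero(blocks.any(axis=(1, 3))), 1))
    # The dimension estimate is the slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

print(box_counting_dimension(np.random.default_rng(0).random((128, 128))))
```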
5

Modeling an Embedded Climate System Using Machine Learning

Josefsson, Alexandra January 2021 (has links)
Recent advancements in processing power, storage capabilities, and availability of data have led to improvements in many applications through the use of machine learning. Using machine learning in control systems was first suggested in the 1990s, but is only more recently being implemented. In this thesis, an embedded climate system, which is a type of control system, is examined, and the ways in which machine learning can be used to replicate portions of it are investigated. Deep Belief Networks are the machine learning models of choice. Firstly, the functionality of a PID controller is replicated using a Deep Belief Network. Then, the functionality of a more complex control path is replicated. The Deep Belief Networks are evaluated on how closely they match the original control components and on their performance in hardware. It is found that the Deep Belief Network can quite accurately replicate the behaviour of a PID controller, whilst the performance is worse for the more complex control path. Using delays in the input features gave better results than not using them. A climate system with a Deep Belief Network was also loaded onto hardware. The minimum requirements for memory usage and CPU usage were met. However, CPU usage was greatly affected, and if this approach were to be used in practice, work should be done to decrease it. / Many applications have been improved through the use of machine learning. Machine learning for control systems was proposed as early as the 1990s and has now begun to be applied, as processing power, storage capabilities, and access to raw data have increased. In this thesis, an embedded climate system, which is a type of control system, was used. The machine learning model Deep Belief Network was used to investigate how parts of the climate system could be recreated. First, the functionality of a PID controller was recreated, and then the functionality of a more complex part of the control system. The performance of the networks was evaluated against that of the original control components and on the hardware. It was found that the Deep Belief Network could replicate the PID controller's behaviour very well, while performance was lower for the complex part of the control system. Using delays in the network inputs gave better results than not using them. A climate system with a Deep Belief Network was also loaded onto the hardware. Minimum requirements for memory usage and CPU usage were met, but CPU usage was heavily affected. This means that if machine learning is to be used in practice, CPU usage should be reduced.
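One way to set up the PID-replication task described above is to run a conventional discrete PID controller and train the network to map a window of delayed inputs onto the controller's output. The gains, signal, and window length below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

class PID:
    """Discrete PID controller used here only to generate training targets."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
errors = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(0).standard_normal(500)
outputs = np.array([pid.step(e) for e in errors])

window = 5  # number of delayed samples fed to the model (the "delays in input features")
X = np.array([errors[i - window:i] for i in range(window, len(errors))])
y = outputs[window:]  # X and y can now be used to train a regression model such as a DBN
```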
6

A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.

Nassar, Alaa S.N. January 2018 (has links)
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis is focused on the combination of the face and the left and right irises in a unified hybrid multimodal biometric identification system using different fusion approaches at the score and rank level. Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework based on merging the advantages of local handcrafted feature descriptors with deep learning approaches, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed, whose architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from an iris image. Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and the SDUMLA-HMT multimodal dataset. The results obtained demonstrate the superiority of the proposed systems compared to previous work, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize the person's identity.
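Score-level fusion of the face and iris matchers can be illustrated as a weighted sum of scores normalised to a common range; the weights and scores below are hypothetical, not those reported in the thesis.

```python
import numpy as np

def min_max_normalise(scores) -> np.ndarray:
    """Map raw matcher scores to [0, 1] so different matchers share a common scale."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_sum_fusion(face_scores, left_iris_scores, right_iris_scores,
                        weights=(0.4, 0.3, 0.3)) -> np.ndarray:
    """Fuse three matchers' scores for the same list of enrolled identities."""
    parts = [min_max_normalise(s) for s in (face_scores, left_iris_scores, right_iris_scores)]
    return sum(w * p for w, p in zip(weights, parts))

# Identification decision: the enrolled identity with the highest fused score wins.
fused = weighted_sum_fusion([0.7, 0.2, 0.4], [0.6, 0.1, 0.5], [0.8, 0.3, 0.4])
predicted_identity = int(np.argmax(fused))
print(predicted_identity, fused)
```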
7

Self-Organizing Neural Visual Models to Learn Feature Detectors and Motion Tracking Behaviour by Exposure to Real-World Data

Yogeswaran, Arjun January 2018 (has links)
Advances in unsupervised learning and deep neural networks have led to increased performance in a number of domains, and to the ability to draw strong comparisons between the biological process of self-organization carried out by the brain and computational mechanisms. This thesis uses real-world data to tackle two areas of computer vision that have biological equivalents: feature detection and motion tracking. The aforementioned advances have allowed efficient learning of feature representations directly from large sets of unlabeled data instead of using traditional handcrafted features. The first part of this thesis evaluates such representations by comparing regularization and preprocessing methods that incorporate local neighbouring information during training on a single-layer neural network. The networks are trained and tested on the Hollywood2 video dataset, as well as on the static CIFAR-10, STL-10, COIL-100, and MNIST image datasets. The induction of topography, or simple image blurring via Gaussian filters, during training produces better discriminative features, as evidenced by the consistent and notable increase in classification accuracy. In the visual domain, invariant features are desirable so that objects can be classified despite transformations. Most of the compared methods produce more invariant features; however, classification accuracy does not correlate with invariance. The second, and paramount, contribution of this thesis is a biologically inspired model to explain the emergence of motion tracking behaviour in early development using unsupervised learning. The model’s self-organization is biased by an original concept called retinal constancy, which measures how similar visual contents are between successive frames. In the proposed two-layer deep network, when exposed to real-world video, the first layer learns to encode visual motion, and the second layer learns to relate that motion to gaze movements, which it perceives and creates through bi-directional nodes. This is unique because it uses general machine learning algorithms, and their inherent generative properties, to learn from real-world data, while implementing a biological theory and learning in a fully unsupervised manner. An analysis of its parameters and limitations is conducted, and its tracking performance is evaluated. Results show that the model is able to successfully follow targets in real-world video, despite being trained without supervision on real-world video.
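As a rough illustration of the retinal-constancy idea, the similarity of visual content between successive frames could be computed as a normalised correlation; this is one plausible reading for illustration only, not the thesis's exact definition.

```python
import numpy as np

def retinal_constancy(prev_frame: np.ndarray, next_frame: np.ndarray) -> float:
    """Similarity of visual content between two frames as a normalised correlation in [-1, 1]."""
    a = prev_frame.astype(float).ravel()
    b = next_frame.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Toy usage: a frame compared with a slightly shifted copy of itself.
frame = np.random.default_rng(0).random((32, 32))
print(retinal_constancy(frame, np.roll(frame, 1, axis=1)))
```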
