  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Integrity verification of digital images through fragile watermarking and image forensics

Bravo-Solorio, Sergio January 2010 (has links)
No description available.
42

Application of power laws to biometrics, forensics and network traffic analysis

Iorliam, Aamo January 2016 (has links)
Tampering of biometric samples is becoming an important security concern. Attacks can also occur in behavioural modalities such as keystroke dynamics. Beyond biometric data, network traffic data on the Internet raises further security concerns. In this thesis, we investigate the application of power laws to biometrics, forensics and network traffic analysis. Passive detection techniques such as Benford's law and Zipf's law had not previously been investigated for the detection and forensic analysis of malicious and non-malicious tampering of biometric, keystroke and network traffic data. Benford's law has been reported in the literature to be very effective in detecting tampering of natural images. Our experiments show that biometric samples do follow Benford's law, and that the highest detection and localisation accuracies for biometric face images and fingerprint images are 97.41% and 96.40%, respectively. The divergence values of Benford's law are then used for the classification and source identification of fingerprint images, with good accuracies in the range 76.0357% to 92.4344%. Another research focus of this thesis is the application of Benford's law and Zipf's law to keystroke dynamics to differentiate between the behaviour of human beings and non-human beings: the divergence values of Benford's law and the P-values of Zipf's law, computed from the latency values of the keystroke data, can be used effectively to distinguish human from non-human behaviour. Finally, Benford's law and Zipf's law are analysed on TCP flow size differences for the detection of malicious traffic on the Internet, achieving AUC values in the range 0.6858 to 1. Furthermore, the P-values of Zipf's law have also been found to differentiate between malicious and non-malicious network traffic, which can potentially be exploited in intrusion detection system applications.
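The first-digit test described above can be sketched as follows. This is a minimal illustration, not the thesis's pipeline: the actual work applies Benford's law to quantities such as image coefficients, and the chi-square-style divergence used here is one common choice, assumed for illustration.

```python
import math
from collections import Counter

# Benford's law: P(first digit = d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading decimal digit of the integer part of |x|."""
    return int(str(abs(int(x)))[0])

def benford_divergence(values):
    """Chi-square-style divergence between the observed first-digit
    distribution of `values` and the Benford distribution."""
    digits = [first_digit(v) for v in values if abs(int(v)) >= 1]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) / n - BENFORD[d]) ** 2 / BENFORD[d]
               for d in range(1, 10))
```

Data generated by a multiplicative process (e.g. `10 ** U` with `U` uniform) yields a small divergence, while data with roughly uniform leading digits yields a much larger one, which is the basis for using the divergence as a tampering indicator.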
43

Improving less constrained iris recognition

Hu, Yang January 2017 (has links)
The iris has been one of the most reliable biometric traits for automatic human authentication due to its highly stable and distinctive patterns. Traditional iris recognition algorithms have achieved remarkable performance in strictly constrained environments, with the subject standing still and the iris captured at a close distance. This enables the wide deployment of iris recognition systems in applications such as border control and access control. However, in less constrained environments, with the subject at a distance and on the move, iris recognition performance deteriorates significantly, since such environments introduce noise and degradation into iris captures. This restricts the applicability and practicality of iris recognition technology for some real-world applications with more open capturing conditions, such as surveillance, forensics and mobile device security. Therefore, robust algorithms for less constrained iris recognition are desirable for the wider deployment of iris recognition systems. This thesis focuses on improving less constrained iris recognition. Five methods are proposed to improve the performance of different stages of less constrained iris recognition. First, a robust iris segmentation algorithm is developed using l1-norm regression and model selection. This algorithm formulates iris segmentation as a set of robust l1-norm regression problems. To further enhance robustness, multiple segmentation results are produced by applying l1-norm regression to different models, and a model selection technique is used to select the most reliable result. Second, an iris liveness detection method using regional features is investigated. This method exploits not only low-level features but also high-level feature distributions for more accurate and robust iris liveness detection. Third, a signal-level information fusion algorithm is presented to mitigate the noise in less constrained iris captures.
Given multiple noisy iris captures, this algorithm uses a sparse-error low-rank matrix factorization model to separate noiseless iris structure from noise. The noiseless structure is preserved and emphasised during the fusion process, while the noise is suppressed, in order to obtain more reliable signals for recognition. Fourth, a method to generate optimal iris codes is proposed. This method considers iris code generation from the perspective of optimization: it formulates the traditional iris code generation method as an optimization problem, and an additional objective term modelling the spatial correlations in iris codes is added to produce more effective iris codes. Fifth, an iris weight map method is studied for robust iris matching. This method considers both intra-class bit stability and inter-class bit discriminability in iris codes, emphasising highly stable and discriminative bits to enhance the robustness of iris matching. Comprehensive experimental analyses are performed on benchmark datasets for each of the above methods. The results indicate that the presented methods are effective for less constrained iris recognition, generally improving on state-of-the-art performance.
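The bit-stability half of the weight map idea can be sketched as follows. This is a simplification under our own assumptions (iris codes as plain bit lists, stability measured as majority agreement across enrolment samples); it omits the inter-class discriminability term that the thesis also incorporates.

```python
def weight_map(enrol_codes):
    """Per-bit stability weights from several enrolment codes of one
    subject: the fraction of samples agreeing with the majority bit."""
    n = len(enrol_codes)
    length = len(enrol_codes[0])
    weights = []
    for i in range(length):
        ones = sum(code[i] for code in enrol_codes)
        weights.append(max(ones, n - ones) / n)  # in [0.5, 1.0]
    return weights

def weighted_hamming(code_a, code_b, weights):
    """Hamming distance where stable bits count more than unstable ones."""
    mismatch = sum(w for a, b, w in zip(code_a, code_b, weights) if a != b)
    return mismatch / sum(weights)
```

Down-weighting fragile bits in this way reduces the score penalty from bits that flip between captures of the same eye, which is the intuition behind emphasising stable bits for matching.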
44

Novel template ageing techniques to minimise the effect of ageing in biometric systems

Pg Hj Mohd Yassin, D. K. Hayati Bte January 2016 (has links)
This thesis studies the effect of ageing on biometric systems, and particularly its impact on face recognition systems. Being biological tissue in nature, the facial biometric trait undergoes ageing; as a result, developing biometric applications for long-term use becomes a particularly challenging task. Despite the rising attention to facial ageing, the longitudinal study of face recognition remains understudied in comparison to facial variations due to pose, illumination and expression changes. Regardless of the adopted representation, biometric patterns are always affected by changes in facial appearance due to ageing. To overcome this problem, either evaluation of the changes in facial appearance over time or template-age transformation-based techniques are recommended. Using a database comprising images acquired over a 5-year period, this thesis explores techniques for recognising face images for identity verification. A detailed investigation analyses the challenges due to ageing with respect to the performance of biometric systems. This study provides a comprehensive analysis of both lateral age effects and longitudinal ageing. The thesis also proposes novel approaches for template ageing to compensate for the ageing effects for verification purposes, exploring both linear and nonlinear transformation mapping methods. Furthermore, the compound effect of ageing with other covariates (such as gender and age group) is systematically analysed. With the implementation of the novel approach, the GAR (Genuine Accept Rate) improves significantly.
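A linear transformation mapping of the kind mentioned above might, in its simplest per-dimension form, look like the sketch below. The function names and the per-dimension least-squares form are our assumptions for illustration; the thesis's actual feature representation and mapping may differ.

```python
def fit_ageing_map(young, aged):
    """Fit, per feature dimension, a least-squares map aged ~ a*young + b
    from paired young/aged template vectors."""
    dims = len(young[0])
    params = []
    for j in range(dims):
        xs = [v[j] for v in young]
        ys = [v[j] for v in aged]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        a = cov / var if var else 1.0
        params.append((a, my - a * mx))
    return params

def age_template(template, params):
    """Project an enrolled template forward in time with the fitted map."""
    return [a * v + b for v, (a, b) in zip(template, params)]
```

Matching a probe against the aged version of an old enrolled template, rather than the original, is one way such a transformation can compensate for the ageing gap at verification time.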
45

Audio-visual speech processing for multimedia localisation

Benatan, Matthew Aaron January 2016 (has links)
For many years, film and television have dominated the entertainment industry. Recently, with the introduction of a range of digital formats and mobile devices, multimedia's ubiquity as the dominant form of entertainment has increased dramatically. This, in turn, has increased demand on the entertainment industry, with production companies looking to increase their revenue by providing entertainment media to a growing international market. This brings challenges in the form of multimedia localisation - the process of preparing content for international distribution. The industry is now looking to modernise production processes - moving what were once wholly manual practices to semi-automated workflows. A key aspect of the localisation process is the alignment of content, such as subtitles or audio, when adapting content from one region to another. One method of automating this is to use audio content as a guide, providing a solution via audio-to-text alignment. While many approaches for audio-to-text alignment currently exist, they all require language models - meaning that dozens of language models would be required for these approaches to be reliably implemented in large production companies. To address this, this thesis explores the development of audio-to-text alignment procedures which do not rely on language models, instead providing a language-independent method for aligning multimedia content. To achieve this, the project explores both audio and visual speech processing, with a focus on voice activity detection, as a means of segmenting and aligning audio and text data. The thesis first presents a novel method for detecting speech activity in entertainment media. This method is compared with the current state of the art and demonstrates significant improvement over baseline methods. Secondly, the thesis explores a novel set of features for detecting voice activity in visual speech data.
Here, we show that the combination of landmark and appearance-based features outperforms recent methods for visual voice activity detection, and specifically that the incorporation of landmark features is particularly crucial when presented with challenging natural speech data. Lastly, a speech activity-based alignment framework is presented which demonstrates encouraging results. Here, we show that Dynamic Time Warping (DTW) can be used for segment matching and alignment of audio and subtitle data, and we also present a novel method for aligning scene-level content which outperforms DTW for sequence alignment of finer-level data. To conclude, we demonstrate that combining global and local alignment approaches achieves strong alignment estimates, but that the resulting output is not sufficient for wholly automated subtitle alignment. We therefore propose that this be used as a platform for the development of lexical-discovery based alignment techniques, as the general alignment provided by our system would improve symbolic sequence discovery for sparse dictionary-based systems.
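The DTW-based segment matching referred to above rests on the standard dynamic-programming recurrence; a minimal version is sketched below (cumulative cost only, no path backtracking, with a plain absolute-difference local distance as an assumed stand-in for the thesis's features).

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping cost between sequences a and b.

    D[i][j] is the minimum cumulative cost of aligning a[:i] with b[:j];
    each cell extends the best of the three predecessor alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW tolerates local stretching and compression, a repeated or slightly slower segment still aligns at low cost, which is what makes it suitable for matching speech-activity patterns between audio and subtitle timings.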
46

Advances in compositional fitting of active appearance models

Alabort Medina, Joan January 2016 (has links)
This thesis presents a detailed and complete study of compositional gradient descent (CGD) algorithms for fitting active appearance models (AAM) and advances the state of the art in generative AAM fitting by incorporating into the original CGD framework: (i) novel robust texture representations; (ii) novel cost functions and composition types; and (iii) combined fitting approaches with complementary deformable models. In particular, a robust texture representation based on image gradient orientations is used to define a new type of generative deformable model that generalizes well to variations in identity, pose, expression, illumination and occlusion, and that can be fitted to images using standard CGD algorithms. Moreover, a novel Bayesian formulation of the AAM fitting problem, which can be interpreted as a probabilistic generalization of the well-known project-out inverse compositional (PIC) algorithm, is proposed, along with two new types of composition, asymmetric and bidirectional, that lead to better-converging and more robust CGD fitting algorithms. At the same time, interesting insights into existing strategies used to derive fast and exact simultaneous CGD algorithms are provided by reinterpreting them as direct applications of the Schur complement and the Wiberg method. Finally, CGD algorithms are combined with similar generative fitting techniques for constrained local models (CLM) to create a unified probabilistic fitting framework that combines the strengths of both models (AAM and CLM) and produces state-of-the-art results on the problem of non-rigid face alignment in the wild.
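For reference, the project-out inverse compositional (PIC) cost that the Bayesian formulation above generalizes can be written, in the standard AAM notation of the inverse compositional literature (our notation choice, not quoted from the thesis), as:

```latex
\min_{\Delta p} \; \bigl\| \, i\bigl(\mathcal{W}(x;\, p \circ \Delta p)\bigr) - \bar{a} \, \bigr\|^{2}_{P},
\qquad P = I - A A^{\top}
```

where $i(\cdot)$ is the vectorized image sampled under the warp $\mathcal{W}$ with shape parameters $p$, $\bar{a}$ is the mean appearance, $A$ is the orthonormal appearance basis, and the projection $P$ measures the residual only in the subspace orthogonal to appearance variation, which is what makes the appearance parameters drop out of the optimization.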
47

Methods for addressing data diversity in automatic speech recognition

Doulaty Bashkand, Mortaza January 2017 (has links)
The performance of speech recognition systems is known to degrade in mismatched conditions, where the acoustic environment and the speaker population significantly differ between the training and target test data. Performance degradation due to the mismatch is widely reported in the literature, particularly for diverse datasets. This thesis approaches the mismatch problem in diverse datasets with various strategies including data refinement, variability modelling and speech recognition model adaptation. These strategies are realised in six novel contributions. The first contribution is a data subset selection technique using likelihood ratio derived from a target test set quantifying mismatch. The second contribution is a multi-style training method using data augmentation. The existing training data is augmented using a distribution of variabilities learnt from a target dataset, resulting in a matched set. The third contribution is a new approach for genre identification in diverse media data with the aim of reducing the mismatch in an adaptation framework. The fourth contribution is a novel method which performs an unsupervised domain discovery using latent Dirichlet allocation. Since the latent domains have a high correlation with some subjective meta-data tags, such as genre labels of media data, features derived from the latent domains are successfully applied to the genre and broadcast show identification tasks. The fifth contribution extends the latent modelling technique for acoustic model adaptation, where latent-domain specific models are adapted from a base model. As the sixth contribution, an alternative adaptation approach is proposed where subspace adaptation of deep neural network acoustic models is performed using the proposed latent-domain aware training procedure. All of the proposed techniques for mismatch reduction are verified using diverse datasets. 
Using the data selection, data augmentation and latent-domain model adaptation methods, the mismatch between the training and testing conditions of diverse ASR systems is reduced, resulting in more robust speech recognition systems.
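The likelihood-ratio data selection of the first contribution can be illustrated with single scalar Gaussians standing in for the target and general acoustic models. This is a deliberate simplification: real systems score utterances with GMM or neural acoustic models over multi-dimensional feature vectors, and every name below is ours.

```python
import math

def gauss_loglik(x, mean, var):
    """Log likelihood of scalar x under a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def fit_gauss(data):
    """Maximum-likelihood mean and (floored) variance."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    return mean, max(var, 1e-6)

def select_matched_subset(train_utts, target_utts, top_k):
    """Rank training utterances by the average per-frame log likelihood
    ratio between a target-domain model and a general model, and keep
    the top_k best-matched utterances."""
    tgt = fit_gauss([f for u in target_utts for f in u])
    gen = fit_gauss([f for u in train_utts for f in u])

    def score(u):
        return sum(gauss_loglik(f, *tgt) - gauss_loglik(f, *gen)
                   for f in u) / len(u)

    return sorted(train_utts, key=score, reverse=True)[:top_k]
```

Utterances whose frames are likelier under the target model than under the general model get positive scores, so the selected subset is acoustically matched to the target test set.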
48

Mobile security and smart systems : multi-modal biometric authentication on mobile devices

Huang, Xuan January 2013 (has links)
With the increased use of mobile phones that support mobile commerce, there is a need to examine the authentication of users. Password-based authentication techniques are not reliable, with many passwords being too simple. Biometric authentication systems are becoming more commonplace and are widely used in security fields because of their stability and uniqueness. Within this context, the researcher has developed a fuzzy logic based multi-modal biometric authentication system to verify the identity of a mobile phone user. The research presented in this thesis involves three parts. Firstly, a model to support authentication in mobile commerce has been proposed. Within this model, a number of different authentication levels have been defined, seeking a balance between usability and security. Secondly, the researcher has developed a multi-modal biometric authentication system which combines typing behaviour recognition, face recognition and speaker recognition techniques to establish the identity of the user on the mobile phone. However, deterministic biometric authentication systems have some issues. Because of this, a fuzzy logic model has been built which combines the transaction risk in m-commerce with the recognition result from the biometric authentication engine. In the experimental stage, the researcher simulates a mobile commerce environment. At one extreme, users will just want to obtain the item without entering any identity; they are prepared to accept a low level of risk when the transaction is of low value. At the other extreme, for a high-value transaction, users will accept multiple levels of security and would not want the transaction to go through without any checking.
The experimental results showed that the fuzzy logic based multi-modal authentication system can achieve a low equal error rate (EER) of 0.63%, and that the fuzzy logic model can effectively reduce the false rejection rate (FRR). The fuzzy logic based biometric authentication also reduces the influence of the environment. This thesis makes three contributions: firstly, this research has proposed a model to support authentication in mobile commerce. Secondly, a multi-modal biometric authentication system was developed. The third major contribution is the development of a fuzzy logic based multi-modal biometric authentication system which is able to overcome the issues of deterministic biometric systems. Overall, the results gained in this thesis show that, using the multi-modal biometric authentication system, it is possible to establish the identity of the user on a mobile phone. The fuzzy logic based authentication model can make the multi-modal biometric system more accurate, and also reduce the influence of external environmental factors. A holistic interpretation of the research indicates that mobile security and smart systems can help mobile commerce become more secure and more flexible in future.
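The risk-adaptive fusion idea can be sketched with triangular fuzzy memberships over a normalised transaction amount. Every function name, threshold and membership shape below is a hypothetical illustration of the general technique, not the system described in the thesis.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def required_security(amount, max_amount=1000.0):
    """Fuzzy transaction risk in [0, 1] from low/medium/high memberships,
    defuzzified as a weighted centroid of the three levels."""
    r = min(amount / max_amount, 1.0)
    low = triangular(r, -0.5, 0.0, 0.5)
    med = triangular(r, 0.0, 0.5, 1.0)
    high = triangular(r, 0.5, 1.0, 1.5)
    total = low + med + high
    return (0.0 * low + 0.5 * med + 1.0 * high) / total

def accept(scores, amount):
    """Fuse modality scores and apply a risk-dependent threshold:
    riskier transactions demand a stronger fused biometric score."""
    fused = sum(scores) / len(scores)
    threshold = 0.4 + 0.5 * required_security(amount)
    return fused >= threshold
```

The same fused score can then pass a cheap purchase but fail an expensive one, mirroring the usability/security trade-off across authentication levels described above.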
49

Identification by a hybrid 3D/2D gait recognition algorithm

Abdulsattar, Fatimah January 2016 (has links)
Recently, the research community has shown much interest in gait as a biometric. However, one of the key challenges affecting gait recognition performance is its susceptibility to view variation, and much work has been done to address this problem. The implicit assumptions made by most of these studies are that the view variation within one gait cycle is small and that people walk only along straight trajectories; these assumptions are often invalid. Our strategy for view independence is to enrol people using their 3D volumetric data, since a synthetic image can then be generated and used to match a probe image. A set of experiments was conducted to illustrate the potential of matching 3D volumetric data against gait images from single cameras inside the Biometric Tunnel at Southampton University, using the Gait Energy Image as the gait feature. The results show an average Correct Classification Rate (CCR) of 97% for matching against affine cameras and 42% for matching against perspective cameras with large changes in appearance. We modified and expanded the Tunnel system to improve the quality of the 3D reconstruction and to provide asynchronous gait images from two independent cameras. Two gait datasets have been collected: one with 17 people walking along a straight line, and a second with 50 people walking along straight and curved trajectories. The first dataset was analysed with an algorithm in which 3D volumes were aligned according to the starting position of the 2D gait cycle in 3D space and the sagittal plane of the walking people. When gait features were extracted from each frame using Generic Fourier Descriptors and compared using Dynamic Time Warping, a CCR of up to 98.8% was achieved. A full performance analysis was performed, and camera calibration accuracy was shown to be the most important factor. The shortcomings of this algorithm were that it is not completely view-independent and it is affected by changes in walking direction.
A second algorithm was developed to overcome these limitations. In this algorithm, the alignment was based on three key frames at the mid-stance phase, and the motion in the first and second parts of the gait cycle was assumed to be linear. The second dataset was used to evaluate the algorithm, and a CCR of 99% was achieved. However, when the probe consisted of people walking on a curved trajectory, the CCR dropped to 82%; when the gallery was also taken from curved walking, the CCR returned to 99%. The algorithm was also evaluated using data from the Kyushu University 4D Gait Database, where normal walking achieved 98% and curved walking achieved 68%. Inspection of the data indicated that the earlier assumption that straight-ahead walking and curved walking are similar is invalid. Finally, an investigation into more appropriate features was also carried out, but this gave only a slight improvement.
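The Gait Energy Image used as the feature above is the per-pixel average of aligned binary silhouettes over one gait cycle; a minimal sketch follows (silhouette alignment and gait-cycle detection are assumed to have been done already, and silhouettes are plain 0/1 grids).

```python
def gait_energy_image(silhouettes):
    """Average a list of aligned binary silhouettes (lists of rows of
    0/1 values) into one grey-level Gait Energy Image."""
    n = len(silhouettes)
    rows = len(silhouettes[0])
    cols = len(silhouettes[0][0])
    return [[sum(s[r][c] for s in silhouettes) / n for c in range(cols)]
            for r in range(rows)]
```

Pixels near 1.0 correspond to the static body parts present in every frame, while intermediate values trace the swinging limbs, which is why a single GEI summarises the dynamics of a whole cycle.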
50

Distant speech recognition of natural spontaneous multi-party conversations

Liu, Yulan January 2017 (has links)
Distant speech recognition (DSR) has gained wide interest recently. While deep networks keep improving ASR overall, a performance gap remains between close-talking recordings and distant recordings. The work in this thesis therefore aims at providing insights for further improvement of DSR performance. The investigation starts with the collection of the first multi-microphone and multi-media corpus of natural spontaneous multi-party conversations in native English with the speaker locations tracked, i.e. the Sheffield Wargame Corpus (SWC). State-of-the-art recognition systems, with acoustic models both trained standalone and adapted, show word error rates (WERs) above 40% on headset recordings and above 70% on distant recordings. A comparison between the SWC and the AMI corpus suggests a few properties unique to real natural spontaneous conversations, e.g. very short utterances and emotional speech. Further experimental analysis based on simulated and real data quantifies the impact of such influence factors on DSR performance, and illustrates the complex interaction among multiple factors, which makes the treatment of each individual factor much more difficult. The reverberation factor is studied further. It is shown that the reverberation effect on speech features can be accurately modelled with a temporal convolution in the complex spectrogram domain. Based on this, a polynomial reverberation score is proposed to measure the distortion level of short utterances. Compared to existing reverberation metrics such as C50, it avoids a rigid early/late-reverberation partition without compromising performance in ranking the reverberation level of recording environments and channels. Furthermore, existing reverberation measurements are signal-independent and thus unable to accurately estimate the reverberation distortion level in short recordings.
Inspired by the phonetic analysis of reverberation distortion via self-masking and overlap-masking, a novel partition of reverberation distortion into intra-phone smearing and inter-phone smearing is proposed, so that the reverberation distortion level is first estimated for each part and then combined.
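The temporal-convolution model of reverberation described above, in which each reverberant STFT frame is a sum of the current and previous clean frames weighted by a per-frequency-bin kernel, can be sketched directly. The list-of-lists complex spectrogram layout and the kernel shape are our assumptions for illustration.

```python
def reverberate_spectrogram(frames, kernel):
    """Temporal convolution per frequency bin in the complex STFT domain:

        X_rev[t][f] = sum_k kernel[k][f] * X[t - k][f]

    frames: T x F complex spectrogram; kernel: K x F complex filter taps."""
    T, F, K = len(frames), len(frames[0]), len(kernel)
    out = []
    for t in range(T):
        row = []
        for f in range(F):
            acc = 0j
            for k in range(K):
                if t - k >= 0:
                    acc += kernel[k][f] * frames[t - k][f]
            row.append(acc)
        out.append(row)
    return out
```

With a single-tap unit kernel the spectrogram is unchanged, while longer kernels smear energy from earlier frames into later ones - the frame-level picture of the intra-phone and inter-phone smearing distinguished above.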
