  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Empirical approach towards investigating usability, guessability and social factors affecting graphical based passwords security

Jebriel, Salem Meftah January 2014 (has links)
This thesis investigates the usability and security of recognition-based graphical authentication schemes in which users provide simple images. These images can either be drawn on paper and scanned into the computer, or created with a computer paint program. In our first study, we looked at how culture and gender might affect the types of images drawn. A large number of simple drawings were provided by Libyan, Scottish and Nigerian participants and then divided into categories. Our research found that many doodles (perhaps as many as 20%) contained clues about the participants’ own culture or gender. This figure could be reduced by providing simple guidelines on the types of drawings which should be avoided. Our second study continued this theme and asked participants to guess the culture of the person who provided each image, yielding examples of easily guessable and harder-to-guess images. In our third study we built a system to automatically register simple images provided by users. This involved creating a website where users could register their images and later log in. Image analysis software was also written to correct any mistakes the user might make when scanning in their images or using the paint program. This research showed that it was possible to build an automatic registration system, and that users preferred using a paint tool to drawing on paper and then scanning in the drawing. The study also exposed poor security habits, since many users kept their drawings or image files. This research represents one of the first studies of interference effects where users have to choose two different graphical passwords: around half of the users provided very similar sets of drawings. The last study conducted an experiment to find the best way of resisting ‘shoulder surfing’ attacks when selecting simple images during the login stage.
Pairs of participants played the parts of the observer and the user logging in. The most secure approaches were selecting with a single keystroke and selecting rows and columns with two keystrokes.
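The row-and-column selection scheme described above can be sketched minimally as follows. This is an illustrative toy, not the thesis software: the grid contents and key labels are assumptions, but it shows why two keystrokes leak little to an observer, since no cursor ever hovers over the chosen image.

```python
# Toy sketch of shoulder-surfing-resistant selection: the user picks a
# pass-image with two keystrokes (row label, then column label), so an
# onlooker sees only key presses, never a pointer over the image.

def select_by_row_column(grid, row_key, col_key):
    """Return the image at (row_key, col_key) in a labelled 3x3 grid."""
    row_index = "abc".index(row_key)   # rows labelled a-c
    col_index = "123".index(col_key)   # columns labelled 1-3
    return grid[row_index][col_index]

grid = [["cat", "tree", "boat"],
        ["sun", "fish", "star"],
        ["car", "moon", "key"]]

chosen = select_by_row_column(grid, "b", "3")  # row b, column 3
```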
112

Sound for the exploration of space physics data

Diaz Merced, Wanda Liz January 2013 (has links)
Current analysis techniques for space physics 2D numerical data are based on scrutinising the data with the eyes. Space physics data sets acquired from the natural lab of the interstellar medium may contain events that are masked by noise, making them difficult to identify. This thesis presents research on the use of sound as an adjunct to current data visualisation techniques to explore, analyse and augment signatures in space physics data. It presents a new sonification technique to decompose a space physics data set into different components of interest (frequency, oscillatory modes, etc.), and its use as an adjunct to data visualisation to explore and analyse space science data sets which are characterised by non-linearity (a system which does not satisfy the superposition principle, or whose output is not proportional to its input). Integrating aspects of multisensory perceptualisation and human attention mechanisms, the question addressed by this dissertation is: does sound, used as an adjunct to current data visualisation, augment the perception of signatures in space physics data masked by noise? To answer this question, the following additional questions had to be answered: a) Is sound used as an adjunct to visualisation effective in increasing sensitivity to signals occurring at attended, unattended or unexpected locations, extended in space, when the signal occurs in the presence of a dynamically changing competing cognitive load (noise) that makes it visually ambiguous? b) How can multimodal perceptualisation (sound as an adjunct to visualisation) and attention control mechanisms be combined to help allocate attention to identify visually ambiguous signals? One aim of these questions is to investigate the effectiveness of sound together with a visual display, as compared to a visual display only, in increasing sensitivity to signal detection in the presence of visual noise in the data.
Radio, particle, wave and high-energy data are explored using a sonification technique developed as part of this research; the technique, its application and results are numerically validated and presented. This thesis presents the results of three experiments and of a training experiment. In all four experiments, the volunteers used sound as an adjunct to data visualisation to identify changes in graphical visual and audio representations, and these results are compared with those of using audio rendering only and visual rendering only. In the first experiment audio rendering did not result in significant benefits when used alone or with a visual display. In the second and third experiments, audio as an adjunct to visual rendering became significant when a fourth cue was added to the spectra. The fourth cue consisted of a red line sweeping across the visual display at the rate the sound was played, to synchronise the audio and visual presentations. The results show that a third congruent multimodal stimulus in synchrony with the sound helps space scientists identify events masked by noise in 2D data. Results of the training experiments are also reported.
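The parameter-mapping idea behind sonification can be sketched minimally as follows. This is not the thesis' validated technique, only the core mapping of data values to pitch; the frequency range, note length and the sample series are illustrative assumptions.

```python
# Toy parameter-mapping sonification: each value in a 1-D data series is
# mapped linearly to a pitch, so a spike hidden in visual noise becomes an
# audible jump in frequency.
import math

def sonify(series, f_min=220.0, f_max=880.0, rate=8000, note_s=0.05):
    """Render a data series as a list of sine-wave audio samples."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    samples = []
    for v in series:
        freq = f_min + (v - lo) / span * (f_max - f_min)  # value -> pitch
        n = int(rate * note_s)                            # samples per note
        samples.extend(math.sin(2 * math.pi * freq * i / rate)
                       for i in range(n))
    return samples

audio = sonify([0.1, 0.2, 0.15, 0.9, 0.2])  # the 0.9 spike sounds as a high note
```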
113

Realisation of computer generated integral three dimensional images

Cartwright, Paul January 2000 (has links)
No description available.
114

Automatic detection of human skin in two-dimensional and complex imagery

Chenaoua, Kamal S. January 2015 (has links)
No description available.
115

Robust image analysis methods for the detection and the characterization of compact objects : application to biology

Marin, Ambroise 05 July 2019 (has links)
In the field of microbiology, many experiments are based on fine observation of microorganisms. Because of their interest for the development of modern agri-food processes, it is important to study their development and survival rate under specific environmental conditions such as osmotic or thermal stress. Microscopic imaging is one of the most widely used tools for observing microorganisms.
Manual interpretation of the acquired images raises problems of subjectivity, cost and reproducibility. This thesis proposes the development of standardised image analysis tools allowing the interpretation of images at two scales:
- At the scale of the observation slide: the use of specific counting slides (Malassez) allows the cell concentration of a Saccharomyces cerevisiae solution subjected to osmotic stress to be deduced from a count of the cells present in the zone of interest of the slide. The tools developed allow the identification and characterisation of this zone of interest (the grid) and a precise count of the cells.
- At the scale of the cell: a mutant strain of Saccharomyces cerevisiae allows fluorescence observation of the Pab1p-GFP protein, which is involved in the formation of intracellular ribonucleoprotein aggregates following thermal stress. The tools developed provide a statistical view of the development of these aggregates by automating the estimation of their number over a very large number of cells.
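The counting step alone can be sketched as thresholding followed by connected-component counting. This is a minimal illustration, not the thesis pipeline: the real tools also locate the Malassez grid, and the image and threshold below are made up.

```python
# Minimal cell-counting sketch: threshold a grey-level image and count
# 4-connected bright blobs (candidate cells) with an iterative flood fill.

def count_cells(image, threshold):
    """Count 4-connected components of pixels brighter than `threshold`."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                count += 1
                stack = [(y, x)]              # flood-fill this blob
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and \
                       image[cy][cx] > threshold and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count

frame = [[0, 9, 0, 0],
         [0, 9, 0, 8],
         [0, 0, 0, 8]]
n_cells = count_cells(frame, threshold=5)  # two separate bright blobs
```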
116

3D face recognition using multicomponent feature extraction from the nasal region and its environs

Gao, Jiangning January 2016 (has links)
This thesis is dedicated to extracting expression-robust features for 3D face recognition. The use of 3D imaging enables the extraction of discriminative features that can significantly improve recognition performance, due to the availability of facial surface information such as depth, surface normals and curvature. Expression-robust analysis using information from both depth and surface normals is investigated by dividing the main facial region into patches of different scales. The nasal region and adjoining parts of the cheeks are utilised as they are more consistent across different expressions and are hard to deliberately occlude. In addition, in comparison with other parts of the face, these regions have a high potential to produce discriminative features for recognition and to overcome pose variations. An overview and classification methodology of the widely used 3D face databases is first introduced to provide an appropriate reference for 3D face database selection. Using the FRGC and Bosphorus databases, a low-complexity pattern rejector for expression-robust 3D face recognition is proposed by matching curves on the nasal region and its environs, which results in a low-dimension feature set of only 60 points. To extract discriminative features more locally, a novel multi-scale and multi-component local shape descriptor is further proposed, which achieves more competitive performance under identification and verification scenarios. In contrast with much of the existing work on 3D face recognition, which considers captures obtained with laser scanners or structured light, this thesis also investigates applications to reconstructed 3D captures from lower-cost photometric stereo imaging systems that have applications in real-world situations. To this end, the performance of the expression-robust face recognition algorithms developed for captures from laser scanners is further evaluated on the Photoface database, which contains naturalistic expression variations.
To improve recognition performance for all types of 3D captures, a universal landmarking algorithm is proposed that makes use of different components of the surface normals. Using facial profile signatures and thresholded surface normal maps, facial roll and yaw rotations are calibrated and five main landmarks are robustly detected on the well-aligned 3D nasal region. The landmarking results show that the detected landmarks demonstrate high within-class consistency and can achieve good recognition performance under different expressions. This is also the first landmarking work specifically developed for reconstructed 3D captures from photometric stereo imaging systems.
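One ingredient mentioned above, surface normals derived from depth, can be sketched with central finite differences. This is a generic illustration rather than the thesis' method, and the 3x3 depth patch is invented for the example.

```python
# Estimating a unit surface normal at an interior pixel of a depth map
# from central differences, a common first step before building
# normal-based descriptors.
import math

def normal_at(depth, y, x):
    """Unit surface normal at (y, x) from central depth differences."""
    dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
    dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
    nx, ny, nz = -dzdx, -dzdy, 1.0          # normal of z = f(x, y)
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / norm, ny / norm, nz / norm)

patch = [[1.0, 1.0, 1.0],
         [1.0, 1.2, 1.4],   # surface sloping only in x
         [1.0, 1.0, 1.0]]
normal = normal_at(patch, 1, 1)
```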
117

Unsupervised neural and Bayesian models for zero-resource speech processing

Kamper, Herman January 2017 (has links)
Zero-resource speech processing is a growing research area which aims to develop methods that can discover linguistic structure and representations directly from unlabelled speech audio. Such unsupervised methods would allow speech technology to be developed in settings where transcriptions, pronunciation dictionaries, and text for language modelling are not available. Similar methods are required for cognitive models of language acquisition in human infants, and for developing robotic applications that are able to automatically learn language in a novel linguistic environment. There are two central problems in zero-resource speech processing: (i) finding frame-level feature representations which make it easier to discriminate between linguistic units (phones or words), and (ii) segmenting and clustering unlabelled speech into meaningful units. The claim of this thesis is that both top-down modelling (using knowledge of higher-level units to learn, discover and gain insight into their lower-level constituents) and bottom-up modelling (piecing together lower-level features to give rise to more complex higher-level structures) are advantageous in tackling these two problems. The thesis is divided into three parts. The first part introduces a new autoencoder-like deep neural network for unsupervised frame-level representation learning. This correspondence autoencoder (cAE) uses weak top-down supervision from an unsupervised term discovery system that identifies noisy word-like terms in unlabelled speech data. In an intrinsic evaluation of frame-level representations, the cAE outperforms several state-of-the-art bottom-up and top-down approaches, achieving a relative improvement of more than 60% over the previous best system.
This shows that the cAE is particularly effective in using top-down knowledge of longer-spanning patterns in the data; at the same time, we find that the cAE is only able to learn useful representations when it is initialized using bottom-up pretraining on a large set of unlabelled speech. The second part of the thesis presents a novel unsupervised segmental Bayesian model that segments unlabelled speech data and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types: the system essentially performs unsupervised speech recognition. In this approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this embedding space while jointly performing segmentation. We first evaluate the approach in a small-vocabulary multi-speaker connected digit recognition task, where we report unsupervised word error rates (WER) by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% WER, outperforming a previous HMM-based system by about 10% absolute. To achieve this performance, the acoustic word embedding function (which maps variable-duration segments to single vectors) is refined in a top-down manner by using terms discovered by the model in an outer loop of segmentation. The third and final part of the study extends the small-vocabulary system in order to handle larger vocabularies in conversational speech data. To our knowledge, this is the first full-coverage segmentation and clustering system that is applied to large-vocabulary multi-speaker data. To improve efficiency, the system incorporates a bottom-up syllable boundary detection method to eliminate unlikely word boundaries. We compare the system on English and Xitsonga datasets to several state-of-the-art baselines.
We show that by imposing a consistent top-down segmentation while also using bottom-up knowledge from detected syllable boundaries, both single-speaker and multi-speaker versions of our system outperform a purely bottom-up single-speaker syllable-based approach. We also show that the discovered clusters can be made less speaker- and gender-specific by using features from the cAE (which incorporates both top-down and bottom-up learning). The system's discovered clusters are still less pure than those of two multi-speaker unsupervised term discovery systems, but provide far greater coverage. In summary, the different models and systems presented in this thesis show that both top-down and bottom-up modelling can improve representation learning, segmentation and clustering of unlabelled speech data.
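The fixed-dimensional embedding idea in the second part can be sketched with the simplest possible scheme, uniform downsampling of a frame sequence, rather than the learned embeddings the thesis actually uses; the frame values below are made up.

```python
# Toy acoustic word embedding: map a variable-length sequence of frame
# values to exactly k values by sampling indices spread uniformly over the
# segment, so segments of any duration become directly comparable vectors.

def embed(frames, k=4):
    """Downsample a variable-length frame sequence to exactly k frames."""
    n = len(frames)
    idx = [round(i * (n - 1) / (k - 1)) for i in range(k)]
    return [frames[i] for i in idx]

short_segment = [0.0, 1.0, 2.0]
long_segment = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
e1, e2 = embed(short_segment), embed(long_segment)  # both length 4
```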
118

A generic computer platform for efficient iris recognition

Ponder, Christopher John January 2015 (has links)
This document presents the work carried out for the purposes of completing the Engineering Doctorate (EngD) program at the Institute for System Level Integration (iSLI), which was a partnership between the universities of Edinburgh, Glasgow, Heriot-Watt and Strathclyde. The EngD is normally undertaken with an industrial sponsor, but due to a set of unforeseen circumstances this was not the case for this work. However, the work was still undertaken to the same standards as would be expected by an industrial sponsor. An individual’s biometrics include fingerprints, palm-prints, retinal, iris and speech patterns. Even the way people move and sign their name has been shown to be uniquely associated with an individual. This work focuses on the recognition of an individual’s iris patterns. The results reported in the literature are often presented in such a manner that direct comparison between methods is difficult. There is also minimal code resource and no tool available to simplify the process of developing iris recognition algorithms, so individual developers are required to write the necessary software almost every time. Finally, segmentation performance is currently only measurable by manual evaluation, which is time-consuming and prone to human error. This thesis presents a completely novel generic platform for developing, testing and evaluating iris recognition algorithms. Existing open-source algorithms are integrated into the platform and evaluated using the results it produces. Three iris recognition segmentation algorithms and one normalisation algorithm are proposed. Three of the algorithms increased true-match recognition performance by between two and 45 percentage points when compared to the available open-source algorithms and methods found in the literature.
A matching algorithm was developed that significantly speeds up the process of analysing the results of encoding. Lastly, this work also proposes a method of automatically evaluating the performance of segmentation algorithms, so minimising the need for manual evaluation.
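The matching stage of an iris pipeline is conventionally a fractional Hamming distance between binary iris codes; a minimal sketch of that standard comparison follows. The bit strings and the decision threshold are illustrative, not taken from the thesis.

```python
# Standard iris-code comparison sketch: the fraction of disagreeing bits
# between two equal-length codes; below a chosen threshold, declare a match.

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 0]   # one bit flipped by noise
distance = hamming_distance(enrolled, probe)
is_match = distance < 0.32            # illustrative decision threshold
```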
119

The selective use of gaze in automatic speech recognition

Shen, Ao January 2014 (has links)
The performance of automatic speech recognition (ASR) degrades significantly in natural environments compared to laboratory assessments. As a major source of interference, acoustic noise affects speech intelligibility during the ASR process. Acoustic noise causes two main problems. The first is contamination of the speech signal. The second is changes in speakers' vocal and non-vocal behaviour. These phenomena create a mismatch between the ASR training and recognition conditions, which leads to considerable performance degradation. To improve noise-robustness, popular approaches exploit prior knowledge of the acoustic noise in speech enhancement, feature extraction and recognition models. The alternative approach presented in this thesis is to introduce eye gaze as an extra modality. Eye gaze behaviours play roles in interaction and contain information about cognition and visual attention, but not all behaviours are relevant to speech. Therefore, gaze behaviours are used selectively to improve ASR performance. This is achieved by inference procedures using noise-dependent models of gaze behaviours and their temporal and semantic relationship with speech. ‘Selective gaze-contingent ASR’ systems are proposed and evaluated on a corpus of eye movement and related speech recorded in clean and noisy environments. The best-performing systems utilise both acoustic and language model adaptation.
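The "selective" idea can be illustrated with a toy rescoring rule: gaze evidence is blended in only when the acoustic scores are ambiguous. This is a hypothetical sketch of the general principle, not the thesis' inference procedure; the scores, ambiguity margin and gaze weights are all invented for the example.

```python
# Toy selective gaze-contingent rescoring: use gaze-derived scores only
# when the top two acoustic hypotheses are within an ambiguity margin.

def rescore(acoustic, gaze, margin=0.1):
    """Pick a word, blending in gaze scores only when acoustics are ambiguous."""
    ranked = sorted(acoustic.values(), reverse=True)
    ambiguous = len(ranked) > 1 and ranked[0] - ranked[1] < margin
    if not ambiguous:
        return max(acoustic, key=acoustic.get)   # trust acoustics alone
    combined = {w: acoustic[w] + gaze.get(w, 0.0) for w in acoustic}
    return max(combined, key=combined.get)

acoustic_scores = {"ship": 0.46, "sheep": 0.44}  # nearly tied hypotheses
gaze_scores = {"sheep": 0.3}                     # user fixated the sheep image
word = rescore(acoustic_scores, gaze_scores)
```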
120

Envisioning technology through discourse : a case study of biometrics in the National Identity Scheme in the United Kingdom

Martin, Aaron K. January 2011 (has links)
Around the globe, governments are pursuing policies that depend on information technology (IT). The United Kingdom’s National Identity Scheme was a government proposal for a national identity system, based on biometrics. These proposals for biometrics provide us with an opportunity to explore the diverse and shifting discourses that accompany the attempted diffusion of a controversial IT innovation. This thesis offers a longitudinal case study of these visionary discourses. I begin with a critical review of the literature on biometrics, drawing attention to the lack of in-depth studies that explore the discursive and organizational dynamics accompanying their implementation on a national scale. I then devise a theoretical framework to study these speculative and future-directed discourses based on concepts and ideas from organizing visions theory, the sociology of expectations, and critical approaches to studying the public’s understanding of technology. A methodological discussion ensues in which I explain my research approach and methods for data collection and analysis, including techniques for critical discourse analysis. After briefly introducing the case study, I proceed to the two-part analysis. First is an analysis of government actors’ discourses on biometrics, revolving around formal policy communications; second is an analysis of media discourses and parliamentary debates around certain critical moments for biometrics in the Scheme. The analysis reveals how the uncertain concept of biometrics provided a strategic rhetorical device whereby government spokespeople were able to offer a flexible yet incomplete vision for the technology. I contend that, despite being distinctive and offering some practical value to the proposals for national identity cards, the government’s discourses on biometrics remained insufficiently intelligible, uninformative, and implausible. 
The concluding discussion explains the unraveling visions for biometrics in the case, offers a theoretical contribution based on the case analysis, and provides insights about discourses on the ‘publics’ of new technology such as biometrics.
