About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Automatic Person Verification Using Speech and Face Information

Sanderson, Conrad, conradsand@ieee.org January 2003 (has links)
Identity verification systems are an important part of our everyday life. A typical example is the Automatic Teller Machine (ATM), which employs a simple identity verification scheme: the user is asked to enter their secret password after inserting their ATM card; if the password matches the one assigned to the card, the user is allowed access to their bank account. This scheme suffers from a major drawback: only the validity of the combination of a certain possession (the ATM card) and certain knowledge (the password) is verified. The ATM card can be lost or stolen, and the password can be compromised. Thus new verification methods have emerged, where the password has either been replaced by, or used in addition to, biometrics such as the person’s speech, face image or fingerprints. Apart from the ATM example described above, biometrics can be applied to other areas, such as telephone & internet based banking, airline reservations & check-in, as well as forensic work and law enforcement applications. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance easily degrades in the presence of a mismatch between training and testing conditions. For speech based systems this is usually in the form of channel distortion and/or ambient noise; for face based systems it can be in the form of a change in the illumination direction. A system which uses more than one biometric at the same time is known as a multi-modal verification system; it is often composed of several modality experts and a decision stage. Since a multi-modal system uses complementary discriminative information, lower error rates can be achieved; moreover, such a system can also be more robust, since the contribution of the modality affected by environmental conditions can be decreased. This thesis makes several contributions aimed at increasing the robustness of single- and multi-modal verification systems. Some of the major contributions are listed below. The robustness of a speech based system to ambient noise is increased by using Maximum Auto-Correlation Value (MACV) features, which utilize information from the source part of the speech signal. A new facial feature extraction technique is proposed (termed DCT-mod2), which utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients of spatially neighbouring blocks. The DCT-mod2 features are shown to be robust to an illumination direction change as well as being over 80 times quicker to compute than 2D Gabor wavelet derived features. The fragility of Principal Component Analysis (PCA) derived features to an illumination direction change is solved by introducing a pre-processing step utilizing the DCT-mod2 feature extraction. We show that the enhanced PCA technique retains all the positive aspects of traditional PCA (that is, robustness to compression artefacts and white Gaussian noise) while also being robust to the illumination direction change. Several new methods, for use in fusion of speech and face information under noisy conditions, are proposed; these include a weight adjustment procedure, which explicitly measures the quality of the speech signal, and a decision stage composed of a structurally noise-resistant piece-wise linear classifier, which attempts to minimize the effects of noisy conditions via structural constraints on the decision boundary.
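The DCT-mod2 idea summarised above (low-order block DCT coefficients replaced by their differences across neighbouring blocks, which discounts slowly varying illumination) can be illustrated with a short sketch. The block size, the number of replaced coefficients and the use of scipy are assumptions for illustration, not the settings used in the thesis.

```python
# Hypothetical sketch of block-DCT features with neighbourhood deltas, in the
# spirit of DCT-mod2: the first few DCT coefficients of each block are replaced
# by their horizontal and vertical differences across neighbouring blocks.
import numpy as np
from scipy.fftpack import dct

def block_dct(block):
    """2D DCT-II of a square block, flattened row-major for simplicity
    (a zig-zag ordering is more usual in practice)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho').ravel()

def dct_delta_features(image, block=8, n_delta=3, n_keep=15):
    h, w = image.shape
    rows, cols = h // block, w // block
    coeffs = np.array([[block_dct(image[r*block:(r+1)*block, c*block:(c+1)*block])
                        for c in range(cols)] for r in range(rows)])
    feats = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            base = coeffs[r, c, :n_keep]
            # deltas of the lowest-order coefficients across neighbouring blocks
            dh = coeffs[r, c + 1, :n_delta] - coeffs[r, c - 1, :n_delta]
            dv = coeffs[r + 1, c, :n_delta] - coeffs[r - 1, c, :n_delta]
            feats.append(np.concatenate([dh, dv, base[n_delta:]]))
    return np.array(feats)
```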
212

Curve Estimation and Signal Discrimination in Spatial Problems

Rau, Christian, rau@maths.anu.edu.au January 2003 (has links)
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally parametric, nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with relevant background material given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. Only a few existing approaches offer a comparable construction, and this may thus be considered our main contribution.

Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, for example in the presence of sharp corners.

Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
213

Towards a Road Safety Development Index (RSDI) : Development of an International Index to Measure Road Safety Performance

Al Haji, Ghazwan January 2005 (has links)
Aim. This study suggests a set of methodologies to combine different indicators of road safety into a single index. The RSDI is a simple and quick composite index, which may become a significant measure for comparing, ranking and determining road safety levels in different countries and regions worldwide. Design. One particular concern in designing a Road Safety Development Index (RSDI) is to come up with a comprehensive set of exposure and risk indicators which includes, as far as possible, the main parameters of road safety related to human-vehicle-road and country patterns, instead of considering a few isolated indicators such as accident rates. The RSDI gives a broad picture compared to the traditional models in road safety.

Challenges. Differences in definitions, uncollected data, unreliable data and underreporting are problems for the construction of the RSDI. In addition, the index should be as relevant as possible for different countries of the world, especially developing countries.

Empirical study. This study empirically compares the road safety situation and trends between ten Southeast Asian countries and Sweden for the period 1994-2003. Methodologies. Eleven indicators, categorised into nine dimensions, are chosen for the RSDI. Four main approaches (objective and subjective) are used to calculate the RSDI and to determine which one is best. One approach uses equal weights for all indicators and countries, whereas the other approaches give different weights depending on the importance of the indicators.

Findings. The thesis examines the RSDI for the ten ASEAN countries and Sweden in 2003. The results indicate remarkable differences between ASEAN countries even at the same level of motorisation. Singapore and Brunei seem to have the best RSDI record among the ASEAN countries according to the indicators used, while Laos, Cambodia and Vietnam show lower RSDI records. Conclusions. The RSDI results seem very promising and worth testing further with larger samples of countries from different parts of the world. / ISRN/Report code: LiU-Tek-Lic 2005:29
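The "equal weights" approach to forming a composite index from normalised indicators, mentioned above, can be sketched as follows. The indicator names, values and weights are invented for illustration and are not the figures used in the thesis.

```python
# Illustrative composite road-safety index: each indicator is min-max
# normalised across countries and combined as a weighted sum. All figures
# below are made up; they are not the thesis data.
import numpy as np

indicators = {                       # rows: countries, columns: indicators
    "Sweden":    [5.3, 0.95, 0.98],
    "Singapore": [4.8, 0.90, 0.96],
    "Laos":      [20.1, 0.40, 0.55],
}
names = ["fatalities_per_100k", "seat_belt_use", "road_condition_score"]
higher_is_safer = [False, True, True]       # direction of each indicator

X = np.array(list(indicators.values()), dtype=float)
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
neg = [i for i, up in enumerate(higher_is_safer) if not up]
norm[:, neg] = 1.0 - norm[:, neg]           # flip "lower is safer" indicators

weights = np.full(len(names), 1.0 / len(names))   # the "equal weights" approach
rsdi = norm @ weights
for country, score in zip(indicators, rsdi):
    print(f"{country}: {score:.2f}")
```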
214

A Systems Approach to Identify Indicators for Integrated Coastal Zone Management

Sanò, Marcello 09 June 2009 (has links)
El objetivo de la tesis es establecer un marco metodológico para la identificación de indicadores GIZC orientados a problemas y temas de interés, para contextos geográficos específicos. La tesis parte de la idea de que los sistemas de indicadores, utilizados para medir el estado de la costa y la implementación de proyectos de Gestión Integrada de las Zonas Costeras (GIZC), deben orientarse a problemas concretos de la zona de estudio y que su validez debe ser comprobada no sólo por la opinión de los expertos, sino también por la percepción de los usuarios y por el análisis estadístico cuantitativo. / The problem addressed by this thesis is the identification of site-specific and problem-oriented sets of indicators, to be used to determine baseline conditions and to monitor the effect of ICZM initiatives. The approach followed integrates contributions from coastal experts and stakeholders, systems theory, and the use of multivariate analysis techniques in order to provide a cost-effective set of indicators, oriented to site-specific problems, with a broad system perspective. A systems approach, based on systems thinking theory and practice, is developed and tested in this thesis to design models of coastal systems, through the identification of the system's components and relations, using the contribution of experts and stakeholders. Quantitative analysis of the system is then carried out, assessing the contribution of stakeholders and using multivariate statistics (principal components analysis), in order to understand the structure of the system, including relationships between variables. The simplification of the system (reduction of the number of variables) is one of the main outcomes, both in the participatory system's design and in the quantitative multivariate analysis, aiming at a cost-effective set of key variables to be used as indicators for coastal management.
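The variable-reduction step described above (principal components analysis used to shrink a stakeholder-scored variable set to a few key indicators) might look roughly like the following sketch; the variable names, the 90% variance threshold and the use of scikit-learn are assumptions, not details from the thesis.

```python
# Hedged sketch of PCA-based variable reduction: keep the variables with the
# largest loadings on the leading components as candidate indicators.
# Stakeholder scores are simulated; nothing here reproduces the thesis data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
variables = ["beach_erosion", "water_quality", "tourism_pressure",
             "fish_stock", "urbanisation", "wetland_area"]
scores = rng.normal(size=(40, len(variables)))     # stand-in stakeholder ratings

pca = PCA().fit(scores)
n_comp = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1
loadings = np.abs(pca.components_[:n_comp])        # |loading| of each variable
key_vars = [variables[i] for i in np.argsort(loadings.max(axis=0))[::-1][:3]]
print("candidate indicators:", key_vars)
```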
215

Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation

Lee, Kyunghoon 21 May 2010 (has links)
The identification of flow characteristics and the reduction of high-dimensional simulation data have capitalized on an orthogonal basis achieved by proper orthogonal decomposition (POD), also known as principal component analysis (PCA) or the Karhunen-Loeve transform (KLT). In the realm of aerospace engineering, an orthogonal basis is versatile for diverse applications, especially associated with reduced-order modeling (ROM) as follows: a low-dimensional turbulence model, an unsteady aerodynamic model for aeroelasticity and flow control, and a steady aerodynamic model for airfoil shape design. When a given data set lacks part of its data, POD is required to adopt a least-squares formulation, leading to gappy POD, using a gappy norm that is a variant of an L2 norm dealing with only known data. Although gappy POD was originally devised to restore marred images, its application has spread to aerospace engineering for the following reason: various engineering problems can be reformulated in forms of missing data estimation to exploit gappy POD. Similar to POD, gappy POD has a broad range of applications such as optimal flow sensor placement, experimental and numerical flow data assimilation, and impaired particle image velocimetry (PIV) data restoration. Apart from POD and gappy POD, both of which are deterministic formulations, probabilistic principal component analysis (PPCA), a probabilistic generalization of PCA, has been used in the pattern recognition field for speech recognition and in the oceanography area for empirical orthogonal functions in the presence of missing data. In formulation, PPCA presumes a linear latent variable model relating an observed variable to a latent variable, inferred from the observed variable alone, through a linear mapping called the factor-loading. To evaluate the maximum likelihood estimates (MLEs) of PPCA parameters such as the factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). By virtue of the EM algorithm, the EM-PCA is capable of not only extracting a basis but also restoring missing data through iterations, whether the given data are intact or not. Therefore, the EM-PCA can potentially substitute for both POD and gappy POD inasmuch as its accuracy and efficiency are comparable to those of POD and gappy POD. In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent: the factor-loading MLE of PPCA, evaluated by the EM-PCA, corresponds to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because the two approximate missing data differently, owing to their contrasting formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms.
Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. In the end, a norm reflecting a curve-fitting method is found to affect estimation error reduction more significantly than a basis for two example test data sets: one is missing data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively shows that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Lastly, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit good agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set with missing data spread across the entire data set.
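The EM-PCA idea discussed above can be sketched as an alternation between projecting onto the current basis and refitting it, while re-imputing missing entries from the current reconstruction. This minimal sketch omits the explicit noise-variance update of full PPCA, and its dimensions, convergence test and toy data are arbitrary illustration choices rather than the thesis setup.

```python
# Minimal EM-style PCA that tolerates missing data: alternate between
# (E) projecting onto the current basis and (M) refitting the basis,
# re-imputing missing entries from the current reconstruction each iteration.
import numpy as np

def em_pca(Y, q, n_iter=200, tol=1e-8, seed=0):
    Y = np.array(Y, dtype=float)
    mask = ~np.isnan(Y)                        # True where data are observed
    Yf = np.where(mask, Y, 0.0)                # initial fill for missing entries
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Y.shape[0], q))       # factor-loading matrix (d x q)
    prev = np.inf
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Yf)         # E-step: latent coords
        W = Yf @ X.T @ np.linalg.inv(X @ X.T)          # M-step: new basis
        recon = W @ X
        Yf = np.where(mask, Y, recon)                  # re-impute missing data
        err = np.mean((Y - recon)[mask] ** 2)          # error on observed entries
        if abs(prev - err) < tol:
            break
        prev = err
    return W, X, Yf

# toy usage: 30 snapshots of dimension 50, 3 latent modes, ~10% entries missing
rng = np.random.default_rng(1)
data = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 30))
data[rng.random(data.shape) < 0.1] = np.nan
W, X, filled = em_pca(data, q=3)
```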
216

Target tracking using residual vector quantization

Aslam, Salman Muhammad 18 November 2011 (has links)
In this work, our goal is to track visual targets using residual vector quantization (RVQ). We compare our results with principal components analysis (PCA) and tree structured vector quantization (TSVQ) based tracking. This work is significant since PCA is commonly used in the Pattern Recognition, Machine Learning and Computer Vision communities. On the other hand, TSVQ is commonly used in the Signal Processing and data compression communities. RVQ with more than two stages has not received much attention due to the difficulty in producing stable designs. In this work, we bring together these different approaches into an integrated tracking framework and show that RVQ tracking performs best according to multiple criteria on publicly available datasets. Moreover, an advantage of our approach is a learning-based tracker that builds the target model while it tracks, thus avoiding the costly step of building target models prior to tracking.
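Residual vector quantization itself (each stage quantizing the residual left by the previous stages) can be sketched as below. The stage count and codebook sizes are illustrative, and the appearance-model tracking loop from the thesis is not reproduced.

```python
# Hedged sketch of residual vector quantization (RVQ): each stage fits a small
# codebook to the residual left by the previous stages, and a vector is encoded
# as the sequence of chosen codewords across stages.
import numpy as np
from sklearn.cluster import KMeans

def train_rvq(data, n_stages=3, codebook_size=8, seed=0):
    codebooks, residual = [], data.copy()
    for _ in range(n_stages):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]   # pass residual on
    return codebooks

def rvq_encode(x, codebooks):
    indices, residual = [], x.copy()
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(idx)
        residual = residual - cb[idx]
    return indices

def rvq_decode(indices, codebooks):
    return sum(cb[i] for i, cb in zip(indices, codebooks))

patches = np.random.default_rng(2).normal(size=(500, 64))   # e.g. 8x8 patches
cbs = train_rvq(patches)
code = rvq_encode(patches[0], cbs)
approx = rvq_decode(code, cbs)
```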
217

Modeling of linkage disequilibrium in whole genome genetic association studies. / Modélisation du déséquilibre de liaison dans les études d’association génome entier

Johnson, Randall 19 December 2014 (has links)
L’approche GWAS est un outil essentiel pour la découverte de gènes associés aux maladies, mais elle pose des problèmes de puissance statistique quand il est impossible d’échantillonner génétiquement des dizaines de milliers de sujets. Les résultats présentés ici—ALDsuite, un programme en utilisant une correction nouvelle et efficace pour le déséquilibre de liaison (DL) ancestrale de la population locale, en permettant l'utilisation de marqueurs denses dans le MALD, et la démonstration que la méthode simpleM fournit une correction optimale pour les comparaisons multiples dans le GWAS—réaffirment la valeur de l'analyse en composantes principales (APC) pour capturer l’essence de la complexité des systèmes de grande dimension. L’APC est déjà la norme pour corriger la structure de la population dans le GWAS; mes résultats indiquent qu’elle est aussi une stratégie générale pour faire face à la forte dimensionnalité des données génomiques d'association. / GWAS is an essential tool for disease gene discovery, but has severe problems of statistical power when it is impractical to genetically sample tens of thousands of subjects. The results presented here (ALDsuite, a program implementing a novel, effective correction for local ancestral population LD that allows the use of dense markers in MALD, and the demonstration that the simpleM method provides an optimal Bonferroni correction for multiple comparisons in GWAS) reiterate the value of PCA for capturing the essential part of the complexity of high-dimensional systems. PCA is already standard for correcting for population substructure in GWAS; my results point to its broader applicability as a general strategy for dealing with the high dimensionality of genomic association data.
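A simpleM-style correction can be sketched as follows: the effective number of independent tests is taken from the eigenvalues of the SNP correlation matrix and substituted into the Bonferroni threshold. The 99.5% variance threshold is the commonly cited simpleM choice, and the genotype data here are synthetic; nothing of ALDsuite is reproduced.

```python
# Sketch of a simpleM-style multiple-testing correction: the effective number
# of independent tests (Meff) is the number of leading eigenvalues of the SNP
# correlation matrix needed to explain a fixed fraction of its variance, and
# the Bonferroni threshold uses Meff instead of the raw SNP count.
import numpy as np

def effective_tests(genotypes, var_explained=0.995):
    """genotypes: individuals x SNPs matrix coded 0/1/2."""
    corr = np.corrcoef(genotypes, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, var_explained)) + 1

rng = np.random.default_rng(3)
geno = rng.integers(0, 3, size=(200, 50)).astype(float)   # toy genotype matrix
m_eff = effective_tests(geno)
alpha_per_test = 0.05 / m_eff                              # Meff-based Bonferroni
print(m_eff, alpha_per_test)
```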
218

A multivariate approach to QSAR

Hellberg, Sven January 1986 (has links)
Quantitative structure-activity relationships (QSAR) constitute empirical analogy models connecting chemical structure and biological activity. The analogy approach to QSAR assumes that the factors important in the biological system are also contained in chemical model systems. The development of a QSAR can be divided into subproblems: 1. to quantify chemical structure in terms of latent variables expressing analogy, 2. to design test series of compounds, 3. to measure biological activity and 4. to construct a mathematical model connecting chemical structure and biological activity. In this thesis it is proposed that many possibly relevant descriptors should be considered simultaneously in order to efficiently capture the unknown factors inherent in the descriptors. The importance of multivariately and multipositionally varied test series is discussed. Multivariate projection methods such as PCA and PLS are shown to be appropriate for QSAR and to closely correspond to the analogy assumption. The multivariate analogy approach is applied to (a) beta-adrenergic agents, (b) haloalkanes, (c) halogenated ethyl methyl ethers and (d) four different families of peptides. / Diss. (summary) Umeå: Umeå universitet, 1986, with 8 papers appended / digitalisering@umu
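The multivariate projection idea (many descriptors per compound condensed into a few latent variables related to activity) can be sketched with a PLS model; the descriptor matrix and activities below are synthetic stand-ins, not data from the thesis.

```python
# Hedged QSAR sketch: project many chemical descriptors per compound onto a few
# latent variables with PLS and relate them to measured biological activity.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 12))                     # 30 compounds x 12 descriptors (scaled)
true_w = rng.normal(size=12)
y = X @ true_w + rng.normal(scale=0.5, size=30)   # simulated activity

pls = PLSRegression(n_components=3).fit(X, y)
r2 = pls.score(X, y)                              # variance of activity explained
print(f"R^2 with 3 latent variables: {r2:.2f}")
```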
219

Super-resolution image processing with application to face recognition

Lin, Frank Chi-Hao January 2008 (has links)
Subject identification from surveillance imagery has become an important task for forensic investigation. Good quality images of the subjects are essential for the surveillance footage to be useful. However, surveillance videos are of low resolution due to data storage requirements. In addition, subjects typically occupy a small portion of a camera's field of view. Faces, which are of primary interest, occupy an even smaller array of pixels. For reliable face recognition from surveillance video, there is a need to generate higher resolution images of the subject's face from low-resolution video. Super-resolution image reconstruction is a signal processing based approach that aims to reconstruct a high-resolution image by combining a number of low-resolution images. The low-resolution images that differ by a sub-pixel shift contain complementary information as they are different "snapshots" of the same scene. Once geometrically registered onto a common high-resolution grid, they can be merged into a single image with higher resolution. As super-resolution is a computationally intensive process, traditional reconstruction-based super-resolution methods simplify the problem by restricting the correspondence between low-resolution frames to global motion such as translational and affine transformation. Surveillance footage, however, consists of independently moving non-rigid objects such as faces. Applying global registration methods results in registration errors that lead to artefacts that adversely affect recognition. The human face also presents additional problems such as self-occlusion and reflectance variation that even local registration methods find difficult to model. In this dissertation, a robust optical flow-based super-resolution technique was proposed to overcome these difficulties. Real surveillance footage and the Terrascope database were used to compare the reconstruction quality of the proposed method against interpolation and existing super-resolution algorithms. Results show that the proposed robust optical flow-based method consistently produced more accurate reconstructions. This dissertation also outlines a systematic investigation of how super-resolution affects automatic face recognition algorithms with an emphasis on comparing reconstruction- and learning-based super-resolution approaches. While reconstruction-based super-resolution approaches like the proposed method attempt to recover the aliased high frequency information, learning-based methods synthesise it instead. Learning-based methods are able to synthesise plausible high frequency detail at high magnification ratios but the appearance of the face may change to the extent that the person no longer looks like him/herself. Although super-resolution has been applied to facial imagery, very little has been reported elsewhere on measuring the performance changes from super-resolved images. Intuitively, super-resolution improves image fidelity, and hence should improve the ability to distinguish between faces and consequently automatic face recognition accuracy. This is the first study to comprehensively investigate the effect of super-resolution on face recognition. Since super-resolution is a computationally intensive process, it is important to understand the benefits in relation to the trade-off in computations. A framework for testing face recognition algorithms with multi-resolution images was proposed, using the XM2VTS database as a sample implementation.
Results show that super-resolution offers a small improvement over bilinear interpolation in recognition performance in the absence of noise and that super-resolution is more beneficial when the input images are noisy since noise is attenuated during the frame fusion process.
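A much-simplified registration-and-fusion sketch of reconstruction-based super-resolution is given below, assuming known global sub-pixel shifts; the thesis instead estimates dense optical flow per frame, which is not reproduced here.

```python
# Shift-and-add super-resolution toy: low-resolution frames with known
# sub-pixel shifts are placed onto a common high-resolution grid and fused by
# averaging. This only illustrates the registration-and-fuse principle.
import numpy as np

def shift_and_add(frames, shifts, scale):
    """frames: list of HxW arrays; shifts: (dy, dx) in LR pixels; scale: int."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
        acc[ys, xs] += frame
        count[ys, xs] += 1
    count[count == 0] = 1                 # leave unobserved HR pixels at 0
    return acc / count

rng = np.random.default_rng(5)
lr_frames = [rng.random((16, 16)) for _ in range(4)]
hr = shift_and_add(lr_frames, shifts=[(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)], scale=2)
```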
220

Application of factor analysis for the detection and description of alcoholic beverage consumption in the Greek population / Εφαρμογή της παραγοντικής ανάλυσης για την ανίχνευση και περιγραφή της κατανάλωσης αλκοολούχων ποτών του ελληνικού πληθυσμού

Ρεκούτη, Αγγελική 21 October 2011 (has links)
Σκοπός της εργασίας αυτής είναι να εφαρμόσουμε την Παραγοντική Ανάλυση στο δείγμα μας, έτσι ώστε να ανιχνεύσουμε και να περιγράψουμε τις καταναλωτικές συνήθειες του Ελληνικού πληθυσμού ως προς την κατανάλωση 9 κατηγοριών αλκοολούχων ποτών. Η εφαρμογή της μεθόδου γίνεται με την χρήση του στατιστικού προγράμματος SPSS. Στο πρώτο κεφάλαιο παρουσιάζεται η οικογένεια μεθόδων επίλυσης του προβλήματος και στο δεύτερο η μέθοδος που επιλέχτηκε για την επίλυση, η Παραγοντική Ανάλυση. Προσδιορίζουμε το αντικείμενο, τα στάδια σχεδιασμού και τις προϋποθέσεις της μεθόδου, καθώς και τα κριτήρια αξιολόγησης των αποτελεσμάτων. Τα κεφάλαια που ακολουθούν αποτελούν το πρακτικό μέρος της εργασίας. Στο 3ο κεφάλαιο αναφέρουμε την πηγή των δεδομένων μας και την διεξαγωγή του τρόπου συλλογής τους. Ακολουθεί ο εντοπισμός των «χαμένων» απαντήσεων και εφαρμόζεται η Ανάλυση των Χαμένων Τιμών (Missing Values Analysis) για τον προσδιορισμό του είδους αυτών και την αποκατάσταση τους στο δείγμα. Στην συνέχεια παρουσιάζουμε το δείγμα μας με τη βοήθεια της περιγραφικής στατιστικής και τέλος δημιουργούμε και περιγράφουμε το τελικό μητρώο δεδομένων το οποίο θα αναλύσουμε παραγοντικά. Στο 4ο και τελευταίο κεφάλαιο διερευνάται η καταλληλότητα του δείγματος για την εφαρμογή της Παραγοντικής Ανάλυσης με τον έλεγχο της ικανοποίησης των προϋποθέσεων της μεθόδου. Ακολουθεί η παράλληλη μελέτη του δείγματος συμπεριλαμβάνοντας και μη στην επίλυση τις ακραίες τιμές (outliers) που εντοπίστηκαν. Καταλήγοντας στο συμπέρασμα ότι οι ακραίες τιμές δεν επηρεάζουν τα αποτελέσματα της μεθόδου, εφαρμόζουμε την Παραγοντική Ανάλυση με τη χρήση της μεθόδου των κυρίων συνιστωσών και αναφέρουμε αναλυτικά όλα τα βήματα μέχρι να καταλήξουμε στα τελικά συμπεράσματα μας. / The purpose of this paper is to apply Factor Analysis to our sample in order to detect and describe patterns concerning the consumption of 9 categories of alcoholic beverages by the Greek population. For the application of the method, we use the statistical program SPSS. The first chapter presents the available methods for solving this problem and the second one presents the chosen method, namely Factor Analysis. We specify the objective of the analysis, the design and the critical assumptions of the method, as well as the criteria for the evaluation of the results. In the third chapter we present the source of our data and how the sampling was performed. Furthermore, we identify the missing values and we apply Missing Values Analysis to determine their type. We also present our sample using descriptive statistics and then create and describe the final matrix which we analyze with Factor Analysis. In the fourth and last chapter we investigate the suitability of our sample for applying Factor Analysis. Subsequently, we perform a parallel study of our sample, both including and excluding the extreme values (outliers) that we identified. We conclude that the outliers do not affect the results of our method and then apply Factor Analysis using the extraction method of Principal Components, describing in detail all steps leading to our final conclusions.
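The extraction step described in the abstract (principal-components extraction with the number of factors set by the Kaiser criterion) can be sketched as follows; the beverage data are invented, and the thesis carries this out in SPSS rather than Python.

```python
# Hedged sketch of factor extraction by principal components: standardise the
# consumption variables, take the eigen-decomposition of their correlation
# matrix, and keep factors with eigenvalue > 1 (Kaiser criterion).
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 9))                 # 300 respondents x 9 beverage types
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise each variable

corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1.0))        # Kaiser criterion
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors, loadings.shape)
```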
