  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

A Vision-Based Approach For Unsupervised Modeling Of Signs Embedded In Continuous Sentences

Nayak, Sunita 07 July 2005 (has links)
The common practice in sign language recognition is first to construct individual sign models, in terms of discrete state transitions, mostly represented using Hidden Markov Models, from manually isolated sign samples, and then to use them to recognize signs in continuous sentences. In this thesis we use a continuous state space model, where the states are based on purely image-based features, without the use of special gloves. We also present an unsupervised approach to both extract and learn models for continuous basic units of signs, which we term signemes, from continuous sentences. Given a set of sentences with a common sign, we can automatically learn the model for the part of the sign, or signeme, that is least affected by coarticulation effects. We tested our idea using the publicly available Boston SignStream dataset by building signeme models of 18 signs. We test the quality of the models by considering how well we can localize the sign in a new sentence. We also present the concept of smooth continuous curve based models formed using functional splines and curve registration, and illustrate this idea using 16 signs.
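The extraction idea can be pictured with a much cruder stand-in than the continuous state space model used in the thesis: given per-frame image-feature trajectories for several sentences sharing a sign, look for the fixed-length window that recurs most consistently across them. The sketch below is only illustrative; the window length, the Euclidean frame distance, and the `common_window` helper are assumptions, not the thesis's actual signeme learning procedure.

```python
import numpy as np

def common_window(sentences, win=10):
    """Crude stand-in for signeme extraction: find the fixed-length window
    in the first sentence whose best matches in the other sentences have
    the smallest total distance. `sentences` is a list of (T_i, d) arrays
    of per-frame image features; `win` is the window length in frames."""
    def windows(x):
        return np.stack([x[t:t + win] for t in range(len(x) - win + 1)])

    ref_wins = windows(sentences[0])                      # candidate signemes
    best_cost, best_start = np.inf, 0
    for s, cand in enumerate(ref_wins):
        cost = 0.0
        for other in sentences[1:]:
            d = np.linalg.norm(windows(other) - cand, axis=(1, 2))
            cost += d.min()                               # best match in that sentence
        if cost < best_cost:
            best_cost, best_start = cost, s
    return best_start, best_cost

# toy usage: 3 sentences of random 20-dim features, 60-80 frames each
rng = np.random.default_rng(0)
sents = [rng.normal(size=(n, 20)) for n in (60, 70, 80)]
start, cost = common_window(sents, win=10)
print(start, cost)
```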
112

Modelling Distance Functions Induced by Face Recognition Algorithms

Chaudhari, Soumee 09 November 2004 (has links)
Face recognition has in the past few years become a very active area of research in the fields of computer vision, image processing, and cognitive psychology, and has spawned algorithms of varying complexity. Principal component analysis (PCA) is a popular basis for face recognition algorithms and has often been used to benchmark other algorithms in identification and verification scenarios. In this thesis, however, we analyze face recognition algorithms at a deeper level. The objective is to model the distances output by any face recognition algorithm as a function of the input images. We achieve this by creating an affine eigen space from the PCA space such that it approximates the results of the face recognition algorithm under consideration as closely as possible. Holistic template matching algorithms such as Linear Discriminant Analysis (LDA) and the Bayesian Intrapersonal/Extrapersonal Classifier (BIC), as well as local feature based algorithms such as Elastic Bunch Graph Matching (EBGM) and a commercial face recognition algorithm, are selected for our experiments. We experiment on two data sets, the FERET data set and the Notre Dame data set. The FERET data set consists of images of subjects with variation in both time and expression; the Notre Dame data set consists of images of subjects with variation in time. We train our affine approximation algorithm on 25 subjects and test with 300 subjects from the FERET data set and 415 subjects from the Notre Dame data set. We also analyze the effect of the distance metric used by the face recognition algorithm on the accuracy of the approximation. We study the quality of the approximation in the context of recognition for the identification and verification scenarios, characterized by cumulative match score (CMC) curves and receiver operating characteristic (ROC) curves, respectively. Our studies indicate that both the holistic template matching algorithms and the feature based algorithms can be well approximated. We also find that the affine approximation training can be generalized across covariates. For the data with time variation, the rank order of approximation performance is BIC, LDA, EBGM, and commercial; for the data with expression variation, the rank order is LDA, BIC, commercial, and EBGM. Approximations of PCA with distance measures other than Euclidean also performed very well: PCA+Euclidean distance is best approximated, followed by PCA+MahL1, PCA+MahCosine, and PCA+Covariance.
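As a rough illustration of the modelling idea, the sketch below fits a linear map on PCA coordinates so that Euclidean distances in the transformed space match a target distance matrix produced by some face recognition algorithm. The least-squares objective, the L-BFGS optimizer, and the `fit_affine_approximation` helper are assumptions made for the example, not the thesis's actual estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def fit_affine_approximation(X_pca, D_target, k=5):
    """Fit a linear map W on k PCA coordinates so that Euclidean distances
    between transformed faces approximate the distances D_target produced
    by some face recognition algorithm (the translation part of an affine
    map drops out of distances, so only the linear part is estimated).
    X_pca: (n, k) PCA coordinates; D_target: condensed distance vector."""
    X = X_pca[:, :k]

    def stress(w):
        W = w.reshape(k, k)
        d = pdist(X @ W.T)               # distances in the transformed space
        return np.sum((d - D_target) ** 2)

    w0 = np.eye(k).ravel()               # start from the identity map
    res = minimize(stress, w0, method="L-BFGS-B")
    return res.x.reshape(k, k), res.fun

# toy usage: pretend the "algorithm" weights PCA axes unevenly
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
D = pdist(X * np.array([3.0, 1.0, 1.0, 0.5, 0.1]))   # target distances
W, err = fit_affine_approximation(X, D, k=5)
print(err)
```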
113

An Indepth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on Principal Components Analysis (PCA) have often been used as a baseline. The objective of this thesis is to develop a strategy to estimate the best affine transformation which, when applied to the eigen space of the PCA face recognition algorithm, can approximate the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by any given face recognition algorithm. The strategy helps in comparing how close a face recognition algorithm is to the PCA-based face recognition algorithm. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian intrapersonal/extrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were reached based on the results produced by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in one, both the face recognition and the affine approximation algorithms were trained on the same data set; in the other, different data sets were used to train the two algorithms. Gross error measures like the average RMS error and the Stress-1 error were used to directly compare the raw similarity scores. The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the difference between the CMS and ROC curves generated by the test face recognition algorithms and the affine approximation strategy is not statistically significant. The differences were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second training scenario.
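The gross error measures mentioned above are easy to state concretely. The snippet below computes an average RMS error and a Stress-1 error between the similarity matrix of a face recognition algorithm and that of its affine approximation; the exact normalisation used in the thesis is not given in the abstract, so the standard MDS-style Stress-1 formula is assumed.

```python
import numpy as np

def stress1(D_alg, D_approx):
    """Stress-1 between two distance/similarity matrices: a scale-aware
    gross error measure, sqrt(sum((d - d_hat)^2) / sum(d^2)) over the
    upper triangle (assuming symmetric matrices with zero diagonal)."""
    iu = np.triu_indices_from(D_alg, k=1)
    d, d_hat = D_alg[iu], D_approx[iu]
    return np.sqrt(np.sum((d - d_hat) ** 2) / np.sum(d ** 2))

def avg_rms(D_alg, D_approx):
    """Average RMS error over the same pairs."""
    iu = np.triu_indices_from(D_alg, k=1)
    return np.sqrt(np.mean((D_alg[iu] - D_approx[iu]) ** 2))
```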
114

Application of supervised and unsupervised learning to analysis of the arterial pressure pulse

Walsh, Andrew Michael, Graduate school of biomedical engineering, UNSW January 2006 (has links)
This thesis presents an investigation of statistical analytical methods applied to the analysis of the shape of the arterial pressure waveform. The arterial pulse is analysed by a selection of both supervised and unsupervised methods of learning. Supervised learning methods are generally better known as regression; unsupervised learning methods seek patterns in data without the specification of a target variable. The theoretical relationship between arterial pressure and wave shape is first investigated by study of a transmission line model of the arterial tree. A meta-database of pulse waveforms obtained by the SphygmoCor device is then analysed by the unsupervised learning technique of Self Organising Maps (SOM). The map patterns indicate that the observed arterial pressures affect the wave shape in a way similar to that predicted by the theoretical model. A database of continuous arterial pressure obtained by catheter line during sleep is used to derive supervised models that enable estimation of arterial pressures based on the measured wave shapes. Independent component analysis (ICA) is also used in a supervised learning methodology to show the theoretical plausibility of separating the pressure signals from unwanted noise components. The accuracy and repeatability of the SphygmoCor device are measured and discussed. Alternative regression models are introduced that improve on existing models in the estimation of central cardiovascular parameters from peripheral arterial wave shapes. Results of this investigation show that, from the information in the wave shape, it is possible in theory to estimate the continuous underlying pressures within the artery to a degree of accuracy acceptable to the Association for the Advancement of Medical Instrumentation. This could facilitate a new role for non-invasive sphygmographic devices: to be used not only for feature estimation but as alternatives to invasive arterial pressure sensors in the measurement of continuous blood pressure.
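As a hedged illustration of the ICA step, the sketch below unmixes a crude pulse-shaped component from drift and sensor noise given several simultaneously observed channels, using scikit-learn's FastICA. The synthetic signals and the three-channel setup are assumptions for the example, not the thesis's data or full pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
pulse = np.abs(np.sin(2 * np.pi * 1.2 * t)) ** 3          # crude arterial pulse shape
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)                  # respiratory-like drift
noise = 0.2 * rng.normal(size=t.size)                      # sensor noise

S = np.c_[pulse, drift, noise]                             # true sources
A = rng.normal(size=(3, 3))                                # unknown mixing
X = S @ A.T                                                # observed channels

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)                               # recovered components
# Components come back in arbitrary order and scale; the pulse-like one can
# be picked out, e.g., by its correlation with a template beat.
```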
115

Dimensionality Reduction Using Factor Analysis

Khosla, Nitin, n/a January 2006 (has links)
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to deal with high dimensionality is to first reduce the dimensionality of the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, the dimensionality reduction process becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation is simpler with fewer variables. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Some of the techniques that can be used for dimensionality reduction are Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation: the expectation step is conditioned upon the observations, and the maximization step then provides a new estimate of the parameters. This research compares Factor Analysis (based on the expectation-maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM based) and Local Principal Component Analysis using Vector Quantization.
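A minimal sketch of the comparison, assuming synthetic data with a known low-dimensional factor structure: iteratively fitted factor analysis and PCA are each used to reduce 40 observed features to 5 components with scikit-learn. The dimensions and the generated data are placeholders, not the data sets used in the thesis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 5))                          # 5 underlying factors
loading = rng.normal(size=(5, 40))
X = latent @ loading + 0.3 * rng.normal(size=(500, 40))     # 40 observed features

fa = FactorAnalysis(n_components=5, max_iter=1000)          # iterative ML fit
Z_fa = fa.fit_transform(X)

pca = PCA(n_components=5)
Z_pca = pca.fit_transform(X)

print(Z_fa.shape, Z_pca.shape)                              # both (500, 5)
```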
116

Human Promoter Recognition Based on Principal Component Analysis

Li, Xiaomeng January 2008 (has links)
Master of Engineering / This thesis presents an innovative human promoter recognition model, HPR-PCA. Principal component analysis (PCA) is applied to select context features from DNA sequences, and the prediction network is built with an artificial neural network (ANN). A thorough literature review of the relevant topics in the promoter prediction field is also provided. As the main technique of HPR-PCA, the application of PCA to feature selection is developed first. In order to find informative and discriminative features for effective classification, PCA is applied to the different n-mer promoter and exon combined frequency matrices, and principal components (PCs) of each matrix are generated to construct the new feature space. ANN-based classifiers are used to test the discriminability of each feature space. Finally, the 3- and 5-mer feature matrix is selected as the context feature in this model. Two proposed schemes of the HPR-PCA model are discussed and the implementations of the sub-modules in each scheme are introduced. The context features selected by PCA are used to build three promoter and non-promoter classifiers, and CpG-island modules are embedded into the models in different ways. In the comparison, Scheme I obtains better prediction results on two test sets, so it is adopted as the HPR-PCA model for further evaluation. Three existing promoter prediction systems are used to compare with HPR-PCA on three test sets, including the chromosome 22 sequence. The performance of HPR-PCA is outstanding compared to the other systems.
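A hedged sketch of the pipeline described above: build 3-mer and 5-mer frequency features for each sequence, project them onto principal components, and train a small neural-network classifier. The random sequences, the 20 retained components, and the `kmer_features` helper are assumptions for illustration; the thesis's actual feature matrices, CpG-island modules, and network architecture are not reproduced here.

```python
import numpy as np
from itertools import product
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def kmer_features(seq, k):
    """Frequency vector over all 4**k k-mers (the n-mer context features)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        j = index.get(seq[i:i + k])
        if j is not None:
            v[j] += 1
    return v / max(len(seq) - k + 1, 1)

rng = np.random.default_rng(4)

def random_seq(n):
    return "".join(rng.choice(list("ACGT"), size=n))

seqs = [random_seq(250) for _ in range(200)]                # placeholder sequences
labels = rng.integers(0, 2, size=200)                       # 1 = promoter (placeholder)

X = np.array([np.r_[kmer_features(s, 3), kmer_features(s, 5)] for s in seqs])
Z = PCA(n_components=20).fit_transform(X)                   # principal-component features
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(Z, labels)
print(clf.score(Z, labels))
```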
117

Denoising of Infrared Images Using Independent Component Analysis

Björling, Robin January 2005 (has links)
The purpose of this thesis is to evaluate the applicability of Independent Component Analysis (ICA) for noise reduction of infrared images. The focus lies on reducing the additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images; the result is compared to an adaptive Wiener filter. A novel method, also based on ICA, for reducing FPN is developed. An independent component analysis is made on images from an infrared sensor and typical fixed pattern noise components are manually identified. The identified components are used to quickly and effectively reduce the FPN in images taken by the specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as on real images and the performance is measured.
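Two pieces of the approach can be sketched compactly: removing an identified fixed-pattern-noise component from a new frame by projecting it out, and the soft-thresholding nonlinearity commonly used in sparse code shrinkage under a Laplacian prior. Both the projection-based removal and the threshold formula are assumptions chosen for the illustration, not the exact procedures of the thesis.

```python
import numpy as np

def remove_fixed_pattern(image, fpn_components):
    """Hedged sketch of the FPN-reduction idea: given basis images identified
    (e.g., from an ICA of many frames from the same sensor) as fixed-pattern
    -noise components, estimate their contribution to a new frame by least
    squares and subtract it."""
    x = image.ravel()
    B = np.stack([c.ravel() for c in fpn_components], axis=1)   # (pixels, k)
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
    return (x - B @ coeffs).reshape(image.shape)

def sparse_code_shrink(coeffs, noise_std, b):
    """Soft-thresholding shrinkage of sparse-code coefficients, assuming a
    Laplacian prior with scale b and additive Gaussian noise of std noise_std.
    The threshold form is an assumption for this sketch."""
    thr = np.sqrt(2) * noise_std ** 2 / b
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# toy usage: a frame contaminated by one hypothetical column-gradient FPN pattern
frame = np.random.default_rng(8).normal(size=(64, 64))
stripe = np.tile(np.linspace(-1, 1, 64), (64, 1))
clean = remove_fixed_pattern(frame + 0.5 * stripe, [stripe])
```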
118

Improved effort estimation of software projects based on metrics

Andersson, Veronika, Sjöstedt, Hanna January 2005 (has links)
Saab Ericsson Space AB develops products for space at a predetermined price. Since the price is fixed, it is crucial to have a reliable prediction model to estimate the effort needed to develop the product. Software effort estimation is difficult in general, and at the software department this is a problem.

By analyzing metrics collected from former projects, different prediction models are developed to estimate the number of person-hours a software project will require. Models for predicting the effort before a project begins are developed first; only a few variables are known at this stage of a project. The models developed are compared to a current model used at the company. Linear regression models improve the estimate error by nine percentage points, and nonlinear regression models improve the result even more. The model used today is also calibrated to improve its predictions, and a principal component regression model is developed as well. A model to improve the estimate during an ongoing project is also developed; this is a new approach, and comparison with the first estimate is the only evaluation.

The result is an improved prediction model. Several models perform better than the one used today. In the discussion, positive and negative aspects of the models are debated, leading to the choice of a model recommended for future use.
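The model families mentioned above are straightforward to sketch. The example below fits an ordinary linear regression and a principal component regression to invented project metrics to predict person-hours; the metrics, the data, and the two retained components are placeholders, not Saab Ericsson Space data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 4))          # e.g. size, #requirements, #interfaces, reuse
hours = 200 + X @ np.array([80.0, 40.0, 25.0, -15.0]) + 20 * rng.normal(size=30)

ols = LinearRegression().fit(X, hours)                            # plain regression
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, hours)  # PCR

print(ols.score(X, hours), pcr.score(X, hours))
```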
119

Do Self-Sustainable MFIs Help Alleviate Relative Poverty?

Stenbäcken, Rasmus January 2006 (has links)
The subject of this paper is microfinance and the question: do self-sustainable MFIs alleviate poverty?

An MFI is a microfinance institution: a regular bank or an NGO that has transformed into a licensed financial institution focused on microenterprises. To answer the question, data were gathered in Ecuador, South America, a continent with a large number of self-sustainable MFIs. Ecuador was selected as the country to be studied because it has an intermediate level of market penetration in the microfinance sector. Interviews were used to determine relative poverty before and after access to microcredit, and the data retrieved in the interviews were used to determine the impact of microcredit on different aspects of relative poverty using the difference-in-difference method.

Significant differences are found between old and new clients, as well as for the change over time, but no significant results are found for the difference in change over time for clients compared to non-clients. The author argues that the insignificant result can be due to a too small sample size, disturbances in the sample selection, or this specific kind of institution having little or no effect on its current clients' economic development.
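The difference-in-difference estimate can be obtained from a single regression with an interaction term, as sketched below with statsmodels on invented data; the variable names and effect sizes are placeholders, not the Ecuadorian survey results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 200
df = pd.DataFrame({
    "client": rng.integers(0, 2, size=n),        # 1 = MFI client
    "post": rng.integers(0, 2, size=n),          # 1 = after access to credit
})
df["income"] = (100 + 10 * df["client"] + 5 * df["post"]
                + 8 * df["client"] * df["post"] + rng.normal(0, 10, size=n))

# The interaction coefficient is the difference-in-difference estimate
model = smf.ols("income ~ client * post", data=df).fit()
print(model.params["client:post"])
```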
120

Contrast properties of entropic criteria for blind source separation: a unifying framework based on information-theoretic inequalities

Vrins, Frédéric D. 02 March 2007 (has links)
In recent years, Independent Component Analysis (ICA) has become a fundamental tool in adaptive signal and data processing, especially in the field of Blind Source Separation (BSS). Even though there exist some methods for which an algebraic solution to the ICA problem may be found, iterative methods are very popular. Among them is the class of information-theoretic approaches, relying on entropies. The associated objective functions are maximized using optimization schemes, gradient-ascent techniques in particular. Two major issues in this field are the following: 1) does the global maximum point of these entropic objectives correspond to a satisfactory solution of BSS? and 2) since gradient techniques are used, optimization algorithms in fact find local maximum points, so what is the meaning of these local optima from the BSS point of view? Even though there are some partial answers to these questions in the literature, most of them are based on simulation and conjectures; formal developments are often lacking. This thesis aims at filling this lack, and at providing intuitive justifications too. We focus the analysis on Rényi's entropy-based contrast functions. Our results show that, generally speaking, Rényi's entropy is not a suitable contrast function for BSS, even though we recover the well-known result that Shannon's entropy-based objectives are contrast functions. We also show that range-based contrast functions can be built under some conditions on the sources. The BSS problem is stated in the first chapter and viewed from the information-theoretic angle. The two next chapters address the above questions specifically. Finally, the last chapter deals with range-based ICA, the only "entropy-based" contrast which, based on the enclosed results, is also a discriminant contrast function, in the sense that it is theoretically free of spurious local optima. Geometrical interpretations and surprising examples are given. The interest of this approach is confirmed by testing the algorithm on the MLSP 2006 data analysis competition benchmark; the proposed method outperforms previously obtained results on large-scale and noisy mixture samples obtained through ill-conditioned mixing matrices.
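A small illustration of a range-based contrast, under the assumption of two bounded (uniform) sources: after whitening, the remaining demixing step is a rotation, and the sum of output ranges is swept over the rotation angle and minimized. The grid search and the uniform toy sources are assumptions for the example, not the estimators analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
S = rng.uniform(-1, 1, size=(2, 5000))              # bounded (uniform) sources
A = np.array([[0.8, 0.6], [-0.3, 0.9]])             # mixing matrix
X = A @ S

# whiten the mixtures so the remaining demixing step is a pure rotation
cov = np.cov(X)
d, E = np.linalg.eigh(cov)
Z = np.diag(d ** -0.5) @ E.T @ X

def range_contrast(theta):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Y = R @ Z
    return np.sum(Y.max(axis=1) - Y.min(axis=1))    # sum of output ranges

thetas = np.linspace(0, np.pi / 2, 181)
best = thetas[np.argmin([range_contrast(t) for t in thetas])]
print(best)                                         # angle that best separates the sources
```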
