  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

An Indepth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on Principal Component Analysis (PCA) have often been used as a baseline. The objective of this thesis is to develop a strategy for estimating the best affine transformation which, when applied to the eigenspace of the PCA face recognition algorithm, approximates the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by the target face recognition algorithm. The strategy thus helps measure how close a face recognition algorithm is to the PCA-based algorithm. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the affine approximation strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian interpersonal and intrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were reached by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in the first, both the face recognition algorithm and the affine approximation algorithm were trained on the same data set; in the second, different data sets were used to train the two algorithms. Gross error measures such as the average RMS error and the Stress-1 error were used to compare the raw similarity scores directly. The histogram of the differences between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMS and ROC curves. McNemar's test showed that the differences between the CMS and ROC curves generated by the test face recognition algorithms and the affine approximation strategy are not statistically significant. The differences were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second scenario.
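As a minimal illustration (not the thesis code) of the two gross error measures mentioned above, the snippet below computes the average RMS error and Kruskal's Stress-1 between a similarity (distance) matrix produced by a test algorithm and the matrix produced by its affine approximation, here on toy random data.

```python
import numpy as np

def rms_error(d_target: np.ndarray, d_approx: np.ndarray) -> float:
    """Average root-mean-square difference over the upper triangle."""
    iu = np.triu_indices_from(d_target, k=1)
    return float(np.sqrt(np.mean((d_target[iu] - d_approx[iu]) ** 2)))

def stress1(d_target: np.ndarray, d_approx: np.ndarray) -> float:
    """Kruskal's Stress-1: normalized residual between two distance matrices."""
    iu = np.triu_indices_from(d_target, k=1)
    num = np.sum((d_target[iu] - d_approx[iu]) ** 2)
    den = np.sum(d_target[iu] ** 2)
    return float(np.sqrt(num / den))

# Toy usage with synthetic symmetric distance matrices.
rng = np.random.default_rng(0)
faces = rng.random((20, 5))
d_test = np.linalg.norm(faces[:, None] - faces[None, :], axis=-1)   # "test algorithm"
d_affine = d_test + rng.normal(scale=0.05, size=d_test.shape)       # "affine approximation"
d_affine = (d_affine + d_affine.T) / 2
print(rms_error(d_test, d_affine), stress1(d_test, d_affine))
```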
92

Dimensionality Reduction Using Factor Analysis

Khosla, Nitin, n/a January 2006 (has links)
In many pattern recognition applications, a large number of features is extracted in order to ensure accurate classification of unknown classes. One way to deal with high dimensionality is to first reduce the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, dimensionality reduction becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is simpler, which is a further motivation for dimensionality reduction. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Techniques that can be used for dimensionality reduction include Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of the parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observations: the expectation step is conditioned upon the observations, and the maximization step then provides a new estimate of the parameters. This research compares Factor Analysis (based on the expectation-maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using Vector Quantization.
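A hedged sketch of the comparison described above, using scikit-learn on synthetic data rather than the data sets studied in the thesis: maximum-likelihood Factor Analysis versus PCA for reducing a 20-dimensional data set generated from three underlying factors.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))                         # 3 underlying factors
loadings = rng.normal(size=(3, 20))
x = latent @ loadings + 0.3 * rng.normal(size=(500, 20))   # 20 observed variables

fa = FactorAnalysis(n_components=3).fit(x)    # ML fit of the latent-factor model
pca = PCA(n_components=3).fit(x)              # variance-maximizing linear projection

print("FA factor scores shape:", fa.transform(x).shape)
print("FA average log-likelihood:", fa.score(x))
print("PCA explained variance ratio:", pca.explained_variance_ratio_)
```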
93

Human Promoter Recognition Based on Principal Component Analysis

Li, Xiaomeng January 2008 (has links)
Master of Engineering / This thesis presents an innovative human promoter recognition model, HPR-PCA. Principal component analysis (PCA) is applied to select context features from DNA sequences, and the prediction network is built with an artificial neural network (ANN). A thorough literature review of the relevant topics in the promoter prediction field is also provided. As the main technique of HPR-PCA, the application of PCA to feature selection is developed first. In order to find informative and discriminative features for effective classification, PCA is applied to the different n-mer promoter and exon combined frequency matrices, and the principal components (PCs) of each matrix are used to construct the new feature space. ANN-based classifiers are used to test the discriminability of each feature space. Finally, the 3- and 5-mer feature matrix is selected as the context feature in this model. Two proposed schemes of the HPR-PCA model are discussed and the implementations of the sub-modules in each scheme are introduced. The context features selected by PCA are used to build three promoter and non-promoter classifiers. CpG-island modules are embedded into the models in different ways. In the comparison, Scheme I obtains better prediction results on two test sets, so it is adopted as the HPR-PCA model for further evaluation. Three existing promoter prediction systems are compared to HPR-PCA on three test sets, including the chromosome 22 sequence. The performance of HPR-PCA is outstanding compared to the other systems.
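A simplified sketch of the feature construction described above, assuming toy sequences and 3-mers only (the thesis uses combined 3- and 5-mer frequency matrices of real promoter and exon data): count n-mer frequencies and project the resulting matrix onto its principal components to form the new feature space.

```python
from itertools import product
import numpy as np
from sklearn.decomposition import PCA

def kmer_frequencies(seq: str, k: int = 3) -> np.ndarray:
    """Normalized k-mer frequency vector over the ACGT alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:                      # skip ambiguous bases
            counts[index[kmer]] += 1
    return counts / max(counts.sum(), 1)

# Hypothetical toy sequences, for illustration only.
promoters = ["TATAAAGGCGCGTATA", "GCGCGCTATAATGGCC"]
exons     = ["ATGGCCAAGTTTGACC", "ATGAAACCCGGGTTTA"]

freq_matrix = np.array([kmer_frequencies(s) for s in promoters + exons])
pcs = PCA(n_components=2).fit_transform(freq_matrix)   # reduced feature space for the ANN
print(pcs)
```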
94

Improved effort estimation of software projects based on metrics

Andersson, Veronika, Sjöstedt, Hanna January 2005 (has links)
Saab Ericsson Space AB develops products for space at a predetermined price. Since the price is fixed, it is crucial to have a reliable prediction model to estimate the effort needed to develop the product. Software effort estimation is difficult in general, and it is a problem at the software department.

By analyzing metrics collected from former projects, different prediction models are developed to estimate the number of person hours a software project will require. Models for predicting the effort before a project begins are developed first; only a few variables are known at this stage of a project. The developed models are compared to the model currently used at the company. Linear regression models improve the estimation error by nine percentage points, and nonlinear regression models improve the result even more. The model used today is also calibrated to improve its predictions, and a principal component regression model is developed as well. In addition, a model to improve the estimate during an ongoing project is developed. This is a new approach, and comparison with the first estimate is the only evaluation.

The result is an improved prediction model. Several models perform better than the one used today. In the discussion, positive and negative aspects of the models are debated, leading to the choice of a model recommended for future use.
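A hedged sketch of the principal component regression idea mentioned above, on synthetic placeholder data rather than the company's project metrics: regress person hours on the leading principal components of the (typically correlated) project metrics instead of on the raw metrics.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
metrics = rng.normal(size=(40, 6))   # e.g. size, complexity, churn, ... (placeholder)
hours = 200 + metrics @ np.array([30, 20, 10, 5, 0, 0]) + rng.normal(scale=15, size=40)

# Principal component regression: PCA for decorrelation, then ordinary least squares.
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(metrics, hours)
print("Predicted effort (person hours):", pcr.predict(metrics[:3]).round())
```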
95

Do Self-Sustainable MFI:s help alleviate relative poverty?

Stenbäcken, Rasmus January 2006 (has links)
The subject of this paper is microfinance and the question: do self-sustainable MFIs alleviate poverty?

An MFI is a microfinance institution: a regular bank or an NGO that has transformed into a licensed financial institution focused on microenterprises. To answer the question, data was gathered in Ecuador, South America. South America has a large number of self-sustainable MFIs, and Ecuador was selected as the country to be studied because it has an intermediate level of market penetration in the microfinance sector. To determine relative poverty before and after access to microcredit, interviews were used. The data retrieved in the interviews was used to determine the impact of microcredit on different aspects of relative poverty using the difference-in-differences method.

Significant differences are found between old and new clients as well as for the change over time, but no significant results are found for the difference in the change over time for clients compared to non-clients. The author argues that the insignificant result can be due to a too small sample size, disturbances in the sample selection, or to this specific kind of institution having little or no effect on the clients' economic development.
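A minimal sketch of the difference-in-differences comparison described above, using synthetic numbers rather than the Ecuadorian survey data: the estimate is the change over time for clients minus the change over time for non-clients.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative income-proxy samples, before and after, for clients and non-clients.
client_before  = rng.normal(100, 10, 50)
client_after   = rng.normal(112, 10, 50)
control_before = rng.normal(100, 10, 50)
control_after  = rng.normal(105, 10, 50)

did = (client_after.mean() - client_before.mean()) \
    - (control_after.mean() - control_before.mean())
print(f"Difference-in-differences estimate: {did:.1f}")
```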
96

Correlation Analysis for the Influence of Air Pollutants and Meteorological Factors on Low Atmospheric Visibility in the Taipei Basin

Li, Jian-jhang 07 September 2007 (has links)
This study aims to investigate the influence of air pollutant concentrations and meteorological factors on atmospheric visibility in the Taipei Basin. First, we collected air quality data measured by the ambient air quality monitoring stations of the EPA (Environmental Protection Administration) and the meteorological factors monitored by the Tamsui and Taipei meteorological stations, grouped according to three observation directions. We then analyzed the data by PCA (principal component analysis) to determine the main factors affecting atmospheric visibility under low-visibility conditions. In order to understand the spatial and temporal distribution of atmospheric visibility, we collected visibility data from the Taipei meteorological observation stations for the past twenty-two years (1984-2005); these data showed that atmospheric visibility increased gradually. A seasonal variation in visibility was also observed: the best season was autumn (10.7 km) and the worst was spring (7.5 km). Furthermore, according to the monthly statistics, the visibility trends in the Taipei Basin can be separated into three typical periods: a low-visibility period (January to May), a transitional period (June to September), and a high-visibility period (October to December). The average atmospheric visibilities observed in the Tamsui, Songshan, and Sindian directions were 10.66 km, 9.54 km and 8.44 km, respectively; in general, the visibility in the Tamsui direction was slightly higher than in the other two directions. The results showed that atmospheric visibility was influenced not only by air pollutant levels and meteorological factors but also by the local topography of the Taipei Basin, and that the visibility recorded in the Tamsui and Songshan observation directions was better. Four intensive observations of atmospheric visibility were conducted during March 28-April 1, July 4-8, September 19-23, and November 14-18 of 2006. The results showed that the visibilities in the Tamsui direction were generally higher than in the other two directions, and that the visibilities observed in the afternoon were generally higher than those in the morning. Results obtained from the principal component analysis showed that atmospheric visibility in the Taipei Basin was mainly influenced by PM10, NOx and CO, indicating that mobile sources were the main cause of low visibility in the Taipei Basin. In addition, the Tamsui region was affected by PM10 and SO2 more than the Songshan and Sindian regions, being influenced by neighboring industries and power plants. Among the meteorological factors, wind speed and temperature have the greater influence on atmospheric visibility, whereas the relationship between visibility and relative humidity was somewhat irregular. The analysis of the spatial distribution of air pollutants showed that low visibility is not necessarily caused only by high air pollutant concentrations within the region; it may also be caused by a rise of air pollutant concentrations in the transition region.
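A hedged sketch of the analysis described above, using random placeholder data rather than the EPA monitoring records: standardize the pollutant and meteorological variables, fit PCA, and inspect which variables dominate the leading components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

variables = ["PM10", "NOx", "CO", "SO2", "wind_speed", "temperature", "RH"]
rng = np.random.default_rng(3)
x = rng.normal(size=(365, len(variables)))   # one row per day (placeholder values)

pca = PCA(n_components=3).fit(StandardScaler().fit_transform(x))
for i, component in enumerate(pca.components_, start=1):
    # Variables with the largest absolute loadings dominate this component.
    top = sorted(zip(variables, component), key=lambda t: abs(t[1]), reverse=True)[:3]
    print(f"PC{i} dominated by:", top)
```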
97

3D Modeling of Indoor Environments

Dahlin, Johan January 2013 (has links)
With the aid of modern sensors it is possible to create models of buildings. These sensors typically generate 3D point clouds, and in order to increase interpretability and usability, these point clouds are often translated into 3D models.

In this thesis a way of translating a 3D point cloud into a 3D model is presented. The basic functionality is implemented using Matlab. The geometric model consists of floors, walls and ceilings. In addition, doors and windows are automatically identified and integrated into the model. The resulting model also has an explicit representation of the topology between entities of the model. The topology is represented as a graph, and to do this GraphML is used. The graph is opened in a graph editing program called yEd.

The result is a 3D model that can be plotted in Matlab and a graph describing the connectivity between entities. The GraphML file is automatically generated in Matlab. An interface between Matlab and yEd allows the user to choose which rooms should be plotted.
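A small sketch of the topology representation described above, using Python and networkx rather than the thesis's Matlab code: rooms become nodes, connections become edges, and the graph is exported as GraphML, which can be opened in a graph editor such as yEd.

```python
import networkx as nx

# Hypothetical room topology: nodes are model entities, edges are connections.
topology = nx.Graph()
topology.add_nodes_from(["kitchen", "hallway", "office"], entity="room")
topology.add_edge("kitchen", "hallway", connection="door")
topology.add_edge("hallway", "office", connection="door")

nx.write_graphml(topology, "indoor_topology.graphml")   # file readable by yEd
```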
98

Production and fractionation of antioxidant peptides from soy protein isolate using sequential membrane ultrafiltration and nanofiltration

Ranamukhaarachchi, Sahan January 2012 (has links)
Antioxidants are molecules capable of stabilizing and preventing oxidation. Certain peptides in protein hydrolysates have shown antioxidant capacity, which is obtained once they are liberated from the native protein structure. Soy protein isolates (SPI) were enzymatically hydrolyzed by pepsin and pancreatin mixtures. The soy protein hydrolysates (SPH) were fractionated with sequential ultrafiltration (UF) and nanofiltration (NF) membrane steps. Heat pre-treatment of SPI at 95 degrees Celsius for 5 min prior to enzymatic hydrolysis was investigated for its effect on peptide distribution and antioxidant capacity. SPH were subjected to UF with a 10 kDa molecular weight cut-off (MWCO) polysulfone membrane. UF permeate fractions (below 10 kDa) were fractionated by NF with a thin-film composite membrane (2.5 kDa MWCO) at pH 4 and 8. Similar peptide content and antioxidant capacity (α = 0.05) were obtained in control and pre-heated SPH when comparing the respective UF and NF permeate and retentate fractions produced. FCR antioxidant capacities of the SPH fractions were significantly lower than their ORAC antioxidant capacities, and the distribution among the UF and NF fractions was generally different. Most UF and NF fractions displayed higher antioxidant capacities than the crude SPI hydrolysates, showing the importance of molecular weight for the antioxidant capacity of peptides. The permeate fractions produced by NF at pH 8 displayed the highest antioxidant capacity, expressed in terms of Trolox equivalents (TE) per total solids (TS): 5562 μmol TE/g TS for control SPH, and 5187 μmol TE/g TS for pre-heated SPH. Given the improvement in the antioxidant capacity of peptides by NF at pH 8, the potential for NF as a viable industrial fractionation process was demonstrated. Principal component analysis (PCA) of fluorescence excitation-emission matrix (EEM) data for the UF and NF peptide fractions, followed by multi-linear regression analysis, was assessed for its potential to monitor and identify the contributions of SPH during membrane fractionation to ORAC and FCR, two in vitro antioxidant capacity assays. Two statistically significant principal components (PCs) were obtained for the UF and NF peptide fractions. Multi-linear regression models (MLRM) were developed to estimate their fluorescence- and PCA-captured ORAC (ORAC-FPCA) and FCR (FCR-FPCA) antioxidant capacities. The ORAC-FPCA and FCR-FPCA antioxidant capacities for NF samples displayed strong linear relationships at different pH conditions (R-squared > 0.99). These relationships are believed to reflect the individual and relative combined contributions of tryptophan and tyrosine residues present in the SPH fractions to ORAC and FCR antioxidant capacities. The proposed method therefore provides a tool for assessing fundamental parameters of the antioxidant capacities captured by the ORAC and FCR assays.
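A hedged sketch of the chemometric step described above, on synthetic spectra rather than the thesis data: unfold each fluorescence excitation-emission matrix into a vector, extract principal components, and regress the measured antioxidant capacity on the PC scores with a multi-linear model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
eem = rng.random((24, 50, 60))      # 24 fractions, 50 excitation x 60 emission (placeholder)
orac = rng.random(24) * 5000        # placeholder ORAC values (umol TE/g TS)

# Unfold the EEMs, take two PCs, and fit a multi-linear regression on the scores.
scores = PCA(n_components=2).fit_transform(eem.reshape(24, -1))
model = LinearRegression().fit(scores, orac)
print("R^2 of the fluorescence-PCA regression:", model.score(scores, orac))
```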
99

Interactions Of Water And Sediment Phosphorus In Lake Eymir

Pilevneli, Tolga 01 February 2013 (has links) (PDF)
A detailed study was carried out in Lake Eymir, a shallow eutrophic lake, investigating the phosphorus concentrations in the water and the bottom sediment. Water depth, Secchi depth, TSS, sediment-soluble total phosphorus, sediment-soluble PO4-P, Chl-a, TKN, NH4-N, NO2-N, NO3-N, alkalinity, temperature, pH, conductivity, dissolved oxygen, turbidity and PAR were monitored for 21 months, and Principal Component Analysis (PCA) was applied to identify the trend of phosphorus concentration in the water column. Results indicated that total phosphorus concentrations in the water column and in the sediment at the lake bottom are more susceptible to changes caused by variations in other water quality parameters than the average, surface and mid-depth values. The correlations observed between P and the other parameters were highest in the Bottom-3 data set. In order to model sediment-soluble total phosphorus in Lake Eymir, chlorophyll-a, NH3, total phosphorus, PO4-P, temperature, conductivity, pH, turbidity, ΔT and dissolved oxygen were defined as effective parameters. Linear regression models were more successful in predicting sediment-soluble phosphorus concentrations than non-linear ones. Turbidity is a good tracer for total phosphorus concentrations in Lake Eymir. Temperature is seasonally effective on phosphorus concentrations and may create stratified water in summer; stratification causes phosphorus to build up in the bottom water layer. Particle size distribution results show that the area of sampling point 1 has different characteristics from the other sampling locations, since it is located at the inlet. The exchange of phosphorus from water to sediment is mostly completed within the first 7-8 hours; on average, 30% of the exchange is completed within an hour. Although the sediment layer in the lake is a phosphorus source, it has clearly not yet reached its phosphorus binding capacity. The adsorption was found to follow a pseudo-second-order model, with a coefficient of determination greater than 0.9909 at all sampling points. The sediment phosphorus content was fractionated into NH4Cl-P, BD-P, NaOH-P and HCl-P in order to identify the permanent and bioavailable parts. The fractionation results show that even if the soluble concentrations are low, they are high enough to cause eutrophication problems.
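A minimal sketch of the linearized pseudo-second-order fit mentioned above, with illustrative numbers rather than the Lake Eymir measurements: regressing t/q_t on t gives 1/q_e as the slope and 1/(k·q_e²) as the intercept.

```python
import numpy as np

t = np.array([0.5, 1, 2, 4, 8, 12, 24])              # contact time, hours (illustrative)
qt = np.array([0.9, 1.4, 1.9, 2.3, 2.6, 2.7, 2.8])   # sorbed phosphorus (illustrative units)

# Linearized pseudo-second-order model: t/qt = 1/(k*qe**2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1 / slope
k = 1 / (intercept * qe ** 2)
print(f"Equilibrium capacity q_e = {qe:.2f}, rate constant k = {k:.3f}")
```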
100

FlexSADRA: Flexible Structural Alignment using a Dimensionality Reduction Approach

Hui, Shirley January 2005 (has links)
A frequently studied problem in Structural Biology is determining the degree of similarity between two protein structures. The most common solution is to perform a three-dimensional structural alignment of the two structures. Rigid structural alignment algorithms have been developed in the past to accomplish this, but they treat the protein molecules as immutable structures. Since protein structures can bend and flex, rigid algorithms do not yield accurate results, and as a result flexible structural alignment algorithms have been developed. The problem with these algorithms is that the protein structures are represented using thousands of atomic coordinate variables, which results in a great computational burden due to the large number of degrees of freedom required to account for the flexibility. Past research in dimensionality reduction has shown that a linear technique called Principal Component Analysis (PCA) is well suited to reducing such high-dimensional representations. This thesis introduces a new flexible structural alignment algorithm called FlexSADRA, which uses PCA to perform flexible structural alignments. Test results show that FlexSADRA determines better alignments than rigid structural alignment algorithms. Unlike existing rigid and flexible algorithms, FlexSADRA addresses the problem in a significantly lower-dimensional problem space and assesses not only the structural fit but also the structural feasibility of the final alignment.
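A hedged sketch of the core idea, on random coordinates rather than FlexSADRA itself: use PCA to replace thousands of atomic coordinate variables with a handful of collective degrees of freedom before searching for a flexible alignment.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# 100 hypothetical conformations of a structure with 500 atoms (x, y, z each).
conformations = rng.normal(size=(100, 500 * 3))

pca = PCA(n_components=10).fit(conformations)
reduced = pca.transform(conformations)        # 100 x 10 instead of 100 x 1500
print("Reduced shape:", reduced.shape)
print("Variance captured:", pca.explained_variance_ratio_.sum())
```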
