111

The Classification of In Vivo MR Spectra on Brain Abscesses Patients Using Independent Component Analysis

Liu, Cheng-Chih 04 September 2012 (has links)
Magnetic Resonance Imaging (MRI) can image in vivo tissue non-invasively. Proton MR spectroscopy uses the resonance principle to collect proton signals and transform them into spectra, providing information about the metabolites in a patient's brain so that doctors can observe pathological changes. Observing the metabolites of brain abscess patients is a crucial step in clinical diagnosis and treatment, and doctors use spectra acquired at different echo times (TE) to improve diagnostic accuracy. In our study, we use independent component analysis (ICA) to analyze MR spectroscopy data. After the analysis, the independent components represent the elements that compose the input data. We then use the projection method described in Ssu-Ying Lu's thesis to examine the relationship between the independent components and the patients' spectra. We also compare the spectra obtained with ICA and PCA, and we discuss several questions raised by the experiments: whether the data should be scale-normalized before input, why the results after scale normalization did not match expectations, and why the peaks in some independent components appear in ambiguous locations.
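As a rough illustration of the decomposition step described above, the following is a minimal numpy sketch of symmetric FastICA with the tanh nonlinearity, applied to two synthetic mixed signals standing in for measured spectra. The signal shapes, mixing matrix, and parameter values are illustrative assumptions, not the thesis's actual data or implementation.

```python
import numpy as np

def whiten(X):
    """Center and whiten data with signals as rows (n_signals x n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)
    return (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X

def fastica(X, n_iter=200, tol=1e-6, seed=0):
    """Symmetric FastICA with the tanh nonlinearity.

    Returns the estimated independent components (same shape as X)."""
    Z = whiten(X)
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(W @ Z)
        g_prime = 1.0 - g ** 2
        # fixed-point update, then symmetric decorrelation via SVD
        W_new = (g @ Z.T) / Z.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)
        W_new = U @ Vt
        done = np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1.0)) < tol
        W = W_new
        if done:
            break
    return W @ Z

# Two synthetic sources standing in for metabolite signals, mixed linearly.
t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(2 * t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # hypothetical mixing matrix
X = A @ S                                 # observed mixed "spectra"
Y = fastica(X)
```

Up to sign and ordering (the usual ICA ambiguities), each row of `Y` should recover one of the original sources.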
112

Applying Point-Based Principal Component Analysis on Orca Whistle Detection

Wang, Chiao-mei 23 July 2007 (has links)
For many undersea research scenarios, instruments must be deployed for more than one month, the basic time interval for many phenomena. With limited power supply and memory, management strategies are crucial to the success of data collection. For acoustic recording of undersea activities, in general, either a preprogrammed duty cycle is configured to log a partial time series, or the spectrogram of the signal is derived and stored, to use the available memory efficiently. To overcome this limitation, we propose an algorithm that classifies different sound types and stores only the sound data of interest. Features such as characteristic frequencies, large amplitudes at selected frequencies, or intensity thresholds are used to identify or classify different patterns. One main limitation of this type of approach is that the algorithm is generally range-dependent and, as a result, also sound-level-dependent; such algorithms are less robust to changes in the environment. On the other hand, one interesting observation is that when human beings look at a spectrogram, they can immediately tell the difference between two patterns. Even with no knowledge of the nature of the source, humans can still discern tiny dissimilarities and group patterns accordingly. This suggests that the classification can be treated as a recognition problem on the spectrogram. In this work, we propose a modification of Principal Component Analysis that generates feature points from moment invariants and sound-level variance to classify sounds of interest in the ocean. Among the many sound sources in the ocean, we focus on three categories of interest: rain, ships, and whales and dolphins. The sound data were recorded with the Passive Acoustic Listener developed by Nystuen at the Applied Physics Lab, University of Washington. From all the data, we manually identified twenty frames for each case and used them as the base training set.
Feeding several unknown clips into classification experiments, we find that point-based feature extraction is an effective way to describe whistle vocalizations, and we believe this algorithm will be useful for extracting features from noisy recordings of the calls of a wide variety of species.
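The moment-invariant features mentioned above can be illustrated with translation-invariant central moments computed directly from a spectrogram patch. This is a generic sketch of the idea, not the thesis's actual feature definitions; the toy "whistle contour" below is invented for illustration.

```python
import numpy as np

def central_moments(img, max_order=3):
    """Translation-invariant central moments of a 2-D array
    (e.g. a spectrogram patch), normalized for scale as in Hu's invariants."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    ybar = (ys * img).sum() / total
    xbar = (xs * img).sum() / total
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if p + q < 2 or p + q > max_order:
                continue
            mu = ((ys - ybar) ** p * (xs - xbar) ** q * img).sum()
            eta = mu / total ** (1 + (p + q) / 2.0)  # scale normalization
            feats.append(eta)
    return np.array(feats)

# A toy "whistle contour" in a spectrogram: a bright rectangular region.
spec = np.zeros((64, 64))
spec[20:30, 10:40] = 1.0
feats = central_moments(spec)
```

Because the moments are taken about the patch centroid, the same contour shifted in time or frequency yields the same feature vector, which is what makes such features attractive for range-independent pattern matching.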
113

The Determinants Of Financial Development In Turkey: A Principal Component Analysis

Boru, Mesrur 01 August 2009 (has links) (PDF)
This thesis investigates the determinants of financial development in Turkey. Principal Component Analysis (PCA) is employed in order to examine the main determinants of financial sector development and to shed light on the structure of the financial system in Turkey. Empirical studies of financial development suffer from a measurement problem. This study aims to remedy that problem by providing proxies that explain different aspects of financial development more accurately than the proxies used in the extant literature. Hence, the present study constitutes a strong basis for studies that rely on measuring financial development in Turkey.
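As a sketch of how PCA can condense several financial indicators into one composite measure, the following numpy fragment standardizes the indicators and takes the scores on the first principal component as an index. The synthetic indicator matrix is an assumption for illustration; the thesis's actual proxies are not reproduced here.

```python
import numpy as np

def pca_index(X):
    """First principal component of standardized indicators as a
    composite index; returns (scores, explained_variance_ratios)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    scores = Z @ Vt[0]   # projection onto the leading component
    return scores, explained

# Four hypothetical indicators driven by one latent "development" factor.
rng = np.random.default_rng(1)
factor = rng.standard_normal(200)
X = np.column_stack([factor + 0.1 * rng.standard_normal(200)
                     for _ in range(4)])
scores, explained = pca_index(X)
```

When the indicators share a single dominant factor, the first component absorbs most of the variance and its scores track the latent factor (up to sign, which PCA leaves arbitrary).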
114

A Contribution To Modern Data Reduction Techniques And Their Applications By Applied Mathematics And Statistical Learning

Sakarya, Hatice 01 January 2010 (has links) (PDF)
High-dimensional data arise in fields ranging from digital image processing, gene expression microarrays, and neuronal population activity to financial time series. Dimensionality reduction, extracting low-dimensional structure from high-dimensional data, is a key problem in areas such as information processing, machine learning, data mining, information retrieval, and pattern recognition, where a number of data reduction techniques are in use. In this thesis we give a survey of modern data reduction techniques, representing the state of the art in theory, methods, and applications, introduced in the language of mathematics. This requires special care on questions such as how to understand discrete structures as manifolds, how to identify their structure in preparation for dimension reduction, and how to handle the complexity of the algorithms. Special emphasis is placed on Principal Component Analysis, Locally Linear Embedding, and the Isomap algorithm. These algorithms have been studied by a research group from Vilnius, Lithuania, by Zeev Volkovich of the Software Engineering Department, ORT Braude College of Engineering, Karmiel, and by others. The main purpose of this study is to compare the results of the three algorithms, focusing on both the quality of the results and the runtime.
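One side of the comparison described above, results versus duration, can be sketched for the PCA baseline alone; LLE and Isomap require neighborhood-graph machinery beyond a short fragment. The data and target dimensions below are illustrative assumptions.

```python
import time
import numpy as np

def pca_reconstruction_error(X, k):
    """Project X onto its top-k principal components and return the
    mean squared reconstruction error."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xk = (Xc @ Vt[:k].T) @ Vt[:k]
    return np.mean((Xc - Xk) ** 2)

def compare_dims(X, dims):
    """Error/runtime table across target dimensions, in the spirit of a
    results-versus-duration comparison."""
    rows = []
    for k in dims:
        t0 = time.perf_counter()
        err = pca_reconstruction_error(X, k)
        rows.append((k, err, time.perf_counter() - t0))
    return rows

# Synthetic 10-dimensional data for the comparison harness.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 10))
table = compare_dims(X, [1, 3, 5, 10])
```

Reconstruction error is monotonically non-increasing in the number of retained components and vanishes at full rank, which gives a simple sanity check on any such harness.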
115

Functional data analysis: classification and regression

Lee, Ho-Jin 01 November 2005 (has links)
Functional data refer to data which consist of observed functions or curves evaluated at a finite subset of some interval. In this dissertation, we discuss statistical analysis, especially classification and regression, when data are available in functional form. Due to the nature of functional data, one considers function spaces when representing such data, and each functional observation is viewed as a realization generated by a random mechanism in those spaces. The classification procedure in this dissertation is based on dimension reduction of these spaces. One commonly used method is Functional Principal Component Analysis (Functional PCA), in which an eigendecomposition of the covariance function is employed to find the directions of highest variability of the data in the function space. The reduced space spanned by a few eigenfunctions is regarded as a space containing most of the features of the functional data. We also propose a functional regression model for scalar responses. The infinite dimensionality of the predictor space causes many problems, one of which is that there are infinitely many solutions. We restrict the space of the parameter function to Sobolev-Hilbert spaces and utilize the so-called ε-insensitive loss function. As a robust technique of function estimation, we present a way to find a function that deviates from the observed values by at most ε and at the same time is as smooth as possible.
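A minimal sketch of the Functional PCA step described above: curves observed on a common grid, eigendecomposition of the sample covariance, eigenfunctions ordered by eigenvalue. The sine and cosine modes in the demo are invented test data, not from the dissertation.

```python
import numpy as np

def functional_pca(curves, n_components=3):
    """Discretized Functional PCA: eigendecomposition of the sample
    covariance of curves observed on a common grid.

    Returns (mean curve, eigenfunctions, eigenvalues), eigenvalues
    in decreasing order."""
    mean = curves.mean(axis=0)
    centered = curves - mean
    cov = centered.T @ centered / (curves.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)          # ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return mean, vecs[:, order].T, vals[order]

# Curves generated from two dominant modes plus small noise.
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
curves = (rng.standard_normal((200, 1)) * np.sin(2 * np.pi * t)
          + rng.standard_normal((200, 1)) * np.cos(2 * np.pi * t)
          + 0.01 * rng.standard_normal((200, 50)))
mean, eigenfunctions, eigenvalues = functional_pca(curves)
```

With two genuine modes of variation, the first two eigenvalues dominate and the third drops to the noise floor, so the two-dimensional span of the leading eigenfunctions carries nearly all the features of the curves.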
116

Computational and experimental investigation of the enzymatic hydrolysis of cellulose

Bansal, Prabuddha 25 August 2011 (has links)
The enzymatic hydrolysis of cellulose to glucose by cellulases is one of the major steps in the conversion of lignocellulosic biomass to biofuel. This hydrolysis by cellulases, a heterogeneous reaction, currently suffers from some major limitations, most importantly a dramatic rate slowdown at high degrees of conversion in the case of crystalline cellulose. Various rate-limiting factors were investigated employing experimental as well as computational studies. Cellulose accessibility and the hydrolysable fraction of accessible substrate (a previously undefined and unreported quantity) were shown to decrease steadily with conversion, while cellulose reactivity, defined in terms of hydrolytic activity per amount of actively adsorbed cellulase, remained constant. Faster restart rates were observed on partially converted cellulose as compared to uninterrupted hydrolysis rates, supporting the presence of an enzyme clogging phenomenon. Cellulose crystallinity is a major substrate property affecting the rates, but its quantification has suffered from lack of consistency and accuracy. Using multivariate statistical analysis of X-ray data from cellulose, a new method to determine the degree of crystallinity was developed. Cel7A CBD is a promising target for protein engineering as cellulose pretreated with Cel7A CBDs exhibits enhanced hydrolysis rates resulting from a reduction in crystallinity. However, for Cel7A CBD, a high throughput assay is unlikely to be developed. In the absence of a high throughput assay (required for directed evolution) and extensive knowledge of the role of specific protein residues (required for rational protein design), the mutations need to be picked wisely, to avoid the generation of inactive variants. To tackle this issue, a method utilizing the underlying patterns in the sequences of a protein family has been developed.
117

Användarverifiering från webbkamera

Alajarva, Sami January 2007 (has links)
The work presented in this report concerns face recognition from webcams using principal component analysis together with feedforward artificial neural networks. The work improves the technique using filter-based methods of the kind also used in face detection. These filters are based on passing along redundant data from sub-regions of the face.
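The PCA-plus-feedforward pipeline described above can be loosely sketched as an eigenfaces projection followed by a similarity test. Here a cosine-similarity threshold stands in for the trained feedforward network, and the random 8x8 "faces" are purely synthetic assumptions for illustration.

```python
import numpy as np

def eigenfaces(train_imgs, k=5):
    """PCA basis ('eigenfaces') from flattened training images."""
    X = train_imgs.reshape(len(train_imgs), -1).astype(float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(img, mean, basis):
    """Coefficients of one image in eigenface space."""
    return basis @ (img.ravel().astype(float) - mean)

def verify(probe, enrolled, mean, basis, thresh=0.9):
    """Accept if the probe and enrolled images are close in eigenface
    space (cosine similarity); stands in for a trained feedforward net."""
    a = project(probe, mean, basis)
    b = project(enrolled, mean, basis)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cos >= thresh

# Two synthetic "identities" with noisy training copies of each.
rng = np.random.default_rng(3)
face_a = rng.standard_normal(64)
face_b = rng.standard_normal(64)
train = np.array([face_a + 0.05 * rng.standard_normal(64) for _ in range(10)]
                 + [face_b + 0.05 * rng.standard_normal(64) for _ in range(10)])
mean, basis = eigenfaces(train.reshape(20, 8, 8), k=2)
```

A noisy probe of identity A should be accepted against A's enrolled image and rejected against B's, since the leading principal component separates the two identity clusters.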
118

High-dimensional classification for brain decoding

Croteau, Nicole Samantha 26 August 2015 (has links)
Brain decoding involves the determination of a subject’s cognitive state or an associated stimulus from functional neuroimaging data measuring brain activity. In this setting the cognitive state is typically characterized by an element of a finite set, and the neuroimaging data comprise voluminous amounts of spatiotemporal data measuring some aspect of the neural signal. The associated statistical problem is one of classification from high-dimensional data. We explore the use of functional principal component analysis, mutual information networks, and persistent homology for examining the data through exploratory analysis and for constructing features characterizing the neural signal for brain decoding. We review each approach from this perspective, and we incorporate the features into a classifier based on symmetric multinomial logistic regression with elastic net regularization. The approaches are illustrated in an application where the task is to infer from brain activity measured with magnetoencephalography (MEG) the type of video stimulus shown to a subject.
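The classifier named above, multinomial logistic regression with elastic net regularization, can be sketched in numpy with plain subgradient descent. The hyperparameters and the two-blob test data are illustrative assumptions, not the study's settings.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    e = np.exp(Z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial_enet(X, y, n_classes, alpha=0.01, l1_ratio=0.5,
                         lr=0.1, n_iter=500):
    """Multinomial logistic regression with an elastic-net penalty,
    trained by (sub)gradient descent; a bare-bones stand-in for the
    regularized classifier described above."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]          # one-hot labels
    for _ in range(n_iter):
        P = softmax(X @ W)
        grad = X.T @ (P - Y) / n
        # elastic net: blend of L2 gradient and L1 subgradient
        grad += alpha * ((1 - l1_ratio) * W + l1_ratio * np.sign(W))
        W -= lr * grad
    return W

def predict(X, W):
    return np.argmax(X @ W, axis=1)

# Two well-separated synthetic classes standing in for decoded states.
rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((50, 2)) + [2, 2],
               rng.standard_normal((50, 2)) + [-2, -2]])
y = np.array([0] * 50 + [1] * 50)
W = fit_multinomial_enet(X, y, n_classes=2)
```

The elastic net's L1 term pushes uninformative weights toward zero, which matters when the feature vectors extracted from the neural signal are high-dimensional relative to the number of trials.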
119

Optimization of an array of peptidic indicator displacement assays for the discrimination of cabernet sauvignon wines

Chong, Sally 06 January 2011 (has links)
This research project describes the multiple-step lab trials conducted to optimize an array of ensembles, composed of synthesized peptides and PCV:Cu2+ complexes, for the differentiation of seven Cabernet Sauvignon wines with different tannin levels. The report also includes the methods used, and the data were interpreted by principal component analysis.
120

Computerized model to forecast low-cost housing demand in urban area in Malaysia using Artificial Neural Networks (ANN)

Zainun, Noor Y. B. January 2011 (has links)
The proportion of urban population to total population in Malaysia is forecast to increase steadily from 26% in 1965 to 70% in 2020. There is therefore a need to respond to the urbanization of Malaysia by providing affordable housing. The main aim of this study is to develop a model to forecast the demand for low-cost housing in urban areas. The study focuses on eight states in Peninsular Malaysia, as most of these states are among the areas predicted to achieve the highest urbanization levels in the country: Kedah, Penang, Perlis, Kelantan, Terengganu, Perak, Pahang and Johor. Monthly time-series data spanning six to eight years for nine indicators, including population growth, birth rate, child mortality rate, unemployment rate, household income rate, inflation rate, GDP, poverty rate and housing stocks, have been used to forecast the demand for low-cost housing using an Artificial Neural Network (ANN) approach. The data were collected from the Department of Malaysian Statistics, the Ministry of Housing and the Housing Department of the State Secretary. The Principal Component Analysis (PCA) method was adopted to analyze the data using the SPSS 18.0 package. The performance of the Neural Network is evaluated using R squared (R2), and the accuracy of the model is measured using the Mean Absolute Percentage Error (MAPE). Lastly, a user-friendly interface was developed using Visual Basic. From the results, it was found that the best Neural Network to forecast the demand for low-cost housing is 2-16-1 in Kedah, 2-15-1 in Pahang, 2-25-1 in Kelantan, 2-30-1 in Terengganu, 3-5-1 in Perlis, 3-7-1 in Pulau Pinang, 3-38-1 in Johor and 3-24-1 in Perak.
In conclusion, evaluation of the model through the MAPE value shows that the NN model forecasts low-cost housing demand very well in Pulau Pinang, Johor, Pahang and Kelantan, reasonably well in Kedah and Terengganu, but not accurately in Perlis and Perak due to the lack of data. The study also successfully developed a user-friendly interface to retrieve and view all the data easily.
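The two evaluation metrics named above have standard definitions, which a short numpy sketch makes concrete; the sample demand figures below are invented for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error: average of |error|/|actual|,
    expressed as a percentage (undefined when actual contains zeros)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def r_squared(actual, forecast):
    """Coefficient of determination: 1 minus the ratio of residual
    to total sum of squares."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    ss_res = np.sum((actual - forecast) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical monthly housing-demand figures versus model forecasts.
demand = np.array([120.0, 150.0, 90.0, 110.0])
forecast = np.array([112.0, 158.0, 95.0, 104.0])
error_pct = mape(demand, forecast)
fit = r_squared(demand, forecast)
```

A MAPE near zero and an R2 near one indicate an accurate forecast; the thesis's "very good" versus "good" distinction presumably corresponds to thresholds on these values.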
