  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Biplots based on principal surfaces

Ganey, Raeesa 28 April 2020 (has links)
Principal surfaces are smooth two-dimensional surfaces that pass through the middle of a p-dimensional data set. They minimise the distance from the data points and provide a nonlinear summary of the data. The surfaces are nonparametric and their shape is suggested by the data. A surface is formed by an iterative procedure which starts with a linear summary, typically a principal component plane. Each successive iteration is a local average of the p-dimensional points, where an average is based on a projection of a point onto the nonlinear surface of the previous iteration. Biplots extend the ordinary scatterplot by providing for more than three variables. When the differences between data points are measured using a Euclidean-embeddable dissimilarity function, observations and the associated variables can be displayed on a nonlinear biplot. A nonlinear biplot is predictive if information on variables is added in such a way that it allows the values of the variables to be estimated for points in the biplot. Prediction trajectories, which tend to be nonlinear, are created on the biplot to allow information about variables to be estimated. The goal is to extend nonlinear biplot methodology to principal surfaces. The ultimate emphasis is on high-dimensional data, where the nonlinear biplot based on a principal surface allows for visualisation of samples, variable trajectories and predictive sets of contour lines. The proposed biplot provides more accurate predictions, with the additional feature of visualising the extent of nonlinearity that exists in the data.
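As a rough illustration of the iterative procedure described above (not the author's code; the Gaussian kernel smoother, the bandwidth and the function names are our own assumptions), the following Python sketch fits a toy principal surface by alternating the local-average and projection steps from an initial principal component plane:

```python
import numpy as np

def fit_principal_surface(X, n_iter=10, bandwidth=0.5):
    """Toy principal-surface fit: alternate local averaging and projection."""
    # Centre the data; the surface passes through the "middle" of the cloud.
    Xc = X - X.mean(axis=0)
    # Initial linear summary: scores on the first two principal components.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:2].T                        # n x 2 parameterisation
    for _ in range(n_iter):
        # Local-average step: the surface point at each parameter value is
        # a kernel-weighted average of all observations.
        d2 = ((T[:, None, :] - T[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-d2 / (2.0 * bandwidth ** 2))
        W /= W.sum(axis=1, keepdims=True)
        S = W @ Xc                           # n x p fitted surface points
        # Projection step: re-assign each observation the parameter of its
        # nearest point on the current fitted surface.
        dist = ((Xc[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)
        T = T[dist.argmin(axis=1)]
    return T, S
```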
52

Defective pixel correction and restoration in staring remote sensor focal plane arrays

Ferro, Andrew F. January 2005 (has links)
No description available.
53

Scheduling optimal maintenance times for a system based on component reliabilities

Rao, Naresh Krishna 04 May 2006 (has links)
This dissertation extends the work done on single-component maintenance planning to a multi-component series system. An attempt is made to develop a function which represents the expected cost rate (cost per unit time) of any maintenance plan. Three increasingly complex cases are considered. The first and simplest case assumes that the component is restored to an "as good as new" condition after a maintenance operation. The second case assumes that an occasional imperfect maintenance operation may occur; during such a period, the failure rate of the component is higher, so the likelihood of a failure is greater until the component is properly maintained in a subsequent maintenance operation. The final case assumes that there is some deterioration in component behavior even after a maintenance operation, so that it eventually becomes necessary to replace the system. Models for all three cases are developed. Based on these models, cost rate functions are constructed. The cost rate functions reflect the cost rates of maintaining a component at a particular time. In addition, the savings obtained through the simultaneous maintenance of components are also accounted for in the cost rate functions. A series of approximations is made in order to make the cost rate functions mathematically tractable. Finally, an algorithmic procedure for optimizing the cost rate functions for all three cases is given. / Ph. D.
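For intuition, here is a minimal sketch of a single-component cost rate for the simplest "as good as new" case, using the textbook age-replacement form with assumed Weibull lifetimes and assumed costs; the dissertation's multi-component functions and approximations are more elaborate:

```python
import numpy as np

def cost_rate(T, beta=2.5, eta=1000.0, c_p=50.0, c_f=500.0):
    """Classic age-replacement cost rate (textbook form, not the
    dissertation's model): replace preventively at age T (cost c_p) or
    on failure (cost c_f), with Weibull(beta, eta) lifetimes.
    C(T) = [c_p * R(T) + c_f * F(T)] / E[min(lifetime, T)]"""
    R = lambda t: np.exp(-(t / eta) ** beta)      # survival function
    u = np.linspace(1e-6, T, 2000)
    expected_cycle = np.trapz(R(u), u)            # E[min(lifetime, T)]
    return (c_p * R(T) + c_f * (1 - R(T))) / expected_cycle

# Crude numeric search for the optimal preventive-maintenance age.
ages = np.linspace(50, 2000, 400)
best = ages[np.argmin([cost_rate(T) for T in ages])]
print(f"optimal replacement age ~ {best:.0f} time units")
```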
54

Principal components based techniques for hyperspectral image data

Fountanas, Leonidas 12 1900 (has links)
Approved for public release; distribution is unlimited. / PC and MNF transforms are two widely used methods for applications such as dimensionality reduction, data compression and noise reduction. In this thesis, an in-depth study of these two methods is conducted in order to estimate their performance on hyperspectral imagery. First, the PCA and MNF methods are examined for their effectiveness in image enhancement. The various methods are also studied to evaluate their ability to determine the intrinsic dimension of the data. Results indicate that, in most cases, the scree test gives the best measure of the number of retained components, as compared to the cumulative variance, the Kaiser, and the CSD methods. The applicability of PCA and MNF for image restoration is then considered using two types of noise, Gaussian and periodic. Hyperspectral images are corrupted by noise using a combination of ENVI and MATLAB software, while the performance metrics used for evaluation of the retrieval algorithms are visual interpretation, rms correlation coefficient spectral comparison, and classification. In Gaussian noise, the retrieved images using inverse transforms indicate that the basic PC and MNF transforms perform comparably. In periodic noise, the MNF transform shows less sensitivity to variations in the number of lines and the gain factor. / Lieutenant, Hellenic Navy
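A small numpy sketch of the component-retention rules compared above (cumulative variance, Kaiser, and a crude automation of the scree test); the CSD method and the exact thresholds used in the thesis are not reproduced here:

```python
import numpy as np

def retained_components(X, var_target=0.95):
    """X: pixels x bands matrix of hyperspectral data (hypothetical input).
    Returns the number of components kept by three common rules."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2 / (len(X) - 1)
    # Cumulative-variance rule: smallest k explaining var_target of variance.
    frac = np.cumsum(eigvals) / eigvals.sum()
    k_cum = int(np.searchsorted(frac, var_target) + 1)
    # Kaiser rule: keep components whose eigenvalue exceeds the average.
    k_kaiser = int((eigvals > eigvals.mean()).sum())
    # Crude scree test: elbow taken as the largest ratio between
    # consecutive eigenvalues (a rough stand-in for the visual test).
    k_scree = int(np.argmax(eigvals[:-1] / eigvals[1:]) + 1)
    return k_cum, k_kaiser, k_scree
```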
55

Risk Measurement of Mortgage-Backed Security Portfolios via Principal Components and Regression Analyses

Motyka, Matt 29 April 2003 (has links)
Risk measurement of mortgage-backed security portfolios presents a very involved task for analysts and portfolio managers of such investments. A strong predictive econometric model that can account for the variability of these securities in the future would prove a very useful tool for anyone in this financial market sector due to the difficulty of evaluating the risk of mortgage cash flows and prepayment options at the same time. This project presents two linear regression methods that attempt to explain the risk within these portfolios. The first study involves a principal components analysis on absolute changes in market data to form new sets of uncorrelated variables based on the variability of original data. These principal components then serve as the predictor variables in a principal components regression, where the response variables are the day-to-day changes in the net asset values of three agency mortgage-backed security mutual funds. The independence of each principal component would allow an analyst to reduce the number of observable sets in capturing the risk of these portfolios of fixed income instruments. The second idea revolves around a simple ordinary least squares regression of the three mortgage funds on the sets of the changes in original daily, weekly and monthly variables. While the correlation among such predictor variables may be very high, the simplicity of utilizing observable market variables is a clear advantage. The goal of either method was to capture the largest amount of variance in the mortgage-backed portfolios through these econometric models. The main purpose was to reduce the residual variance to less than 10 percent, or to produce at least 90 percent explanatory power of the original fund variances. The remaining risk could then be attributed to the nonlinear dependence in the changes in these net asset values on the explanatory variables. The primary cause of this nonlinearity is due to the prepayment put option inherent in these securities.
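A compact sketch of the first method, principal components regression, under stated assumptions (dX holds absolute changes in the market variables and dy the day-to-day changes in a fund's net asset value; both are hypothetical inputs):

```python
import numpy as np

def pcr_r2(dX, dy, k):
    """Principal components regression: regress fund NAV changes (dy) on
    the first k principal components of market-variable changes (dX).
    Returns R^2, the fraction of fund variance the components explain."""
    Z = dX - dX.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:k].T                     # uncorrelated predictors
    design = np.column_stack([np.ones(len(scores)), scores])
    beta, *_ = np.linalg.lstsq(design, dy, rcond=None)
    resid = dy - design @ beta
    return 1 - resid.var() / dy.var()
```

Increasing k until the returned value reaches 0.90 mirrors the 90 percent explanatory-power target described above.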
56

Multivariate Quality Control Using Loss-Scaled Principal Components

Murphy, Terrence Edward 24 November 2004 (has links)
We consider a principal components based decomposition of the expected value of the multivariate quadratic loss function (MQL). The principal components are formed by scaling the original data by the contents of the loss constant matrix, which defines the economic penalty associated with specific variables being off their desired target values. We demonstrate the extent to which a subset of these "loss-scaled principal components" (LSPC) accounts for the two components of expected MQL, namely the trace-covariance term and the off-target vector product. We employ the LSPC to solve a robust design problem of full and reduced dimensionality with deterministic models that approximate the true solution and demonstrate comparable results in less computational time. We also employ the LSPC to construct a test statistic called loss-scaled T^2 for multivariate statistical process control. We show for one case how the proposed test statistic has faster detection than Hotelling's T^2 of shifts in location for variables with high weighting in the MQL. In addition we introduce a principal component based decomposition of Hotelling's T^2 to diagnose the variables responsible for driving the location and/or dispersion of a subgroup of multivariate observations out of statistical control. We demonstrate the accuracy of this diagnostic technique on a data set from the literature and show its potential for diagnosing the loss-scaled T^2 statistic as well.
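A hedged sketch of the two statistics involved: the ordinary Hotelling T^2, and a loss-scaled variant obtained here by pre-scaling the data with a symmetric square root of the loss matrix L so that off-target shifts in heavily penalised variables inflate the statistic sooner. The thesis's exact construction of the loss-scaled T^2 may differ:

```python
import numpy as np

def hotelling_t2(X, mu, S):
    """Ordinary Hotelling T^2 for each row of X, given the in-control
    mean vector mu and covariance matrix S."""
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, np.linalg.inv(S), D)

def loss_scaled_t2(X, mu, S, L):
    """Illustrative loss-scaled T^2: scale data by a symmetric square
    root of the (symmetric, positive semi-definite) loss matrix L."""
    w, V = np.linalg.eigh(L)
    L_half = V @ np.diag(np.sqrt(w)) @ V.T    # symmetric sqrt of L
    # Covariance of the scaled data is L_half @ S @ L_half.
    return hotelling_t2(X @ L_half, mu @ L_half, L_half @ S @ L_half)
```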
57

Study of air pollutants in the city of Patras using the method of principal components analysis

Σούφλα, Ευαγγελία 04 September 2013 (has links)
Study of air pollutants in the city of Patras for the year 2010 using the method of principal components analysis; the results are elaborated using the statistical package Minitab16.
58

Evaluating the performance and water chemistry dynamics of passive systems treating municipal wastewater and landfill leachate

Wallace, Jack 29 October 2013 (has links)
This thesis consists of work conducted in two separate studies evaluating the performance of passive systems for treating wastewater effluents. The first study involved the characterization of three wastewater stabilization ponds (WSPs) providing secondary and tertiary treatment for municipal wastewater at a facility in Amherstview, Ontario, Canada. Since 2003, the WSPs have experienced excessive algae growth and high pH levels during the summer months. A full range of parameters (pH, chlorophyll-a (chl-a), dissolved oxygen (DO), temperature, alkalinity, oxidation-reduction potential (ORP), conductivity, nutrient species, and organic matter measures) was monitored for the system, and the chemical dynamics in the three WSPs were assessed through multivariate statistical analysis. Supplementary continuous monitoring of pH, chl-a, and DO was performed to identify time-series dependencies. The analyses showed strong correlations between chl-a and sunlight, temperature, organic matter, and nutrients, and strong time-dependent correlations between chl-a and DO and between chl-a and pH. Additionally, algae samples were collected and analyzed using metagenomics methods to determine the distribution and speciation of algae growth in the WSPs. A strong shift from the dominance of a major class of green algae, chlorophyceae, in the first WSP to the dominance of land plants, embryophyta (including aquatic macrophytes), in the third WSP was observed and corresponded to field observations during the study period. The second study involved the evaluation of the performance and chemical dynamics of a hybrid-passive system treating leachate from a municipal solid waste (MSW) landfill in North Bay, Ontario, Canada. Over a three-year period, a full range of parameters (pH, DO, temperature, alkalinity, ORP, conductivity, sulfate, chloride, phenols, solids fractions, nutrient species, organic matter measures, and metals) was monitored bi-weekly, and the dataset was analyzed with time-series and multivariate statistical techniques. Regression analyses identified eight parameters that were most frequently retained for modelling the five criteria parameters (alkalinity, ammonia, chemical oxygen demand, iron, and heavy metals) at a statistically significant level (p < 0.05): conductivity, DO, nitrite, organic nitrogen, ORP, pH, sulfate, and total volatile solids. / Thesis (Master, Civil Engineering) -- Queen's University, 2013-10-27 05:29:20.564
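As an illustration of the kind of significance-based retention described in the regression analyses, here is a small statsmodels sketch of backward elimination at p < 0.05 (the thesis's actual regression workflow is not specified here; X is assumed to be a pandas DataFrame of candidate water-chemistry parameters and y one criterion parameter):

```python
import statsmodels.api as sm

def backward_select(X, y, alpha=0.05):
    """Backward elimination: repeatedly drop the least significant
    regressor until all remaining ones satisfy p <= alpha."""
    kept = list(X.columns)
    while True:
        fit = sm.OLS(y, sm.add_constant(X[kept])).fit()
        pvals = fit.pvalues.drop('const')
        worst = pvals.idxmax()                # least significant parameter
        if pvals[worst] <= alpha or len(kept) == 1:
            return kept, fit
        kept.remove(worst)
```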
59

Evaluating the Use of Ridge Regression and Principal Components in Propensity Score Estimators under Multicollinearity

Gripencrantz, Sarah January 2014 (has links)
Multicollinearity can be present in the propensity score model when estimating average treatment effects (ATEs). In this thesis, logistic ridge regression (LRR) and principal components logistic regression (PCLR) are evaluated as alternatives to ML estimation of the propensity score model. ATE estimators based on weighting (IPW), matching and stratification are assessed in a Monte Carlo simulation study to evaluate LRR and PCLR. Further, an empirical example of using LRR and PCLR on real data under multicollinearity is provided. Results from the simulation study reveal that under multicollinearity and in small samples, the use of LRR reduces bias in the matching estimator compared to ML. In large samples, PCLR yields the lowest bias and typically the lowest MSE across all estimators. PCLR matched ML in bias under IPW estimation and in some cases had lower bias. The stratification estimator was heavily biased compared to matching and IPW, but both bias and MSE improved when PCLR was applied, and in some cases under LRR. In the empirical example, the specification with PCLR was usually the most sensitive when a strongly correlated covariate was included in the propensity score model.
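A minimal sketch of the LRR-based IPW estimator discussed above, using scikit-learn's L2-penalised logistic regression for the propensity score (the parameter names and the clipping guard are our own choices; for PCLR one would replace X with its leading principal component scores before fitting):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate_ridge(X, t, y, C=1.0):
    """IPW estimate of the ATE with a ridge-penalised propensity model.
    X: covariates, t: binary treatment, y: outcome.
    C is sklearn's inverse penalty strength (smaller = more shrinkage)."""
    ps = (LogisticRegression(penalty='l2', C=C, max_iter=1000)
          .fit(X, t).predict_proba(X)[:, 1])
    ps = np.clip(ps, 1e-3, 1 - 1e-3)          # guard against extreme weights
    return np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
```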
60

Detecting and Correcting Batch Effects in High-Throughput Genomic Experiments

Reese, Sarah 19 April 2013 (has links)
Batch effects are probe-specific systematic variation between groups of samples (batches) resulting from experimental features that are not of biological interest. Principal components analysis (PCA) is commonly used as a visual tool to determine whether batch effects exist after applying a global normalization method. However, PCA yields linear combinations of the variables that contribute maximum variance and thus will not necessarily detect batch effects if they are not the largest source of variability in the data. We present an extension of principal components analysis, called guided PCA (gPCA), to quantify the existence of batch effects, and we describe a test statistic that uses gPCA to test whether a batch effect exists. We apply the proposed test statistic to simulated data and to two copy number variation case studies: the first consisted of 614 samples from a breast cancer family study using Illumina Human 660 bead-chip arrays, whereas the second consisted of 703 samples from a family blood pressure study that used Affymetrix SNP Array 6.0. We demonstrate that our statistic has good statistical properties and is able to identify significant batch effects in both case studies. We further compare existing batch effect correction methods and apply gPCA to test their effectiveness. We conclude that gPCA is an effective way to identify whether batch effects exist in high-throughput genomic data. Although our examples pertain to copy number data, gPCA is general and can be used on other data types as well.
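A hedged sketch of a guided-PCA style batch-effect test: compare the variance of the data along the first batch-guided direction (from an SVD of B'Y, with B the batch indicator matrix) to the variance along the first unguided principal component, calibrating by permuting batch labels. Details may differ from the published gPCA statistic:

```python
import numpy as np

def gpca_delta(Y, batch, n_perm=1000, seed=0):
    """Y: samples x features data matrix; batch: per-sample batch labels.
    Returns the observed variance ratio and a permutation p-value."""
    rng = np.random.default_rng(seed)
    Yc = Y - Y.mean(axis=0)
    # One-hot batch indicator matrix B (samples x batches).
    B = (batch[:, None] == np.unique(batch)[None, :]).astype(float)
    # Variance along the first unguided principal component.
    v_u = np.linalg.svd(Yc)[2][0]
    var_u = np.var(Yc @ v_u)

    def guided_var(Bm):
        # First right singular vector of B'Y: the batch-guided loading.
        v_g = np.linalg.svd(Bm.T @ Yc)[2][0]
        return np.var(Yc @ v_g)

    d_obs = guided_var(B) / var_u
    perms = [guided_var(rng.permutation(B)) / var_u for _ in range(n_perm)]
    p_val = np.mean([d >= d_obs for d in perms])
    return d_obs, p_val
```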
