  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Pyrite oxidation in coal-bearing strata : controls on in-situ oxidation as a precursor of acid mine drainage formation

Roy, Samita January 2002 (has links)
Pyrite oxidation in coal-bearing strata is recognised as the main precursor of Acid Mine Drainage (AMD) generation. Predicting AMD quality and quantity, whether for remediation or for proposed extraction, requires assessment of the interactions between oxidising fluids and pyrite, and between oxidation products and groundwater. Current predictive methods and models rarely account for individual mineral weathering rates or their distribution within the rock. Better constraints on the importance of such variables in controlling rock leachate are required to provide more reliable predictions of AMD quality. In this study, assumptions made during modelling of AMD generation were tested, including the homogeneity of the rock's chemical and physical characteristics and the controls on the rate of embedded pyrite oxidation and oxidation-front ingress. The main conclusions of this work are:
• The ingress of a pyrite oxidation front into coal-bearing strata depends on the dominant oxidant transport mechanism, pyrite morphology and rock pore-size distribution.
• Although pyrite oxidation rates predicted from rate laws agree with those derived from experimental weathering of coal-bearing strata, uncertainty in the surface area of framboids produces at least an order-of-magnitude error in predicted rates.
• Pyrite oxidation products in partly unsaturated rock are removed to solution via a cycle of dissolution and precipitation at the water-rock interface. Dissolution occurs mainly along rock cleavage planes, as does diffusion of dissolved oxidant.
• Significant variance in whole-seam S and pyrite wt % existed over a 30 m exposure of an analysed coal seam. Assuming a seam-mean pyrite wt % to predict net acid-producing potential for coal and shale seams may therefore be unsuitable, at this scale at least.
• Seasonal variation in AMD discharge chemistry indicates that base flow is not necessarily representative of extreme poor-quality leachate. Summer and winter storms following relatively dry periods tended to release the greatest volume of pyrite oxidation products.
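The acid-producing-potential conclusion can be illustrated with standard acid-base accounting, which is not specific to this thesis: maximum potential acidity (MPA) is commonly taken as about 30.6 kg of H2SO4 per tonne of rock per wt % sulfur, and subtracting the acid-neutralising capacity (ANC) gives the net acid producing potential (NAPP). A minimal sketch with made-up sample values, showing how two samples from the same seam can straddle the acid-forming threshold:

```python
def maximum_potential_acidity(sulfur_wt_pct):
    # Standard acid-base accounting: each wt% of pyritic S can generate
    # roughly 30.6 kg of H2SO4 per tonne of rock if fully oxidised.
    return 30.6 * sulfur_wt_pct

def net_acid_producing_potential(sulfur_wt_pct, anc):
    # NAPP = MPA - ANC, both expressed in kg H2SO4 equivalent per tonne.
    # Positive NAPP flags potentially acid-forming material.
    return maximum_potential_acidity(sulfur_wt_pct) - anc

# Hypothetical samples from one seam: a seam-mean S wt % would mask the
# fact that one sample is acid-forming and the other is not.
print(round(net_acid_producing_potential(2.0, 10.0), 1))   # acid-forming
print(round(net_acid_producing_potential(0.3, 10.0), 1))   # net-neutralising
```

The sulfur and ANC values are illustrative placeholders, not data from the thesis.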
2

Variable selection in principal component analysis : using measures of multivariate association.

Sithole, Moses M. January 1992 (has links)
This thesis is concerned with the problem of selecting important variables in Principal Component Analysis (PCA) in such a way that the selected subsets of variables retain, as far as possible, the overall multivariate structure of the complete data. Throughout the thesis, the criteria used to meet this requirement are collectively referred to as measures of Multivariate Association (MVA). Most currently available selection methods may lead to inappropriate subsets, while Krzanowski's (1987) M2-Procrustes criterion successfully identifies structure-bearing variables, particularly when groups are present in the data. Our major objective, however, is to use the idea of multivariate association to select subsets of the original variables that preserve any (unknown) multivariate structure present in the data. The first part of the thesis is devoted to the choice of the number of components (say, k) to be used in the variable selection process. Various methods for choosing k that exist in the literature are described, and comparative studies of these methods are reviewed. Currently available methods based exclusively on the eigenvalues of the covariance or correlation matrix, and those based on cross-validation, are unsatisfactory. Hence, we propose a new technique for choosing k based on the bootstrap methodology. A full comparative study of this new technique and the cross-validatory choice of k proposed by Eastment and Krzanowski (1982) is then carried out using data simulated in Monte Carlo experiments. The remainder of the thesis focuses on variable selection in PCA using measures of MVA. Various existing selection methods are described, and comparative studies of these methods available in the literature are reviewed. New methods for selecting variables, based on measures of MVA, are then proposed and compared among themselves as well as with the M2-Procrustes criterion. This comparison is based on Monte Carlo simulation, and the behaviour of the selection methods is assessed in terms of the performance of the selected variables. In summary, the Monte Carlo results suggest that the proposed bootstrap technique for choosing k generally performs better than the cross-validatory technique of Eastment and Krzanowski (1982). Similarly, the Monte Carlo comparison of the variable selection methods shows that the proposed methods are comparable with, or better than, Krzanowski's (1987) M2-Procrustes criterion. These conclusions are mainly based on data simulated by means of Monte Carlo experiments; however, the techniques for choosing k and the various variable selection techniques are also evaluated on some real data sets. Some comments on alternative approaches and suggestions for possible extensions conclude the thesis.
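A bootstrap choice of k can be sketched as follows. The abstract does not state the thesis's exact criterion, so this illustration uses a bootstrapped Kaiser-style rule (retain components whose correlation-matrix eigenvalue stays above 1 across most bootstrap resamples) purely as one plausible variant:

```python
import numpy as np

def bootstrap_num_components(X, n_boot=200, seed=0):
    """Choose k as the number of correlation-matrix eigenvalues whose
    bootstrap 5th percentile exceeds 1 (a bootstrapped Kaiser rule)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    eigs = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample rows with replacement
        R = np.corrcoef(X[idx], rowvar=False)   # correlation matrix of resample
        eigs[b] = np.sort(np.linalg.eigvalsh(R))[::-1]
    lower = np.percentile(eigs, 5, axis=0)      # lower bound per eigenvalue
    return int(np.sum(lower > 1.0))

# Synthetic data: one strong common factor among the first 3 of 5 variables.
rng = np.random.default_rng(1)
z = rng.normal(size=(300, 1))
X = np.hstack([z + 0.3 * rng.normal(size=(300, 3)),
               rng.normal(size=(300, 2))])
print(bootstrap_num_components(X))
```

The cut-off of 1 and the 5th percentile are assumptions of this sketch, not values taken from the thesis.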
3

Quasi-objective Nonlinear Principal Component Analysis and applications to the atmosphere

Lu, Beiwei 05 1900 (has links)
Nonlinear Principal Component Analysis (NLPCA) using three-hidden-layer feed-forward neural networks can produce solutions that over-fit the data and are non-unique. These problems have conventionally been dealt with by subjective interventions during network training. This study shows that the problems are intrinsic to the three-hidden-layer architecture. A simplified two-hidden-layer feed-forward neural network, with no encoding layer and no bottleneck or output biases, is proposed. This new, compact NLPCA model alleviates these problems without resorting to subjective methods and is therefore called quasi-objective. The compact NLPCA is applied to the zonal winds observed at seven pressure levels between 10 and 70 hPa in the equatorial stratosphere to represent the Quasi-Biennial Oscillation (QBO) and investigate its variability and structure. The two nonlinear principal components of the dataset offer a clear picture of the QBO. In particular, their structure shows that the QBO phase consists of a predominant 28.4-month cycle that is modulated by an 11-year cycle and a longer-period cycle. The significant difference in the variability of the winds between cold and warm seasons and the tendency for seasonal synchronization of the QBO phases are well captured. The one-dimensional NLPCA approximation of the dataset provides a better representation of the QBO than classical principal component analysis, and a better description of the asymmetry of the QBO between westerly and easterly shear zones and between their transitions. The compact NLPCA is then applied to the Arctic Oscillation (AO) index together with the aforementioned zonal winds to investigate the relationship of the AO with the QBO. The NLPCA of the combined AO-index and zonal-wind dataset shows clearly that the phase of the covariation of the two oscillations, defined by the two nonlinear principal components, progresses with a predominant 28.4-month periodicity, plus the 11-year and longer-period modulations. Large positive values of the AO index occur when westerlies prevail near the middle and upper levels of the equatorial stratosphere; large negative values arise when easterlies occupy more than half of the layer.
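The compact architecture described above — a linear bottleneck projection with no encoding layer, followed by a single nonlinear decoding layer with no output bias — can be sketched with plain stochastic gradient descent on synthetic curved data. The training scheme, layer width, and data below are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D curve embedded in 2-D: x2 is a nonlinear function of x1.
t = rng.uniform(-1, 1, size=500)
X = np.column_stack([t, t**2]) + 0.02 * rng.normal(size=(500, 2))
X -= X.mean(axis=0)                        # centre the data

p, m = 2, 6                                # input dim, decoding-layer width
w = rng.normal(scale=0.5, size=p)          # linear encoder: no encoding layer, no bias
A = rng.normal(scale=0.5, size=m)          # bottleneck -> decoding layer
b = np.zeros(m)                            # decoding-layer biases (retained)
C = rng.normal(scale=0.5, size=(p, m))     # decoding layer -> output, no output bias

losses = []
lr = 0.02
for epoch in range(150):
    total = 0.0
    for x in X:
        u = w @ x                          # scalar nonlinear principal component
        h = np.tanh(A * u + b)             # single nonlinear decoding layer
        e = C @ h - x                      # reconstruction error
        total += float(e @ e)
        # Backpropagation of the squared-error loss through the decoder.
        dpre = (C.T @ e) * (1 - h**2)
        dw = (A @ dpre) * x
        C -= lr * np.outer(e, h)
        A -= lr * dpre * u
        b -= lr * dpre
        w -= lr * dw
    losses.append(total / len(X))

print(losses[0], "->", losses[-1])         # mean squared error should drop
```

Because the encoder is a plain linear projection, the nonlinear principal component u is determined up to sign and scale, which is the sense in which the compact model avoids the non-uniqueness of the three-hidden-layer form.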
4

Sensor Fault Diagnosis Using Principal Component Analysis

Sharifi, Mahmoudreza 2009 December 1900 (has links)
The purpose of this research is to address the problem of fault diagnosis for sensors that measure a set of directly redundant variables. This study proposes:
1. A method for linear sensor fault diagnosis
2. An analysis of the isolability and detectability of sensor faults
3. A stochastic method for the decision process
4. A nonlinear approach to sensor fault diagnosis
First, a geometrical approach to sensor fault detection is proposed. The sensor fault is isolated based on the direction of the residuals produced by a residual generator. This residual generator can be constructed from an input-output model in model-based methods, or from a Principal Component Analysis (PCA) model in data-driven methods. Using this residual generator and the assumption of white Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of isolable fault in each sensor is found from the distribution of noise in the measurement system. Next, a probabilistic approach to the decision process in sensor fault diagnosis is presented. Unlike most existing probabilistic approaches to fault diagnosis, which are based on Bayesian Belief Networks, in this approach the probabilistic model is extracted directly from a parity equation. The relevant parity equation can be found using a model of the system or through PCA of data measured from the system. In addition, a sensor detectability index is introduced that specifies the level of detectability of sensor faults in a set of redundant sensors; this index depends only on the internal relationships among the system variables and the noise level. Finally, the proposed linear sensor fault diagnosis approach is extended to a nonlinear method by separating the measurement space into several locally linear regions. This classification is performed using a Mixture of Probabilistic PCA (MPPCA). The proposed linear and nonlinear methods are tested on three different systems.
The linear method is applied to sensor fault diagnosis in a smart structure and to the Tennessee Eastman process model, and the nonlinear method is applied to a data set collected from a fully instrumented HVAC system.
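The PCA residual-generator idea can be sketched as follows: a PCA model fitted to fault-free data defines a residual subspace, and a bias fault on sensor i pushes the residual along the signature direction given by the i-th column of the residual projector. The sensor count, noise level, and isolation score below are assumptions of this sketch, not the thesis's test systems:

```python
import numpy as np

def pca_residual_model(X_normal, k):
    """Fit a PCA model on fault-free data; return a residual generator
    that projects new samples onto the discarded (residual) subspace."""
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    P = Vt[:k].T                              # loading matrix (p x k)
    R = np.eye(X_normal.shape[1]) - P @ P.T   # residual projector
    return lambda x: R @ (x - mu), R

rng = np.random.default_rng(0)
# Redundant measurements: three sensors observing one physical variable.
truth = rng.normal(size=(500, 1))
X = truth + 0.05 * rng.normal(size=(500, 3))

residual, R = pca_residual_model(X, k=1)

x = truth[0] + 0.05 * rng.normal(size=3)
x_fault = x.copy()
x_fault[2] += 1.0                             # inject a bias fault on sensor 2

# Isolate the fault by comparing the residual direction with each
# sensor's fault signature R e_i (the i-th column of the projector).
r = residual(x_fault)
signatures = R / np.linalg.norm(R, axis=0)
scores = np.abs(signatures.T @ r / np.linalg.norm(r))
print(int(np.argmax(scores)))                 # prints 2, the faulty sensor
```

With only three redundant sensors the residual subspace is two-dimensional, so the three signature directions are distinct and a single bias fault is isolable, which is the geometric picture behind the isolability analysis.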
5

The Study on Construction for Social Indicators in Taiwan

Tsai, Wan-ying 24 July 2004 (has links)
Prior to 1960, most countries used traditional economic indicators to represent their social status. Economists mostly used Gross National Product (GNP) as a measure of the social welfare of a country or a society. However, as development progressed, the traditional economic indicators could no longer track the progress of social welfare, making it hard for economists to measure its status. Sen (1977) argued that human development is not restricted to the increase of average disposable income alone, and that indicators carrying more information should be used to measure the distinctive diversity of welfare. Bauer (1966) first proposed social indicators as a measurement of social status and trends, and his work gave rise to the so-called "Social Indicators Movement". As a result, to measure the development of a society as a whole, one can assess its medical, health, economic, environmental, and welfare aspects. Research on social development includes many discussions of building indicators of social welfare, quality of life, fulfilment of basic needs, and welfare development. This research attempts to establish a system of social indicators that measures development from every aspect, and to select a social indicator index with representative indicators as a measurement of social development. Moreover, it analyses the systems of social indicators in Taiwan from 1982 to 2002 to see whether the government allocated resources appropriately in executing related policies. The research reviews 20 related indicator systems in Taiwan and overseas, considering how often each indicator is cited and the principles used to select indicators, and arrives at 9 candidate indicators. The Principal Component Analysis method with Varimax rotation is then adopted to extract factors. Two factors are extracted: an economic and environmental factor, and a medical welfare and unemployment factor. The research uses a weighted method to construct synthetic indicators for Taiwan from 1982 to 2002; the weights obtained from the factor analysis for the two factors are 0.8353 and 0.1647. Based on the analysis of this secondary data, three scores were acquired — the economic and environmental factor, the medical welfare and unemployment factor, and overall performance — each showing the trend and year-by-year change. Finally, policy recommendations based on the results of the analysis are presented.
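The weighted synthesis step can be sketched directly: with the two reported variance-based weights, the composite index is a weighted sum of the two factor scores. The year range is shortened and the factor-score values below are hypothetical placeholders, not the thesis's data:

```python
import numpy as np

# Hypothetical standardized factor scores per year (the thesis derives
# these from 9 indicators via PCA; the values here are made up).
years = np.arange(1998, 2003)
econ_env = np.array([-0.4, -0.1, 0.2, 0.5, 0.9])    # factor 1 scores
med_unemp = np.array([0.6, 0.3, 0.1, -0.2, -0.5])   # factor 2 scores

# Weights reported in the study: proportions attributed to each factor.
w1, w2 = 0.8353, 0.1647
composite = w1 * econ_env + w2 * med_unemp          # synthetic indicator

for y, s in zip(years, composite):
    print(int(y), round(float(s), 4))
```

Weighting factor scores by explained-variance share is a common way to collapse several factors into one overall performance score; the trend of the composite then tracks year-by-year change, as described above.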
8

Contrasting Environments Associated with Storm Prediction Center Tornado Outbreak Forecasts using Synoptic-Scale Composite Analysis

Bates, Alyssa Victoria 17 May 2014 (has links)
Tornado outbreaks have significant human impact, so it is imperative that forecasts of these phenomena be accurate. Because the synoptic setup lays the foundation for a forecast, synoptic-scale aspects of Storm Prediction Center (SPC) outbreak forecasts of varying accuracy were assessed. The percentage of tornado outbreaks falling within the SPC's 10% tornado-probability polygons was calculated, with false-alarm events considered separately. The outbreaks were separated into quartiles using a point-in-polygon algorithm, and statistical composite fields were created to represent the synoptic conditions of these groups and facilitate comparison. Overall, temperature advection showed the greatest differences between the groups; there were also significant differences in jet-streak strength and the amount of vertical wind shear. The events forecast with low accuracy had the weakest synoptic-scale setups. These results suggest that events with weak synoptic setups should be regarded as areas of concern by tornado outbreak forecasters.
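A point-in-polygon test of the kind used to match outbreak reports to probability polygons can be sketched with the standard ray-casting algorithm; the abstract does not say which variant the study used, and the polygon and points below are schematic:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: cast a ray from pt to the right and count edge
    crossings. `polygon` is a list of (x, y) vertices; an odd number of
    crossings means the point lies inside."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                          # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# A schematic probability polygon and two outbreak report points.
poly = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(point_in_polygon((2, 1), poly))   # True  (report inside polygon)
print(point_in_polygon((5, 1), poly))   # False (report outside polygon)
```

Counting reports inside versus outside each forecast polygon is what yields the per-event accuracy percentages used to form the quartiles described above.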
9

WASP: An Algorithm for Ranking College Football Teams

Earl, Jonathan January 2016 (has links)
Arrow's Impossibility Theorem outlines the flaws that affect any voting system attempting to order a set of objects. For its entire history, American college football has determined its champion by a voting system. Much of the literature deals with why this voting system is problematic, but there does not appear to be a large body of work on creating a better, mathematical process. More generally, the inadequacies of ranking in football are one manifestation of the general problem of ranking a set of objects. Herein, principal component analysis is used as a tool to address this problem in the context of American college football. To show its value, rankings based on principal component analysis are compared against the rankings used in American college football. / Thesis / Master of Science (MSc) / The problem of ranking is ubiquitous, appearing everywhere from Google to ballot boxes. One of the more notable areas where it arises is in awarding the championship in American college football. This paper explains why the problem exists in American college football and presents a bias-free mathematical solution, which is compared against how American college football awards its championship.
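One way PCA can rank teams is to score each team along the first principal component of a standardized team-statistics matrix. The statistics, team names, and sign conventions below are illustrative assumptions, not the inputs of the WASP algorithm itself:

```python
import numpy as np

# Hypothetical season statistics (rows: teams; columns: points scored
# per game, points allowed per game, yards per play, turnover margin).
stats = np.array([
    [38.0, 17.0, 6.8,  1.2],
    [31.0, 21.0, 6.1,  0.4],
    [27.0, 24.0, 5.7, -0.1],
    [20.0, 30.0, 4.9, -0.9],
])
teams = ["A", "B", "C", "D"]

Z = (stats - stats.mean(axis=0)) / stats.std(axis=0)  # standardize columns
Z[:, 1] = -Z[:, 1]           # points allowed: lower is better, so flip sign
cov = Z.T @ Z / (len(Z) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]         # leading principal axis (eigh sorts ascending)
if pc1.sum() < 0:            # orient so "better on all stats" scores higher
    pc1 = -pc1
scores = Z @ pc1             # each team's coordinate along the first PC
order = np.argsort(-scores)
print([teams[i] for i in order])   # best-to-worst: ['A', 'B', 'C', 'D']
```

Because the first principal component captures the dominant shared direction of variation in the statistics, it acts as a data-driven "strength" axis with no voters involved, which is the bias-free property the paper emphasises.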
10

Large Scale Matrix Completion and Recommender Systems

Amadeo, Lily 04 September 2015 (has links)
The goal of this thesis is to extend the theory and practice of matrix completion algorithms: how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices under the modelling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad range of domains, and this modelling assumption is similar in spirit to Principal Component Analysis. Our focus is on large-scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis of the convergence rates of matrix completion techniques using convex nuclear-norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results show that, with an empirical, data-inspired understanding of various parameters in the algorithm, the matrix completion problem can be solved more efficiently than some previous theory suggests, and can therefore be extended to much larger problems with greater ease.
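A common algorithm in the nuclear-norm family is Soft-Impute-style iterative singular-value soft-thresholding, which alternates between filling missing entries with the current estimate and shrinking singular values (the proximal step for the nuclear norm). This is a generic illustration of the technique, not the specific algorithm analysed in the thesis, and the data and threshold are made up:

```python
import numpy as np

def soft_impute(M, mask, lam=1.0, n_iters=100):
    """Alternate between imputing missing entries with the current
    estimate and soft-thresholding singular values by lam."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        filled = np.where(mask, M, X)            # keep observed entries fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)             # nuclear-norm proximal step
        X = (U * s) @ Vt
    return X

rng = np.random.default_rng(0)
# Ground truth: a rank-2 matrix with roughly 40% of entries hidden.
A, B = rng.normal(size=(30, 2)), rng.normal(size=(2, 30))
M_true = A @ B
mask = rng.random(M_true.shape) > 0.4            # True where observed
M_obs = np.where(mask, M_true, 0.0)

M_hat = soft_impute(M_obs, mask, lam=0.5, n_iters=200)
err = np.linalg.norm((M_hat - M_true)[~mask]) / np.linalg.norm(M_true[~mask])
print(round(float(err), 3))                      # relative error on hidden entries
```

At this toy scale a full SVD per iteration is fine; the scaling questions the thesis addresses arise because, with millions of rows and columns, each such step must be replaced by a cheaper low-rank update.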
