1 |
Pyrite oxidation in coal-bearing strata: controls on in-situ oxidation as a precursor of acid mine drainage formation / Roy, Samita. January 2002 (has links)
Pyrite oxidation in coal-bearing strata is recognised as the main precursor to Acidic Mine Drainage (AMD) generation. Predicting AMD quality and quantity, whether for remediation or for proposed extraction, requires assessment of the interactions between oxidising fluids and pyrite, and between oxidation products and groundwater. Current predictive methods and models rarely account for individual mineral weathering rates or their distribution within the rock. Better constraints on the importance of such variables in controlling rock leachate are required to provide more reliable predictions of AMD quality. This study tested assumptions made during modelling of AMD generation, including the homogeneity of rock chemical and physical characteristics, the controls on the rate of embedded pyrite oxidation, and oxidation front ingress. The main conclusions of this work are:
• The ingress of a pyrite oxidation front into coal-bearing strata depends on the dominant oxidant transport mechanism, pyrite morphology and rock pore-size distribution.
• Although pyrite oxidation rates predicted from rate laws agree with those derived from experimental weathering of coal-bearing strata, uncertainty in the surface area of framboids produces at least an order-of-magnitude error in predicted rates.
• Pyrite oxidation products in partly unsaturated rock are removed to solution via a cycle of dissolution and precipitation at the water-rock interface. Dissolution occurs mainly along rock cleavage planes, as does diffusion of dissolved oxidant.
• Whole-seam S and pyrite wt% varied significantly over a 30 m exposure of an analysed coal seam. Assuming a seam-mean pyrite wt% to predict net acid-producing potential for coal and shale seams may therefore be unsuitable, at this scale at least.
• Seasonal variation in AMD discharge chemistry indicates that base-flow is not necessarily representative of extreme poor-quality leachate.
Summer and winter storms, following relatively dry periods, tended to release the greatest volume of pyrite oxidation products.
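The surface-area sensitivity noted in the second conclusion can be illustrated numerically. The sketch below assumes the widely used Williamson and Rimstidt (1994) abiotic O2 rate law, which is not necessarily the rate law used in the thesis, and the framboid geometry figures are invented for illustration; the point is only that the bulk rate scales linearly with whichever reactive surface area is assumed.

```python
import math

def pyrite_rate(m_DO, m_H, k=10**-8.19, a=0.5, b=0.11):
    """Surface-area-normalised pyrite oxidation rate (mol m^-2 s^-1)
    from the Williamson & Rimstidt (1994) abiotic O2 rate law."""
    return k * m_DO**a / m_H**b

def framboid_area(radius_m, n_crystals, crystal_radius_m):
    """Two end-member surface-area models for one framboid (illustrative)."""
    smooth = 4 * math.pi * radius_m**2                       # one smooth sphere
    rough = n_crystals * 4 * math.pi * crystal_radius_m**2   # sum of microcrystals
    return smooth, rough

smooth, rough = framboid_area(10e-6, 1000, 0.5e-6)
# The bulk (per-framboid) rate is area * pyrite_rate, so the ratio of the
# two area models is exactly the ratio of the predicted rates.
print(rough / smooth)
```

With these invented numbers the two area models differ by a factor of 2.5; for real framboids the disagreement can exceed an order of magnitude, which is the error the abstract reports.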
|
2 |
An application of principal component analysis / Shishido, Junichi. 01 January 2004 (has links)
No description available.
|
3 |
Sex-Based Differences In Lifting Technique Under Increasing Load Conditions: A Principal Component Analysis / Sheppard, Phillip S. 04 October 2012 (has links)
The objectives of the present study were: 1) to determine whether there is a sex-based difference in lifting technique across increasing load conditions; and 2) to examine the use of body size-adjusted tasks and back strength-adjusted loads in the analysis of lifting technique. Eleven male and 14 female participants (n=25) with no previous history of low back pain participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to table height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak 3020 system (NDI, Waterloo, ON). Joint angles were calculated using a three-dimensional Euler rotation sequence, and principal component analysis (PCA) was applied to assess differences in lifting technique across the entire waveform. A repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This was contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agreed with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for six PCs of the lower limb (p<0.005). However, there was no significant difference in lifting technique for the thoracic and lumbar spine. It was concluded that, when load is standardized to individual back strength characteristics, males and females adopt a similar lifting technique. In addition, as load increased, participants shifted toward a semi-squat or squat lifting technique. / Thesis (Master, Kinesiology & Health Studies) -- Queen's University, 2012-10-03 21:10:11.889
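Waveform-level PCA of the kind described treats every time-normalised sample of a joint-angle curve as a variable and every lifter as an observation; the PC scores then feed the ANOVA. A minimal sketch with synthetic data (the joint, sample counts, and signal shape are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 25 lifters x 101 time-normalised samples of one
# joint angle (e.g. knee flexion), amplitudes varying across lifters.
t = np.linspace(0, 1, 101)
waveforms = np.sin(np.pi * t) * rng.uniform(40, 80, (25, 1)) \
            + rng.normal(0, 2, (25, 101))

# PCA across the entire waveform: centre, then SVD of the data matrix.
X = waveforms - waveforms.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                       # PC scores, one row per lifter
explained = S**2 / np.sum(S**2)      # proportion of variance per PC

print(explained[0])  # PC1 captures the overall flexion-magnitude mode
```

The rows of `scores` are the per-lifter values that a repeated-measures ANOVA would compare across sex and load conditions.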
|
4 |
主成分選取與因子選取在費雪區別分析上的探討 / Discussion of the Fisher's Discriminant Analysis Based on Choices of Principal Components and Factors / 李婉菁. Unknown Date (has links)
Principal component analysis or factor analysis is often used
to reduce the dimensionality of the original variables.
However, the principal component or factor with the larger variance (i.e., eigenvalue), though explaining a larger proportion of the total sample variance, may not retain the most information for subsequent analyses. For example, if the first few principal components or factors with the largest corresponding eigenvalues are used as discriminant variables, the discriminant result may not be good or even appropriate.
We first discuss two methods, given by Mardia et al. (1979) and Chang (1983),
for choosing discriminant variables when data are randomly obtained from
a mixture of two multivariate normal distributions.
We then use the discriminant results (or classification error rates) to compare these two methods with the traditional method of using the principal components with the largest corresponding eigenvalues as discriminant variables. We also prove that the two methods give the same selection order for principal components and factors (obtained by the principal component method).
Furthermore, we use the method of Mardia et al. to select appropriate discriminators when the data come from three populations.
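The central point, that the largest-eigenvalue components need not be the best discriminators, is easy to reproduce with a small simulation. The data below are synthetic (two bivariate normal groups, not from the thesis): the shared, uninformative direction carries most of the variance, while the group means differ only along a low-variance direction.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two multivariate normal groups: large common variance along x,
# but the group means differ only along the low-variance y axis.
cov = np.diag([10.0, 0.5])
g1 = rng.multivariate_normal([0, -1], cov, 200)
g2 = rng.multivariate_normal([0, 1], cov, 200)
X = np.vstack([g1, g2])

# Eigenvalue-ranked principal components of the pooled data.
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(evals)[::-1]
pcs = Xc @ evecs[:, order]

labels = np.r_[np.zeros(200), np.ones(200)]
def separation(z):
    """Two-sample separation statistic along one PC."""
    a, b = z[labels == 0], z[labels == 1]
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

print(separation(pcs[:, 0]), separation(pcs[:, 1]))
# The second (smaller-eigenvalue) PC discriminates far better than PC1.
```

Selection rules such as those of Mardia et al. (1979) and Chang (1983) rank components by this kind of discriminatory power rather than by eigenvalue.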
|
5 |
Essays on Corporate Finance and Interest Rate Policy / Yao, Haibo. 15 August 2014 (has links)
My research makes three contributions to the literature. The first is supportive evidence for the augmented Taylor rule model, with orthogonalized bond market variables, that I build to describe and forecast the behavior of the Federal Reserve more accurately, improving the model's fit both in and out of sample. The second is supportive evidence for a macro explanation of industrial firm behavior in the United States. The third is a new perspective on monetary policy and the monetary policy transmission mechanism for both practitioners and researchers. The research proceeds as follows. Essay one provides a literature review, discussing background and theories for both empirical and theoretical applications of the Taylor rule, a tool for setting the federal funds rate. The second essay seeks to understand the setting of monetary policy by the Federal Reserve. I show that augmenting a simple Taylor rule with bond market information can significantly improve the model's fit, both in and out of sample. The improvement is enough to produce lower forecast errors than those of non-linear policy models. In addition, the inclusion of these bond market variables resolves the parameter instability of the Taylor rule documented in the literature, and implies that the lagged federal funds rate plays a much smaller role than suggested in previous studies. The third essay examines the impact of monetary shocks on corporate cash holdings. I find evidence that small industrial firms hold onto cash when monetary policy is too tight, while large industrial firms do the reverse, both in the short run and in the long run. Further tests examine whether long-lasting loose monetary policy results in a pileup of corporate cash holdings.
The evidence supports the view that industrial firms use the "long-lasting lower interest rate" environment to hoard cash, buffering the effectiveness of monetary policy.
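For reference, the baseline that the second essay augments can be sketched as Taylor's (1993) original rule. The coefficients below are Taylor's illustrative values; the essay's orthogonalized bond market variables would simply enter as further terms, and nothing here reproduces the essay's estimates.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0,
                a_pi=0.5, a_y=0.5):
    """Taylor's (1993) rule for the federal funds rate (percent):
    equilibrium real rate + inflation + responses to the inflation
    gap and the output gap. An augmented rule adds bond market terms."""
    return r_star + inflation + a_pi * (inflation - pi_target) + a_y * output_gap

print(taylor_rate(3.0, 1.0))  # 6.0: 2 + 3 + 0.5*(3-2) + 0.5*1
```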
|
6 |
Multi-Resolution Mixtures of Principal Components / Lesner, Christopher. January 1998 (has links)
The main contribution of this thesis is a new method of image compression based on a recently developed adaptive transform called Mixtures of Principal Components (MPC). Our multi-resolution extension of MPC, called Multi-Resolution Mixtures of Principal Components (MR-MPC), compresses and decompresses images in stages. The first stage processes the original image at very low resolution and is followed by stages that process the encoding errors of the previous stages at incrementally higher resolutions. To evaluate our multi-resolution extension of MPC we compared it with MPC and with the well-performing wavelet-based scheme SPIHT. Fifty chest radiographs were compressed and compared with the originals in two ways. First, Peak Signal to Noise Ratio (PSNR) and five distortion factors from a perceptual distortion measure called PQS were used to demonstrate that MR-MPC can achieve rate-distortion performance that is 220% to 720% better than MPC and much closer to that of SPIHT. Second, in a study involving 724 radiologists' evaluations of compressed chest radiographs, we found that the impact of MR-MPC and SPIHT at 25:1, 50:1, and 75:1 on subjective image quality scores was less than the difference of opinion between four radiologists. / Thesis / Master of Science (MS)
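The staged coarse-to-fine structure described above can be sketched as follows. A uniform scalar quantiser stands in for the MPC transform coder, and the stage count, step sizes, and test image are illustrative assumptions; only the "code a coarse version, then code each residual at higher resolution" pattern reflects the abstract.

```python
import numpy as np

def encode_stages(img, n_stages=3, q=8.0):
    """Multi-resolution residual coding: stage 1 codes the image at low
    resolution; each later stage codes the residual of the running
    reconstruction at a finer grid. A uniform quantiser with step q
    stands in for the per-stage MPC coder."""
    stages, approx = [], np.zeros_like(img)
    for s in range(n_stages, 0, -1):
        step = 2 ** (s - 1)
        residual = (img - approx)[::step, ::step]   # residual at this resolution
        code = np.round(residual / q)               # "transmitted" symbols
        stages.append((step, code))
        # Nearest-neighbour upsample of the decoded residual into the
        # running reconstruction (kron replicates each coarse pixel).
        approx = approx + np.kron(code * q, np.ones((step, step)))
    return stages, approx

img = np.arange(64, dtype=float).reshape(8, 8)
stages, recon = encode_stages(img)
print(np.abs(img - recon).max())  # bounded by half the final quantiser step
```

Because each stage only sees the previous stages' error, the final reconstruction error is controlled entirely by the last, full-resolution stage.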
|
7 |
Variable selection in principal component analysis: using measures of multivariate association / Sithole, Moses M. January 1992 (has links)
This thesis is concerned with the problem of selecting important variables in Principal Component Analysis (PCA) in such a way that the selected subsets of variables retain, as much as possible, the overall multivariate structure of the complete data. Throughout the thesis, the criteria used to meet this requirement are collectively referred to as measures of Multivariate Association (MVA). Most of the currently available selection methods may lead to inappropriate subsets, while Krzanowski's (1987) M2-Procrustes criterion successfully identifies structure-bearing variables, particularly when groups are present in the data. Our major objective, however, is to utilize the idea of multivariate association to select subsets of the original variables which preserve any (unknown) multivariate structure that may be present in the data.
The first part of the thesis is devoted to a study of the choice of the number of components (say, k) to be used in the variable selection process. Various methods that exist in the literature for choosing k are described, and comparative studies of these methods are reviewed. Currently available methods based exclusively on the eigenvalues of the covariance or correlation matrices, and those based on cross-validation, are unsatisfactory. Hence, we propose a new technique for choosing k based on the bootstrap methodology. A full comparative study of this new technique and the cross-validatory choice of k proposed by Eastment and Krzanowski (1982) is then carried out using data simulated in a Monte Carlo experiment.
The remainder of the thesis focuses on variable selection in PCA using measures of MVA. Various existing selection methods are described, and comparative studies of these methods available in the literature are reviewed. New methods for selecting variables, based on measures of MVA, are then proposed and compared among themselves as well as with the M2-Procrustes criterion.
This comparison is based on Monte Carlo simulation, and the behaviour of the selection methods is assessed in terms of the performance of the selected variables. In summary, the Monte Carlo results suggest that the proposed bootstrap technique for choosing k generally performs better than the cross-validatory technique of Eastment and Krzanowski (1982). Similarly, the Monte Carlo comparison of the variable selection methods shows that the proposed methods are comparable with or better than Krzanowski's (1987) M2-Procrustes criterion. These conclusions are mainly based on data simulated by means of Monte Carlo experiments. However, the techniques for choosing k and the various variable selection techniques are also evaluated on some real data sets. Some comments on alternative approaches and suggestions for possible extensions conclude the thesis.
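The abstract does not spell out the bootstrap criterion, so the sketch below implements only one plausible rule of this family, not the thesis's method: retain a component if its eigenvalue exceeds the mean eigenvalue in nearly all bootstrap resamples of the observations. The data construction (two strong latent dimensions in five observed variables) is likewise an assumption for illustration.

```python
import numpy as np

def bootstrap_k(X, n_boot=200, seed=0):
    """One illustrative bootstrap rule for choosing k: keep components
    whose eigenvalue exceeds the mean eigenvalue in at least 95% of
    bootstrap resamples of the rows of X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    hits = np.zeros(p)
    for _ in range(n_boot):
        Xb = X[rng.integers(0, n, n)]                     # resample observations
        ev = np.sort(np.linalg.eigvalsh(np.cov(Xb.T)))[::-1]
        hits += ev > ev.mean()                            # per-rank indicator
    return int(np.sum(hits / n_boot >= 0.95))

rng = np.random.default_rng(1)
# Synthetic data: two orthogonal structure-bearing directions plus noise.
Z = rng.normal(size=(300, 2))
W = np.array([[3.0, 3, 3, 0, 0], [0, 0, 0, 3, 3]])
X = Z @ W + rng.normal(size=(300, 5))
print(bootstrap_k(X))  # 2: both structure-bearing components survive
```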
|
8 |
Quasi-objective Nonlinear Principal Component Analysis and applications to the atmosphere / Lu, Beiwei. 05 1900 (has links)
NonLinear Principal Component Analysis (NLPCA) using three-hidden-layer
feed-forward neural networks can produce solutions that over-fit the data and
are non-unique. These problems have been dealt with by subjective methods
during network training. This study shows that these problems are intrinsic
to the three-hidden-layer architecture. A simplified two-hidden-layer
feed-forward neural network that has no encoding layer and no bottleneck or
output biases is proposed. This new, compact NLPCA model alleviates these
problems without employing the subjective methods and is therefore called
quasi-objective.
The compact NLPCA is applied to the zonal winds observed at seven pressure
levels between 10 and 70 hPa in the equatorial stratosphere to represent the
Quasi-Biennial Oscillation (QBO) and investigate its variability and structure.
The two nonlinear principal components of the dataset offer a clear picture of
the QBO. In particular, their structure shows that the QBO phase consists of a
predominant 28.4-month cycle that is modulated by an 11-year cycle and a
longer-period cycle. The significant difference in variability of the winds
between cold and warm seasons and the tendency for a seasonal synchronization
of the QBO phases are well captured. The one-dimensional NLPCA approximation of
the dataset provides a better representation of the QBO than the classical
principal component analysis and a better description of the asymmetry of the
QBO between westerly and easterly shear zones and between their transitions.
The compact NLPCA is then applied to the Arctic Oscillation (AO) index and
aforementioned zonal winds to investigate the relationship of the AO with the
QBO. The NLPCA of the AO index and zonal-winds dataset shows clearly that, in
the covariation of the two oscillations, the phase defined by the two
nonlinear principal components progresses with a predominant 28.4-month
periodicity, plus the 11-year and longer-period modulations. Large positive
values of the AO index occur when westerlies prevail near the middle and upper
levels of the equatorial stratosphere. Large negative values of the AO index
arise when easterlies occupy more than half the depth of the equatorial
stratosphere.
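A forward pass of the compact architecture described above can be sketched as follows. The layer sizes, weight initialisation, and the exact placement of the single nonlinearity are assumptions for illustration, not details taken from the thesis; what the sketch preserves is the stated structure: no encoding layer, and no biases at the bottleneck or output.

```python
import numpy as np

def compact_nlpca(x, W1, W2, b2, W3):
    """Sketch of a compact two-hidden-layer NLPCA forward pass:
    inputs project straight to the bottleneck (no encoding layer,
    no bottleneck bias), then pass through one nonlinear decoding
    layer back to a reconstruction with no output bias."""
    u = W1 @ x                 # bottleneck: nonlinear principal components
    h = np.tanh(W2 @ u + b2)   # single nonlinear decoding layer
    return W3 @ h, u           # reconstruction of x, plus the NLPCs

rng = np.random.default_rng(0)
p, m, k = 7, 5, 2              # e.g. 7 pressure levels, 2 NLPCs (sizes assumed)
W1, W2, b2, W3 = (rng.normal(size=s) * 0.1
                  for s in [(k, p), (m, k), (m,), (p, m)])
xhat, u = compact_nlpca(rng.normal(size=p), W1, W2, b2, W3)
print(xhat.shape, u.shape)     # (7,) (2,)
```

Training would minimise the reconstruction error of `xhat` over the wind dataset; the two entries of `u` then play the role of the QBO phase-defining nonlinear principal components.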
|
9 |
SVD and PCA in Image Processing / Renkjumnong, Wasuta. 16 July 2007 (has links)
The Singular Value Decomposition (SVD) is one of the most useful matrix factorizations in applied linear algebra, and Principal Component Analysis (PCA) has been called one of its most valuable results. How and why principal component analysis is intimately related to the technique of singular value decomposition is shown. Their properties and applications are described. Assumptions behind these techniques, as well as possible extensions to overcome their limitations, are considered. This understanding leads to real-world applications, in particular, image processing of neurons. Noise reduction and edge detection of neuron images are investigated.
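The SVD-PCA connection and the noise-reduction application can both be shown in a few lines: the right singular vectors of the centred data matrix are the principal axes, the squared singular values (divided by n-1) are the component variances, and truncating the SVD denoises a low-rank image. The 32x32 synthetic "image" below is an assumption for illustration, not a neuron image from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-2 structure plus additive noise.
A = rng.normal(size=(32, 2)) @ rng.normal(size=(2, 32))
noisy = A + 0.1 * rng.normal(size=(32, 32))

# PCA via SVD of the centred matrix: rows of Vt are principal axes,
# S**2/(n-1) are the principal component variances.
Xc = noisy - noisy.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_var = S**2 / (len(noisy) - 1)

# Noise reduction by truncation: keep the two dominant components.
k = 2
denoised = (U[:, :k] * S[:k]) @ Vt[:k] + noisy.mean(axis=0)
err_before = np.linalg.norm(noisy - A)
err_after = np.linalg.norm(denoised - A)
print(err_after < err_before)  # True: truncation discards mostly noise
```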
|
10 |
Sensor Fault Diagnosis Using Principal Component Analysis / Sharifi, Mahmoudreza. 2009 December 1900 (has links)
The purpose of this research is to address the problem of fault diagnosis of sensors which measure a set of direct redundant variables. This study proposes:
1. A method for linear sensor fault diagnosis
2. An analysis of isolability and detectability of sensor faults
3. A stochastic method for the decision process
4. A nonlinear approach to sensor fault diagnosis.
In this study, a geometrical approach to sensor fault detection is first proposed. The sensor fault is isolated based on the direction of the residuals found from a residual generator. This residual generator can be constructed from an input-output model in model-based methods, or from a Principal Component Analysis (PCA) based model in data-driven methods. Using this residual generator and the assumption of white Gaussian noise, the effect of noise on isolability is studied, and the minimum magnitude of an isolable fault in each sensor is found based on the distribution of noise in the measurement system.
Next, for the decision process a probabilistic approach to sensor fault diagnosis is presented. Unlike most existing probabilistic approaches to fault diagnosis, which are based on Bayesian Belief Networks, in this approach the probabilistic model is directly extracted from a parity equation. The relevant parity equation can be found using a model of the system or through PCA analysis of data measured from the system. In addition, a sensor detectability index is introduced that specifies the level of detectability of sensor faults in a set of redundant sensors. This index depends only on the internal relationships of the variables of the system and noise level.
Finally, the proposed linear sensor fault diagnosis approach has been extended to a nonlinear method by separating the space of measurements into several locally linear regions. This classification has been performed by applying a Mixture of Probabilistic PCA (MPPCA).
The proposed linear and nonlinear methods are tested on three different systems. The linear method is applied to sensor fault diagnosis in a smart structure and to the Tennessee Eastman process model, and the nonlinear method is applied to a data set collected from a fully instrumented HVAC system.
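The data-driven residual generator and direction-based isolation can be sketched as follows. The four-sensor redundancy structure, fault size, and noise level are invented for illustration (the thesis's own test systems are the smart structure, Tennessee Eastman, and HVAC data mentioned above); the mechanics, however, follow the abstract: the small-eigenvalue PCA directions form the parity (residual) space, and a sensor fault is isolated by matching the residual direction against per-sensor signatures.

```python
import numpy as np

rng = np.random.default_rng(0)
# Four sensors over two independent physical variables: two redundancy
# relations, hence a two-dimensional residual (parity) space.
t = rng.normal(size=(500, 2))
X = np.column_stack([t[:, 0], t[:, 1],
                     t[:, 0] + t[:, 1], t[:, 0] - t[:, 1]])
X += 0.01 * rng.normal(size=X.shape)          # measurement noise

# PCA residual generator: eigenvectors with near-zero eigenvalues
# span the parity space of the redundant sensor set.
evals, evecs = np.linalg.eigh(np.cov(X.T))    # ascending eigenvalues
P = evecs[:, :2].T                            # 2 x 4 parity matrix

# A bias on sensor j drives the residual along column j of P, so
# matching the residual direction against these signatures isolates it.
x = X[0] + np.array([0.0, 3.0, 0.0, 0.0])     # bias fault on sensor 2
r = P @ x
signatures = P / np.linalg.norm(P, axis=0)    # unit per-sensor signatures
cosines = np.abs(signatures.T @ (r / np.linalg.norm(r)))
print(int(np.argmax(cosines)))                # 1, i.e. the second sensor
```

With only one parity relation the residual is a scalar and faults are detectable but not isolable, which is why the isolability analysis in the study depends on the degree of redundancy.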
|