431 |
Facial Expression Recognition by Using Class Mean Gabor Responses with Kernel Principal Component Analysis / Chung, Koon Yin C. 16 April 2010 (has links)
No description available.
|
432 |
Characterizing the Quaternary Hydrostratigraphy of Buried Valleys using Multi-Parameter Borehole Geophysics, Georgetown, Ontario / Brennan, Andrew N. 10 1900 (has links)
In 2009, the Regional Municipality of Halton and McMaster University initiated a 2-year collaborative study (Georgetown Aquifer Characterization Study-GACS) of the groundwater resource potential of Quaternary sediments near Georgetown, Ontario. As part of that study, this thesis investigated the Quaternary infill stratigraphy of the Middle Sixteen Mile Creek (MSMC) and Cedarvale (CV) buried valley systems using newly acquired core and borehole geophysical data. Multi-parameter geophysical log suites (natural gamma, EM conductivity, resistivity, magnetic susceptibility, full-waveform sonic, caliper) were acquired in 16 new boreholes (16 m to 55 m depth), in pre-existing monitoring wells, and from archival data. Characteristic log responses (electrofacies) were identified and correlated with core to produce a detailed subsurface model of a 20 km² area to the southwest of Georgetown. Nine distinctive lithostratigraphic units were identified and their geometry mapped across the study area as structure contour and isochore thickness maps. The subsurface model shows that the CV valley truncates the Late Wisconsin MSMC stratigraphy along a channelized erosional unconformity and is a younger (post-glacial?) sediment-hosted valley system. Model results demonstrate the high level of stratigraphic heterogeneity and complexity that is inherent in bedrock valley systems and provide a geological framework for understanding groundwater resource availability.

Principal component analysis (PCA) was applied to selected log suites to evaluate the potential for objective lithologic classification using log data. Gamma, resistivity and conductivity logs were most useful for lithologic typing, while p-wave velocity and resistivity logs were more diagnostic of compact diamict units. Cross plots of the first and second principal components of log parameters discriminated silts and clays/shales from sand/gravel and diamict lithofacies. The results show that PCA is a viable method for predicting subsurface lithology in uncored boreholes and can assist in the identification of hydrostratigraphic units. / Master of Science (MSc)
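As a rough illustration of the kind of workflow described above, the sketch below applies PCA to a hypothetical, depth-registered multi-parameter log matrix and extracts the first two principal-component scores for a cross plot; the variable list, data and any comparison against cored intervals are stand-ins, not the thesis dataset.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical multi-parameter log matrix: one row per depth sample, columns are
# stand-ins for gamma, EM conductivity, resistivity, magnetic susceptibility and
# p-wave velocity readings registered to the same depths.
rng = np.random.default_rng(0)
logs = rng.normal(size=(500, 5))

# Standardize so each log contributes equally, then project onto the first two
# principal components for a PC1-PC2 cross plot.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(logs))
pc1, pc2 = scores[:, 0], scores[:, 1]

# In a real analysis the scores would be coloured by lithofacies logged from core
# to check that silt/clay, sand/gravel and diamict intervals separate in this plane.
print(pc1[:5].round(2), pc2[:5].round(2))
```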
|
433 |
Echo Planar Magnetic Resonance Imaging of Skeletal Muscle Following Exercise / Davis, Andrew January 2018 (has links)
In recent years, researchers have increasingly used gradient echo (GRE) echo planar imaging (EPI), a magnetic resonance imaging (MRI) technique, to study temporal changes in skeletal muscle. These studies, typically involving exercise or ischemic challenges, have differentiated healthy subjects from athletic or unhealthy populations, such as those with peripheral vascular disease. However, the analysis methodologies used have been lacking.
In this thesis, two sessions of post-exercise GRE EPI data were collected from six subjects' lower legs using a 3 Tesla MRI scanner and a custom-built ergometer. Past studies used common medical imaging software for motion correction. This work shows that such tools degrade leg image data by introducing motion, increasing the root mean squared error in rest data by 22%; a new approach developed here decreased it by 12%. EPI distortion correction in muscle images was also achieved, with the correlation ratio of functional and structural images increasing by up to 8%.
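A minimal sketch of the kind of residual-motion metric referred to above (root mean squared error of a resting time series about its temporal mean) is given below; the array shapes and the identity "corrected" series are assumptions for illustration only, not the thesis pipeline.

```python
import numpy as np

def series_rmse(series_4d):
    """Root mean squared error of a 4-D series (x, y, z, t) about its temporal mean.

    Lower values suggest less residual frame-to-frame variation, so the metric can
    be compared before and after a motion-correction step.
    """
    mean_vol = series_4d.mean(axis=-1, keepdims=True)
    return float(np.sqrt(((series_4d - mean_vol) ** 2).mean()))

# Hypothetical resting leg EPI series; "corrected" is a placeholder for the output
# of a registration tool and is simply the same array here.
rng = np.random.default_rng(1)
rest = rng.normal(loc=100.0, scale=5.0, size=(64, 64, 20, 120))
corrected = rest
print(series_rmse(rest), series_rmse(corrected))
```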
In addition, a brief but intense artifact in GRE EPI muscle images results from muscle tissue moving in and out of the imaged volume. This through-plane artifact was successfully modelled as a mono-exponential decay for regression analysis, increasing the utility of the residual signal. The regression parameters were also leveraged to produce muscle displacement maps, identifying 44% of voxels as displaced. The maps were validated in a motion phantom and in-vivo using ultrasound.
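The following sketch shows one way such a mono-exponential model could be fitted to a single voxel time course with scipy's curve_fit; the sampling interval, signal values and parameter names are hypothetical, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, rate, offset):
    # Assumed artifact form: S(t) = A * exp(-rate * t) + C
    return amplitude * np.exp(-rate * t) + offset

# Hypothetical single-voxel time course sampled every 2 s after exercise.
t = np.arange(0.0, 120.0, 2.0)
rng = np.random.default_rng(3)
signal = mono_exp(t, 30.0, 0.05, 100.0) + rng.normal(0.0, 1.0, t.size)

params, _ = curve_fit(mono_exp, t, signal, p0=(20.0, 0.1, 90.0))
residual = signal - mono_exp(t, *params)  # artifact-reduced signal for later analysis
print("fitted amplitude, rate, offset:", np.round(params, 3))
```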
Finally, independent component analysis (ICA) was applied to post-exercise GRE EPI images to detect features in a data-driven, multivariate way and to improve on conventional ROI selection methods. ICA produced parametric maps that were spatially correlated with the working muscles in every trial (most with |R| > 0.4). The components were also separated from susceptibility, motion, and blood vessel signals, and were temporally reliable within individuals.
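Below is a hedged sketch of a spatial ICA decomposition of an EPI series using scikit-learn's FastICA, with the data reshaped to (time, voxels) and one component compared against a muscle ROI by spatial correlation; the dimensions, component count and ROI are illustrative assumptions rather than the thesis configuration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical post-exercise EPI slice reshaped to (time points, voxels).
rng = np.random.default_rng(2)
data = rng.normal(size=(200, 64 * 64))

# Data-driven decomposition: each row of components_ is a spatial map and each
# column of the transformed data is the matching time course.
ica = FastICA(n_components=20, random_state=0, max_iter=500)
time_courses = ica.fit_transform(data)   # shape (time, n_components)
spatial_maps = ica.components_           # shape (n_components, voxels)

# A component can be related to a working-muscle ROI by spatial correlation.
roi_mask = (rng.random(spatial_maps.shape[1]) > 0.95).astype(float)  # stand-in ROI
r = np.corrcoef(spatial_maps[0], roi_mask)[0, 1]
print(f"|R| between component 0 and the ROI: {abs(r):.2f}")
```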
These methodological advances represent increased rigour in the analysis of muscle GRE EPI images. / Thesis / Doctor of Philosophy (PhD) / Adequate blood circulation to muscles is important for good health. Researchers have used magnetic resonance imaging (MRI) techniques to assess blood and oxygen supply to muscles. The work in this thesis improves upon the analysis methods in prior work, especially in the areas of motion correction of the images and selection of individual muscle regions for analysis.
Previous techniques could sometimes make motion in muscle images worse. This work provides valuable motion and distortion correction for muscle imaging, ensuring that measurements truly reflect muscle physiology. It also describes a method to remove an unwanted signal from post-exercise muscle data, and create a map of the internal muscle motion that occurred.
Finally, an advanced mathematical technique was used to extract signals of interest and important spatial features from muscle image data automatically. The technique produced reliable results within and among subjects.
|
434 |
Analysis of Zero-Heavy Data Using a Mixture Model Approach / Wang, Shin Cheng 30 March 1998 (has links)
The problem of a high proportion of zeroes has long been of interest in data analysis and modeling; however, there is no unique solution to it. The appropriate solution depends on the particular situation and the design of the experiment. For example, different biological, chemical, or physical processes may follow different distributions and behave differently, and different mechanisms may generate the zeroes, requiring different modeling approaches. A single, general solution would therefore be both unattainable and inflexible.
In this dissertation, I focus on cases where zeroes are produced by mechanisms that create distinct sub-populations of zeroes. The work is motivated by chronic toxicity testing, which yields data sets containing a high proportion of zeroes. The analysis of chronic test data is complicated because there are two different sources of zeroes in the data: mortality and non-reproduction. Researchers therefore have to separate the zeroes caused by mortality from those reflecting low fecundity. A mixture-model approach, which combines the two mechanisms, is appropriate here because it can accommodate the extra zeroes due to mortality.
A zero-inflated Poisson (ZIP) model is used for modeling fecundity in the Ceriodaphnia dubia toxicity test. A generalized estimating equation (GEE) based ZIP model is developed to handle longitudinal data with zeroes due to mortality. A joint estimate of the inhibition concentration (ICx) is also developed for potency estimation based on the mixture-model approach. It is found that the ZIP model performs better than the regular Poisson model when mortality is high. This kind of toxicity testing also involves longitudinal data, in which the same subject is measured over a period of seven days; the GEE model provides the flexibility to incorporate the extra zeroes and a correlation structure among the repeated measures. The problem of zero-heavy data also exists in environmental studies in which the growth or reproduction rates of multiple species are measured, giving rise to multivariate data. Since the inter-relationships between species are embedded in the correlation structure, studying the information in the correlation of the variables, often accessed through principal component analysis, is one of the major interests in multivariate data analysis. Where mortality influences the variables of interest but is not itself the subject of interest, the mixture approach can be applied to recover the information in the correlation structure. To investigate the effect of zeroes on multivariate data, simulation studies on principal component analysis are performed, and a method that recovers the information in the correlation structure is presented. / Ph. D.
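As a sketch of the zero-inflated Poisson idea, the code below writes the ZIP likelihood directly (a point mass at zero mixed with a Poisson count distribution) and fits it by maximum likelihood to hypothetical fecundity counts; it does not reproduce the GEE-based longitudinal model or the ICx estimation developed in the dissertation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    # params = (logit of zero-inflation probability pi, log of Poisson mean lam)
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    loglik = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * np.exp(-lam)),
        np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1.0),
    )
    return -loglik.sum()

# Hypothetical fecundity counts (offspring per female over the test period).
y = np.array([0, 0, 0, 5, 7, 0, 12, 9, 0, 15, 11, 0, 8, 10, 0], dtype=float)
fit = minimize(zip_negloglik, x0=np.array([0.0, 1.0]), args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = 1.0 / (1.0 + np.exp(-fit.x[0])), np.exp(fit.x[1])
print(f"estimated zero-inflation probability {pi_hat:.2f}, Poisson mean {lam_hat:.2f}")
```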
|
435 |
Reproductive traits and sex ratio bias in the dwarf willow Salix herbacea / Zhao, Minchun January 2024 (has links)
Sex ratio is important for the evolution of dioecious plants and can influence their reproductive success. Sex ratio bias is common among reproductive individuals, and studying reproductive traits can help identify possible mechanisms that influence the generation and maintenance of such bias. However, few studies have examined the relationship between reproductive traits and sex ratio bias. We investigated 29 full-sib families of the dwarf willow Salix herbacea L., a species that exhibits an overall female-biased sex ratio but also strong variation in sex ratio among families. We used variance component analysis to determine the morphological level at which variation in reproductive traits (cumulative catkin number over four growth periods, annual catkin number in the fourth growth period, flower number, ovule number) arises, and mixed models to test the influence of family, sex and the sex-by-family interaction on these traits. We also tested the correlation between sex ratio and reproductive traits. Our results suggest that genetic factors can influence the degree of sexual dimorphism of S. herbacea in different families. Flowers from families with higher sex ratios had more ovules, and sex ratio and ovule number co-varied across families.
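A minimal sketch of the mixed-model step is shown below using statsmodels, with family as a random intercept and sex as a fixed effect on annual catkin number; the data frame, column names and distributional choice (a Gaussian linear mixed model rather than a count model) are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-plant records: full-sib family, sex, and annual catkin number.
rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "family": rng.integers(1, 30, n).astype(str),
    "sex": rng.choice(["female", "male"], n),
    "catkins": rng.poisson(8, n).astype(float),
})

# Gaussian linear mixed model: sex as a fixed effect, family as a random intercept.
# A sex-by-family interaction could be explored via a random slope (re_formula="~sex"),
# and a count model would arguably suit catkin numbers better; this is only a sketch.
model = smf.mixedlm("catkins ~ sex", data=df, groups=df["family"])
result = model.fit()
print(result.summary())
```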
|
436 |
New Insights in Prediction and Dynamic Modeling from Non-Gaussian Mixture Processing Methods / Safont Armero, Gonzalo 29 July 2015 (has links)
[EN] This thesis considers new applications of non-Gaussian mixtures in the framework of statistical signal processing and pattern recognition. The non-Gaussian mixtures were implemented as mixtures of independent component analyzers (ICA). The fundamental hypothesis of ICA is that the observed signals can be expressed as a linear transformation of a set of hidden variables, usually referred to as sources, which are statistically independent. This independence allows the original M-dimensional probability density function (PDF) of the data to be factored as a product of one-dimensional probability densities, greatly simplifying the modeling of the data. ICA mixture models (ICAMM) provide further flexibility by relaxing the independence requirement of ICA, allowing the model to obtain local projections of the data without compromising its generalization capabilities. New possibilities of ICAMM are explored here for the estimation and classification of signals.
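The toy example below illustrates the ICA generative model x = As with two independent non-Gaussian sources and shows FastICA recovering them (up to scaling and permutation); it is a sketch of plain ICA only, not of the ICAMM extensions developed in the thesis.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy version of the ICA generative model x = A s, with two statistically
# independent, non-Gaussian sources (uniform and Laplacian).
rng = np.random.default_rng(5)
n = 5000
s = np.column_stack([rng.uniform(-1.0, 1.0, n), rng.laplace(0.0, 1.0, n)])
A = np.array([[1.0, 0.6], [0.4, 1.2]])  # unknown mixing matrix
x = s @ A.T                              # observed signals

ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)             # recovered sources, up to scale and order

# With the data expressed in terms of independent sources, the joint PDF factorises
# into a product of one-dimensional densities, one per source.
print(np.round(np.corrcoef(s_hat.T), 3))
```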
The thesis makes several contributions to the research in non-Gaussian mixtures: (i) a method for maximum-likelihood estimation of missing data, based on the maximization of the PDF of the data given the ICAMM; (ii) a method for Bayesian estimation of missing data that minimizes the mean squared error and can obtain the confidence interval of the prediction; (iii) a generalization of the sequential dependence model for ICAMM to semi-supervised or supervised learning and multiple chains of dependence, thus allowing the use of multimodal data; and (iv) introduction of ICAMM in diverse novel applications, both for estimation and for classification.
The developed methods were validated via an extensive number of simulations that covered multiple scenarios. These tested the sensitivity of the proposed methods with respect to the following parameters: number of values to estimate; kinds of source distributions; correspondence of the data with respect to the assumptions of the model; number of classes in the mixture model; and unsupervised, semi-supervised, and supervised learning. The performance of the proposed methods was evaluated using several figures of merit, and compared with the performance of multiple classical and state-of-the-art techniques for estimation and classification.
Aside from the simulations, the methods were also tested on several sets of real data from different types: data from seismic exploration studies; ground penetrating radar surveys; and biomedical data. These data correspond to the following applications: reconstruction of damaged or missing data from ground-penetrating radar surveys of historical walls; reconstruction of damaged or missing data from a seismic exploration survey; reconstruction of artifacted or missing electroencephalographic (EEG) data; diagnosis of sleep disorders; modeling of the brain response during memory tasks; and exploration of EEG data from subjects performing a battery of neuropsychological tests. The obtained results demonstrate the capability of the proposed methods to work on problems with real data. Furthermore, the proposed methods are general-purpose and can be used in many signal processing fields.
Safont Armero, G. (2015). New Insights in Prediction and Dynamic Modeling from Non-Gaussian Mixture Processing Methods [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53913
|
437 |
An integrated approach to the taxonomic identification of prehistoric shell ornaments / Demarchi, B., O'Connor, Sonia A., de Lima Ponzoni, A., de Almeida Rocha Ponzoni, R., Sheridan, A., Penkman, K.E.H., Hancock, Y., Wilson, J. 17 May 2014 (has links)
Yes / Shell beads appear to have been one of the earliest examples of personal adornments. Marine shells identified far from the shore evidence long-distance transport and imply networks of exchange and negotiation. However, worked beads lose taxonomic clues to identification, and this may be compounded by taphonomic alteration. Consequently, the significance of this key early artefact may be underestimated. We report the use of bulk amino acid composition of the stable intra-crystalline proteins preserved in shell biominerals and the application of pattern recognition methods to a large dataset (777 samples) to demonstrate that taxonomic identification can be achieved at genus level. Amino acid analyses are fast (<2 hours per sample) and micro-destructive (sample size <2 mg). Their integration with non-destructive techniques provides a valuable and affordable tool, which can be used by archaeologists and museum curators to gain insight into early exploitation of natural resources by humans. Here we combine amino acid analyses, macro- and microstructural observations (by light microscopy and scanning electron microscopy) and Raman spectroscopy to try to identify the raw material used for beads discovered at the Early Bronze Age site of Great Cornard (UK). Our results show that at least two shell taxa were used and we hypothesise that these were sourced locally.
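As an illustration of the pattern-recognition step, the sketch below trains a linear discriminant classifier on hypothetical bulk amino acid compositions labelled by genus and reports cross-validated accuracy; the genus labels, composition profiles and sample sizes are invented for the example and do not come from the 777-sample dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical training set: bulk amino acid compositions (fractions summing to 1)
# for shells of known genus; names and profiles are invented for the illustration.
rng = np.random.default_rng(6)
n_per_genus, n_amino_acids = 100, 8
genera = ["GenusA", "GenusB", "GenusC"]
X_parts, y_parts = [], []
for genus in genera:
    alpha = rng.uniform(1.0, 8.0, n_amino_acids)      # genus-specific composition profile
    X_parts.append(rng.dirichlet(alpha, n_per_genus))
    y_parts.append([genus] * n_per_genus)
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# Linear discriminant analysis as one simple pattern-recognition option for assigning
# beads of unknown taxon to a genus from composition alone.
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```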
|
438 |
Gaps in Propolis Research: Challenges Posed to Commercialisation and the Need for an Holistic Approach / Katekhaye, S., Fearnley, H., Fearnley, J., Paradkar, Anant R 30 May 2019 (has links)
Yes / Both the season and the region in which propolis is collected influence its chemical composition, resulting in variations in biological activity. Significant differences in the composition and concentration of certain chemical compounds in propolis make standardisation and quality control challenging. In addition, the lack of uniformity in evaluation methodology and analytical techniques makes it extremely difficult to correlate data across climatic zones. In this report, we focus on the gaps in propolis research and the challenges they pose for commercialisation, with suggestions as to how we might address them. We hope to stimulate further research that explores the holistic nature of propolis in order to derive a propolis bioactivity standard.
|
439 |
Development and Application of Novel Computer Vision and Machine Learning Techniques / Depoian, Arthur Charles, II 08 1900 (has links)
The following thesis proposes solutions to problems in two main areas of focus: computer vision and machine learning. Chapter 2 utilizes traditional computer vision methods, implemented in a novel manner, to successfully identify overlays contained in broadcast footage. The remaining chapters explore machine learning algorithms and apply them in various ways to big data, multi-channel image data, and ECG data. L1 and L2 principal component analysis (PCA) algorithms are implemented and tested against each other in Python, providing a metric for future implementations. Selected algorithms from this set are then applied in conjunction with other methods to solve three distinct problems. The first problem is big data error detection, where PCA is effectively paired with statistical signal processing methods to create a weighted controlled algorithm. The second is an implementation of image fusion built to detect and remove noise from multispectral satellite imagery, which performs at a high level. The final problem examines ECG medical data classification: PCA is integrated into a neural network solution that achieves only a small performance degradation while requiring less than 20% of the full data size.
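The sketch below illustrates the spirit of the PCA-based data reduction in the ECG problem: components are retained up to a target explained-variance fraction and the reduced features would then feed a classifier; the beat-segment data, dimensions and 95% threshold are assumptions, not the thesis configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical ECG feature matrix: one row per heartbeat segment, 250 samples each.
# The stand-in data are given low-rank structure so a few components dominate.
rng = np.random.default_rng(7)
beats = (rng.normal(size=(2000, 12)) @ rng.normal(size=(12, 250))
         + 0.1 * rng.normal(size=(2000, 250)))

# Retain enough principal components to explain 95% of the variance; the reduced
# representation would then feed a neural network classifier.
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(beats)
print(f"retained {reduced.shape[1]} of 250 dimensions "
      f"({reduced.shape[1] / 250:.0%} of the original feature length)")
```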
|
440 |
Multiscale process monitoring with singular spectrum analysis / Krishnannair, Syamala 12 1900 (has links)
Thesis (MScEng (Process Engineering))--University of Stellenbosch, 2010. / Thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Engineering (Extractive Metallurgy) in the Department of Process Engineering at the University of Stellenbosch / ENGLISH ABSTRACT: Multivariate statistical process control (MSPC) approaches are now widely used for performance monitoring, fault detection and diagnosis in chemical processes. Conventional MSPC approaches are based on latent variable projection methods such as principal component analysis and partial least squares. These methods are suitable for handling linearly correlated data sets, with minimal autocorrelation in the variables. Industrial plant data invariably violate these conditions, and several extensions to conventional MSPC methodologies have been proposed to account for these limitations.
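For context, a minimal sketch of conventional PCA-based multivariate monitoring is given below, computing Hotelling's T² in the retained subspace and the squared prediction error (SPE) in the residual space for new samples; the process data, number of retained components and fault magnitude are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical in-control training data: 500 samples of 10 correlated process variables.
rng = np.random.default_rng(8)
X_train = (rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10))
           + 0.2 * rng.normal(size=(500, 10)))

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=3).fit(scaler.transform(X_train))

def monitoring_stats(x_new):
    """Hotelling's T^2 in the retained subspace and SPE (Q) in the residual space."""
    z = scaler.transform(x_new)
    t = pca.transform(z)
    t2 = np.sum(t ** 2 / pca.explained_variance_, axis=1)
    spe = np.sum((z - pca.inverse_transform(t)) ** 2, axis=1)
    return t2, spe

# A sample with a shifted variable should push T^2 and/or SPE above control limits
# (limits would normally be derived from the in-control training distribution).
x_fault = X_train[:1].copy()
x_fault[0, 4] += 5.0
print(monitoring_stats(x_fault))
```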
In practical situations process data usually contain contributions at multiple scales because of different events occurring at different localizations in time and frequency. To account for such multiscale nature, monitoring techniques that decompose observed data at different scales are necessary. Hence the use of standard MSPC methodologies may lead to unreliable results due to false alarms and significant loss of information.
In this thesis a multiscale methodology based on singular spectrum analysis is proposed. Singular spectrum analysis (SSA) is a linear method that extracts information from short, noisy time series by decomposing the data into deterministic and stochastic components without prior knowledge of the dynamics affecting the time series. These components can be classified as independent additive time series: a slowly varying trend, periodic series and aperiodic noise. SSA performs this decomposition by projecting the original time series onto a data-adaptive vector basis obtained from the series itself, based on principal component analysis (PCA).
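A compact sketch of the basic SSA algorithm described above (embedding into a trajectory matrix, singular value decomposition, and diagonal averaging back to elementary time series) is given below; the window length and the synthetic trend-plus-periodic series are illustrative assumptions.

```python
import numpy as np

def ssa_decompose(series, window):
    """Basic SSA: embed into a trajectory matrix, SVD, diagonal averaging.

    Returns one elementary reconstructed series per singular value; these can be
    grouped into trend, periodic and noise components and they sum back to the input.
    """
    n = len(series)
    k = n - window + 1
    trajectory = np.column_stack([series[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(trajectory, full_matrices=False)
    components = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])  # rank-one piece of the trajectory matrix
        # Diagonal averaging (Hankelisation) maps the matrix back to a time series.
        rec = np.array([elem[::-1].diagonal(j).mean() for j in range(-window + 1, k)])
        components.append(rec)
    return np.array(components)

# Hypothetical process variable: slow trend + periodic part + noise.
rng = np.random.default_rng(9)
t = np.linspace(0.0, 10.0, 500)
x = 0.3 * t + np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.normal(size=t.size)
comps = ssa_decompose(x, window=50)
print(comps.shape)  # (50, 500): elementary components that sum back to x
```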
The proposed method treats each process variable as a time series, and the autocorrelation between the variables is explicitly accounted for. The data-adaptive nature of SSA makes the proposed method more flexible than other spectral techniques that use fixed basis functions. Application of the proposed technique is demonstrated on simulated data, industrial data and the Tennessee Eastman Challenge process, and a comparative analysis is given for the simulated and Tennessee Eastman processes. It is found that, in most cases, the proposed method detects process changes and faults of different magnitudes more accurately than both classical statistical process control (SPC) based on latent variable methods and wavelet-based multiscale SPC.
|