101

A search for solar dark matter with the IceCube neutrino detector : Advances in data treatment and analysis technique

Zoll, Marcel Christian Robert January 2016 (has links)
There is compelling observational evidence for the existence of dark matter in the Universe, including our own Galaxy, which could possibly consist of weakly interacting massive particles (WIMPs) not contained in the standard model (SM) of particle physics. WIMPs may become gravitationally trapped inside heavy celestial bodies of ordinary matter. The Sun is a nearby candidate for such a capture process, which is driven by the scattering of WIMPs on its nuclei. Forming an over-density at the Sun's core, the WIMPs would self-annihilate, yielding energetic neutrinos which leave the Sun and can be detected in experiments on Earth. The cubic-kilometer-sized IceCube neutrino observatory, constructed in the clear glacial ice at the Amundsen-Scott South Pole Station in Antarctica, offers an excellent opportunity to search for this striking signal. This thesis is dedicated to the search for these solar dark matter signatures in muon neutrinos from the direction of the Sun. Newly developed techniques based on hit clustering and hit-based vetoes allow more accurate reconstruction and identification of events in the detector and thereby a stronger rejection of background. These techniques are also applicable to other IceCube analyses and event filters. In addition, new approaches to the analysis without seasonal cuts lead to improvements in sensitivity, especially in the low-energy regime (<=100 GeV), the target of the more densely instrumented DeepCore sub-array. This first analysis of 369 days of data recorded with the completed detector array of 86 strings revealed no significant excess above the expected background of atmospheric neutrinos. This allows us to set strong limits on the annihilation rate of WIMPs in the Sun for the models probed in this analysis. The IceCube limits for the spin-independent WIMP-proton scattering cross-section are the most stringent ones for WIMP masses above 100 GeV. / IceCube
102

Identifying Profiles of Resilience among a High-Risk Adolescent Population

Wright, Anna W 01 January 2016 (has links)
The purpose of the present study was to determine whether distinct patterns of adolescent adjustment existed when four domains of functioning were considered. The study included a sample of 299 high-risk urban adolescents, predominantly African American, ages 9-16 and their maternal caregivers. Cluster analysis was used to identify patterns of adjustment. Logistic regression analyses were used to explore whether variations in levels of five theoretically and empirically supported protective factors predicted cluster membership. A four-cluster model was determined to best fit the data. Higher rates of goal directedness and anger regulation coping predicted membership within the highest functioning cluster over a cluster demonstrating high externalizing problem behaviors, and neighborhood cohesion predicted highest functioning cluster membership over a cluster demonstrating high internalizing symptoms. Findings suggest that within a high-risk population of adolescents, significant variability in functioning will exist. The presence or absence of specific protective factors predicts developmental outcomes.
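The abstract above pairs a cluster analysis of adjustment domains with logistic regressions in which protective factors predict cluster membership. As a purely illustrative sketch (the study's clustering method, variables, and data are not reproduced here), the following Python snippet runs k-means with four clusters on synthetic adjustment scores and then fits a logistic regression of cluster membership on five synthetic protective factors; all variable names are hypothetical.

```python
# Illustrative sketch: derive four adjustment profiles via k-means, then
# test whether protective factors predict profile membership. Data are
# synthetic; the "domains" and "factors" are hypothetical stand-ins.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 299  # sample size reported in the abstract

# Four adjustment domains (hypothetical), e.g. internalizing,
# externalizing, academic functioning, social competence.
adjustment = rng.normal(size=(n, 4))

# Five protective factors (hypothetical), e.g. goal directedness,
# anger regulation coping, neighborhood cohesion, caregiver support,
# school engagement.
protective = rng.normal(size=(n, 5))

# Step 1: standardize and cluster into four adjustment profiles.
z = StandardScaler().fit_transform(adjustment)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

# Step 2: do protective factors predict profile membership?
model = LogisticRegression(max_iter=1000).fit(protective, clusters)
print("per-cluster coefficients:\n", model.coef_)
```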
103

Zhluková analýza dynamických dát / Clustering of dynamic data

Marko, Michal January 2011 (has links)
Title: Cluster analysis of dynamic data Author: Bc. Michal Marko Department: Department of Software and Computer Science Education Supervisor: RNDr. František Mráz, CSc. Supervisor's e-mail address: Frantisek.Mraz@mff.cuni.cz Abstract: The main goal of this thesis is to choose, or eventually to propose our own modifications of, some of the cluster analysis methods in order to observe the evolution of dynamic data and its clusters. The chosen methods are applied to real data. Dynamic data denotes a series of information created periodically over time, describing the same characteristics of a given set of data objects. When applied to such data, the problem with classic clustering algorithms is the lack of coherence between the results for particular data sets in the series, which can be illustrated by applying them to our artificial data. We discuss the idea of the proposed modifications and compare the progress of the methods based on them. In order to be able to use our modified methods on real data, we examine their applicability to multidimensional artificial data. Due to the complications caused by multidimensional space, we develop our own validation criterion. Once the methods are approved for use in such a space, we apply our modified methods to the real data, followed by the visualization and...
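The coherence problem described in this abstract, successive clusterings of periodic snapshots that do not line up with one another, can be illustrated with a small sketch. The snippet below is not the thesis's proposed modification; it shows one common workaround, warm-starting k-means on each snapshot with the previous snapshot's centroids so that cluster identities persist over time. Data and parameters are synthetic.

```python
# Sketch: keep cluster labels coherent across a series of snapshots by
# seeding each k-means run with the previous snapshot's centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
k = 3

# Synthetic "dynamic data": 5 snapshots of 200 points whose cluster
# centres drift slowly over time.
base = rng.normal(scale=5.0, size=(k, 2))
snapshots = []
for t in range(5):
    centres = base + 0.3 * t            # slow drift of the true centres
    labels = rng.integers(0, k, size=200)
    snapshots.append(centres[labels] + rng.normal(scale=0.8, size=(200, 2)))

prev_centroids = None
for t, X in enumerate(snapshots):
    if prev_centroids is None:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    else:
        # Warm start: cluster j at time t stays aligned with cluster j
        # at time t-1 instead of being arbitrarily relabelled.
        km = KMeans(n_clusters=k, init=prev_centroids, n_init=1).fit(X)
    prev_centroids = km.cluster_centers_
    print(f"t={t}: centroids\n{np.round(prev_centroids, 2)}")
```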
104

Seleção de grupos a partir de hierarquias: uma modelagem baseada em grafos / Clusters selection from hierarchies: a graph-based model

Anjos, Francisco de Assis Rodrigues dos 28 June 2018 (has links)
A análise de agrupamento de dados é uma tarefa fundamental em mineração de dados e aprendizagem de máquina. Ela tem por objetivo encontrar um conjunto finito de categorias que evidencie as relações entre os objetos (registros, instâncias, observações, exemplos) de um conjunto de dados de interesse. Os algoritmos de agrupamento podem ser divididos em particionais e hierárquicos. Uma das vantagens dos algoritmos hierárquicos é conseguir representar agrupamentos em diferentes níveis de granularidade e ainda serem capazes de produzir partições planas como aquelas produzidas pelos algoritmos particionais, mas para isso é necessário que seja realizado um corte (por exemplo horizontal) sobre o dendrograma ou hierarquia dos grupos. A escolha de como realizar esse corte é um problema clássico que vem sendo investigado há décadas. Mais recentemente, este problema tem ganho especial importância no contexto de algoritmos hierárquicos baseados em densidade, pois somente estratégias mais sofisticadas de corte, em particular cortes não-horizontais denominados cortes locais (ao invés de globais) conseguem selecionar grupos de densidades diferentes para compor a solução final. Entre as principais vantagens dos algoritmos baseados em densidade está sua robustez à interferência de dados anômalos, que são detectados e deixados de fora da partição final, rotulados como ruído, além da capacidade de detectar clusters de formas arbitrárias. O objetivo deste trabalho foi adaptar uma variante da medida da Modularidade, utilizada amplamente na área de detecção de comunidades em redes complexas, para que esta possa ser aplicada ao problema de corte local de hierarquias de agrupamento. Os resultados obtidos mostraram que essa adaptação da modularidade pode ser uma alternativa competitiva para a medida de estabilidade utilizada originalmente pelo algoritmo estado-da-arte em agrupamento de dados baseado em densidade, HDBSCAN*. / Cluster Analysis is a fundamental task in Data Mining and Machine Learning. It aims to find a finite set of categories that evidences the relationships between the objects (records, instances, observations, examples) of a data set of interest. Clustering algorithms can be divided into partitional and hierarchical. One of the advantages of hierarchical algorithms is to be able to represent clusters at different levels of granularity while being able to produce flat partitions like those produced by partitional algorithms. To achieve this, it is necessary to perform a cut (for example horizontal) through the dendrogram or cluster tree. How to perform this cut is a classic problem that has been investigated for decades. More recently, this problem has gained special importance in the context of density-based hierarchical algorithms, since only more sophisticated cutting strategies, in particular nonhorizontal cuts (instead of global ones) are able to select clusters with different densities to compose the final solution. Among the main advantages of density-based algorithms is their robustness to noise and their capability to detect clusters of arbitrary shape. The objective of this work was to adapt a variant of the Q Modularity measure, widely used in the realm of community detection in complex networks, so that it can be applied to the problem of local cuts through cluster hierarchies. The results show that the proposed measure can be a competitive alternative to the stability measure, originally used by the state-of-the-art density-based clustering algorithm HDBSCAN*.
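For context on the baseline mentioned above, the sketch below runs HDBSCAN*, whose default extraction performs a local (non-horizontal) cut of the density-based hierarchy by maximizing cluster stability; the modularity-based selection proposed in the thesis would replace that stability score and is not shown here. The sketch assumes scikit-learn 1.3 or later, which bundles an HDBSCAN implementation, and uses a synthetic data set with clusters of different densities.

```python
# Baseline sketch: HDBSCAN* builds a density-based hierarchy and extracts
# a flat partition with a local cut (stability-based by default), which
# lets it recover clusters of different densities and label outliers as
# noise (-1).
import numpy as np
from sklearn.cluster import HDBSCAN
from sklearn.datasets import make_blobs

# Two dense blobs plus one sparse blob: a single horizontal dendrogram
# cut tends to miss clusters with such different densities.
X, _ = make_blobs(n_samples=[200, 200, 60],
                  centers=[(0, 0), (6, 0), (3, 8)],
                  cluster_std=[0.4, 0.4, 2.0],
                  random_state=0)

labels = HDBSCAN(min_cluster_size=15).fit_predict(X)
print("clusters found:", sorted(set(labels) - {-1}),
      "| noise points:", int(np.sum(labels == -1)))
```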
105

Image retrieval using visual attention

Unknown Date (has links) (PDF)
The retrieval of digital images is hindered by the semantic gap. The semantic gap is the disparity between a user's high-level interpretation of an image and the information that can be extracted from an image's physical properties. Content based image retrieval systems are particularly vulnerable to the semantic gap due to their reliance on low-level visual features for describing image content. The semantic gap can be narrowed by including high-level, user-generated information. High-level descriptions of images are more capable of capturing the semantic meaning of image content, but it is not always practical to collect this information. Thus, both content-based and human-generated information is considered in this work. A content-based method of retrieving images using a computational model of visual attention was proposed, implemented, and evaluated. This work is based on a study of contemporary research in the field of vision science, particularly computational models of bottom-up visual attention. The use of computational models of visual attention to detect salient by design regions of interest in images is investigated. The method is then refined to detect objects of interest in broad image databases that are not necessarily salient by design. An interface for image retrieval, organization, and annotation that is compatible with the attention-based retrieval method has also been implemented. It incorporates the ability to simultaneously execute querying by image content, keyword, and collaborative filtering. The user is central to the design and evaluation of the system. A game was developed to evaluate the entire system, which includes the user, the user interface, and retrieval methods. / by Liam M. Mayron. / Thesis (Ph.D.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, FL : 2008 Mode of access: World Wide Web.
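As a toy illustration of attention-weighted retrieval in the spirit of the work above (not its actual model), the sketch below describes each image with a colour histogram whose pixel contributions are weighted by a crude global-contrast "saliency" map, then ranks database images by histogram intersection with the query. The saliency proxy and data are entirely synthetic stand-ins for the bottom-up attention models the dissertation studies.

```python
# Toy sketch: saliency-weighted colour histograms as image descriptors,
# ranked by histogram intersection. The saliency map is a crude
# global-contrast proxy, not a real visual-attention model.
import numpy as np

def saliency(img):
    # Distance of each pixel from the image's mean colour.
    return np.linalg.norm(img - img.reshape(-1, 3).mean(axis=0), axis=-1)

def descriptor(img, bins=8):
    # Saliency-weighted joint colour histogram, L1-normalised.
    w = saliency(img).ravel()
    idx = (img.reshape(-1, 3) * bins / 256).astype(int).clip(0, bins - 1)
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return hist.ravel() / hist.sum()

def similarity(h1, h2):
    # Histogram intersection: 1.0 means identical descriptors.
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(2)
database = [rng.integers(0, 256, size=(64, 64, 3)).astype(float)
            for _ in range(10)]
query = database[3] + rng.normal(scale=5, size=(64, 64, 3))

q = descriptor(query)
ranked = sorted(range(len(database)),
                key=lambda i: similarity(q, descriptor(database[i])),
                reverse=True)
print("best match:", ranked[0])  # expected to be image 3 on this toy data
```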
106

Uso do teste de Scott-Knott e da análise de agrupamentos, na obtenção de grupos de locais para experimentos com cana-de-açúcar / Scott-Knott test and cluster analysis use in the obtainment of placement groups for sugar cane experiments

Silva, Cristiane Mariana Rodrigues da 15 February 2008 (has links)
O Centro de Tecnologia Canavieira (CTC), situado na cidade de Piracicaba, é uma associação civil de direito privado, criada em agosto de 2004, com o objetivo de realizar pesquisa e desenvolvimento em novas tecnologias para aplicação nas atividades agrícolas, logísticas e industriais dos setores canavieiro e sucroalcooleiro e desenvolver novas variedades de cana-de-açúcar. Há 30 anos, são feitos experimentos, principalmente no estado de São Paulo onde se concentra a maior parte dessas unidades produtoras associadas. No ano de 2004 foram instalados ensaios em 11 destas Unidades Experimentais dentro do estado de São Paulo, e há a necessidade de se saber se é possível a redução deste número, visando aos aspectos econômicos. Se se detectarem grupos de Unidades com dados muito similares, pode-se reduzir o número destas, reduzindo-se, conseqüentemente, o custo dessas pesquisas, e é através do teste estatístico de Scott-Knott e da Análise de Agrupamento, que essa similaridade será comprovada. Este trabalho tem por objetivo, aplicar as técnicas da Análise de Agrupamento (\"Cluster Analisys\") e o teste de Scott-Knott na identificação da existência de grupos de Unidades Industriais, visando à diminuição do número de experimentos do Centro de Tecnologia Canavieira (CTC) e, por conseguinte, visando ao menor custo operacional. Os métodos de comparação múltipla baseados em análise de agrupamento univariada, têm por objetivo separar as médias de tratamentos que, para esse estudo foram médias de locais, em grupos homogêneos, pela minimização da variação dentro, e maximização entre grupos e um desses procedimentos é o teste de Scott-Knott. A análise de agrupamento permite classificar indivíduos ou objetos em subgrupos excludentes, em que se pretende, de uma forma geral, maximizar a homogeneidade de objetos ou indivíduos dentro de grupos e maximizar a heterogeneidade entre os grupos, sendo que a representação desses grupos é feita num gráfico com uma estrutura de árvore denominado dendrograma. O teste de Scott- Knott, é um teste para Análise Univariada, portanto, mais indicado quando se tem apenas uma variável em estudo, sendo que a variável usada foi TPH5C, por se tratar de uma variável calculada a partir das variáveis POL, TCH e FIB. A Análise de Agrupamento, através do Método de Ligação das Médias, mostrou-se mais confiável, pois possuía-se, nesse estudo, três variáveis para análise, que foram: TCH (tonelada de cana por hectare), POL (porcentagem de açúcar), e FIB (porcentagem de fibra). Comparando-se o teste de Scott-Knott com a Análise de Agrupamentos, confirmam-se os agrupamentos entre os locais L020 e L076 e os locais L045 e L006. Conclui-se, portanto, que podem ser eliminadas dos experimentos duas unidades experimentais, optando por L020 (Ribeirão Preto) ou L076 (Assis), e L045 (Ribeirão Preto) ou L006 (Região de Jaú), ficando essa escolha, a critério do pesquisador, podendo assim, reduzir seu custo operacional. / The Centre of Sugar Cane Technology (CTC), placed at the city of Piracicaba, is a private right civilian association, created in August of 2004, aiming to research and develop new technologies with application in agricultural and logistic activities, as well as industrial activities related to sugar and alcohol sectors, such as the development of new sugar cane varieties. Experiments have been made for 30 years, mainly at the state of São Paulo, where most of the associated unities of production are located. 
In 2004, trials were installed at 11 of these Experimental Units within the state of São Paulo, and there is a need to know whether this number can be reduced for economic reasons. If groups of Units with very similar data were detected, some of these Units could be eliminated, consequently reducing research costs, and it is through the Scott-Knott statistical test and Cluster Analysis that this similarity may be corroborated. This work aims to apply Cluster Analysis techniques and the Scott-Knott test to identify groups of Industrial Units, with a view to reducing the number of CTC experiments and, consequently, the operational cost. Multiple-comparison methods based on univariate cluster analysis aim to split treatment means (here, the means of the experimental sites) into homogeneous groups by minimizing the variation within groups and maximizing the variation among groups; one of these methods is the Scott-Knott test. Cluster analysis allows individuals or objects to be classified into mutually exclusive groups; again, the idea is to maximize the homogeneity of objects or individuals within groups and to maximize the heterogeneity among groups, and these groups are represented in a tree-structured graphic called a dendrogram. The Scott-Knott test is a univariate analysis test and is therefore most appropriate when only one variable is under study; the variable used was TPH5C, since it is calculated from the variables POL, TCH and FIB. Cluster Analysis, through the average-linkage (linkage of means) method, proved to be more reliable, since in this case there were three variables of interest: TCH (weight, in tons, of sugar cane per hectare), POL (percentage of sugar) and FIB (percentage of fiber). Comparing the Scott-Knott test with Cluster Analysis confirms two groupings: sites L020 and L076, and sites L045 and L006. It is therefore concluded that two experimental units may be removed, choosing either L020 (Ribeirão Preto) or L076 (Assis), and either L045 (Ribeirão Preto) or L006 (Região de Jaú); this choice lies with the researcher and can reduce the operational cost. Keywords: Cluster Analysis; Sugar Cane
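To make the cluster-analysis side of the comparison concrete, the following sketch applies average-linkage hierarchical clustering to hypothetical site means of the three traits mentioned in the abstract (TCH, POL, FIB) and cuts the resulting tree into groups; the Scott-Knott test itself is not available in SciPy and is omitted. Site codes other than those named above, and all numeric values, are invented for illustration.

```python
# Sketch: average-linkage (UPGMA) hierarchical clustering of site means
# on standardized TCH / POL / FIB values, then a flat cut into groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

sites = ["L006", "L020", "L045", "L076", "L101", "L115"]  # last two hypothetical
# Hypothetical site means: [TCH (t/ha), POL (%), FIB (%)].
means = np.array([
    [92.0, 14.8, 11.2],
    [88.5, 15.1, 11.0],
    [91.4, 14.9, 11.3],
    [88.9, 15.0, 11.1],
    [75.2, 13.6, 12.4],
    [103.7, 16.2, 10.5],
])

# Standardise traits so TCH does not dominate the Euclidean distances.
z = (means - means.mean(axis=0)) / means.std(axis=0)
Z = linkage(pdist(z), method="average")

# Cut into groups; sites falling in the same group are candidates for
# dropping one of them from future trials.
groups = fcluster(Z, t=3, criterion="maxclust")
for site, g in zip(sites, groups):
    print(site, "-> group", g)
```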
107

Bayesian decision theoretical framework for clustering. / CUHK electronic theses & dissertations collection

January 2011 (has links)
From the Bayesian decision theoretical view, we propose several extensions of currently popular graph-based methods. Several data-dependent graph construction approaches are proposed by adopting more flexible density estimators. The advantage of these approaches is that the parameters for constructing the graph can be estimated from the data. The constructed graph reflects the intrinsic distribution of the data. As a result, the algorithm is more robust and obtains good performance consistently across different data sets. Using the flexible density models can result in directed graphs, which cannot be handled by traditional graph partitioning algorithms. To tackle this problem, we propose general algorithms for graph partitioning which can deal with both undirected and directed graphs in a unified way. / In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view addresses the important questions of what a cluster is and what a clustering algorithm should optimize. / We prove that the spectral clustering algorithm (to be specific, the normalized cut) can be derived from this framework. In particular, it can be shown that the normalized cut is a nonparametric clustering method which adopts a kernel density estimator as its density model and tries to minimize the expected classification error, or Bayes risk. / Chen, Mo. / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 73-06, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 96-104). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
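A minimal sketch of the normalized cut discussed above follows: an RBF (Gaussian-kernel) affinity graph, the same kernel a nonparametric density estimator would use, which is the connection the thesis draws, is partitioned by thresholding the second eigenvector of the symmetric normalized Laplacian. This is the standard two-way spectral relaxation on synthetic data, not the thesis's own derivation or code.

```python
# Two-way normalized-cut sketch on synthetic data: RBF affinities,
# symmetric normalized Laplacian, and a sign cut on its second
# eigenvector (the relaxed normalized-cut solution).
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal((0, 0), 0.5, size=(50, 2)),
               rng.normal((4, 0), 0.5, size=(50, 2))])

# RBF (Gaussian-kernel) affinities between all pairs of points.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / (2 * 1.0 ** 2))
np.fill_diagonal(W, 0.0)

# Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

# Eigenvector of the second-smallest eigenvalue; thresholding its sign
# gives the 2-way partition (a heuristic rounding of the relaxation).
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
print("cluster sizes:", np.bincount(labels))
```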
108

study of the generalized spin-boson model =: 廣義自旋--玻色子模型硏究. / 廣義自旋--玻色子模型硏究 / A study of the generalized spin-boson model =: Guang yi zi xuan--bo se zi mo xing yan jiu. / Guang yi zi xuan--bo se zi mo xing yan jiu

January 1999 (has links)
Yung Lit Hung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves p. [122]-124). / Text in English; abstracts in English and Chinese. / Yung Lit Hung. / Abstract --- p.i / Acknowledgements --- p.ii / List of Figures --- p.v / List of Tables --- p.vii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Dissipative two-state system --- p.3 / Chapter 2.1 --- Introduction --- p.3 / Chapter 2.2 --- A two-state system viewed as a spin --- p.4 / Chapter 2.3 --- Rotation of spin operators --- p.5 / Chapter 2.4 --- Dissipative two state system --- p.7 / Chapter 2.5 --- The model in consideration --- p.8 / Chapter 2.5.1 --- gk= 0 --- p.8 / Chapter 2.5.2 --- Δ0 = 0 --- p.8 / Chapter 2.5.3 --- dispersionless phonon case with constant coupling --- p.10 / Chapter 3 --- Linearized spin-wave approximation and mean-field method --- p.13 / Chapter 3.1 --- Holstein Primakoff Transformation --- p.13 / Chapter 3.2 --- Application of linearized spin-wave approxmation to our system --- p.14 / Chapter 3.3 --- Mean-field method --- p.24 / Chapter 4 --- Variational method for optical phonons with constant coupling --- p.35 / Chapter 4.1 --- Introduction --- p.35 / Chapter 4.2 --- Variational Principle --- p.35 / Chapter 4.3 --- Variational Principle applied to optical phonon case --- p.36 / Chapter 4.4 --- Results --- p.41 / Chapter 4.5 --- Conclusion --- p.54 / Chapter 5 --- Variational method for acoustic phonons with ohmic dissipation --- p.56 / Chapter 5.1 --- Introduction --- p.56 / Chapter 5.2 --- Variational Principle applied to acoustic phonon case --- p.57 / Chapter 5.3 --- μk= 0 case --- p.59 / Chapter 5.4 --- Search for any μk≠ 0 solution --- p.60 / Chapter 5.5 --- Results --- p.62 / Chapter 5.6 --- Conclusion --- p.70 / Chapter 6 --- Coupled Cluster Method --- p.72 / Chapter 6.1 --- Introduction --- p.72 / Chapter 6.2 --- Coupled Cluster Method --- p.73 / Chapter 6.2.1 --- Zeroth Level --- p.74 / Chapter 6.2.2 --- First Level --- p.74 / Chapter 6.2.3 --- The bra-state --- p.75 / Chapter 6.3 --- Coupled cluster method applied to our system --- p.76 / Chapter 6.4 --- Coupled cluster method applied to optical phonon case --- p.78 / Chapter 6.4.1 --- First Level --- p.79 / Chapter 6.4.2 --- Second Level --- p.81 / Chapter 6.5 --- Coupled cluster method applied to acoustic phonon case --- p.90 / Chapter 6.5.1 --- First Level --- p.90 / Chapter 6.5.2 --- Second Level --- p.92 / Chapter 6.6 --- Conclusion --- p.98 / Chapter 7 --- Spin system interacting with a photon field --- p.99 / Chapter 7.1 --- Rotation wave approximation --- p.100 / Chapter 7.2 --- Spin system interacting with an optical field --- p.101 / Chapter 7.3 --- Heisenberg equation of motion --- p.102 / Chapter 7.4 --- Brogoliubov transformation approach --- p.104 / Chapter 7.5 --- Conclusion --- p.106 / Chapter A --- Supplementary calculations --- p.107 / Chapter A.1 --- First level calculation for optical photon --- p.107 / Chapter A.2 --- Second level calculation for optical photon --- p.111 / Chapter A.3 --- First level calculation for acoustic photon --- p.114 / Chapter A.4 --- Second level calculation for acoustic photon --- p.118 / Bibliography --- p.121
109

A generic Chinese PAT tree data structure for Chinese documents clustering.

January 2002 (has links)
Kwok Chi Leong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 122-127). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgment --- p.vi / Table of Contents --- p.vii / List of Tables --- p.x / List of Figures --- p.xi / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Contributions --- p.2 / Chapter 1.2 --- Thesis Overview --- p.3 / Chapter Chapter 2 --- Background Information --- p.5 / Chapter 2.1 --- Documents Clustering --- p.5 / Chapter 2.1.1 --- Review of Clustering Techniques --- p.5 / Chapter 2.1.2 --- Suffix Tree Clustering --- p.7 / Chapter 2.2 --- Chinese Information Processing --- p.8 / Chapter 2.2.1 --- Sentence Segmentation --- p.8 / Chapter 2.2.2 --- Keyword Extraction --- p.10 / Chapter Chapter 3 --- The Generic Chinese PAT Tree --- p.12 / Chapter 3.1 --- PAT Tree --- p.13 / Chapter 3.1.1 --- Patricia Tree --- p.13 / Chapter 3.1.2 --- Semi-Infinite String --- p.14 / Chapter 3.1.3 --- Structure of Tree Nodes --- p.17 / Chapter 3.1.4 --- Some Examples of PAT Tree --- p.22 / Chapter 3.1.5 --- Storage Complexity --- p.24 / Chapter 3.2 --- The Chinese PAT Tree --- p.26 / Chapter 3.2.1 --- The Chinese PAT Tree Structure --- p.26 / Chapter 3.2.2 --- Some Examples of Chinese PAT Tree --- p.30 / Chapter 3.2.3 --- Storage Complexity --- p.33 / Chapter 3.3 --- The Generic Chinese PAT Tree --- p.34 / Chapter 3.3.1 --- Structure Overview --- p.34 / Chapter 3.3.2 --- Structure of Tree Nodes --- p.35 / Chapter 3.3.3 --- Essential Node --- p.37 / Chapter 3.3.4 --- Some Examples of the Generic Chinese PAT Tree --- p.41 / Chapter 3.3.5 --- Storage Complexity --- p.45 / Chapter 3.4 --- Problems of Embedded Nodes --- p.46 / Chapter 3.4.1 --- The Reduced Structure --- p.47 / Chapter 3.4.2 --- Disadvantages of Reduced Structure --- p.48 / Chapter 3.4.3 --- A Case Study of Reduced Design --- p.50 / Chapter 3.4.4 --- Experiments on Frequency Mismatch --- p.51 / Chapter 3.5 --- Strengths of the Generic Chinese PAT Tree --- p.55 / Chapter Chapter 4 --- Performance Analysis on the Generic Chinese PAT Tree --- p.58 / Chapter 4.1 --- The Construction of the Generic Chinese PAT Tree --- p.59 / Chapter 4.2 --- Counting the Essential Nodes --- p.61 / Chapter 4.3 --- Performance of Various PAT Trees --- p.62 / Chapter 4.4 --- The Implementation Analysis --- p.64 / Chapter 4.4.1 --- Pure Dynamic Memory Allocation --- p.64 / Chapter 4.4.2 --- Node Production Factory Approach --- p.66 / Chapter 4.4.3 --- Experiment Result of the Factory Approach --- p.68 / Chapter Chapter 5 --- The Chinese Documents Clustering --- p.70 / Chapter 5.1 --- The Clustering Framework --- p.70 / Chapter 5.1.1 --- Documents Cleaning --- p.73 / Chapter 5.1.2 --- PAT Tree Construction --- p.76 / Chapter 5.1.3 --- Essential Node Extraction --- p.77 / Chapter 5.1.4 --- Base Clusters Detection --- p.80 / Chapter 5.1.5 --- Base Clusters Filtering --- p.86 / Chapter 5.1.6 --- Base Clusters Combining --- p.94 / Chapter 5.1.7 --- Documents Assigning --- p.95 / Chapter 5.1.8 --- Result Presentation --- p.96 / Chapter 5.2 --- Discussion --- p.96 / Chapter 5.2.1 --- Flexibility of Our Framework --- p.96 / Chapter 5.2.2 --- Our Clustering Model --- p.97 / Chapter 5.2.3 --- More About Clusters Detection --- p.98 / Chapter 5.2.4 --- Analysis and Complexity --- p.100 / Chapter Chapter 6 --- Evaluations on the Chinese Documents Clustering --- p.101 / Chapter 6.1 --- Details of Experiment --- p.101 / Chapter 6.1.1 --- Parameter of Weighted Frequency --- p.105 / 
Chapter 6.1.2 --- Effect of CLP Analysis --- p.105 / Chapter 6.1.3 --- Result of Clustering --- p.108 / Chapter 6.2 --- Clustering on Larger Collection --- p.109 / Chapter 6.2.1 --- Comparing the Base Clusters --- p.109 / Chapter 6.2.2 --- Result of Clustering --- p.111 / Chapter 6.2.3 --- Discussion --- p.112 / Chapter 6.3 --- Clustering with Part of Documents --- p.113 / Chapter 6.3.1 --- Clustering with News Headlines --- p.114 / Chapter 6.3.2 --- Clustering with News Abstract --- p.117 / Chapter Chapter 7 --- Conclusion --- p.119 / Bibliography --- p.122
110

An Exploration of the Ground Water Quality of the Trinity Aquifer Using Multivariate Statistical Techniques

Holland, Jennifer M. 08 1900 (has links)
The ground water quality of the Trinity Aquifer for wells sampled between 2000 and 2009 was examined using multivariate and spatial statistical techniques. A Kruskal-Wallis test revealed that all of the water quality parameters with the exception of nitrate vary with land use. A Spearman's rho analysis illustrates that every water quality parameter with the exception of silica correlated with well depth. Factor analysis identified four factors attributable to hydrochemical processes, electrical conductivity, alkalinity, and the dissolution of parent rock material into the ground water. The cluster analysis generated seven clusters. A chi-squared analysis shows that Clusters 1, 2, 5, and 6 are reflective of the distribution of the entire dataset when looking specifically at land use categories. The nearest neighbor analysis revealed clustered, dispersed, and random patterns depending upon the entity being examined. The spatial autocorrelation technique used on the water quality parameters for the entire dataset identified that all of the parameters are random with the exception of pH, which was found to be spatially clustered. The combination of the multivariate and spatial techniques together identified influences on the Trinity Aquifer including hydrochemical processes, agricultural activities, recharge, and land use. In addition, the techniques aided in identifying areas warranting future monitoring, which are located in the western and southwestern parts of the aquifer.
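As a small, hedged illustration of the first two analyses described above, the snippet below runs a Kruskal-Wallis test of a water-quality parameter across land-use classes and computes Spearman's rho between that parameter and well depth, using SciPy on made-up data; none of the values reflect the Trinity Aquifer dataset.

```python
# Sketch of the Kruskal-Wallis and Spearman analyses on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Nitrate-like concentrations (mg/L) for wells in three land-use classes
# (all values are hypothetical).
agricultural = rng.lognormal(mean=1.2, sigma=0.4, size=40)
urban        = rng.lognormal(mean=1.0, sigma=0.4, size=35)
rangeland    = rng.lognormal(mean=0.8, sigma=0.4, size=30)

H, p_kw = stats.kruskal(agricultural, urban, rangeland)
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_kw:.4f}")

# Spearman's rho between well depth and a concentration that declines
# weakly with depth (again, hypothetical values).
depth = rng.uniform(50, 400, size=80)
concentration = 5.0 - 0.008 * depth + rng.normal(scale=0.5, size=80)
rho, p_sp = stats.spearmanr(depth, concentration)
print(f"Spearman: rho = {rho:.2f}, p = {p_sp:.4f}")
```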
