91

The Growth Curve Model for High Dimensional Data and its Application in Genomics

Jana, Sayantee 04 1900 (has links)
Recent advances in technology have allowed researchers to collect high-dimensional biological data simultaneously. In genomic studies, for instance, measurements from tens of thousands of genes are taken from individuals across several experimental groups. In time course microarray experiments, gene expression is measured at several time points for each individual across the whole genome, resulting in massive amounts of data. In such experiments, researchers are faced with two types of high-dimensionality. The first is global high-dimensionality, which is common to all genomic experiments and arises because inference is carried out on tens of thousands of genes, resulting in multiplicity. This challenge is often addressed with statistical methods for multiple comparison, such as the Bonferroni correction or the false discovery rate (FDR). We refer to the second type as gene-specific high-dimensionality, which arises in time course microarray experiments because, in such experiments, the sample size $n$ is often smaller than the number of time points.

In this thesis, we use the growth curve model (GCM), a generalized multivariate analysis of variance (GMANOVA) model, and propose a moderated test statistic for testing a special case of the general linear hypothesis, which is especially useful for identifying genes that are expressed. We take the trace test for the GCM and modify it so that it can be used in high-dimensional situations. We consider two types of moderation: the Moore-Penrose generalized inverse and Stein's shrinkage estimator of $S$. We performed extensive simulations to demonstrate the performance of the moderated test and compared the results with the original trace test, calculating the empirical level and power of the test under many scenarios. Although the focus is on hypothesis testing, we also provide a moderated maximum likelihood estimator for the parameter matrix and assess its performance by investigating its bias and mean squared error, comparing the results with those of the ordinary maximum likelihood estimator. Since the parameters are matrices, we use distance measures both in the power and level comparisons and when investigating bias and mean squared error. We also illustrate our approach on time course microarray data from a study on lung cancer, where we filtered out 1053 of 22,277 genes (approximately 5%) as non-noise genes. This is consistent with most biological experiments, in which around 5% of genes are found to be differentially expressed. / Master of Science (MSc)
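To make the two moderation strategies mentioned in the abstract concrete, the Python sketch below forms a trace-type statistic from a hypothesis sums-of-squares matrix `H` and a (possibly singular) error sums-of-squares matrix `S`, moderating `S` either with the Moore-Penrose generalized inverse or with a simple Stein-type shrinkage toward a scaled identity. This is only a minimal sketch under assumed conventions; the function name, the fixed shrinkage weight `lam`, and the identity shrinkage target are illustrative assumptions rather than the thesis's exact construction.

```python
import numpy as np

def moderated_trace_statistic(H, S, moderation="shrinkage", lam=0.2):
    """Trace-type statistic tr(H @ S_mod^{-1}) for a high-dimensional test.

    H   : hypothesis sums-of-squares-and-products matrix (p x p)
    S   : error sums-of-squares-and-products matrix (p x p), possibly singular
    lam : hypothetical fixed shrinkage weight in [0, 1]
    """
    p = S.shape[0]
    if moderation == "pinv":
        # Moore-Penrose generalized inverse handles the singular case directly
        S_inv = np.linalg.pinv(S)
    else:
        # Stein-type shrinkage toward a scaled identity restores invertibility
        target = (np.trace(S) / p) * np.eye(p)
        S_inv = np.linalg.inv((1 - lam) * S + lam * target)
    return np.trace(H @ S_inv)
```

In practice the null distribution of such a statistic would be calibrated by simulation or asymptotic arguments, which is the role played by the empirical level and power studies described above.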
92

Canonical Correlation and Clustering for High Dimensional Data

Ouyang, Qing January 2019 (has links)
Multi-view datasets arise naturally in statistical genetics when the genetic and trait profiles of an individual are portrayed by two feature vectors. A motivating problem concerning the Skin Intrinsic Fluorescence (SIF) study on the Diabetes Control and Complications Trial (DCCT) subjects is presented. A widely applied quantitative method for exploring the correlation structure between the two domains of a multi-view dataset is Canonical Correlation Analysis (CCA), which seeks canonical loading vectors such that the transformed canonical covariates are maximally correlated. In the high-dimensional case, regularization is required before CCA can be applied; furthermore, the nature of genetic research suggests that sparse output is more desirable. In this thesis, two regularized CCA (rCCA) methods and a sparse CCA (sCCA) method are presented. When a correlation sub-structure exists, a stand-alone CCA model will not perform well. To address this limitation, a mixture of local CCA models can be employed. In this thesis, I review a correlation clustering algorithm proposed by Fern, Brodley and Friedl (2005), which seeks to group subjects into clusters such that features are identically correlated within each cluster. An evaluation study is performed to assess the effectiveness of the CCA and correlation clustering algorithms using artificial multi-view datasets. Both sCCA and sCCA-based correlation clustering exhibited superior performance compared to rCCA and rCCA-based correlation clustering. The sCCA and sCCA-based clustering methods are then applied to the multi-view dataset consisting of PrediXcan-imputed gene expression and SIF measurements of DCCT subjects. The stand-alone sparse CCA method identified 193 of 11,538 genes as correlated with SIF#7. Further investigation of these 193 genes with simple linear regression and t-tests revealed that only two genes, ENSG00000100281.9 and ENSG00000112787.8, were significantly associated with SIF#7. No plausible clustering scheme was detected by the sCCA-based correlation clustering method. / Thesis / Master of Science (MSc)
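As a loose, self-contained illustration of regularized CCA (not the specific rCCA or sCCA methods evaluated in this thesis), the sketch below computes the leading canonical pair after adding ridge penalties to the two within-view covariance matrices; the function name and the penalty values `rx` and `ry` are placeholder assumptions.

```python
import numpy as np

def ridge_cca(X, Y, rx=0.1, ry=0.1):
    """Leading canonical loading vectors and correlation under ridge-
    regularized covariance estimates (needed when p or q exceeds n)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + rx * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + ry * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms: Wx.T @ Cxx @ Wx = I (and likewise for Y)
    Wx = np.linalg.cholesky(np.linalg.inv(Cxx))
    Wy = np.linalg.cholesky(np.linalg.inv(Cyy))
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    a = Wx @ U[:, 0]          # canonical loading vector for the X view
    b = Wy @ Vt[0]            # canonical loading vector for the Y view
    return a, b, s[0]         # s[0] is the leading canonical correlation
```

Sparse CCA variants additionally soft-threshold the loading vectors so that only a small subset of genes receives nonzero weight, which is what makes the gene-level interpretation above possible.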
93

Contributions to Data Reduction and Statistical Model of Data with Complex Structures

Wei, Yanran 30 August 2022 (has links)
With advances in technology and the explosion of information, the data of interest often have complex structures, with large sizes and dimensions and with continuous or discrete features. There is an emerging need for data reduction, efficient modeling, and model inference. For example, data can contain millions of observations with thousands of features. Traditional methods, such as linear regression or LASSO regression, cannot effectively deal with such large datasets directly. This dissertation aims to develop several techniques to effectively analyze large datasets with complex structures in observational, experimental, and time series data. In Chapter 2, I focus on data reduction for model estimation in sparse regression. Commonly used subdata selection methods consider either sampling or feature screening. For data with both a large number of observations and a large number of predictors, we propose a filtering approach for model estimation (FAME) that reduces both the number of data points and the number of features. The proposed algorithm can be easily extended to data with a discrete response or discrete predictors. Through simulations and case studies, the proposed method provides good performance for parameter estimation with efficient computation. In Chapter 3, I focus on modeling experimental data with a quantitative-sequence (QS) factor. The QS factor concerns both the quantities and the sequence orders of several components in the experiment. Existing methods usually focus only on the sequence orders or on the quantities of the components. To fill this gap, we propose a QS transformation that maps the QS factor to a generalized permutation matrix, and consequently develop a simple Gaussian process approach to model experimental data with QS factors. In Chapter 4, I focus on forecasting multivariate time series data by leveraging autoregression and clustering. Existing time series forecasting methods treat each series independently and ignore their inherent correlation. To fill this gap, I propose an autoregression-based clustering method and control the sparsity of the estimated transition matrix through an adaptive lasso penalty and the clustering coefficients. The clustering-based cross prediction can outperform conventional time series forecasting methods; moreover, the clustering result can also enhance the forecasting accuracy of other forecasting methods. The proposed method can be applied to practical data, such as stock forecasting and topic trend detection. / Doctor of Philosophy / This dissertation focuses on three projects related to data reduction and statistical modeling of data with complex structures. In Chapter 2, we propose a filtering approach to data for parameter estimation in sparse regression. Given data with thousands of observations and predictors, or even more, large storage and computation resources are needed, which is challenging in terms of computational power and cost. We therefore develop an algorithm (FAME) that reduces both the number of observations and the number of predictors. After data reduction, the subdata selected by FAME keep most of the information of the original dataset in terms of parameter estimation. Compared with existing methods, the dimension of the subdata generated by the proposed algorithm is smaller, while the computational time does not increase. In Chapter 3, we use a quantitative-sequence (QS) factor to describe experimental data. A simple example is milk tea: adding one cup of milk first or adding two cups of tea first will influence the flavor, and this extends to cases where thousands of ingredients need to be added to an experiment. The order and amount of the ingredients then produce different experimental results. We use the QS factor to describe this kind of order and amount; by transforming the QS factor into a matrix of continuous values and using this matrix as input, we model the experimental results with a simple Gaussian process. In Chapter 4, we propose an autoregression-based clustering and forecasting method for multivariate time series data. Existing research often treats each time series independently; our approach incorporates the inherent correlation of the data and clusters related series into one group. Forecasting is built within each cluster, and data within one cluster can cross-predict each other. One application of this method is topic trend detection: with thousands of topics, it is infeasible to apply one model to forecast all time series, but given the similarity of trends among related topics, the proposed method can cluster topics based on their similarity and then perform autoregressive forecasting based on the historical data within each cluster.
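As a simplified stand-in for the autoregression-based clustering described in Chapter 4 (the adaptive weights and the clustering-coefficient control from the dissertation are omitted), the sketch below estimates a sparse VAR(1) transition matrix with one lasso regression per series; related series could then be grouped by clustering the rows of the estimated matrix. The helper names and the penalty level `alpha` are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import KMeans

def sparse_var1_transition(Y, alpha=0.05):
    """Row-wise lasso estimate of A in y_t = A y_{t-1} + e_t.
    Y is a (T, k) array of k time series observed at T time points;
    alpha is a placeholder penalty level."""
    past, future = Y[:-1], Y[1:]
    k = Y.shape[1]
    A = np.zeros((k, k))
    for j in range(k):  # one sparse regression per target series
        A[j] = Lasso(alpha=alpha, fit_intercept=False).fit(past, future[:, j]).coef_
    return A

def cluster_series(A, n_clusters=3):
    """Group series whose autoregressive profiles (rows of A) are similar."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(A)
```

Forecasts within each cluster can then borrow strength from the other members of that cluster, which is the cross-prediction idea described above.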
94

Um estudo sobre o papel de medidas de similaridade em visualização de coleções de documentos / A study on the role of similarity measures in visual text analytics

Salazar, Frizzi Alejandra San Roman 27 September 2012 (has links)
Information visualization techniques, such as similarity-based point placement, are used to generate visual representations of data that expose certain patterns. These techniques are sensitive to data quality, which in turn depends on a very influential preprocessing step. This step involves cleaning the text and, in some cases, detecting terms and their weights, as well as defining a (dis)similarity function. There are few studies on how these (dis)similarity calculations affect the quality of the visual representations generated for textual data. This work presents a study on the role of different (dis)similarity measures between pairs of texts in the generation of visual maps. We focus primarily on two types of distance functions: those computed from a vector representation of the text (Vector Space Model, VSM) and measures obtained from direct comparison of text strings, and we compare their effect on the visual maps obtained with point placement techniques. For this, objective measures were employed to compare the visual quality of the generated maps, such as the Neighborhood Hit (NH) and the Silhouette Coefficient (CS). We found that both approaches have strengths but that, in general, the VSM gave better results as far as class discrimination is concerned. However, the conventional VSM is not incremental, i.e., new additions to the collection force recalculation of the data space and of previously computed dissimilarities. Thus, a new incremental model based on the VSM (Incremental Vector Space Model, iVSM) was also considered in our comparative studies. The iVSM showed the best quantitative and qualitative results in several of the configurations tested. The evaluation results are presented, and recommendations on the application of different text similarity measures in visual analysis tasks are offered.
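To make the evaluation setup above concrete, the sketch below builds a TF-IDF vector space model, derives a cosine (dis)similarity matrix that a point placement technique could consume, and computes a Neighborhood Hit score for a resulting 2-D map. The tiny document list, the value of `k`, and the helper name are illustrative assumptions, not artifacts of the study itself.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.neighbors import NearestNeighbors

def neighborhood_hit(points_2d, labels, k=6):
    """Average fraction of each point's k nearest map neighbours that
    share its class label (higher means better class separation)."""
    labels = np.asarray(labels)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points_2d)
    _, idx = nn.kneighbors(points_2d)
    same = labels[idx[:, 1:]] == labels[:, None]   # drop each point itself
    return same.mean()

docs = ["the cat sat on the mat", "a cat chased a dog", "stock prices fell sharply"]
vsm = TfidfVectorizer().fit_transform(docs)   # VSM representation of the texts
D = cosine_distances(vsm)                     # (dis)similarity matrix for point placement
```

A projection technique (for example, multidimensional scaling applied to `D`) would supply the `points_2d` argument passed to `neighborhood_hit`.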
96

Variational Approximations and Other Topics in Mixture Models

Dang, Sanjeena 24 August 2012 (has links)
Mixture model-based clustering has become an increasingly popular data analysis technique since its introduction almost fifty years ago. Families of mixture models arise when the component parameters, usually the component covariance matrices, are decomposed and a number of constraints are imposed. Within the family setting, it is necessary to choose the member of the family (i.e., the appropriate covariance structure) in addition to the number of mixture components. To date, the Bayesian information criterion (BIC) has proved most effective for this model selection process, and the expectation-maximization (EM) algorithm has been predominantly used for parameter estimation. We deviate from the EM-BIC rubric, using variational Bayes approximations for parameter estimation and the deviance information criterion (DIC) for model selection. The variational Bayes approach alleviates some of the computational complexities associated with the EM algorithm. We apply this approach to the best-known family of Gaussian mixture models, the Gaussian parsimonious clustering models (GPCM), which have an eigen-decomposed covariance structure. Cluster-weighted modelling (CWM) is another flexible statistical framework for modelling local relationships in heterogeneous populations on the basis of weighted combinations of local models. In particular, we extend cluster-weighted models to include an underlying latent factor structure of the independent variable, resulting in a novel family of models known as parsimonious cluster-weighted factor analyzers; the EM-BIC rubric is utilized for their parameter estimation and model selection. Some work on a mixture of multivariate t-distributions is also presented, with a linear model for the mean and a modified Cholesky-decomposed covariance structure leading to a novel family of mixture models. In addition to model-based clustering, these models are used for model-based classification, i.e., semi-supervised clustering. Parameters are estimated using the EM algorithm, and an approach to model selection other than the BIC is also considered. / NSERC PGS-D
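The contrast between the EM-BIC rubric and a variational alternative can be sketched with scikit-learn's mixture module. The candidate grid below is an illustrative assumption, and `BayesianGaussianMixture` is only loosely analogous to the variational Bayes treatment of the GPCM family described above, not an implementation of it.

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))  # placeholder data

# EM + BIC route: choose both the covariance structure and the number of components
candidates = [GaussianMixture(n_components=g, covariance_type=c, random_state=0).fit(X)
              for g in range(1, 5)
              for c in ("full", "tied", "diag", "spherical")]
best = min(candidates, key=lambda m: m.bic(X))

# Variational route: the posterior over mixture weights can switch components off
vb = BayesianGaussianMixture(n_components=4, covariance_type="full",
                             random_state=0).fit(X)
```

The four `covariance_type` options play a role similar to choosing a member of a parsimonious family, while the variational fit sidesteps repeated EM runs over a model grid.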
97

Numerische Methoden zur Analyse hochdimensionaler Daten / Numerical Methods for Analyzing High-Dimensional Data

Heinen, Dennis 01 July 2014 (has links)
This dissertation addresses two of the main challenges that arise when working with large data sets: dimensionality reduction and data denoising. The first part of the dissertation gives an overview of dimensionality reduction, whose goal is a meaningful low-dimensional representation of a given high-dimensional data set. In particular, we discuss and compare established manifold learning methods, whose central assumption is that the high-dimensional data set lies (approximately) on a low-dimensional manifold. Noise in the data set hampers all dimensionality reduction methods. The second part of the dissertation introduces a new denoising method for high-dimensional data: a wavelet shrinkage method for smoothing noisy samples of an underlying multivariate piecewise continuous function, where the sample points may be scattered. The method generalizes and extends the "Easy Path Wavelet Transform" (EPWT), originally introduced for image compression. It is based on a one-dimensional wavelet transform along (adaptively) constructed paths through the sample points, and suitable adaptive path constructions are essential for the success of the method. The dissertation also contains a short discussion of the theoretical properties of wavelets along paths as well as numerical results, and it closes with possible modifications of the denoising method.
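A much-simplified sketch of the path-based idea (not the dissertation's adaptive constructions) is given below: a greedy nearest-neighbour path through the scattered sample points, followed by soft thresholding of a one-dimensional wavelet transform of the values along that path, using the PyWavelets package. The function name, the greedy path rule, and the threshold level are assumptions made for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def denoise_along_path(points, values, thresh=0.5, wavelet="haar"):
    """Greedy nearest-neighbour path + 1-D wavelet shrinkage along it.
    points : (n, d) array of scattered sample locations
    values : (n,) noisy samples of the underlying function
    thresh : placeholder soft-threshold level"""
    n = len(points)
    unvisited, path = set(range(1, n)), [0]
    while unvisited:                      # crude path construction
        last = points[path[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        unvisited.remove(nxt)
        path.append(nxt)
    order = np.array(path)
    coeffs = pywt.wavedec(values[order], wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    smoothed = pywt.waverec(coeffs, wavelet)[:n]
    out = np.empty(n)
    out[order] = smoothed                 # map back to the original ordering
    return out
```

The quality of such a scheme depends heavily on the path construction, which is precisely the aspect the dissertation develops adaptively.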
98

Mapas auto-organizáveis com topologia variante no tempo para categorização em subespaços em dados de alta dimensionalidade e vistas múltiplas / Self-organizing maps with time-varying topology for subspace clustering of high-dimensional, multi-view data

ANTONINO, Victor Oliveira 16 August 2016 (has links)
Unsupervised machine learning methods and algorithms have been employed on many significant problems. An explosion in the availability of data from multiple sources and modalities is correlated with advances in obtaining, compressing, storing, transferring, and processing large amounts of complex high-dimensional data, such as digital images, surveillance videos, and DNA microarrays. Clustering becomes challenging due to the increasing sparsity of such data, as well as the increasing difficulty in discriminating distances between data points. This work presents a soft subspace clustering algorithm based on a self-organizing map (SOM) with a time-variant structure, meaning that clustering of the data can be achieved without any prior knowledge such as the number of categories or the topology of the input patterns, both of which are determined during the training process. The model also assigns different weights to different dimensions, which implies that every dimension contributes to uncovering the clusters. To validate the model, a number of real-world data sets were used, covering a diverse range of contexts such as data mining, gene expression, multi-view clustering, and computer vision problems. The results are promising, and the model can handle real-world data characterized by high dimensionality.
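The per-dimension weighting idea can be illustrated with a simple dispersion-based weight update of the kind used in soft subspace clustering. This is only a loose analogue of the SOM-based model described above, and the function name and the tuning parameter `gamma` are assumptions.

```python
import numpy as np

def soft_subspace_weights(X, labels, gamma=1.0):
    """Per-cluster feature weights: dimensions with small within-cluster
    dispersion receive larger weight, so each cluster 'lives' in its own
    soft subspace. X is (n, d); labels assigns each row to a cluster."""
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    W = np.zeros((len(clusters), X.shape[1]))
    for i, c in enumerate(clusters):
        Xc = X[labels == c]
        dispersion = ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        w = np.exp(-dispersion / gamma)
        W[i] = w / w.sum()
    return W
```

In a full algorithm these weights would enter the distance used to assign points (or to match SOM prototypes), so that irrelevant dimensions are effectively down-weighted per cluster.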
99

Multivariate analysis of high-throughput sequencing data / Analyses multivariées de données de séquençage à haut débit

Durif, Ghislain 13 December 2016 (has links)
The statistical analysis of next-generation sequencing (NGS) data raises many computational challenges regarding modeling and inference, especially because of the high dimensionality of genomic data. The research work in this manuscript concerns hybrid dimension reduction methods that rely on both compression (representation of the data in a lower-dimensional space) and variable selection. Developments are made concerning the sparse Partial Least Squares (PLS) regression framework for supervised classification and the sparse matrix factorization framework for unsupervised exploration. In both settings, our main purpose is the reconstruction and visualization of the data. First, we present a new sparse PLS approach, based on an adaptive sparsity-inducing penalty, that is suitable for logistic regression to predict the label of a discrete outcome. Such a method is used, for instance, for prediction (the fate of patients or the specific type of unidentified single cells) based on gene expression profiles. The main issue in this framework is to account for the response in order to discard irrelevant variables. We highlight the direct link between the derivation of the algorithms and the reliability of the results. Then, motivated by questions regarding single-cell data analysis, we propose a flexible model-based approach for the factorization of count matrices that accounts for over-dispersion as well as zero-inflation (both characteristic of single-cell data), for which we derive an estimation procedure based on variational inference. In this scheme, we consider probabilistic variable selection based on a spike-and-slab model suitable for count data. The interest of our procedure for data reconstruction, visualization, and clustering is illustrated by simulation experiments and by preliminary results on single-cell data analysis. All proposed methods are implemented in two R packages, plsgenomics and CMF, based on high-performance computing.
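The core of a sparse PLS step can be sketched in a few lines: soft-threshold the covariance between the centred predictors and the response to obtain a sparse loading vector, then form the latent score used by a downstream regression (omitted here). The adaptive penalty and the logistic-regression extension from the manuscript are not reproduced; the function names and the penalty level `lam` are assumed for illustration.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise soft thresholding, the source of sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pls_component(X, y, lam=0.1):
    """First sparse PLS direction and its latent score."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = soft_threshold(Xc.T @ yc, lam)   # sparse loading vector over genes
    norm = np.linalg.norm(w)
    if norm > 0:
        w = w / norm
    t = Xc @ w                           # latent component (score) per sample
    return w, t
```

Because only a few entries of `w` survive the thresholding, the latent score `t` depends on a small, interpretable subset of genes, which is the point of sparsity in this setting.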
100

Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

Oesterling, Patrick 14 April 2016 (has links)
This thesis is about visualizing a kind of data that is trivial for computers to process but difficult for humans to imagine, because nature does not equip us with intuition for this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with the object variables as dimensions. Using Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, which are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds that does not suffer from structural occlusion. The work is based on two key concepts. The first idea is to discard those geometric properties that cannot be preserved and thus lead to the typical artifacts; topological concepts are used instead to shift the focus from a point-centered view of the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of the properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in number or position, i.e., the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear, merge, split, or vanish. Especially for high-dimensional data, both tracking (relating features over time) and visualizing the changing structure are difficult problems to solve.
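As a loose analogue of the overview-plus-refinement workflow described above (not the thesis's topological machinery), the sketch below builds a single-linkage hierarchy of the point cloud, which is closely related to how connected components merge as a distance threshold grows, cuts it into clusters for an occlusion-free structural overview, and hands each cluster's points to whatever local, geometry-aware view the analyst prefers. The function name and `cut_distance` are assumed for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def overview_then_refine(X, cut_distance):
    """Global structural overview via a single-linkage hierarchy, then
    per-cluster subsets for local analysis free of the other points."""
    Z = linkage(X, method="single")                       # merge hierarchy
    labels = fcluster(Z, t=cut_distance, criterion="distance")
    subsets = {int(c): X[labels == c] for c in np.unique(labels)}
    return labels, subsets
```

Restricting any subsequent projection or per-dimension inspection to one entry of `subsets` at a time mirrors the thesis's point that local analysis on subsets avoids much of the occlusion and clutter of projecting the full cloud at once.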
