61

Explanatory visualization of multidimensional projections / Visualização explanatória de projeções multidimensionais

Martins, Rafael Messias 11 March 2016
Visual analytics tools play an important role in the scenario of big data solutions, combining data analysis and interactive visualization techniques in effective ways to support the incremental exploration of large data collections from a wide range of domains. One particular challenge for visual analytics is the analysis of multidimensional datasets, which consist of many observations, each described by a large number of dimensions, or attributes. Finding and understanding data-related patterns present in such spaces, such as trends, correlations, groups of related observations, and outliers, is hard. Dimensionality reduction methods, or projections, can be used to construct low-dimensional (two or three dimensions) representations of high-dimensional datasets. The resulting representation can then be used as a proxy for the visual interpretation of the high-dimensional space, efficiently and effectively supporting the above-mentioned data analysis tasks. Projections have important advantages over other visualization techniques for multidimensional data, such as visual scalability, a high degree of robustness to noise, and low computational complexity. However, a major obstacle to the effective practical use of projections is their difficult interpretation. Two main types of interpretation challenges for projections are studied in this thesis. First, while projection techniques aim to preserve the so-called structure of the original dataset in the final layout, and effectively achieve the proxy effect mentioned earlier, they may introduce errors that influence the interpretation of their results. It is hard to convey to users where such errors occur in the projection, how large they are, and which specific data-interpretation aspects they affect. Secondly, interpreting the visual patterns that appear in the projection space is far from trivial beyond the projection's ability to show groups of similar observations. In particular, it is hard to explain these patterns in terms of the meaning of the original data dimensions. In this thesis we focus on the design and development of novel visual explanatory techniques to address the two interpretation challenges of multidimensional projections outlined above. We propose several methods to quantify, classify, and visually represent several types of projection errors, and show how their explicit depiction helps in interpreting data patterns. Next, we show how projections can be visually explained in terms of the high-dimensional data attributes, both in a global and a local way. Our proposals are designed to be easily added to, and used with, any projection technique, and in any application context using such techniques. Their added value is demonstrated through several exploration scenarios involving various types of multidimensional datasets, ranging from measurements and scientific simulations to software quality metrics, software system structure, and networks.
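To make the notion of projection error concrete, the following minimal Python sketch projects a dataset to 2-D and computes a generic per-point neighborhood-preservation error. This is an illustrative measure under assumed parameters (dataset, neighborhood size k), not the specific error metrics proposed in the thesis.

```python
# Sketch: project a multidimensional dataset to 2-D and measure, per point,
# how many of its k nearest high-dimensional neighbors survive the projection.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import MDS
from sklearn.neighbors import NearestNeighbors

X = load_iris().data                                        # n observations x m dimensions
Y = MDS(n_components=2, random_state=0).fit_transform(X)    # 2-D proxy layout

k = 10
idx_high = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(
    X, return_distance=False)[:, 1:]                        # drop the point itself
idx_low = NearestNeighbors(n_neighbors=k + 1).fit(Y).kneighbors(
    Y, return_distance=False)[:, 1:]

# Fraction of high-dimensional neighbors missing from the 2-D neighborhood;
# high values flag regions of the projection that should not be trusted.
error = np.array([1.0 - len(set(h) & set(l)) / k
                  for h, l in zip(idx_high, idx_low)])
print("mean neighborhood error:", error.mean())
```

Such per-point values can then be mapped to color or opacity in the projection view to make error regions explicit.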
62

Visualisations pour la veille en épidémiologie animale / Visualizations for animal epidemiology surveillance

Fadloun, Samiha 15 November 2018
Many documents concerning the emergence, spread, or follow-up of human and animal diseases are published daily on the Web. In order to prevent the spread of disease, epidemiologists must frequently search for these documents and analyze them to detect outbreaks as early as possible. In this thesis, we are interested in the two activities involved in this monitoring work, and we propose visual tools that facilitate access to relevant information. We focus on animal diseases, which have been less studied but can have serious consequences for human activities (diseases transmitted from animals to humans, epidemics in livestock, ...). The first activity is to collect documents from the Web. For this, we propose EpidVis, a visual tool that allows epidemiologists to group and organize the keywords used for their searches, visually build complex queries, launch them on different search engines, and view the returned results. The second activity is to explore a large number of documents concerning diseases. These documents contain not only information such as disease names, associated symptoms, and infected species, but also spatio-temporal information. We propose EpidNews, a visual analytics tool to explore these data for information extraction. Both tools were developed in close collaboration with experts in epidemiology, who carried out case studies showing that the proposed functionalities are well adapted and make it easy to extract knowledge.
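As a rough illustration of how grouped keywords can be combined into a complex query before being sent to search engines, consider the sketch below; the keyword groups and the boolean syntax are hypothetical and do not reflect EpidVis's actual query model.

```python
# Sketch: compose a boolean search query from grouped keywords
# (OR within a group, AND across groups).
keyword_groups = {
    "disease":  ["avian influenza", "H5N1", "bird flu"],
    "host":     ["poultry", "wild birds"],
    "location": ["France", "Europe"],
}

def build_query(groups):
    parts = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
             for terms in groups.values()]
    return " AND ".join(parts)

query = build_query(keyword_groups)
print(query)
# The resulting string can then be URL-encoded into the query parameter
# of each target search engine.
```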
63

Visual analytics of topics in Twitter in connection with political debates / Análise visual de tópicos no Twitter em conexão com debates políticos

Carvalho, Eder José de 04 May 2017
Social media channels such as Twitter and Facebook often help disseminate initiatives that seek to inform and empower citizens concerned with government actions. On the other hand, certain actions and statements by governmental institutions, parliament members, or political journalists that appear in the conventional media tend to reverberate on social media. This scenario produces a large amount of textual data that can reveal relevant information on governmental actions and policies. Nonetheless, the target audience still lacks appropriate tools to support the acquisition, correlation, and interpretation of potentially useful information embedded in such text sources. In this scenario, this work presents two systems for the analysis of government and social media data. One of the systems introduces a new visualization, based on the river metaphor, for the analysis of the temporal evolution of topics in Twitter in connection with political debates. For this purpose, the problem was initially modeled as a clustering problem, and a domain-independent text segmentation method was adapted to associate (by clustering) Twitter content with parliamentary speeches. Moreover, a version of the MONIC framework for cluster transition detection was employed to track the temporal evolution of debates (or clusters) and to produce a set of time-stamped clusters. The other system, named ATR-Vis, combines visualization techniques with active retrieval strategies to involve the user in the retrieval of Twitter posts related to political debates and to associate them with the specific debate they refer to. The proposed framework introduces four active retrieval strategies that make use of Twitter's structural information, increasing retrieval accuracy while minimizing user involvement by keeping the number of labeling requests to a minimum. Evaluations through use cases and quantitative experiments, as well as a qualitative analysis conducted with three domain experts, illustrate the effectiveness of ATR-Vis in the retrieval of relevant tweets. For the evaluation, two Twitter datasets were collected, related to parliamentary debates held in Brazil and Canada, as well as a dataset comprising a set of top news stories that received great media attention at the time.
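The cluster transition detection mentioned above can be illustrated with a minimal overlap-based test in the spirit of MONIC; the thresholds, labels, and toy clusters below are assumptions for illustration only.

```python
# Sketch: match clusters from consecutive time windows by member overlap
# and classify each old cluster as surviving, splitting, or disappearing.
def transitions(clusters_t, clusters_t1, survival=0.6, split_part=0.25):
    result = []
    for name, members in clusters_t.items():
        overlaps = {new: len(members & new_members) / len(members)
                    for new, new_members in clusters_t1.items()}
        best, score = max(overlaps.items(), key=lambda kv: kv[1])
        if score >= survival:
            result.append((name, "survives as", best))
        elif sum(1 for s in overlaps.values() if s >= split_part) >= 2:
            result.append((name, "splits", None))
        else:
            result.append((name, "disappears", None))
    return result

old = {"c1": {1, 2, 3, 4}, "c2": {5, 6, 7, 8}}
new = {"d1": {1, 2, 3, 9}, "d2": {5, 6}, "d3": {7, 8}}
print(transitions(old, new))   # c1 survives as d1, c2 splits
```

Tracking these transitions over successive windows yields the time-stamped cluster sets that feed the river-style visualization.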
64

A visual analytics approach for passing strategies analysis in soccer using geometric features

Malqui, José Luis Sotomayor January 2017
Passing strategy analysis has always been of interest in soccer research. Since the beginnings of soccer, managers have used scouting, video footage, training drills, and data feeds to collect information about tactics and player performance. However, the dynamic nature of passing strategies is complex, which makes it hard to understand what is happening on the pitch. Furthermore, there is a growing demand for pattern detection and passing sequence analysis, popularized by FC Barcelona's tiki-taka. We propose an approach to abstract passing sequences and group them based on the geometry of the ball trajectory. To analyse passing strategies, we introduce an interactive visualization scheme to explore the frequency of usage, spatial location, and time of occurrence of the sequences. The Frequency Stripes visualization provides an overview of passing-group frequency in three pitch regions: defense, middle, and attack. A trajectory heatmap coordinated with a passing timeline allows the exploration of the most recurrent passing shapes in the temporal and spatial domains. Results show eight common ball trajectories for three-pass sequences, which depend on player positioning and on the angle of the pass. We demonstrate the potential of our approach with data from the Brazilian league under several case studies, and report feedback from a soccer expert.
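A minimal sketch of the general idea, grouping passing sequences by simple geometric features of the ball trajectory, is shown below; the features (segment lengths and turning angles), the toy coordinates, and the use of k-means are illustrative assumptions, not the thesis's exact pipeline.

```python
# Sketch: describe each three-pass sequence by pass lengths and turning
# angles, then group sequences with k-means.
import numpy as np
from sklearn.cluster import KMeans

def features(points):
    # points: four (x, y) ball positions defining a three-pass sequence
    p = np.asarray(points, dtype=float)
    segs = np.diff(p, axis=0)                      # three pass vectors
    lengths = np.linalg.norm(segs, axis=1)
    turns = []
    for i in range(len(segs) - 1):
        cross = segs[i, 0] * segs[i + 1, 1] - segs[i, 1] * segs[i + 1, 0]
        dot = float(np.dot(segs[i], segs[i + 1]))
        turns.append(np.arctan2(cross, dot))       # signed turning angle
    return np.concatenate([lengths, turns])

# Toy sequences in pitch coordinates (metres); real data would come from
# tracking feeds.
sequences = [
    [(10, 34), (25, 40), (40, 30), (60, 34)],      # forward zig-zag
    [(12, 30), (27, 38), (42, 28), (62, 32)],
    [(50, 10), (45, 30), (50, 50), (45, 60)],      # lateral build-up
]
X = np.array([features(s) for s in sequences])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```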
65

Visualizing multidimensional data similarities: improvements and applications / Visualizando similaridades em dados multidimensionais: melhorias e aplicações

Silva, Renato Rodrigues Oliveira da 05 December 2016
Multidimensional datasets are increasingly prominent and important in data science and many application domains. Such datasets typically consist of a large set of observations, or data points, each of which is described by several measurements, or dimensions. When designing techniques and tools to process such datasets, a key goal is to gather insights into their structure and patterns, which is the aim of multidimensional visualization methods. Structures and patterns of high-dimensional data can be described, at a core level, by the notion of similarity between observations. Hence, to visualize such patterns, we need effective and efficient ways to depict similarity relations between a large number of observations, each having a potentially large number of dimensions. Within the realm of multidimensional visualization methods, two classes of techniques exist, projections and similarity trees, which effectively capture similarity patterns and also scale well in the number of observations and dimensions of the data. However, while such techniques show similarity patterns, understanding and interpreting these patterns in terms of the original data dimensions is still hard. This thesis addresses the development of visual explanatory techniques for the easy interpretation of similarity patterns present in multidimensional projections and similarity trees, through several contributions. First, we propose methods that make the computation of similarity trees efficient for large datasets and allow their visual explanation at multiple scales, or levels of detail. We also propose ways to construct simplified representations of similarity trees, thereby extending their visual scalability even further. Secondly, we propose methods for the visual explanation of multidimensional projections in terms of automatically detected groups of related observations, which are also automatically annotated in terms of their similarity in the high-dimensional data space. We then show how these explanatory mechanisms can be adapted to handle both static and time-dependent multidimensional datasets. The proposed techniques are designed to be easy to use, work nearly automatically, handle any type of quantitative multidimensional dataset and multidimensional projection technique, and are demonstrated on a variety of real-world datasets obtained from image collections, text archives, scientific measurements, and software engineering.
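As a rough illustration of a tree that encodes similarity relations between observations, the sketch below builds a minimum spanning tree over pairwise distances; this is only a stand-in for the dedicated similarity-tree constructions used in the thesis, and the dataset is arbitrary.

```python
# Sketch: a minimum spanning tree over pairwise distances connects each
# observation to similar ones and can be drawn as a similarity tree.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.datasets import load_iris

X = load_iris().data
D = squareform(pdist(X))              # full pairwise distance matrix
mst = minimum_spanning_tree(D)        # sparse matrix of tree edges
edges = np.transpose(mst.nonzero())
print(len(edges), "edges connecting", X.shape[0], "observations")
```

Simplified or multiscale views can then be obtained by collapsing subtrees whose internal edge lengths fall below a chosen threshold.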
66

Play with data - an exploration of play analytics and its effect on player experiences

Medler, Ben 02 July 2012
In a time of 'Big Data,' 'Personal Informatics' and 'Infographics,' the definitions of data visualization and data analytics are splintering rapidly. When one compares how Fortune 500 companies use analytics to optimize their supply chains with how lone individuals visualize their Twitter messages, we can see how multipurpose these areas are becoming. Visualization and analytics are frequently exhibited as tools for increasing efficiency and informing future decisions. At the same time, they are used to produce artworks that alter our perspectives on how data is represented and analyzed. During this time of turbulent reflection within the fields of data visualization and analytics, digital games have been going through a similar period of data metamorphosis, as players are increasingly connected and tracked through various platform systems and social networks. The amount of game-related data collected and shared today greatly exceeds that of previous gaming eras and, by utilizing the domains of data visualization and analytics, this increased access to data is poised to reshape, and continue to reshape, how players experience games. This dissertation examines how visualization, analytics and games intersect into a domain with a fluctuating identity whose overall goal is to analyze game-related data. At this intersection exists play analytics, a blend of digital systems and data analysis methods connecting players, games and their data. Play analytic systems surround the experience of playing a game, visualize data collected from players, and act as external online hubs where players congregate. As part of this dissertation's examination of play analytics, over eighty systems are analyzed and discussed. Additionally, a user study was conducted to test the effects play analytic systems have on a player's gameplay behavior. Both studies are used to highlight how play analytic systems function and how they are experienced by players. With millions of players already using play analytics systems, this dissertation provides a chronicle of the current state of play analytics, how the design of play analytics systems may shift in the future, and what it means to play with data.
67

Network-based visual analysis of tabular data

Liu, Zhicheng 04 April 2012
Tabular data is pervasive in the form of spreadsheets and relational databases. Although tables often describe multivariate data without explicit network semantics, it may be advantageous to explore the data modeled as a graph or network for analysis. Even when a given table design conveys some static network semantics, analysts may want to look at multiple networks from different perspectives, at different levels of abstraction, and with different edge semantics. This dissertation is motivated by the observation that a general approach for performing multi-dimensional and multi-level network-based visual analysis on multivariate tabular data is necessary. We present a formal framework based on the relational data model that systematically specifies the construction and transformation of graphs from relational data tables. In the framework, a set of relational operators provides the basis for rich expressive power in network modeling. Powered by this relational algebraic framework, we design and implement a visual analytics system called Ploceus. Ploceus supports flexible construction and transformation of networks through a direct manipulation interface, and integrates dynamic network manipulation with visual exploration for a seamless analytic experience.
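The core idea of deriving networks from tabular data can be sketched as follows; the toy table, column names, and co-occurrence rule are hypothetical, whereas Ploceus expresses such derivations through relational operators and a direct-manipulation interface.

```python
# Sketch: derive a co-occurrence network from a plain table by choosing
# which column becomes the node set and which shared values become edges.
import itertools
import networkx as nx

rows = [
    {"author": "Ana",  "paper": "P1"},
    {"author": "Bob",  "paper": "P1"},
    {"author": "Ana",  "paper": "P2"},
    {"author": "Cara", "paper": "P2"},
]

# Co-authorship network: authors become nodes, shared papers become edges.
G = nx.Graph()
by_paper = {}
for r in rows:
    by_paper.setdefault(r["paper"], set()).add(r["author"])
for authors in by_paper.values():
    for a, b in itertools.combinations(sorted(authors), 2):
        G.add_edge(a, b)

print(list(G.edges()))   # author pairs derived from the table
```

Swapping the roles of the two columns (papers as nodes, shared authors as edges) yields a different network from the same table, which is exactly the kind of flexibility the framework formalizes.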
68

Informing design of visual analytics systems for intelligence analysis: understanding users, user tasks, and tool usage

Kang, Youn Ah 02 July 2012
Visual analytics, defined as "the science of analytical reasoning facilitated by interactive visual interfaces," emerged several years ago as a new research field. While it has seen rapid growth in its first five years of existence, the main focus of visual analytics research has been on developing new techniques and systems rather than on identifying how people conduct analysis and how visual analytics tools can help the process and the product of sensemaking. The intelligence analysis community in particular has not been fully examined in visual analytics research, even though intelligence analysts are one of the major target user groups for which visual analytics systems are built. The lack of understanding of how analysts work and how they can benefit from visual analytics systems has created a gap between the tools being developed and real-world practices. This dissertation is motivated by the observation that existing models of sensemaking and intelligence analysis do not adequately characterize the analysis process and that many visual analytics tools do not truly meet user needs and are not being used effectively by intelligence analysts. I argue that visual analytics research needs to adopt successful HCI practices to better support user tasks and add utility to current work practices. As a first step, my research aims (1) to understand the work processes and practices of intelligence analysts and (2) to evaluate a visual analytics system in order to identify where and how visual analytics tools can assist. By characterizing the analysis process and identifying leverage points for future visual analytics tools through empirical studies, I suggest a set of design guidelines and implications that can be used for both designing and evaluating future visual analytics systems.
69

Data-intensive interactive workflows for visual analytics

Khemiri, Wael 12 December 2011
The increasing amount of electronic data of all forms, produced by humans (e.g. Web pages, structured content such as Wikipedia or the blogosphere, etc.) and/or automatic tools (loggers, sensors, Web services, scientific programs or analysis tools, etc.), leads to a situation of unprecedented potential for extracting new knowledge, finding new correlations, or simply making sense of the data. Visual analytics aims at combining interactive data visualization with data analysis tasks. Given the explosion in volume and complexity of scientific data, e.g. associated with biological or physical processes or social networks, visual analytics is called to play an important role in scientific data management. Most visual analytics platforms, however, are memory-based and are therefore limited in the volume of data they can handle. Moreover, each new algorithm (e.g. for clustering) has to be integrated into the platform by hand. Finally, such platforms lack the capability to define and deploy well-structured processes where users with different roles interact in a coordinated way, sharing the same data and possibly the same visualizations. This work is at the convergence of three research areas: information visualization, database query processing and optimization, and workflow modeling. It provides two main contributions: (i) we propose a generic architecture for deploying a visual analytics platform on top of a database management system (DBMS); (ii) we show how to propagate data changes to the DBMS and to the visualizations through the workflow process. Our approach has been implemented in a prototype called EdiFlow and validated through several applications. It clearly demonstrates that visual analytics applications can benefit from the robust storage and automatic process deployment provided by the DBMS, while obtaining good performance and thus scalability. Conversely, it could also be integrated into a data-intensive scientific workflow platform in order to increase its visualization features.
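A minimal sketch of the underlying idea of DBMS-backed change propagation is given below, using SQLite triggers and a changelog table; the schema and refresh mechanism are assumptions for illustration and do not describe EdiFlow's actual implementation.

```python
# Sketch: record table changes in a changelog via a trigger, then let the
# visualization layer refresh only what changed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (id INTEGER PRIMARY KEY, x REAL, y REAL)")
conn.execute("CREATE TABLE changelog (tbl TEXT, row_id INTEGER)")
conn.execute("""
    CREATE TRIGGER points_ins AFTER INSERT ON points
    BEGIN
        INSERT INTO changelog VALUES ('points', NEW.id);
    END
""")

def refresh_views(connection):
    # A real system would have each registered visualization re-query only
    # the rows listed in the changelog; here we just report them.
    changed = connection.execute("SELECT tbl, row_id FROM changelog").fetchall()
    connection.execute("DELETE FROM changelog")
    print("views refreshed for:", changed)

conn.execute("INSERT INTO points (x, y) VALUES (1.0, 2.0)")
conn.execute("INSERT INTO points (x, y) VALUES (3.0, 4.0)")
refresh_views(conn)
```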
70

Dynamische Erzeugung von Diagrammen aus standardisierten Geodatendiensten / Dynamic generation of diagrams from standardised geodata services

Mann, Ulrich 15 May 2014
Spatial data infrastructures (SDI) have become increasingly widespread in recent years through the development of standards for the exchange of geodata. The open service interface descriptions developed by the Open Geospatial Consortium (OGC), a consortium of research institutions and private-sector companies, improve interoperability in SDI. Until now, OGC-conformant geoservices have mainly been used for the recording, management, processing, and visualisation of geodata. Through the ongoing emergence of spatial data services there is a rise in the availability of geodata. At the same time, the trend of generating ever-increasing amounts of data, e.g. by scientific simulation (Unwin et al., 2006), continues. As a result, the need for capabilities to effectively explore and analyse geodata is growing: complex relations in huge datasets need to be determined and relevant information extracted. Techniques capable of this are described extensively in the field of Visual Analytics, which develops tools and techniques for automated analysis and interactive visualisation of huge and complex data (Keim et al., 2008). Current web-based applications for exploration and analysis are usually built as client-server systems working on tightly coupled data storage. With the growing capabilities of SDI, there is an increasing interest in offering data analysis functionality within them. The combination of widely used analysis techniques and well-established standards for the treatment of geodata may allow users to work interactively on ad hoc integrated data in a web application. This allows insights into large amounts of complex data, an understanding of their underlying interrelations, and the derivation of knowledge for spatial decision support using state-of-the-art technologies. In this thesis, the capabilities of the OGC WMS GetFeatureInfo operation for the analysis of spatio-temporal geodata in an SDI are investigated. The main focus is on the dynamic generation of diagrams using external Web Map Services (WMS) as data sources. After a review of the basics of data modelling and SDI standards, relevant aspects of data analysis and the visualisation of diagrams are treated. The compilation of a task taxonomy helps determine which spatio-temporal analysis tasks can be realised with the GetFeatureInfo operation. A multi-layered system architecture for data analysis on distributed geodata is then designed. To ensure consistent and OGC-conformant data exchange among the system components, a GML schema is developed. Finally, a prototypical implementation verifies the feasibility of diagram-based analysis of climate simulation data from the ECHAM5 model.
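For reference, a standard WMS 1.3.0 GetFeatureInfo request of the kind used for such diagram generation can be assembled as below; the service URL, layer name, and TIME extent are placeholders.

```python
# Sketch: build a WMS 1.3.0 GetFeatureInfo URL that queries values at a
# pixel position; the returned (e.g. GML-encoded) values can be parsed
# and rendered as a time-series diagram on the client.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetFeatureInfo",
    "LAYERS": "echam5_temperature",        # placeholder layer name
    "QUERY_LAYERS": "echam5_temperature",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",             # lat/lon axis order in WMS 1.3.0
    "WIDTH": 800, "HEIGHT": 400,
    "I": 400, "J": 200,                    # queried pixel coordinates
    "INFO_FORMAT": "application/xml",
    "TIME": "2001-01-01/2001-12-31",       # temporal extent, if supported
}
url = "https://example.org/wms?" + urlencode(params)
print(url)
```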
