1

Measuring data abstraction quality in multiresolution visualizations

Cui, Qingguang. January 2007 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Multiresolution visualization; Sampling; Clustering; Metrics. Includes bibliographical references (leaves 66-68).
2

QoS : quality driven data abstraction for large databases

Wad, Charudatta V. January 2008 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Computer science; abstraction quality; quality visualization. Includes bibliographical references (leaves 47-50).
3

Mytran: A Programming Language for Data Abstraction

Snider, Timothy West January 1981 (has links)
This project describes the design and implementation of a new programming language, Mytran. Two new control statements are implemented in Mytran, and data abstraction is supported through parameterized types, or "type constructors". / Thesis / Master of Science (MSc)
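Mytran syntax is not reproduced in this listing; purely as a hedged illustration of data abstraction through parameterized types ("type constructors"), here is a minimal sketch in modern Python. The `Stack` example and all names are hypothetical, not taken from the thesis:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A type constructor: Stack[int], Stack[str], ... are distinct stack types."""

    def __init__(self) -> None:
        self._items: list[T] = []  # representation hidden behind the interface

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items

# Instantiating the constructor yields a concrete abstract data type.
s: Stack[int] = Stack()
s.push(42)
assert s.pop() == 42
```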
4

Measuring Data Abstraction Quality in Multiresolution Visualizations

Cui, Qingguang 11 April 2007 (has links)
Data abstraction techniques are widely used in multiresolution visualization systems to reduce visual clutter and facilitate analysis from overview to detail. However, analysts are usually unaware of how well the abstracted data represent the original dataset, which can affect the reliability of results gleaned from the abstractions. In this thesis, we define three types of data abstraction quality measures that compute the degree to which the abstraction conveys the original dataset: the Histogram Difference Measure, the Nearest Neighbor Measure, and the Statistical Measure. They have been integrated into XmdvTool, a public-domain multiresolution visualization system for multivariate data analysis that supports both sampling and clustering to simplify data. Several interactive operations are provided, including adjusting the data abstraction level, changing the selected regions, and setting an acceptable data abstraction quality level; using these operations, analysts can select an optimal data abstraction level. We conducted an evaluation to check how well the measures conform to the data abstraction quality perceived by users, and adjusted the measures based on its results. We also experimented with different distance methods and computing mechanisms in order to find the best variation of each type of measure. Finally, we developed two case studies demonstrating how analysts can use the measures to compare abstraction methods, see how well relative data density and outliers are maintained, and select the abstraction method that meets the requirements of their analytic tasks.
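The listing does not reproduce the thesis's formulas; purely as a hedged illustration of what a histogram-based abstraction-quality measure could look like, here is a minimal sketch. The bin count, per-dimension averaging, and total-variation scoring are assumptions, not the thesis's definitions:

```python
import numpy as np

def histogram_difference_measure(original: np.ndarray,
                                 abstraction: np.ndarray,
                                 bins: int = 20) -> float:
    """Score in [0, 1]: 1.0 means the abstraction's per-dimension
    distributions match the original's exactly."""
    scores = []
    for dim in range(original.shape[1]):
        lo = min(original[:, dim].min(), abstraction[:, dim].min())
        hi = max(original[:, dim].max(), abstraction[:, dim].max())
        h_orig, _ = np.histogram(original[:, dim], bins=bins, range=(lo, hi))
        h_abst, _ = np.histogram(abstraction[:, dim], bins=bins, range=(lo, hi))
        p = h_orig / h_orig.sum()          # normalize counts to proportions
        q = h_abst / h_abst.sum()
        scores.append(1.0 - 0.5 * np.abs(p - q).sum())  # 1 - total variation
    return float(np.mean(scores))

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 4))
sample = data[rng.choice(len(data), 500, replace=False)]
print(histogram_difference_measure(data, sample))
```

A score near 1.0 suggests the sample's per-dimension distributions closely track the original's; a low score flags an abstraction that may mislead the analyst.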
5

INCREASING MONITORING CAPACITY TO KEEP PACE WITH THE WIRELESS REVOLUTION

Chu, Joni, Harrison, Irving October 2000 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / With wireless communications becoming the rule rather than the exception, satellite operators need tools to effectively monitor increasingly large and complex satellite constellations. Visual data monitoring increases the monitoring capacity of satellite operators by several orders of magnitude, enabling them to track hundreds of thousands of parameters in real-time on a single screen. With this powerful new tool, operators can proactively address potential problems before they become customer complaints.
6

QoS: Quality Driven Data Abstraction for Large Databases

Wad, Charudatta V 05 February 2008 (has links)
Data abstraction is the process of reducing a large dataset to one of moderate size while maintaining the dominant characteristics of the original dataset. Data abstraction quality refers to the degree to which the abstraction represents the original data. Clearly, the quality of an abstraction directly affects the confidence an analyst can have in results derived from such abstracted views. While some initial measures to quantify abstraction quality have been proposed, they can currently only be used as an afterthought: an analyst can be made aware of the quality of the data he works with, but he cannot control the desired quality or the trade-off between the size of the abstraction and its quality. Some analysts require at least a certain minimal level of quality, while others must work with a certain-sized abstraction due to resource limitations; existing techniques do not consider the quality of the data while generating an abstraction. To tackle these problems, we propose a new data abstraction generation model, called the QoS model, that presents the performance/quality trade-off to the analyst and considers the quality of the data while generating an abstraction. It then generates the abstraction based on the desired level of quality versus time as indicated by the analyst. The framework has been integrated into XmdvTool, a freeware multivariate data visualization tool developed at WPI. Our experimental results show that our approach provides better quality for the same resource usage than existing abstraction techniques.
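The abstract does not describe the QoS model's algorithm; one plausible reading of quality-driven generation is an iterative loop that grows a sample until a quality target or a size budget is reached. A minimal sketch under that assumption — `quality_fn` could be any measure, such as the histogram sketch above:

```python
import numpy as np

def quality_driven_sample(data: np.ndarray,
                          quality_fn,
                          target_quality: float = 0.95,
                          max_fraction: float = 0.5,
                          step: int = 500,
                          seed: int = 0) -> np.ndarray:
    """Grow a random sample until quality_fn(data, sample) reaches
    target_quality, or the size budget (max_fraction) is exhausted."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(data))  # fixed order => nested samples
    budget = int(max_fraction * len(data))
    size = step
    while True:
        sample = data[order[:size]]
        if quality_fn(data, sample) >= target_quality or size >= budget:
            return sample
        size += step
```

Lowering `target_quality` or `max_fraction` trades fidelity for speed, which is the performance/quality trade-off the abstract presents to the analyst.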
7

Multidimensional Visualization of News Articles / Flerdimensionel Visualisering av Nyhetsartiklar

Åklint, Richard, Khan, Muhammad Farhan January 2015 (has links)
Large data sets are difficult to visualize; for a human to find structures and understand the data, good visualization tools are required. In this project, a technique is developed that makes it possible for a user to look at complex data at different scales. This approach is natural for geographical data, where zooming in and out gives a good feel for the spatial relationships in map data or satellite images; for other types of data, however, it is not obvious how much scaling should be done. We develop an experimental application that visualizes data in multiple dimensions from a large news article database. Using it, the user can select multiple keywords on different axes and create a visualization containing news articles with those keywords. The user is able to move around the visualization: when the camera is far from the document icons, they are clustered into red spheres; as the camera moves closer, the clusters pop open into single document icons; and when the camera is very close to a document icon, the news article itself can be read.
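The listing gives no implementation details for the camera-dependent clustering; as a loose sketch of the described behaviour, with an invented grid-based clustering rule and invented distance thresholds:

```python
import numpy as np

def lod_view(icon_positions: np.ndarray,
             camera_pos: np.ndarray,
             cell_size: float = 5.0,
             expand_distance: float = 20.0):
    """Return ('clusters', centers) when the camera is far away,
    ('icons', positions) when it is close enough to expand them."""
    dists = np.linalg.norm(icon_positions - camera_pos, axis=1)
    if dists.min() > expand_distance:
        # Far away: bucket icons into a coarse grid and show one
        # red sphere per non-empty cell.
        cells = np.floor(icon_positions / cell_size).astype(int)
        _, inverse = np.unique(cells, axis=0, return_inverse=True)
        centers = np.array([icon_positions[inverse == c].mean(axis=0)
                            for c in range(inverse.max() + 1)])
        return "clusters", centers
    return "icons", icon_positions
```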
8

Abstraction of infinite and communicating CSPZ processes

FARIAS, Adalberto Cajueiro de 31 January 2009 (has links)
This thesis addresses a very common problem in formal verification: state explosion. The problem prevents the automatic verification of properties through model checking. It is overcome by the use of data abstraction, in which the state space of a system is reduced using a simple principle: discarding details in such a way that the state space becomes finite while still exhibiting desirable properties. This enables the use of model checking, since the simpler (abstract) model can be used in place of the original (concrete) model. However, abstractions can lose properties, since the level of precision is degraded for some properties. Abstracting data types is usually a non-trivial task and requires deep expertise: the user must provide abstract domains, a mathematical relation between the (concrete and abstract) states, an abstract initialization, and an abstract version of each operation. The approach proposed in this thesis transfers most of this expertise to a systematic procedure that calculates abstraction relations. These relations are the basis for the mathematical relations between the states, and their images determine the abstract domains (the minimal data values needed to preserve properties). We also propose meta-models to establish how the abstract system is initialized and how operations are made closed under the abstract domains. This eliminates the knowledge the user would otherwise need in order to supply abstract versions of the initialization and operations. The meta-models guarantee the correspondence between the concrete and abstract systems. Thus, we derive abstract specifications from concrete ones in such a way that the concrete specification is more deterministic than the abstract one by construction. This is the idea behind the underlying theory of our data abstraction approach: data refinement. The notation adopted is CSPZ, a formal integration of the specification languages CSP and Z. A CSPZ specification has two parts: a behavioural part (CSP) and a data part (Z). The calculation procedure focuses on the Z part, but the results are used in the complete CSPZ specification; this follows from the data independence of the CSP part (the data cannot affect its behaviour). In the end, automatic verification is achieved by converting the CSPZ specification into pure CSP and then reusing the standard CSP model checker. Our approach comprises the following tasks: we extract the Z part from a CSPZ specification (a purely syntactic step), calculate the abstraction relations (through a systematic predicate analysis with tool support), build the mathematical relations between the states and the abstract schemas (defined by the meta-models), and post-process the abstract specification. The last task can result in some adjustments to the abstraction relations. The practical novelty and main contribution of our approach is the systematic calculation of the abstraction relations, which are the key elements of all the data abstraction approaches we have studied in recent years. The data refinement between the system produced by our approach and the original (concrete) one is the second contribution of this work.
The systematic procedure is in fact a predicate-analysis technique that uses the constraints on the data to determine the minimal data values sufficient to preserve the behaviour of the system. This avoids (concrete or symbolic) execution of the analysed system. The steps produce mappings that reveal some crucial elements: the abstract state space and the mathematical relations between it and the concrete state space. These relations are used to build the abstract system following the format established by the meta-models. The limitations of our approach are also discussed. We apply the approach to some examples also analysed by other techniques in the literature, and we discuss related work, highlighting advantages, disadvantages, and complementary aspects. Finally, we present our conclusions and future directions for this work.
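CSPZ notation is not reproduced in this listing; as a loose, language-agnostic illustration of the core idea — making an infinite data domain finite while preserving behaviour, at the price of nondeterminism — consider a counter abstracted to the two-value domain {ZERO, POS}. Everything below is invented for illustration and is not the thesis's construction:

```python
from enum import Enum

class AbsNat(Enum):
    ZERO = 0   # the concrete value 0
    POS = 1    # any concrete value > 0

def alpha(n: int) -> AbsNat:
    """Abstraction relation: maps each concrete state to its abstract image."""
    return AbsNat.ZERO if n == 0 else AbsNat.POS

def abs_increment(a: AbsNat) -> set[AbsNat]:
    """Abstract version of n := n + 1; the result is positive either way."""
    return {AbsNat.POS}

def abs_decrement(a: AbsNat) -> set[AbsNat]:
    """Abstract n := n - 1 (guarded by n > 0). Nondeterministic:
    POS - 1 may be ZERO or still POS, so both images are kept."""
    return {AbsNat.ZERO, AbsNat.POS} if a is AbsNat.POS else set()

# The abstract system has 2 states instead of infinitely many, so a
# model checker can exhaust it; nondeterminism is the price paid.
```

The nondeterministic abstract decrement echoes the thesis's point that the concrete specification is, by construction, more deterministic than the abstract one.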
9

Integrated tooling framework for software configuration analysis

Singh, Nieraj 05 May 2011 (has links)
Configurable software systems adapt to changes in hardware and execution environments, and often exhibit a variety of complex maintenance issues. Many tools exist to aid developers in analysing and maintaining large configurable software systems. Some are standalone applications, while a growing number are becoming part of Integrated Development Environments (IDEs) such as Eclipse. Reusable tooling frameworks can reduce development time for tools that concentrate on software configuration analysis. This thesis presents C-CLEAR, a common, reusable, and extensible tooling framework for software configuration analysis, in which a clear separation of concerns exists between tooling functionality and the definitions that characterise a software system. Special emphasis is placed on common mechanisms for data abstraction and automatic IDE integration, independent of the software system being analysed. / Graduate
10

Passage à l'échelle pour la visualisation interactive exploratoire de données : approches par abstraction et par déformation spatiale / Addressing scaling challenges in interactive exploratory visualization with abstraction and spatial distortion

Richer, Gaëlle 26 November 2019 (has links)
Interactive visualization is helpful for exploring, understanding, and analyzing data. However, increasingly large and complex data challenges the efficiency of visualization systems, both visually and computationally. The visual challenge stems from human perceptual and cognitive limitations as well as screen-space limitations, while the computational challenge stems from the processing and memory limitations of standard computers. In this thesis, we present techniques addressing these two scalability issues for several interactive visualization applications. To address visual scalability requirements, we present a versatile spatial-distortion approach for linked emphasis on multiple views and an abstract, multiscale representation based on parallel coordinates. Spatial distortion aims at alleviating the weakened emphasis effect of highlighting when applied to small visual elements. Multiscale abstraction simplifies the representation while providing detail on demand, by pre-aggregating data at several levels of detail. To address computational scalability requirements and scale data processing to billions of items in interactive times, we use pre-computation and real-time computation on a remote distributed infrastructure. We present a system for multidimensional data exploration in which the interactions and abstract representation comply with a visual item budget, which in return provides a guarantee on network-related interaction latencies.
With the same goal, we compare several geometric reduction strategies for the reconstruction of density maps of large-scale point sets.
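The listing leaves the budget mechanism unspecified; a hedged sketch of one way to pre-aggregate an attribute at several levels of detail and pick the most detailed level that fits a visual item budget — the power-of-two level scheme and histogram aggregation are assumptions:

```python
import numpy as np

def build_levels(data: np.ndarray, finest_bins: int = 1024, n_levels: int = 8):
    """Pre-aggregate a 1-D attribute into histograms at several levels
    of detail; level 0 is the coarsest."""
    levels = []
    for i in range(n_levels):
        bins = finest_bins >> (n_levels - 1 - i)   # 8, 16, ..., 1024
        counts, edges = np.histogram(data, bins=bins)
        levels.append((counts, edges))
    return levels

def pick_level(levels, item_budget: int):
    """Choose the most detailed level whose non-empty bins fit the budget,
    bounding the number of items sent from server to client."""
    chosen = levels[0]
    for counts, edges in levels:
        if np.count_nonzero(counts) <= item_budget:
            chosen = (counts, edges)
        else:
            break
    return chosen
```

Because the chosen level never exceeds the item budget, the data transferred per interaction is bounded, which is one way a latency guarantee like the one described could be obtained.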
