101

Systematising glyph design for visualization

Maguire, Eamonn James January 2014 (has links)
The digitalisation of information now affects most fields of human activity. From the social sciences to biology to physics, the volume, velocity, and variety of data exhibit exponential growth trends. With such rates of expansion, efforts to understand and make sense of datasets of such scale, however driven and directed, progress only at an incremental pace. The challenges are significant. For instance, the ability to display an ever-growing amount of data is physically and naturally bound by the dimensions of the average-sized display. A synergistic interplay between statistical analysis and visualisation approaches outlines a path for significant advances in the field of data exploration. We can turn to statistics to provide principled guidance for prioritisation of information to display. Using statistical results, and combining knowledge from the cognitive sciences, visual techniques can be used to highlight salient data attributes. The purpose of this thesis is to explore the link between computer science, statistics, visualization, and the cognitive sciences, to define and develop more systematic approaches towards the design of glyphs. Glyphs represent the variables of multivariate data records by mapping those variables to one or more visual channels (e.g., colour, shape, and texture). They offer a unique, compact solution to the presentation of a large amount of multivariate information. However, composing a meaningful, interpretable, and learnable glyph can pose a number of problems. The first of these problems exists in the subjectivity involved in the process of data-to-visual-channel mapping, and in the organisation of those visual channels to form the overall glyph. Our first contribution outlines a computational technique to help systematise many of these otherwise subjective elements of the glyph design process. For visual information compression, common patterns (motifs) in time series or graph data, for example, may be replaced with more compact visual representations. Glyph-based techniques can provide such representations that can help users find common patterns more quickly and, at the same time, bring attention to anomalous areas of the data. However, replacing any data with a glyph is not going to make tasks such as visual search easier. A key problem is the selection of semantically meaningful motifs with the potential to compress large amounts of information. A second contribution of this thesis is a computational process for the systematic design of such glyph libraries and their subsequent glyphs. A further problem in the glyph design process is their evaluation. Evaluation is typically a time-consuming, highly subjective process. Moreover, domain experts are not always plentiful, so obtaining statistically significant evaluation results is often difficult. A final contribution of this work is to investigate whether there are areas of evaluation that can be performed computationally.
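To make the basic data-to-visual-channel mapping described in this abstract concrete, the sketch below renders each multivariate record as a glyph by assigning variables to position, size, colour, and shape with matplotlib. It is only an illustration of the mapping idea, not the thesis's systematic design technique; the records, variable names, and channel assignments are hypothetical.

```python
# Minimal, illustrative glyph sketch: each record's variables are mapped to
# separate visual channels. Not the thesis's procedure; data is made up.
import matplotlib.pyplot as plt
from matplotlib import cm

# Hypothetical multivariate records with normalised variables.
records = [
    {"x": 0.2, "y": 0.7, "magnitude": 0.9, "category": 0, "quality": 0.3},
    {"x": 0.5, "y": 0.4, "magnitude": 0.4, "category": 1, "quality": 0.8},
    {"x": 0.8, "y": 0.6, "magnitude": 0.7, "category": 2, "quality": 0.5},
]
shapes = ["o", "s", "^"]  # shape channel encodes the categorical variable

fig, ax = plt.subplots()
for r in records:
    ax.scatter(
        r["x"], r["y"],                        # position channels
        s=100 + 900 * r["magnitude"],          # size channel encodes magnitude
        c=[cm.viridis(r["quality"])],          # colour channel encodes quality
        marker=shapes[r["category"] % len(shapes)],
        edgecolors="black",
    )
ax.set_xlabel("x")
ax.set_ylabel("y")
plt.show()
```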
102

The Relationship Between Data Visualization and Task Performance

Phillips, Brandon 12 1900 (has links)
We are entering an era of business intelligence and big data where simple tables and other traditional means of data display cannot deal with the vast amounts of data required to meet the decision-making needs of businesses and their clients. Graphical figures constructed with modern visualization software can convey more information than a table because there is a limit to the table size that is visually usable. Contemporary decision performance is influenced by the task domain, the user experience, and the visualizations themselves. Utilizing data visualization (DV) in task performance to aid decision making is a complex process. We develop a decision-making framework and test it with three experiments that examine task performance in visually aided and non-visually aided decision making. Studies 1 and 2 investigate DV formats and how complexity and design affect the proposed visual decision-making framework. The studies also examine how DV formats affect task performance, as measured by accuracy and timeliness, and format preference. Additionally, these studies examine how DV formats influence the constructs in the proposed decision-making framework, which include information usefulness, decision confidence, cognitive load, visual aesthetics, information-seeking intention, and emotion. Preliminary findings indicate that graphical DV allows individuals to respond faster and more accurately, resulting in improved task fit and performance. Anticipated implications of this research are as follows. Visualizations are independent of the size of the data set but can become increasingly complex as the data complexity increases. Furthermore, well-designed visualizations let users see through the complexity and simultaneously mine it with drill-down technologies such as OLAP.
103

Supporting human interpretation and analysis of activity captured through overhead video

Romero, Mario January 2009 (has links)
Many disciplines spend considerable resources studying behavior. Tools range from pen-and-paper observation to biometric sensing. A tool's appropriateness depends on the goal and justification of the study, the observable context and feature set of target behaviors, the observers' resources, and the subjects' tolerance to intrusiveness. We present two systems: Viz-A-Vis and Tableau Machine. Viz-A-Vis is an analytical tool appropriate for onsite, continuous, wide-coverage and long-term capture, and for objective, contextual, and detailed analysis of the physical actions of subjects who consent to overhead video observation. Tableau Machine is a creative artifact for the home. It is a long-lasting, continuous, interactive, and abstract Art installation that captures overhead video and visualizes activity to open opportunities for creative interpretation. We focus on overhead video observation because it affords a near one-to-one correspondence between pixels and floor plan locations, naturally framing the activity in its spatial context. Viz-A-Vis is an information visualization interface that renders and manipulates computer vision abstractions. It visualizes the hidden structure of behavior in its spatiotemporal context. We demonstrate the practicality of this approach through two user studies. In the first user study, we show a substantial search performance boost when compared against standard video playback and against the video cube. Furthermore, we determine a unanimous user choice for overviewing and searching with Viz-A-Vis. In the second study, a domain expert evaluation, we validate a number of real discoveries of insightful environmental behavior patterns by a group of senior architects using Viz-A-Vis. Furthermore, we determine clear influences of Viz-A-Vis on the resulting architectural designs in the study. Tableau Machine is a sensing, interpreting, and painting artificial intelligence. It is an Art installation with a model of perception and personality that continuously and enduringly engages its co-occupants in the home, creating an aura of presence. It perceives the environment through overhead cameras, interprets its perceptions with computational models of behavior, maps its interpretations to generative abstract visual compositions, and renders its compositions through paintings. We validate the goal of opening a space for creative interpretation through a study that included three long-term deployments in real family homes.
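The abstract notes that an overhead camera gives a near one-to-one mapping between pixels and floor-plan locations. The sketch below shows the kind of spatial aggregation that this affords: motion across frames is accumulated into a floor-plan "activity" map. It is a rough, generic sketch under stated assumptions (frame differencing with an arbitrary threshold, placeholder file name), not the Viz-A-Vis or Tableau Machine pipeline.

```python
# Sketch: accumulate motion in an overhead video into a floor-plan activity map
# via frame differencing. Illustrative only; threshold and file name are made up.
import cv2
import numpy as np

cap = cv2.VideoCapture("overhead.avi")        # hypothetical overhead recording
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read overhead.avi")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
activity = np.zeros(prev.shape, dtype=np.float64)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    activity += (diff > 25)                   # count pixels that changed "enough"
    prev = gray
cap.release()

# Because the camera looks straight down, each cell of `activity` corresponds to
# a floor-plan location; high values mark frequently used areas.
heatmap = activity / activity.max()
```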
104

Affective Engagement in Information Visualization

Ya-Hsin Hung (7043363) 13 August 2019 (has links)
Evaluating the “success” of an information visualization (InfoVis) whose main purpose is communication or presentation is challenging. Among metrics that go beyond traditional analysis- and performance-oriented approaches, one construct that has received attention in recent years is “user engagement”. In this research, I propose Affective Engagement (AE), a user's engagement with the emotional aspects of a visualization, as a metric for InfoVis evaluation. I developed and evaluated a self-report measurement tool named AEVis that can quantify a user's level of AE while using an InfoVis. Following a systematic process of evidence-centered design, each activity during instrument development contributed specific evidence to support the validity of interpretations of scores from the instrument. Four stages were established for the development: In stage 1, I examined the role and characteristics of AE in evaluating information visualization through an exploratory qualitative study, from which 11 indicators of AE were proposed: Fluidity, Enthusiasm, Curiosity, Discovery, Clarity, Storytelling, Creativity, Entertainment, Untroubling, Captivation, and Pleasing; In stage 2, I developed an item bank comprising various candidate items for assessing a user's level of AE, and assembled the first version of the survey instrument through feedback from the target population and domain experts; In stage 3, I conducted three field tests for instrument revisions. Three analytical methods were applied during this process: Item Analysis, Factor Analysis (FA), and Item Response Theory (IRT); In stage 4, a follow-up field test study was conducted to investigate the external relations between constructs in AEVis and other existing instruments. The results of the four stages support the validity and reliability of the developed instrument, including: In stage 1, a user's AE characteristics elicited from the observations support the theoretical background of the test content; In stage 2, the feedback and review from target users and domain experts provide validity evidence for the test content of the instrument in the context of InfoVis; In stage 3, results from Exploratory and Confirmatory FA, as well as IRT methods, reveal evidence for the internal structure of the instrument; In stage 4, the correlations between total scores and sub-scores of AEVis and other existing instruments provide external relation evidence for score interpretations. Using this instrument, visualization researchers and designers can evaluate non-performance-related aspects of their work efficiently and without specific domain knowledge. The utilities and implications of AE can be investigated as well. In the future, this research may provide a foundation for expanding the theoretical basis of engagement in the fields of human-computer interaction and information visualization.
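For readers unfamiliar with the stage-3 analyses named in the abstract, the sketch below illustrates a generic item analysis (Cronbach's alpha) and a one-factor exploratory factor analysis on simulated Likert-scale responses. It is a minimal illustration under assumed, simulated data, not the AEVis analysis or its actual item set.

```python
# Generic illustration of item analysis and exploratory factor analysis on
# simulated Likert responses. Not the AEVis instrument or its data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 11               # 11 items, echoing the 11 AE indicators
latent = rng.normal(size=(n_respondents, 1))   # one latent "engagement" factor
loadings = rng.uniform(0.5, 0.9, size=(1, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))
responses = np.clip(np.round(3 + responses), 1, 5)   # map onto a 1-5 scale

# Item analysis: Cronbach's alpha for internal consistency.
item_var = responses.var(axis=0, ddof=1).sum()
total_var = responses.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_var / total_var)

# Exploratory factor analysis: how strongly each item loads on a single factor.
fa = FactorAnalysis(n_components=1).fit(responses)
print(f"alpha = {alpha:.2f}", "loadings:", np.round(fa.components_, 2))
```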
105

Contextual Modulation of Competitive Object Candidates in Early Object Recognition

Unknown Date (has links)
Object recognition is imperfect; often incomplete processing or impoverished information yields misperceptions (i.e., misidentification) of objects. While quickly rectified and typically benign, instances of such errors can produce dangerous consequences (e.g., police shootings). Through a series of experiments, this study examined the competitive process of multiple object interpretations (candidates) during the earlier stages of the object recognition process using a lexical decision task paradigm. Participants encountered low-pass filtered objects that were previously demonstrated to evoke multiple responses: a frequently given interpretation (“primary candidates”) and a less frequently given interpretation (“secondary candidates”). When objects were presented without context, no facilitative effects were observed for primary candidates. However, secondary candidates demonstrated evidence for being actively suppressed. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
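The stimulus manipulation named in the abstract is low-pass filtering, which removes fine detail from an object image so that only coarse shape information remains. The sketch below shows one common way to do this with a Gaussian filter; the sigma value and file name are illustrative assumptions, since the study's actual cutoff is not given here.

```python
# Sketch of low-pass filtering an object image. Sigma and file name are
# placeholders, not the study's actual parameters.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("object.png").convert("L"), dtype=np.float64)
blurred = gaussian_filter(img, sigma=8)   # larger sigma removes more high-frequency detail
Image.fromarray(blurred.astype(np.uint8)).save("object_lowpass.png")
```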
106

Brain dynamics and behavioral basis of a higher level cognitive task: number comparison

Unknown Date (has links)
Number perception, its neural basis and its relationship to how numerical stimuli are presented have been challenging research topics in cognitive neuroscience for many years. A primary question that has been addressed is whether the perception of the quantity of a visually presented number stimulus is dissociable from its early visual perception. The present study examined the possible influence of visual quality judgment on quantity judgments of numbers. To address this issue, volunteer adult subjects performed a mental number comparison task in which two-digit stimulus numbers (Arabic number format) between 31 and 99 were mentally compared to a memorized reference number, 65. Reaction times (RTs) and neurophysiological (i.e., electroencephalographic (EEG)) responses were acquired simultaneously during performance of the two-digit number comparison task. In this particular quantity comparison task, the number stimuli were classified into three distance factors. That is, numbers were a close, medium or far distance from the reference number (i.e., 65). In order to evaluate the relationship between numerical stimulus quantity and quality, the number stimuli were embedded in varying degrees of a typical visual noise form, known as "salt and pepper noise" (e.g., the visual noise one perceives when viewing a photograph taken with a dusty camera lens). In this manner, the visual noise permitted visual quality to be manipulated across three levels: no noise, medium noise (approximately 60% degradation of visual quality relative to no noise), and dense noise (75% degradation relative to no noise). / The RTs provided the information about the overt responses; however, the temporal relationship of visual quality (which starts earlier than quantity perception) and quantity was examined using event-related potentials (ERPs) extracted from continuous EEG recordings. The analysis of the RTs revealed that the judgment of number quantity is dependent upon visual number quality. In addition, the same effect was observed over the ERP components occurring between 100 ms and 300 ms after stimulus onset time over the posterior electrodes. Principal components analysis (PCA) and independent component analysis (ICA) methods were used to further analyze the ERP data. The consistent results of the PCA and ICA were used to represent the spatial brain dynamics, as well as to obtain temporal dynamics. The overall conclusion of the present study is that ERPs, ICs and PCs along with RTs suggested a strategy of quantitative perception (i.e., number comparison) based on the qualitative attributes of the stimuli, highlighting the importance of the design of the task and the methodology. / by Meltem Ballan. / Thesis (Ph.D.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
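The "salt and pepper" degradation described in the abstract replaces a fraction of stimulus pixels with black or white at random. The sketch below applies it at the two noise levels quoted above; the stimulus image itself and the uniform salt/pepper mix are assumptions for illustration, not the study's actual stimuli.

```python
# Sketch of salt-and-pepper degradation at the noise levels quoted in the
# abstract. The stimulus is a placeholder, not the study's number stimuli.
import numpy as np

def salt_and_pepper(img: np.ndarray, fraction: float, rng=None) -> np.ndarray:
    """Return a copy of a grayscale image with `fraction` of pixels corrupted."""
    rng = rng or np.random.default_rng()
    noisy = img.copy()
    mask = rng.random(img.shape) < fraction               # pixels to corrupt
    noisy[mask] = rng.choice([0, 255], size=mask.sum())   # salt or pepper at random
    return noisy

stimulus = np.full((64, 64), 128, dtype=np.uint8)   # placeholder number stimulus
medium = salt_and_pepper(stimulus, 0.60)             # "medium noise" condition
dense = salt_and_pepper(stimulus, 0.75)              # "dense noise" condition
```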
107

Reengenharia da ferramenta Projection Explorer para apoio à seleção de estudos primários em revisão sistemática / Reengineering of the Projection Explorer tool to support the selection of primary studies in systematic reviews

Martins, Rafael Messias 11 April 2011 (has links)
A crescente adoção do paradigma experimental na pesquisa em Engenharia de Software visa a obtenção de evidências experimentais sobre as tecnologias propostas para garantir sua correta avaliação e para a construção de um corpo de conhecimento sólido da disciplina. Uma das abordagens de pesquisa experimental é a revisão sistemática, um método rigoroso, planejado e auditável para a realização da coleta e análise crítica de dados experimentais disponíveis sobre um determinado tema de pesquisa. Apesar de produzir resultados confiáveis, a condução de uma revisão sistemática pode ser trabalhosa e muitas vezes demorada, principalmente quando existe um grande volume de estudos a serem considerados, selecionados e avaliados. Uma solução encontrada na literatura é a utilização de ferramentas de Mineração Visual de Textos (VTM) como a Projection Explorer (PEx) para apoiar a fase de seleção e análise de estudos primários no processo de revisão sistemática. Neste trabalho foi realizada uma reengenharia de software na ferramenta PEx com dois objetivos principais: apoiar, utilizando VTM, a fase de seleção e análise de estudos primários no processo de revisão sistemática e implementar novos requisitos não-funcionais relativos à melhoria da manutenibilidade e escalabilidade da ferramenta. Como resultado foi construída uma plataforma modular para a instanciação de ferramentas de visualização e, a partir da mesma, uma ferramenta de revisão sistemática apoiada por VTM. Os resultados de um estudo de caso executado com a ferramenta mostraram que a abordagem de aplicação de técnicas VTM usada nesse contexto é viável e promissora, melhorando tanto a performance quanto a efetividade da seleção. / The increasing adoption of the experimental paradigm in Software Engineering research aims at obtaining experimental evidence on the proposed technologies to ensure their proper evaluation and to build a solid body of knowledge for the discipline. One approach to experimental research is the systematic review, a rigorous, auditable and planned method to carry out the collection and analysis of experimental data available on a particular research topic. Despite producing reliable results, conducting a systematic review can be a cumbersome and often lengthy process, especially when a large volume of studies is to be considered, selected and evaluated. One solution found in the literature is the use of Visual Text Mining (VTM) tools such as the Projection Explorer (PEx) to support the selection and analysis of primary studies in the systematic review process. In this work, a software re-engineering of PEx was performed with two main goals: to support, using VTM, the selection and analysis of primary studies in the systematic review process, and to implement new non-functional requirements aimed at improving the maintainability and scalability of the tool. The result was a modular platform for instantiating visualization tools and, built on top of it, a systematic review tool supported by VTM. The results of a case study carried out with the tool showed that the VTM approach used in this context is feasible and promising, improving both the performance and the effectiveness of the selection.
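As a rough analogue of the VTM support described above, the sketch below projects candidate primary studies (their abstracts) onto a 2-D map so that textually similar studies land close together. PEx uses its own projection techniques (e.g., LSP); here TF-IDF plus multidimensional scaling stand in for them, and the abstracts are placeholders.

```python
# Rough analogue of visual text mining for study selection: TF-IDF features plus
# a 2-D projection. Not the PEx/LSP implementation; texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

abstracts = [
    "empirical study of test-driven development in industry",
    "systematic review of software product line testing",
    "visual text mining for literature analysis",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
dist = cosine_distances(tfidf)                # pairwise textual dissimilarities
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
for text, (x, y) in zip(abstracts, coords):
    print(f"({x:+.2f}, {y:+.2f})  {text[:40]}")
```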
108

Metáforas visuais alternativas para layouts gerados por projeções multidimensionais: um estudo de caso na visualização de músicas / Alternative visual metaphors for layouts generated by multidimensional projections: a case study in visualization of music

Vargas, Aurea Rossy Soriano 09 May 2013 (has links)
Os layouts gerados por técnicas de projeção multidimensional podem ser a base para diferentes metáforas de visualização que são aplicáveis a diversos tipos de dados. Existe muito interesse em investigar metáforas alternativas à comumente usada, nuvem de pontos usada para exibir layouts gerados por projeções multidimensionais. Neste trabalho, foi estudado este problema, com foco no domínio da visualização de músicas. Existem muitas dimensões envolvidas na percepção e manipulação de músicas e portanto é difícil encontrar um modelo computacional intuitivo para representá-las. Nosso objetivo neste trabalho foi investigar as representações visuais capazes de transmitir a estrutura de uma música, assim como exibir uma coleção de músicas de modo a ressaltar as similaridades. A solução proposta consiste em uma representação icônica de músicas individuais, que é associada ao posicionamento espacial dos grupos ou coleções de músicas gerado por uma técnica de projeção multidimensional que reflete suas similaridades estruturais. Tanto a projeção quanto o ícone requerem um vetor de características para representar a música. As características são extraídas a partir de arquivos MIDI, já que a própria natureza das descrições MIDI permite a identificação das estruturas musicais relevantes. Estas características proporcionam a entrada tanto para a comparação de dissimilaridades quanto para a construção do ícone da música. Os posicionamentos espaciais são obtidos usando a técnica de projeção multidimensional Least Square Projection (LSP), e as similaridades são calculadas usando a distância Dynamic Time Warping (DTW). O ícone fornece um resumo visual das repetições de acordes em uma música em particular. Nessa dissertação são descritos os processos de geração destas representações visuais, além de descrever um sistema que implementa esses recursos e ilustrar como eles podem apoiar algumas tarefas exploratórias das coleções de músicas, identificando possíveis cenários de uso / The layouts generated by multidimensional projection techniques can be the basis for different visualization metaphors that are applicable to various data types. There is much interest in investigating alternatives to the point cloud metaphor commonly used to present projection layouts. In this work, we investigated this problem, targeting the domain of music visualization. There are many dimensions involved in the perception and manipulation of music and therefore it is difficult to find an intuitive computer model to represent music. Our goal in this work was to investigate visual representations capable of conveying the musical structure of a song, as well as displaying a collection of songs so as to highlight their similarities. The proposed solution consists of an iconic representation for individual songs, that is associated with the spatial positioning of groups or collections of songs generated by a multidimensional projection technique that reflects their structural similarity. Both the projection and the icon require a feature vector representation of the music. The features are extracted from MIDI files, as the nature of the MIDI descriptions allows the identification of the relevant musical structures. These features provide the input for both the dissimilarity comparison and for constructing the music icon. The spatial layout is computed with the Least Square Projection (LSP) technique, and similarities are computed using the Dynamic Time Warping (DTW) distance. 
The icon provides a visual summary of the chord repetitions in a particular song. We describe the process of generating these visual representations, describe a system that implements these functionalities, and illustrate how they can support some exploratory tasks on music collections, identifying possible usage scenarios.
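The similarity measure named in this abstract is Dynamic Time Warping (DTW). The sketch below is a minimal dynamic-programming implementation of the DTW distance between two feature sequences of different lengths; the feature vectors are placeholders rather than the MIDI-derived features used in the dissertation.

```python
# Minimal DTW distance between two sequences of feature vectors (rows).
# Placeholder features, not the dissertation's MIDI-derived descriptors.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between feature sequences a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])       # local distance
            cost[i, j] = d + min(cost[i - 1, j],           # insertion
                                 cost[i, j - 1],           # deletion
                                 cost[i - 1, j - 1])       # match
    return float(cost[n, m])

song_a = np.array([[0.1, 0.3], [0.2, 0.4], [0.9, 0.8]])   # placeholder feature rows
song_b = np.array([[0.1, 0.3], [0.8, 0.7]])
print(dtw_distance(song_a, song_b))
```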
109

A similarity-based approach to generate edge bundles / Uma abordagem baseada em similaridade para a construção de agrupamentos visuais de arestas

Sikansi, Fábio Henrique Gomes 22 December 2016 (has links)
Graphs have been successfully employed in a variety of problems and applications, being the object of study in modeling, analysis, and the construction of visual representations. While different approaches exist for graph visualization, most of them suffer from severe clutter when the number of nodes or edges is large. Among the approaches that handle this problem, edge bundling techniques have attained relative success in improving the quality of visual representations by bending and aggregating edges in order to produce an organized layout. Despite this success, most existing techniques create edge bundles based only on visual-space information; that is, there is no explicit connection between the edge bundling layout and the original data. Therefore, these techniques generate less meaningful bundles and may lead users to misinterpret the data. This master's research presents a novel edge bundling technique based on the similarity relationships among vertices. We developed this technique based on two assumptions. First, it supports the hypothesis that edge bundling can better represent the data when there is an inherent connection between the proximity among elements in the information space and the proximity between edges in the edge bundling layout. We address this question by presenting a similarity bundling framework that considers the similarity between vertices when bending edges. To guide the bundling, we create a similarity hierarchy, called the backbone. It is based on a multilevel partition of the data, which groups edges of similar vertices. Second, we also argue that a multiscale representation improves the visual and complexity scalability of bundling layouts. We present a multiscale edge bundling approach, which allows overview plus detailed exploration, coarsening or revealing the bundling at different levels of the same visualization. Our evaluation framework shows that our backbone produces a balanced hierarchy with a good representation of the similarity relationships among vertices. Moreover, the edge bundling layout guided by the backbone reduces visual clutter and surpasses state-of-the-art techniques in displaying global and local edge patterns. / Grafos são empregados com sucesso em uma grande variedade de problemas e aplicações, sendo objeto de estudo na modelagem, análise e na construção de representações visuais. Embora existam diferentes formas para a visualização de grafos, a maioria delas sofrem pela desorganização do espaço visual quando o número de vértices ou arestas é alto. Entre as abordagens que lidam com este problema, as técnicas de agrupamentos visuais de arestas obtiveram sucesso na melhora da representação visual pelo encurvamento e agrupamento de arestas que aperfeiçoam a organização da representação. Apesar deste sucesso, a maioria das técnicas criam grupos de arestas baseados apenas na informação do espaço visual, não existindo conexão explícita entre o desenho no espaço visual e o conjunto de dados original. Dessa forma, estas técnicas produzem agrupamentos de arestas com baixa significância e podem levar o usuário a uma interpretação incorreta da informação. Esta pesquisa de mestrado apresenta uma nova técnica de agrupamento visual de arestas baseado nas relações de similaridade entre os vértices. Nós desenvolvemos esta técnica com base em duas premissas.
Primeiro, ela defende a hipótese que a representação por agrupamento de arestas pode representar melhor o conjunto de dados se existir uma conexão inerente entre a proximidade dos elementos no espaço de informação e a proximidade entre arestas no desenho de arestas agrupadas. Nós atendemos esta questão apresentando um arcabouço para o agrupamento de arestas baseado em similaridade, que considera a similaridade entre vértices para realizar o encurvamento das arestas. Para guiar este encurvamento, nós criamos uma estrutura de similaridade, denominada backbone. Esta estrutura é baseada em um particionamento multi-nível do conjunto de dados, que agrupa arestas de vértices similares. A segunda premissa, nós também defendemos que uma representação multiescala melhora a escalabilidade computacional e visual da representação visual de arestas agrupadas. Nós apresentamos um agrupamento visual multi-nível de arestas que permite uma exploração generalizada e detalhada, revelando detalhes em múltiplos níveis da visualização. Nosso processo de avaliação mostra que a construção do backbone produz uma hierarquia balanceada e com boa representação das relações de similaridade entre os vértices. Além disso, a visualização com arestas guiadas pelo backbone reduz a desordem visual e melhora as técnicas do estado-da-arte na identificação de padrões de arestas globais e locais.
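The "backbone" described in this record is a similarity hierarchy over the vertices that guides which edges get bundled together. The sketch below illustrates the general idea under stated assumptions: agglomerative clustering of hypothetical vertex feature vectors, with edges grouped by the clusters of their endpoints. It is not the thesis's actual multilevel partitioning or bundling algorithm.

```python
# Sketch of a similarity hierarchy ("backbone") guiding edge grouping.
# Illustration only; not the thesis's algorithm. Data is hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical vertex feature vectors (the "information space").
features = np.array([[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9], [0.5, 0.5]])
edges = [(0, 2), (1, 3), (0, 1), (2, 3), (4, 0)]

tree = linkage(features, method="average")            # similarity hierarchy over vertices
labels = fcluster(tree, t=2, criterion="maxclust")    # one level of the hierarchy

bundles = {}
for u, v in edges:
    key = tuple(sorted((labels[u], labels[v])))       # edges between the same clusters
    bundles.setdefault(key, []).append((u, v))        # are candidates for one bundle

for clusters, members in bundles.items():
    print(clusters, members)
```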
110

"Desenvolvimento de um Framework para Análise Visual de Informações Suportando Data Mining" / "Development of a Framework for Visual Analysis of Information with Data Mining suport"

Rodrigues Junior, Jose Fernando 22 July 2003 (has links)
No presente documento são reunidas as colaborações de inúmeros trabalhos das áreas de Bancos de Dados, Descoberta de Conhecimento em Bases de Dados, Mineração de Dados, e Visualização de Informações Auxiliada por Computador que, juntos, estruturam o tema de pesquisa e trabalho da dissertação de Mestrado: a Visualização de Informações. A teoria relevante é revista e relacionada para dar suporte às atividades conclusivas teóricas e práticas relatadas no trabalho. O referido trabalho, embasado pela substância teórica pesquisada, faz diversas contribuições à ciência em voga, a Visualização de Informações, apresentando-as através de propostas formalizadas no decorrer deste texto e através de resultados práticos na forma de softwares habilitados à exploração visual de informações. As idéias apresentadas se baseiam na exibição visual de análises numéricas estatísticas básicas, frequenciais (Frequency Plot), e de relevância (Relevance Plot). São relatadas também as contribuições à ferramenta FastMapDB do Grupo de Bases de Dados e Imagens do ICMC-USP em conjunto com os resultados de sua utilização. Ainda, é apresentado o Arcabouço, previsto no projeto original, para construção de ferramentas visuais de análise, sua arquitetura, características e utilização. Por fim, é descrito o Pipeline de visualização decorrente da junção entre o Arcabouço de visualização e a ferramenta FastMapDB. O trabalho se encerra com uma breve análise da ciência de Visualização de Informações com base na literatura estudada, sendo traçado um cenário do estado da arte desta disciplina com sugestões de futuros trabalhos. / This document brings together contributions from many works in the fields of Databases, Knowledge Discovery in Databases, Data Mining, and Computer-based Information Visualization, which together structure the research theme of this Master's dissertation: Information Visualization. The relevant theory is reviewed and related in order to support the theoretical and practical concluding activities reported in the work. Grounded in the theoretical material studied, the work makes several contributions to Information Visualization, presented as formal proposals throughout the text and as practical results in the form of software for the visual exploration of information. The ideas presented are based on the visual display of numeric analyses: basic statistics, frequency analysis (Frequency Plot), and relevance analysis (Relevance Plot). The contributions made to the FastMapDB tool, a visual exploration tool built by the Grupo de Bases de Dados e Imagens do ICMC-USP, are also reported together with the results of its use. In addition, the Framework foreseen in the original project for building visual analysis tools is presented, along with its architecture, characteristics, and usage. Finally, the visualization Pipeline that emerges from combining the visualization Framework with the FastMapDB tool is described. The work closes with a brief analysis of the Information Visualization field based on the literature studied, outlining a state-of-the-art scenario for the discipline along with suggestions for future work.
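The frequency-based display (Frequency Plot) mentioned in this record amounts to counting how often each value (or bin) of an attribute occurs and showing those counts visually. The sketch below illustrates that idea with placeholder data; it is not the Frequency Plot as implemented in the dissertation or in FastMapDB.

```python
# Small sketch of a frequency-based display for one attribute of a dataset.
# Placeholder data; not the dissertation's or FastMapDB's implementation.
import numpy as np
import matplotlib.pyplot as plt

values = np.random.default_rng(1).normal(loc=10, scale=2, size=500)  # one attribute
counts, bin_edges = np.histogram(values, bins=20)

plt.bar(bin_edges[:-1], counts, width=np.diff(bin_edges), align="edge")
plt.xlabel("attribute value (binned)")
plt.ylabel("frequency")
plt.show()
```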
