141

Uma nova metáfora visual escalável para dados tabulares e sua aplicação na análise de agrupamentos / A scalable visual metaphor for tabular data and its application on clustering analysis

Evinton Antonio Cordoba Mosquera 19 September 2017 (has links)
The rapid evolution of computing resources has enabled large datasets to be stored and retrieved. However, exploring, understanding and extracting useful information from them is still a challenge. Among the computational tools that address this problem, information visualization enables the analysis of datasets through graphical representations that leverage human visual abilities, and data mining provides automatic processes for the discovery and interpretation of patterns. Despite the recent popularity of information visualization methods, a recurring problem is their low visual scalability when analyzing large datasets, resulting in loss of context and visual clutter. To represent large datasets while reducing the loss of relevant information, visual data aggregation has been employed. Aggregation decreases the amount of data to be represented while preserving the distribution and trends of the original dataset. Regarding data mining, information visualization has become an essential tool for interpreting computational models and their results, especially for unsupervised techniques such as clustering. This is because, in these techniques, the only way the user can interact with the mining process is through parameterization, which limits the insertion of domain knowledge into the data analysis process.
In this thesis, we propose and develop a visual metaphor based on TableLens that employs aggregation-based approaches to create more scalable representations of tabular data. As an application, we use the developed metaphor to analyze the results of clustering techniques. The resulting framework not only supports the analysis of large datasets with reduced loss of context, but also provides insights into how data attributes contribute to cluster formation, in terms of the cohesion and separation of the resulting groups.
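As a rough illustration of the aggregation idea described above, the sketch below (Python, with a hypothetical bin count and column summaries; it is not the thesis's actual implementation) collapses a large table into a fixed number of row groups so that a TableLens-style view scales with the number of bins rather than the number of rows.

```python
import numpy as np
import pandas as pd

def aggregate_rows(df: pd.DataFrame, n_bins: int = 50) -> pd.DataFrame:
    """Collapse a large table into n_bins row groups, keeping per-column
    summaries that preserve the distribution and trends of the original data."""
    # Assign rows to bins by position; any ordering (e.g. by cluster) could be used.
    bins = np.array_split(np.arange(len(df)), n_bins)
    summaries = []
    for idx in bins:
        chunk = df.iloc[idx]
        row = {"count": len(chunk)}
        for col in df.columns:
            row[f"{col}_mean"] = chunk[col].mean()
            row[f"{col}_std"] = chunk[col].std()
        summaries.append(row)
    return pd.DataFrame(summaries)

# Usage: a 100,000-row table becomes a 50-row summary a visual metaphor can render.
big = pd.DataFrame(np.random.rand(100_000, 4), columns=list("abcd"))
print(aggregate_rows(big).head())
```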
142

Visual analytics via graph signal processing / Análise visual via processamento de signal em grafo

Alcebíades Dal Col Júnior 08 May 2018 (has links)
The classical wavelet transform has been widely used in image and signal processing, where a signal is decomposed into a combination of basis signals. By analyzing the individual contribution of the basis signals, one can infer properties of the original signal. This dissertation presents an overview of the extension of classical signal processing theory to graph domains. Specifically, we review the graph Fourier transform and graph wavelet transforms, both of which are based on spectral graph theory, and explore their properties through illustrative examples. The main features of the spectral graph wavelet transforms are presented using synthetic and real-world data. Furthermore, we introduce a novel method for the visual analysis of dynamic networks, which relies on graph wavelet theory. Dynamic networks naturally appear in a multitude of applications from different domains. Analyzing and exploring dynamic networks in order to understand and detect patterns and phenomena is challenging, fostering the development of new methodologies, particularly in the field of visual analytics. Our method enables the automatic analysis of a signal defined on the nodes of a network, making the detection of network properties viable. Specifically, we use a fast approximation of the graph wavelet transform to derive a set of wavelet coefficients, which are then used to identify activity patterns in large networks, including their temporal recurrence. The wavelet coefficients naturally encode spatial and temporal variations of the signal, leading to an efficient and meaningful representation.
The method allows for the exploration of the structural evolution of the network and its patterns over time. The effectiveness of our approach is demonstrated using different scenarios and comparisons involving real dynamic networks.
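A minimal sketch of the graph Fourier transform that underlies the wavelet machinery discussed above (a small synthetic graph and signal are assumed for illustration): the eigenvectors of the graph Laplacian act as the Fourier basis, and a node signal is decomposed into spectral coefficients.

```python
import numpy as np

# Adjacency matrix of a small undirected path graph with 4 nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Combinatorial graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Its eigenvectors form the graph Fourier basis; eigenvalues play the role of frequencies.
eigenvalues, eigenvectors = np.linalg.eigh(L)

# A signal defined on the nodes of the graph.
f = np.array([1.0, 0.8, -0.2, -1.0])

f_hat = eigenvectors.T @ f      # graph Fourier transform (spectral coefficients)
f_rec = eigenvectors @ f_hat    # inverse transform recovers the original signal

print(np.round(eigenvalues, 3), np.round(f_hat, 3), np.allclose(f, f_rec))
```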
143

Leveraging storytelling in visual analytics by redesigning the user interface

Kusoffsky, Madeleine January 2013 (has links)
Storytelling is a way of packaging the knowledge and insights gained from analyzing statistical data. The knowledge is transformed into a format that non-experts can understand more easily: a story with links to interactive diagrams. The purpose of this design study was to improve the interaction design of the storytelling feature. The target audience for the new design was intermediate users. Evaluating the current design by interviewing and observing beginner and intermediate users provided valuable understanding of the users' goals related to the storytelling feature. The new design is the product of design goals derived from the user data and of research through design. Sketching and exploring possible solutions meant producing multiple sketches, documenting design decisions through annotations, and keeping track of trade-offs and compromises. Screenshots from the application containing the redesigned interface of the storytelling feature were used during a final evaluation. Developers, a user and the designer evaluated the new design through a pluralistic usability walkthrough. The results showed that the new design had improved the storytelling feature in some respects but that new problems had emerged. This indicates that interaction design processes should be iterative: designs should be tested, redesigned and tested again together with users and stakeholders to ensure that user goals are fulfilled, that design goals are reached and that the feature delivers a positive user experience.
144

Espaço incremental para a mineração visual de conjuntos dinâmicos de documentos / An incremental space for visual mining of dynamic document collections

Roberto Dantas de Pinho 05 June 2009 (has links)
Visual representations are often adopted to explore document collections, assisting in knowledge extraction without requiring the individual analysis of thousands of texts. Document maps present individual documents in a visual space in such a way that their placement reflects the similarity relations or connections between them. Building these maps requires, among other tasks, placing each document and identifying interesting areas or subsets. A current challenge is visualizing dynamic datasets: in information visualization, adding and removing data elements can strongly impact the organization of the visual space, which can prevent users from maintaining a mental map that would help them interpret the content of a growing document collection or track changes in the underlying dataset. This thesis presents a novel algorithm for building dynamic document maps, capable of maintaining a coherent layout of elements as they are added or removed, even for completely renewed sets. The process is inherently incremental, has low complexity and places elements on a 2D grid of cells, analogous to a chessboard. Consistent results were obtained in comparison with non-incremental multidimensional projection techniques, and the technique has also been applied to domains other than document collections. Moreover, the resulting visualization is not susceptible to occlusion. To assist users in identifying interesting subsets, a topic extraction technique based on mining representative association rules was also developed. Together, the topic extraction and the incremental projection form an integrated visual text mining process that yields a visual space in which topics and areas of interest are highlighted and updated as the dataset changes.
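A minimal sketch of the incremental placement idea (the collision rule below, an outward ring search, is an assumption for illustration, not the thesis's exact procedure): each arriving document is snapped to the nearest free cell of a chessboard-like grid, and previously placed documents are never moved, which keeps the map stable and free of occlusion.

```python
import numpy as np

class GridMap:
    """Place documents on a chessboard-like grid, at most one per cell."""
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self.occupied = {}  # (row, col) -> document id

    def place(self, doc_id, x: float, y: float):
        """Snap a projected position in [0, 1]^2 to the nearest free cell."""
        r0 = min(int(y * self.rows), self.rows - 1)
        c0 = min(int(x * self.cols), self.cols - 1)
        # Search outward in growing rings until a free cell is found.
        for radius in range(max(self.rows, self.cols)):
            for r in range(r0 - radius, r0 + radius + 1):
                for c in range(c0 - radius, c0 + radius + 1):
                    if 0 <= r < self.rows and 0 <= c < self.cols \
                            and (r, c) not in self.occupied:
                        self.occupied[(r, c)] = doc_id
                        return (r, c)
        raise RuntimeError("grid is full")

grid = GridMap(20, 20)
rng = np.random.default_rng(0)
for i in range(50):            # documents arrive one at a time
    x, y = rng.random(2)       # stand-in for a 2D projection of the document
    grid.place(f"doc{i}", x, y)
print(len(grid.occupied), "documents placed without overlap")
```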
145

Predictive Visual Analytics of Social Media Data for Supporting Real-time Situational Awareness

Luke Snyder (8764473) 01 May 2020 (has links)
Real-time social media data can provide useful information on evolving events and situations, and various domain users increasingly leverage such data to gain rapid situational awareness. Informed by discussions with first responders and government officials, we focus on two major barriers limiting the widespread adoption of social media for situational awareness: the lack of geotagged data and the deluge of irrelevant information during events. Geotags are naturally useful, as they indicate the location of origin and provide geographic context; however, only a small portion of social media is geotagged, limiting its practical use for situational awareness. The deluge of irrelevant data poses similar difficulties, impeding the effective identification of semantically relevant information. Existing methods for short-text relevance classification fail to incorporate users' knowledge into the classification process, so classifiers cannot be interactively retrained for specific events or user-dependent needs in real time, limiting situational awareness. In this work, we first adapt, improve, and evaluate a state-of-the-art deep learning model for city-level geolocation prediction, and integrate it with a visual analytics system tailored for real-time situational awareness. We then present a novel interactive learning framework in which users rapidly identify relevant data by iteratively correcting the relevance classification of tweets in real time. We integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of our interactive learning framework within a visual analytics system adapted for real-time situational awareness.
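A minimal sketch of the interactive-learning loop described above, using scikit-learn's incremental classifier and invented tweet texts (the thesis's own deep learning models and SMART integration are not reproduced here): each user correction is folded back into the relevance classifier immediately, so it can be retrained for a specific event without restarting.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)   # stateless, suits streaming text
clf = SGDClassifier(loss="log_loss")               # linear model supporting partial_fit
classes = [0, 1]                                   # 0 = irrelevant, 1 = relevant

# Seed the model with a few labeled tweets (hypothetical examples).
seed_texts = ["flooding on main street downtown", "buy cheap sunglasses now"]
seed_labels = [1, 0]
clf.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=classes)

def user_corrects(text: str, corrected_label: int) -> None:
    """Fold a single user correction back into the model in real time."""
    clf.partial_fit(vectorizer.transform([text]), [corrected_label])

# The stream keeps being classified; corrections refine the classifier immediately.
user_corrects("water rising near the bridge", 1)
print(clf.predict(vectorizer.transform(["road closed due to flood"])))
```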
146

A visual analytics approach for multi-resolution and multi-model analysis of text corpora : application to investigative journalism / Une approche de visualisation analytique pour une analyse multi-résolution de corpus textuels : application au journalisme d’investigation

Médoc, Nicolas 16 October 2017 (has links)
As the production of digital texts grows exponentially, a greater need to analyze text corpora arises in various domains of application, insofar as they constitute inexhaustible sources of shared information and knowledge. We therefore propose in this thesis a novel visual analytics approach for the analysis of text corpora, implemented for the real and concrete needs of investigative journalism. Motivated by the problems and tasks identified with a professional investigative journalist, the visualizations and interactions were designed through a user-centered methodology involving the user during the whole development process. Specifically, investigative journalists formulate hypotheses and explore the field under investigation exhaustively in order to multiply the sources showing pieces of evidence related to their working hypotheses. Carrying out such tasks in a large corpus is, however, a daunting endeavor and requires visual analytics software addressing several challenging research issues covered in this thesis. First, the difficulty of making sense of a large text corpus lies in its unstructured nature. We resort to the Vector Space Model (VSM) and its strong relationship with the distributional hypothesis, leveraged by multiple text mining algorithms, to discover the latent semantic structure of the corpus. Topic models and biclustering methods are recognized to be well suited to the extraction of coarse-grained topics, i.e. groups of documents concerning similar topics, each one represented by a set of terms extracted from the textual contents. We provide a new Weighted Topic Map visualization that conveys a broad overview of the coarse-grained topics, allowing quick interpretation of contents through multiple tag clouds while depicting the topical structure, such as the relative importance of topics and their semantic similarity. Although the exploration of coarse-grained topics helps locate a topic of interest and its neighborhood, identifying specific facts, viewpoints or angles related to events or stories requires a finer level of structuration to represent topic variants. This nested structure, revealed by Bimax, a pattern-based overlapping biclustering algorithm, captures in biclusters the co-occurrences of terms shared by multiple documents and can disclose facts, viewpoints or angles related to events or stories. This thesis tackles issues related to the visualization of a large number of overlapping biclusters by organizing term-document biclusters in a hierarchy that limits term redundancy and conveys their commonalities and specificities. We evaluated the utility of our software through a usage scenario and a qualitative evaluation with an investigative journalist. In addition, the co-occurrence patterns of topic variants revealed by Bimax are determined by the enclosing topical structure supplied by the coarse-grained topic extraction method that is run beforehand.
Nonetheless, little guidance exists regarding the choice of the latter method and its impact on the exploration and comprehension of topics and topic variants. We therefore conducted both a numerical experiment and a controlled user experiment to compare two topic extraction methods, namely Coclus, a disjoint biclustering method, and hierarchical Latent Dirichlet Allocation (hLDA), an overlapping probabilistic topic model. The theoretical foundations of both methods are systematically analyzed by relating them to the distributional hypothesis. The numerical experiment provides statistical evidence of the difference between the resulting topical structures of the two methods. The controlled experiment shows their impact on the comprehension of topics and topic variants from the analyst's perspective. (...)
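A minimal sketch of the coarse-grained topic extraction such a pipeline starts from, here using TF-IDF and NMF on a tiny invented corpus as a stand-in (the thesis's own methods, Bimax, Coclus and hLDA, are not reproduced): documents are embedded in a vector space model and grouped into topics, each summarized by its top-weighted terms.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "offshore accounts hidden by the shell company",
    "the shell company transfers funds to offshore accounts",
    "city council votes on the new housing budget",
    "housing budget debated by the city council",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                  # vector space model of the corpus

nmf = NMF(n_components=2, random_state=0)
doc_topic = nmf.fit_transform(X)               # per-document topic weights
terms = tfidf.get_feature_names_out()

for k, weights in enumerate(nmf.components_):  # each topic = a ranked set of terms
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```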
147

Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data

Hazarika, Subhashis January 2019 (has links)
No description available.
148

[en] BONNIE: BUILDING ONLINE NARRATIVES FROM NOTEWORTHY INTERACTION EVENTS / [pt] BONNIE: CONSTRUINDO NARRATIVAS ONLINE A PARTIR DE EVENTOS DE INTERAÇÃO RELEVANTES

VINICIUS COSTA VILLAS BOAS SEGURA 12 January 2017 (has links)
[en] Nowadays, we have access to data of unprecedentedly large size, high dimensionality, and complexity. To extract unknown and unexpected information from such complex and dynamic data, we need effective and efficient strategies. One such strategy is to combine data analysis and visualization techniques, which is the essence of visual analytics applications. After the knowledge discovery process, a major challenge is to filter the essential information that led to a discovery and to communicate the findings to other people. We propose to take advantage of the trace left by exploratory data analysis, in the form of the user interaction history, to aid in this process. With the trace, the user can choose the desired interaction steps and create a narrative, sharing the acquired knowledge with readers. To achieve our goal, we have developed the BONNIE (Building Online Narratives from Noteworthy Interaction Events) framework. The framework comprises a log model to register the interaction events, auxiliary code to help developers instrument their own code, and an environment to view the user's own interaction history and build narratives. This thesis presents our proposal for communicating discoveries in visual analytics applications, the BONNIE framework, and a few empirical studies we conducted to evaluate our solution.
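A minimal sketch of what a log model for noteworthy interaction events and the assembly of a narrative from selected steps might look like (Python, with hypothetical field names; BONNIE's actual log model is not reproduced here).

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class InteractionEvent:
    """One noteworthy user action recorded during exploratory analysis."""
    action: str      # e.g. "filter", "zoom", "select"
    target: str      # visualization element affected
    params: dict     # parameters of the action
    timestamp: datetime = field(default_factory=datetime.now)

history: List[InteractionEvent] = []

def log(action: str, target: str, **params) -> None:
    """Called by the instrumented application code on every interaction."""
    history.append(InteractionEvent(action, target, params))

# During analysis the application records the trace...
log("filter", "sales_chart", year=2023)
log("zoom", "sales_chart", region="south")

# ...and the analyst later picks steps from the trace to build a narrative.
narrative = [e for e in history if e.target == "sales_chart"]
for step in narrative:
    print(f"{step.timestamp:%H:%M:%S} {step.action} {step.params}")
```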
149

Visual Analytics of Big Data from Molecular Dynamics Simulation

Rajendran, Catherine Jenifer Rajam 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Protein malfunction can cause human diseases, which makes proteins targets in the process of drug discovery. In-depth knowledge of how a protein functions can contribute widely to understanding the mechanisms of these diseases. Protein functions are determined by protein structures and their dynamic properties. Protein dynamics refers to the constant physical movement of atoms in a protein, which may result in transitions between different conformational states of the protein. These conformational transitions are critically important for proteins to function. Understanding protein dynamics can help us understand and interfere with the conformational states and transitions, and thus with the function of the protein. If we can understand the mechanism of a protein's conformational transitions, we can design molecules to regulate this process and thereby regulate the protein's functions for new drug discovery. Protein dynamics can be simulated by Molecular Dynamics (MD) simulations. The MD simulation data generated are spatial-temporal and therefore very high dimensional. To analyze the data, distinguishing the various atomic interactions within a protein by interpreting their 3D coordinate values plays a significant role. Since the data are enormous, an essential step is to find ways to interpret them, by designing more efficient algorithms to reduce dimensionality and by developing user-friendly visualization tools to find patterns and trends that are not usually attainable by traditional data-processing methods. Given the typically allosteric, long-range nature of the interactions that lead to large conformational transitions, pinpointing the underlying forces and pathways responsible for the global conformational transition at the atomic level is very challenging. To address these problems, various analytical techniques were applied to the simulation data to better understand the mechanism of protein dynamics at the atomic level, through a new program called Probing Long-distance Interactions by Tapping into Paired-Distances (PLITIP), which contains a set of new tools based on the analysis of paired distances; this removes the interference of the translation and rotation of the protein itself and can therefore capture the absolute changes within the protein. Firstly, we developed a tool called Decomposition of Paired Distances (DPD). This tool generates a distance matrix of all paired residues from our simulation data. This paired-distance matrix is therefore not subject to the interference of the translation or rotation of the protein and can capture the absolute changes within the protein. The matrix is then decomposed by DPD using Principal Component Analysis (PCA) to reduce dimensionality and to capture the largest structural variation. To showcase how DPD works, we applied it to two protein systems, HIV-1 protease and 14-3-3σ, both of which display tremendous structural changes and conformational transitions in their MD simulation trajectories. The largest structural variation and conformational transition were captured by the first principal component in both cases. In addition, structural clustering and ranking of representative frames by their PC1 values revealed the long-distance nature of the conformational transition and identified the key candidate regions that might be responsible for the large conformational transitions.
Secondly, to facilitate further analysis and the identification of the long-distance path, we developed a tool called Pearson Coefficient Spiral (PCP), which generates and visualizes Pearson coefficients to measure the linear correlation between any two sets of residue pairs. PCP allows users to fix one residue pair and examine the correlation of its change with other residue pairs. Thirdly, a set of visualization tools that generate paired atomic distances for the shortlisted candidate residues and capture significant interactions among them was developed. The first tool is the Residue Interaction Network Graph for Paired Atomic Distances (NG-PAD), which not only generates paired atomic distances for the shortlisted candidate residues, but also displays significant interactions in a network graph for convenient visualization. Second, the Chord Diagram for Interaction Mapping (CD-IP) was developed to map the interactions to protein secondary structural elements and to further narrow down important interactions. Third, Distance Plotting for Direct Comparison (DP-DC) plots any two paired distances of the user's choice, at either the residue or the atomic level, to facilitate the identification of similar or opposite patterns of distance change along the simulation time. All the above PLITIP tools enabled us to identify critical residues contributing to the large conformational transitions in both the HIV-1 protease and 14-3-3σ proteins. Besides the above major project, a side project on developing tools to study protein pseudo-symmetry is also reported. It has been proposed that symmetry provides protein stability, opportunities for allosteric regulation, and even functionality. This tool helps us answer the questions of why there are deviations from perfect symmetry in proteins and how to quantify them.
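A minimal sketch of the paired-distance idea behind DPD (random coordinates stand in for an MD trajectory): distances between all residue pairs are insensitive to global translation and rotation, and PCA on the frame-by-pair distance matrix exposes the largest internal structural variation and the residue pairs driving it.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_frames, n_residues = 200, 30
coords = rng.normal(size=(n_frames, n_residues, 3))  # stand-in for trajectory coordinates

# Frame-by-pair distance matrix: invariant to translation and rotation of the protein.
pairs = list(combinations(range(n_residues), 2))
dist = np.empty((n_frames, len(pairs)))
for j, (a, b) in enumerate(pairs):
    dist[:, j] = np.linalg.norm(coords[:, a, :] - coords[:, b, :], axis=1)

# PCA via SVD: the first component captures the largest structural variation.
centered = dist - dist.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ Vt[0]                    # per-frame projection onto PC1
top_pairs = np.argsort(np.abs(Vt[0]))[::-1][:5]  # pairs with the largest PC1 loading
print([pairs[i] for i in top_pairs])
```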
150

Data Visualization of Software Test Results : A Financial Technology Case Study / Datavisualisering av Mjukvarutestresultat : En Fallstudie av Finansiell Teknologi

Dzidic, Elvira January 2023 (has links)
With the increasing pace of development, the process of interpreting software test result data has become more challenging and time-consuming. While test results provide valuable insights into the software product, the increasing complexity of software systems and the growing volume of test data pose challenges for effectively analyzing this data to ensure quality. To address these challenges, organizations are adopting various tools. Visualization dashboards are a common approach used to streamline the analysis process: by aggregating and visualizing test result data, these dashboards enable easier identification of patterns and trends, facilitating informed decision-making. This study proposes a management dashboard with visualizations of test result data as a decision support system. A case study was conducted involving eleven quality assurance experts in a variety of roles, including managers, directors, testers, and project managers. User interviews were conducted to evaluate the need for a dashboard and to identify relevant test result data to visualize.
The participants expressed a need for a dashboard that would benefit both newcomers and experienced employees. A low-fidelity prototype of the dashboard was created, and A/B testing was performed through a survey to prioritize features and choose the preferred version of the prototype. The results of the user interviews highlighted pass rate, executed test cases, and failed test cases as the most important features. However, different professions showed interest in different test result metrics, leading to the creation of multiple views in the prototype to accommodate varying needs. A high-fidelity prototype was implemented based on feedback and underwent user testing, leading to iterative improvements. Despite the numerous advantages of a dashboard, integrating it into an organization can pose challenges due to variations in testing processes and guidelines across companies and teams; hence, dashboards require customization. The main contribution of this study is twofold. Firstly, it provides recommendations for relevant test result metrics and suitable visualizations to effectively communicate test results. Secondly, it offers insights into the visualization preferences of different professions within a quality assurance team, which were missing in previous studies.
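A minimal sketch (with invented test-result records) of the aggregation such a dashboard would perform for the metrics the interviews ranked highest: pass rate, executed test cases and failed test cases.

```python
from collections import Counter

# Hypothetical test-result records as they might arrive from a test runner.
results = [
    {"suite": "payments", "case": "t1", "status": "passed"},
    {"suite": "payments", "case": "t2", "status": "failed"},
    {"suite": "payments", "case": "t3", "status": "skipped"},
    {"suite": "clearing", "case": "t4", "status": "passed"},
]

counts = Counter(r["status"] for r in results)
executed = counts["passed"] + counts["failed"]   # skipped cases were not executed
pass_rate = counts["passed"] / executed if executed else 0.0

print(f"executed: {executed}, failed: {counts['failed']}, pass rate: {pass_rate:.0%}")
```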
