161

Exploration interactive, incrémentale et multi-niveau de larges collections d'images / Interactive, incremental and multi-level exploration of large collections of images

Rayar, Frédéric 22 November 2016 (has links)
Les travaux de recherche présentés et discutés dans cette thèse s’intéressent aux grandes collections d’images numériques. Plus particulièrement, nous cherchons à donner à un utilisateur la possibilité d’explorer ces collections d’images, soit dans le but d’en extraire de l’information et de la connaissance, soit de permettre une certaine sérendipité dans l’exploration. Ainsi, cette problématique est abordée du point de vue de l’analyse et l’exploration interactive des données. Nous tirons profit du paradigme de navigation par similarité et visons à respecter simultanément les trois contraintes suivantes : (i) traiter de grandes collections d’images, (ii) traiter des collections dont le nombre d’images ne cesse de croître au cours du temps et (iii) donner des moyens d’explorer interactivement des collections d’images. Pour ce faire, nous proposons d’effectuer une étude conjointe de l’indexation et de la visualisation de grandes collections d’images qui s’agrandissent au cours du temps. / The research work presented and discussed in this thesis focuses on large and ever-growing image collections. More specifically, we aim to give users the means to explore such image collections, either to extract some kind of information and knowledge, or simply to wander through them. This thesis addresses this issue from the perspective of Interactive Data Exploration and Analytics. We take advantage of the similarity-based image collection browsing paradigm and aim to meet the following three constraints simultaneously: (i) handling large image collections, up to millions of images, (ii) handling dynamic image collections, to deal with ever-growing image collections, and (iii) providing interactive means to explore image collections. To do so, we jointly study the indexing and the interactive visualisation of large and ever-growing image collections.
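
For readers unfamiliar with similarity-based browsing, the paradigm this thesis builds on can be illustrated with a toy incremental k-nearest-neighbour graph over image feature vectors. This is only a minimal sketch: the descriptors, the value of k and the brute-force update are assumptions for illustration, not the author's indexing structure, which must scale to millions of images.

```python
import numpy as np

class IncrementalKNNGraph:
    """Toy incremental k-NN graph over image feature vectors (illustrative only)."""

    def __init__(self, k=5):
        self.k = k
        self.features = []   # list of 1-D feature vectors, one per image
        self.neighbors = []  # neighbors[i] = indices of the k nearest images to image i

    def add_image(self, feature_vector):
        """Insert one image and update the neighbour lists of affected nodes."""
        new_id = len(self.features)
        self.features.append(np.asarray(feature_vector, dtype=float))
        if new_id == 0:
            self.neighbors.append([])
            return new_id
        # distances from the newcomer to all previously inserted images
        stack = np.vstack(self.features[:-1])
        dists = np.linalg.norm(stack - self.features[new_id], axis=1)
        self.neighbors.append(np.argsort(dists)[: self.k].tolist())
        # existing nodes adopt the newcomer if it is closer than their current neighbours
        for old_id in range(new_id):
            cand = self.neighbors[old_id] + [new_id]
            cand_d = [np.linalg.norm(self.features[old_id] - self.features[c]) for c in cand]
            self.neighbors[old_id] = [c for _, c in sorted(zip(cand_d, cand))][: self.k]
        return new_id

graph = IncrementalKNNGraph(k=3)
for vec in np.random.rand(20, 64):    # stand-in for real image descriptors
    graph.add_image(vec)
print(graph.neighbors[0])             # neighbours a browser could jump to from image 0
```

A real system would replace the linear scan with an approximate nearest-neighbour index so that each insertion stays cheap as the collection keeps growing.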
162

Une approche par composants pour l'analyse visuelle interactive de résultats issus de simulations numériques / A component-based approach for interactive visual analysis of numerical simulation results

Ait Wakrime, Abderrahim 10 December 2015 (has links)
Les architectures par composants sont de plus en plus étudiées et utilisées pour le développement efficace des applications en génie logiciel. Elles offrent, d’un côté, une architecture claire aux développeurs, et de l’autre, une séparation des différentes parties fonctionnelles et en particulier dans les applications de visualisation scientifique interactives. La modélisation de ces applications doit permettre la description des comportements de chaque composant et les actions globales du système. De plus, les interactions entre composants s’expriment par des schémas de communication qui peuvent être très complexes avec, par exemple, la possibilité de perdre des messages pour gagner en performance. Cette thèse décrit le modèle ComSA (Component-based approach for Scientific Applications) qui est basé sur une approche par composants dédiée aux applications de visualisation scientifique interactive et dynamique formalisée par les réseaux FIFO colorés stricts (sCFN). Les principales contributions de cette thèse sont dans un premier temps, un ensemble d’outils pour modéliser les différents comportements des composants ainsi que les différentes politiques de communication au sein de l’application. Dans un second temps, la définition de propriétés garantissant un démarrage propre de l’application en analysant et détectant les blocages. Cela permet de garantir la vivacité tout au long de l’exécution de l’application. Finalement l’étude de la reconfiguration dynamique des applications d’analyse visuelle par ajout ou suppression à la volée d’un composant sans arrêter toute l’application. Cette reconfiguration permet de minimiser le nombre de services non disponibles. / Component-based approaches are increasingly studied and used for the effective development of applications in software engineering. They offer developers, on the one hand, a clear architecture and, on the other, a separation of the various functional parts, particularly in interactive scientific visualization applications. Modeling such applications must allow the description of each component's behavior and of the global actions of the system. Moreover, the interactions between components are expressed through communication schemes that can be very complex, with, for example, the possibility of losing messages to enhance performance. This thesis describes the ComSA model (Component-based approach for Scientific Applications), which relies on a component-based approach dedicated to interactive and dynamic scientific visualization applications and is formalized with strict Colored FIFO Nets (sCFN). The main contributions of this thesis are, first, a set of tools to model the components' behaviors and the various communication policies within the application. Second, the definition of properties that guarantee a proper start of the application by analyzing and detecting deadlocks; this ensures liveness throughout the application's execution. Finally, the study of the dynamic reconfiguration of visual analytics applications by adding or removing a component on the fly without stopping the whole application. This reconfiguration minimizes the number of unavailable services.
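
The kind of communication policy mentioned above, where messages may be dropped to gain performance, can be made concrete with a small sketch. The class names and the "keep only the latest message" policy below are illustrative assumptions, not the sCFN formalism or the actual ComSA implementation.

```python
from collections import deque

class GreedyConnection:
    """Illustrative 'keep only the latest message' policy: stale data is dropped
    so an interactive consumer (e.g. a renderer) never waits on old frames."""

    def __init__(self):
        self._buf = deque(maxlen=1)   # older messages are silently discarded

    def send(self, message):
        self._buf.append(message)

    def receive(self):
        return self._buf.popleft() if self._buf else None

class FIFOConnection:
    """Lossless FIFO policy: every message is eventually delivered, at the cost
    of a backlog when the producer outpaces the consumer."""

    def __init__(self):
        self._buf = deque()

    def send(self, message):
        self._buf.append(message)

    def receive(self):
        return self._buf.popleft() if self._buf else None

producer_output = [f"frame-{i}" for i in range(5)]
greedy, fifo = GreedyConnection(), FIFOConnection()
for msg in producer_output:                 # producer runs faster than the consumer
    greedy.send(msg)
    fifo.send(msg)
print(greedy.receive())                     # frame-4: only the freshest frame survives
print([fifo.receive() for _ in range(5)])   # all five frames, in order
```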
163

A model for the visual representation of the coherence of facts in a textual document set

Engelbrecht, Louis January 2016 (has links)
A large amount of information is contained in textual records, which originate from a variety of sources such as handwritten records and digital media like audio and video files. The information contained in these records is unstructured, and visualising the content of the records is not a trivial task. In order to visualise information contained in unstructured textual records, the information must be extracted from the records and transformed into a structured format. This research aimed to visualise the coherence of facts contained in textual sources in order to allow the user who makes use of the visualisation to make an assumption about the validity of the textual records as a set. For the purpose of the study, it was contemplated that the coherence of facts contained in a document set was indicated by the multiple occurrences of the same fact over several documents in the set. The output of this research is a model that abstracts the process required to transform information contained in unstructured textual records into a structured format and the visual representation of the multiple occurrences of facts, in order to support the process of making an assumption about the coherence of facts in the set. This assumption enables the user to make a decision, based on the coherence theory of truth, about the validity of the document set. The model provides guidance and practices for performing tasks on similar textual document sets containing secondary data. The development of the model was informed by a phased construction of three specific software solution instantiations, namely an initial information extraction, an intermediate visual representation and a final information visualisation instantiation. The final solution instantiation was demonstrated to research participants and evaluated. A pragmatic design science research approach was followed in order to solve the research problem. In conducting the research, an adaptation of the Peffers et al. (2006) design research process model was followed. The result of the research is a model for the visual representation of the coherence of facts in a textual document set. Expert review of the model is added through a process of peer review and academic scrutiny by means of conference papers and a journal article. It is envisaged that the results of the research can be applied to a number of research fields such as Indigenous Knowledge, History and Law. / School of Computing / M. Sc. (Computing)
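
The notion of coherence as multiple occurrences of the same fact across documents can be sketched as a simple tally. The triple representation and the example records below are hypothetical and assume an upstream information-extraction step of the kind the model abstracts; they are not the thesis's implementation.

```python
from collections import defaultdict

def fact_support(extracted_facts):
    """Count, for each fact, how many distinct documents it occurs in.

    extracted_facts: mapping of document id -> iterable of (subject, predicate, object)
    triples, assumed to come from an upstream information-extraction step.
    """
    support = defaultdict(set)
    for doc_id, facts in extracted_facts.items():
        for fact in facts:
            support[fact].add(doc_id)
    return {fact: len(docs) for fact, docs in support.items()}

# Hypothetical extracted facts from three records in a document set
docs = {
    "letter_01": [("Smith", "born_in", "1842"), ("Smith", "lived_in", "Pretoria")],
    "diary_07":  [("Smith", "born_in", "1842")],
    "deed_03":   [("Smith", "lived_in", "Pretoria"), ("Smith", "born_in", "1841")],
}
for fact, count in sorted(fact_support(docs).items(), key=lambda kv: -kv[1]):
    print(count, fact)   # facts supported by more documents suggest higher coherence
```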
164

Graph signal processing for visual analysis and data exploration / Processamento de sinais em grafos para analise visual e exploração de dados

Paola Tatiana Llerena Valdivia 17 May 2018 (has links)
Signal processing is used in a wide variety of applications, ranging from digital image processing to biomedicine. Recently, some tools from signal processing have been extended to the context of graphs, allowing their use on irregular domains. Among others, the Fourier Transform and the Wavelet Transform have been adapted to this context. Graph signal processing (GSP) is a new field with many potential applications in data exploration. In this dissertation we show how tools from graph signal processing can be used for visual analysis. Specifically, we proposed a data filtering method, based on spectral graph filtering, that led to high-quality visualizations, which were assessed qualitatively and quantitatively. In addition, we relied on the graph wavelet transform to enable the visual analysis of massive time-varying data, revealing interesting phenomena and events. The proposed applications of GSP to visually analyze data are a first step towards incorporating this theory into information visualization methods. Many possibilities of GSP remain to be explored, improving the understanding of static and time-varying phenomena that are yet to be uncovered. / O processamento de sinais é usado em uma ampla variedade de aplicações, desde o processamento digital de imagens até a biomedicina. Recentemente, algumas ferramentas do processamento de sinais foram estendidas ao contexto de grafos, permitindo seu uso em domínios irregulares. Entre outros, a Transformada de Fourier e a Transformada Wavelet foram adaptadas nesse contexto. O Processamento de Sinais em Grafos (PSG) é um novo campo com muitos aplicativos potenciais na exploração de dados. Nesta dissertação mostramos como ferramentas de processamento de sinais em grafos podem ser usadas para análise visual. Especificamente, o método de filtragem de dados proposto, baseado na filtragem de grafos espectrais, levou a visualizações de alta qualidade que foram atestadas qualitativa e quantitativamente. Por outro lado, usamos a transformada de wavelet em grafos para permitir a análise visual de dados massivos variantes no tempo, revelando fenômenos e eventos interessantes. As aplicações propostas do PSG para analisar visualmente os dados são um primeiro passo para incorporar o uso desta teoria nos métodos de visualização da informação. Muitas possibilidades do PSG podem ser exploradas melhorando a compreensão de fenômenos estáticos e variantes no tempo que ainda não foram descobertos.
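
For readers unfamiliar with spectral graph filtering, the generic operation behind such a data filtering method can be sketched as follows. This is a textbook low-pass filter on a toy ring graph, not the dissertation's specific filter design.

```python
import numpy as np

def low_pass_graph_filter(adjacency, signal, keep=10):
    """Minimal spectral low-pass filter: keep only the 'keep' smoothest
    Laplacian eigenvectors of the graph and reconstruct the signal from them."""
    adjacency = np.asarray(adjacency, dtype=float)
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                 # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    coeffs = eigvecs.T @ signal                    # graph Fourier transform
    coeffs[keep:] = 0.0                            # drop high-frequency components
    return eigvecs @ coeffs                        # inverse transform

# Toy example: a ring graph carrying a noisy sinusoidal signal
n = 30
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
clean = np.sin(2 * np.pi * np.arange(n) / n)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=n)
smoothed = low_pass_graph_filter(A, noisy, keep=5)
print(np.linalg.norm(noisy - clean) > np.linalg.norm(smoothed - clean))  # usually True
```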
165

Konzeption und Entwicklung eines automatisierten Workflows zur geovisuellen Analyse von georeferenzierten Textdaten(strömen) / Microblogging Content

Gröbe, Mathias 13 October 2015 (has links)
Die vorliegende Masterarbeit behandelt den Entwurf und die exemplarische Umsetzung eines Arbeitsablaufs zur Aufbereitung von georeferenziertem Microblogging Content. Als beispielhafte Datenquelle wurde Twitter herangezogen. Darauf basierend, wurden Überlegungen angestellt, welche Arbeitsschritte nötig und mit welchen Mitteln sie am besten realisiert werden können. Dabei zeigte sich, dass eine ganze Reihe von Bausteinen aus dem Bereich des Data Mining und des Text Mining für eine Pipeline bereits vorhanden sind und diese zum Teil nur noch mit den richtigen Einstellungen aneinandergereiht werden müssen. Zwar kann eine logische Reihenfolge definiert werden, aber weitere Anpassungen auf die Fragestellung und die verwendeten Daten können notwendig sein. Unterstützt wird dieser Prozess durch verschiedene Visualisierungen mittels Histogrammen, Wortwolken und Kartendarstellungen. So kann neues Wissen entdeckt und nach und nach die Parametrisierung der Schritte gemäß den Prinzipien des Geovisual Analytics verfeinert werden. Für eine exemplarische Umsetzung wurde nach der Betrachtung verschiedener Softwareprodukte die für statistische Anwendungen optimierte Programmiersprache R ausgewählt. Abschließend wurde die Software mit Daten von Twitter und Flickr evaluiert. / This Master's Thesis deals with the conception and exemplary implementation of a workflow for processing georeferenced Microblogging Content. Twitter is used as the example data source and as the starting point for determining which processing steps are needed and how they can best be realized. It turned out that a whole range of building blocks from the fields of Data Mining and Text Mining already exists for such a pipeline; for the most part they only need to be chained together with the right settings. Although a logical order can be defined, further adjustments to the research question and the data used may be necessary. The process is supported by different visualizations such as histograms, word clouds and maps. In this way new knowledge can be discovered and the parametrization of the steps refined step by step, following the principles of Geovisual Analytics. For the exemplary implementation, after a review of several software products, the programming language R was chosen, as it is optimized for statistical applications. Finally, the workflow was evaluated with data from Twitter and Flickr.
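
The thesis implements its pipeline in R; purely to make the shape of the tokenise-and-count step behind word clouds and histograms concrete, here is a minimal sketch (in Python rather than R, with hypothetical message fields, and without the geocoding and mapping stages).

```python
import re
from collections import Counter

def term_frequencies(messages, stopwords=frozenset({"the", "a", "and", "over", "rt"})):
    """Tokenise short messages and count terms - the raw input for a word cloud."""
    counts = Counter()
    for text in messages:
        tokens = re.findall(r"[#@]?\w+", text.lower())
        counts.update(t for t in tokens if t not in stopwords and len(t) > 2)
    return counts

# Hypothetical georeferenced messages (the real workflow would pull these
# from the Twitter or Flickr APIs together with their coordinates).
tweets = [
    {"text": "Traffic jam near the main station #dresden", "lon": 13.73, "lat": 51.04},
    {"text": "Beautiful sunset over the Elbe #dresden #sunset", "lon": 13.74, "lat": 51.05},
]
freqs = term_frequencies(t["text"] for t in tweets)
print(freqs.most_common(5))   # feed into a word cloud, histogram, or map layer
```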
166

Lineamientos para la integración de minería de procesos y visualización de datos / Guidelines for the integration of process mining and data visualization

Chise Teran, Bryhan, Hurtado Bravo, Jimmy Manuel 04 December 2020 (has links)
Process mining es una disciplina que ha tomado mayor relevancia en los últimos años; prueba de ello es un estudio realizado por la consultora italiana HSPI en el 2018, donde se indica un crecimiento del 72% de casos de estudio aplicados sobre process mining con respecto al año 2017. Así mismo, un reporte publicado en el mismo año por BPTrends, firma especializada en procesos de negocio, afirma que las organizaciones tienen como prioridad en sus proyectos estratégicos el rediseño y automatización de sus principales procesos de negocio. La evolución de esta disciplina ha permitido superar varios de los retos que se identificaron en un manifiesto [1] realizado por los miembros de la IEEE Task Force on Process Mining en el 2012. En este sentido, y apoyados en el desafío número 11 de este manifiesto, el objetivo de este proyecto es integrar las disciplinas de process mining y data visualization a través de un modelo de interacción de lineamientos que permitan mejorar el entendimiento de los usuarios no expertos en los resultados gráficos de proyectos de process mining, a fin de optimizar los procesos de negocio en las organizaciones. Nuestro aporte tiene como objetivo mejorar el entendimiento de los usuarios no expertos en el campo de process mining. Por ello, nos apoyamos de las técnicas de data visualization y de la psicología del color para proponer un modelo de interacción de lineamientos que permita guiar a los especialistas en process mining a diseñar gráficos que transmitan de forma clara y comprensible. Con ello, se busca comprender de mejor forma los resultados de los proyectos de process mining, permitiéndonos tomar mejores decisiones sobre el desempeño de los procesos de negocio en las organizaciones. El modelo de interacción generado en nuestra investigación se validó con un grupo de usuarios relacionados a procesos críticos de diversas organizaciones del país. Esta validación se realizó a través de una encuesta donde se muestran casos a dichos usuarios a fin de constatar las 5 variables que se definieron para medir de forma cualitativa el nivel de mejora en la comprensión de los gráficos al aplicar los lineamientos del modelo de interacción. Los resultados obtenidos demostraron que 4 de las 5 variables tuvieron un impacto positivo en la percepción de los usuarios según el caso que se propuso en forma de pregunta. / Process mining is a discipline that has become more relevant in recent years; proof of this is a study carried out by the Italian consultancy HSPI in 2018, which reports a 72% growth in the number of process mining case studies compared to 2017. Likewise, a report published in the same year by BPTrends, a firm specialized in business processes, states that organizations consider the redesign and automation of their main business processes a priority among their strategic projects. The evolution of this discipline has made it possible to overcome several of the challenges identified in a manifesto [1] written by the members of the IEEE Task Force on Process Mining in 2012. In this sense, and building on challenge number 11 of this manifesto, the objective of this project is to integrate the disciplines of process mining and data visualization through an interaction model of guidelines that improves the understanding that non-expert users have of the graphical results of process mining projects, in order to optimize business processes in organizations. Our contribution aims to improve the understanding of users who are not experts in the field of process mining. For this reason, we rely on data visualization techniques and color psychology to propose an interaction model of guidelines that helps process mining specialists design graphics that convey their message clearly and understandably. The aim is to better understand the results of process mining projects, allowing better decisions to be made about the performance of business processes in organizations. The interaction model generated in our research was validated with a group of users involved in critical processes of various organizations in the country. This validation was carried out through a survey in which cases were shown to these users in order to assess the 5 variables defined to qualitatively measure the improvement in the comprehension of the graphics when the guidelines of the interaction model are applied. The results showed that 4 of the 5 variables had a positive impact on the users' perception for the case proposed in the form of a question. / Tesis
167

Understanding High-Dimensional Data Using Reeb Graphs

Harvey, William John 14 August 2012 (has links)
No description available.
168

Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA / with Applications for QuantNet 2.0 and GitHub

Borke, Lukas 08 September 2017 (has links)
Mit der wachsenden Popularität von GitHub, dem größten Online-Anbieter von Programm-Quellcode und der größten Kollaborationsplattform der Welt, hat es sich zu einer Big-Data-Ressource entfaltet, die eine Vielfalt von Open-Source-Repositorien (OSR) anbietet. Gegenwärtig gibt es auf GitHub mehr als eine Million Organisationen, darunter solche wie Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly und viele mehr. GitHub verfügt über eine umfassende REST API, die es Forschern ermöglicht, wertvolle Informationen über die Entwicklungszyklen von Software und Forschung abzurufen. Unsere Arbeit verfolgt zwei Hauptziele: (I) ein automatisches OSR-Kategorisierungssystem für Data Science Teams und Softwareentwickler zu ermöglichen, das Entdeckbarkeit, Technologietransfer und Koexistenz fördert. (II) Visuelle Daten-Exploration und thematisch strukturierte Navigation innerhalb von GitHub-Organisationen für reproduzierbare Kooperationsforschung und Web-Applikationen zu etablieren. Um Mehrwert aus Big Data zu generieren, ist die Speicherung und Verarbeitung der Datensemantik und Metadaten essenziell. Ferner ist die Wahl eines geeigneten Text Mining (TM) Modells von Bedeutung. Die dynamische Kalibrierung der Metadaten-Konfigurationen, TM Modelle (VSM, GVSM, LSA), Clustering-Methoden und Clustering-Qualitätsindizes wird als "Smart Clusterization" abgekürzt. Data-Driven Documents (D3) und Three.js (3D) sind JavaScript-Bibliotheken, um dynamische, interaktive Datenvisualisierung zu erzeugen. Beide Techniken erlauben Visuelles Data Mining (VDM) in Webbrowsern, und werden als D3-3D abgekürzt. Latent Semantic Analysis (LSA) misst semantische Information durch Kontingenzanalyse des Textkorpus. Ihre Eigenschaften und Anwendbarkeit für Big-Data-Analytik werden demonstriert. "Smart clusterization", kombiniert mit den dynamischen VDM-Möglichkeiten von D3-3D, wird unter dem Begriff "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA" zusammengefasst. / With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved to a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing of the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. 
Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
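
As a rough sketch of one pass of the TF-IDF / LSA / clustering pipeline described above, using scikit-learn as a stand-in: the repository descriptions are invented, and the actual system calibrates metadata configurations, TM models and quality indices dynamically rather than fixing them as below.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical repository descriptions standing in for GitHub README/metadata text.
docs = [
    "interactive data visualization with d3 and javascript",
    "3d rendering of large point clouds in the browser",
    "time series forecasting with arima models in r",
    "bayesian regression and statistical modelling",
    "webgl animation of network graphs",
    "kernel density estimation and nonparametric statistics",
]

tfidf = TfidfVectorizer().fit_transform(docs)        # VSM weighting
lsa = TruncatedSVD(n_components=2, random_state=0)   # LSA: low-rank semantic space
embedding = lsa.fit_transform(tfidf)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels)                                # cluster id per repository
print(silhouette_score(embedding, labels))   # one possible clustering quality index
```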
169

Visualizing the Ethiopian Commodity Market

Rogstadius, Jakob January 2009 (has links)
The Ethiopia Commodity Exchange (ECX), like many other data intensive organizations, is having difficulties making full use of the vast amounts of data that it collects. This MSc thesis identifies areas within the organization where concepts from the academic fields of information visualization and visual analytics can be applied to address this issue. Software solutions are designed and implemented in two areas with the purpose of evaluating the approach and demonstrating to potential users, developers and managers what can be achieved using this method. A number of presentation methods are proposed for the ECX website, which previously contained no graphing functionality for market data, to make it easier for users to find trends, patterns and outliers in prices and trade volumes of commodities traded at the exchange. A software application is also developed to support the ECX market surveillance team by drastically improving its capabilities of investigating complex trader relationships. Finally, as ECX lacked previous experience with visualization, one software developer was trained in computer graphics and involved in the work, to enable continued maintenance and future development of new visualization solutions within the organization.
170

Searching for novel gene functions in yeast : identification of thousands of novel molecular interactions by protein-fragment complementation assay followed by automated gene function prediction and high-throughput lipidomics

Tarasov, Kirill 09 1900 (has links)
No description available.
