121

Predictive Visual Analytics of Social Media Data for Supporting Real-time Situational Awareness

Luke Snyder (8764473) 01 May 2020 (has links)
Real-time social media data can provide useful information on evolving events and situations, and various domain users are increasingly leveraging it to gain rapid situational awareness. Informed by discussions with first responders and government officials, we focus on two major barriers limiting the widespread adoption of social media for situational awareness: the lack of geotagged data and the deluge of irrelevant information during events. Geotags are naturally useful, as they indicate the location of origin and provide geographic context. Only a small portion of social media is geotagged, however, limiting its practical use for situational awareness. The deluge of irrelevant data poses similar difficulties, impeding the effective identification of semantically relevant information. Existing methods for short-text relevance classification fail to incorporate users' knowledge into the classification process. As a result, classifiers cannot be interactively retrained for specific events or user-dependent needs in real time, limiting situational awareness. In this work, we first adapt, improve, and evaluate a state-of-the-art deep learning model for city-level geolocation prediction and integrate it with a visual analytics system tailored for real-time situational awareness. We then present a novel interactive learning framework in which users rapidly identify relevant data by iteratively correcting the relevance classification of tweets in real time. We integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of our interactive learning framework within a visual analytics system adapted for real-time situational awareness.
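The interactive learning loop described in this abstract, where users iteratively correct the relevance labels of tweets, can be sketched with a minimal online perceptron over bag-of-words features. This is purely an illustrative sketch; the class name, feature scheme, and update rule below are assumptions, not the model actually integrated into SMART 2.0.

```python
class OnlineRelevanceClassifier:
    """Minimal online perceptron over bag-of-words tokens, sketching how
    user corrections could incrementally retrain a tweet-relevance
    classifier (hypothetical, not the SMART 2.0 implementation)."""

    def __init__(self):
        self.weights = {}  # token -> weight

    def score(self, tokens):
        return sum(self.weights.get(t, 0.0) for t in tokens)

    def predict(self, tokens):
        # relevant iff the summed token weights are positive
        return self.score(tokens) > 0.0

    def correct(self, tokens, relevant):
        # user feedback: update only when the current prediction is wrong,
        # nudging each token's weight toward the corrected label
        if self.predict(tokens) != relevant:
            delta = 1.0 if relevant else -1.0
            for t in tokens:
                self.weights[t] = self.weights.get(t, 0.0) + delta
```

A single correction is enough to flip future predictions for the affected tokens, which is the real-time behavior the abstract emphasizes.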
122

Visualisering som bromsmedicin för returer inom E-handel : En kvalitativ studie om användarnas behov för utformningen av Visual Analytics inom beslutsstödsystem / Visualization as a brake on returns in e-commerce : A qualitative study of users' needs in the design of Visual Analytics for decision support systems

Björner, Olivia January 2022 (has links)
Visual Analytics is a powerful tool for decision makers to gather new insights from data. Since Visual Analytics can be hard to get into at first, previous studies have sought to bridge the gap between industry experts and these tools. Few studies, however, have examined users' needs regarding how Visual Analytics can generate these valuable insights. To examine those needs, returns in e-commerce were selected as the application area, since returns are harmful both to companies and to society at large. Companies collect a great deal of data as goods are returned, which can be visualized. To highlight e-tailers' needs for visualization tools for their return data, a qualitative empirical study was conducted. A prototype was developed to visually support the semi-structured interviews. Six e-tailers were interviewed and tested the prototype, in order to analyze their needs for visualization tools. The results show that some graphic elements performed better than others, and that return data needs to be presented alongside sales data to be relevant. Most of the e-tailers were entirely new to Visual Analytics, and the study's findings suggest that predefined graphs helped them get into the Visual Analytics mindset and may serve as a way to introduce more users to the world of Visual Analytics.
123

A visual analytics approach for multi-resolution and multi-model analysis of text corpora : application to investigative journalism / Une approche de visualisation analytique pour une analyse multi-résolution de corpus textuels : application au journalisme d’investigation

Médoc, Nicolas 16 October 2017 (has links)
As the production of digital texts grows exponentially, a greater need to analyze text corpora arises in various domains of application, insofar as these corpora constitute inexhaustible sources of shared information and knowledge. We therefore propose in this thesis a novel visual analytics approach for the analysis of text corpora, implemented for the real and concrete needs of investigative journalism. Motivated by the problems and tasks identified with a professional investigative journalist, the visualizations and interactions were designed through a user-centered methodology involving the user throughout the development process. Specifically, investigative journalists formulate hypotheses and exhaustively explore the field under investigation in order to multiply the sources supporting their working hypotheses. Carrying out such tasks in a large corpus is a daunting endeavor, however, and requires visual analytics software addressing several challenging research issues covered in this thesis. First, the difficulty of making sense of a large text corpus lies in its unstructured nature. We resort to the Vector Space Model (VSM) and its strong relationship with the distributional hypothesis, leveraged by multiple text mining algorithms, to discover the latent semantic structure of the corpus. Topic models and biclustering methods are recognized to be well suited to the extraction of coarse-grained topics, i.e. groups of documents concerning similar topics, each one represented by a set of terms extracted from the textual contents. Such topical structuring makes it possible to summarize a corpus and ease its exploration. We provide a new Weighted Topic Map visualization that conveys a broad overview of coarse-grained topics, allowing quick interpretation of contents through multiple tag clouds while depicting properties of the topical structure such as the relative importance of topics and their semantic similarity.
Although the exploration of coarse-grained topics helps locate topics of interest and their neighborhoods, the identification of specific facts, viewpoints, or angles related to events or stories requires a finer level of structure to represent topic variants. This nested structure, revealed by Bimax, a pattern-based overlapping biclustering algorithm, captures in biclusters the co-occurrences of terms shared by subsets of documents, which can disclose facts, viewpoints, or angles related to common events or stories. This thesis tackles the issues raised by visualizing large numbers of overlapping biclusters by organizing term-document biclusters into a hierarchy that limits term redundancy and highlights the common and distinctive parts of the biclusters. We evaluated the utility of our software through a usage scenario and a qualitative evaluation with an investigative journalist. In addition, the co-occurrence patterns of the topic variants revealed by Bimax are determined by the enclosing topical structure supplied by the coarse-grained topic extraction method, which is run beforehand. Nonetheless, little guidance exists regarding the choice of the latter method and its impact on the exploration and comprehension of topics and topic variants. We therefore conducted both a numerical experiment and a controlled user experiment to compare two topic extraction methods: Coclus, a disjoint biclustering method, and hierarchical Latent Dirichlet Allocation (hLDA), an overlapping probabilistic topic model whose probability distributions form an overlapping bicluster structure. The theoretical foundations of both methods are systematically analyzed by relating them to the distributional hypothesis. The numerical experiment provides statistical evidence of the difference between the resulting topical structures of the two methods. The controlled experiment shows their impact on the comprehension of topics and topic variants, from the analyst's perspective. (...)
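Bimax, named in the abstract above, enumerates inclusion-maximal all-ones submatrices of a binary term-document matrix. The brute-force toy version below is purely illustrative (it is not the divide-and-conquer algorithm Bimax actually uses, and it is only feasible for tiny matrices), but it shows what a term-document bicluster is.

```python
from itertools import combinations

import numpy as np


def naive_biclusters(M, min_rows=2, min_cols=2):
    """Enumerate inclusion-maximal all-ones submatrices of a small binary
    matrix M (rows = documents, cols = terms). Brute force over column
    subsets; illustrative only, exponential in the number of columns."""
    n_rows, n_cols = M.shape
    found = []
    for k in range(min_cols, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            # documents containing every term in this column subset
            rows = tuple(r for r in range(n_rows)
                         if all(M[r, c] for c in cols))
            if len(rows) >= min_rows:
                found.append((rows, cols))
    # keep only inclusion-maximal biclusters
    maximal = []
    for rows, cols in found:
        rs, cs = set(rows), set(cols)
        strictly_contained = any(
            set(r2) >= rs and set(c2) >= cs and (set(r2), set(c2)) != (rs, cs)
            for r2, c2 in found)
        if not strictly_contained:
            maximal.append((rows, cols))
    return maximal
```

On a 3x3 matrix where documents 0 and 1 share terms {0, 1} and documents 1 and 2 share terms {1, 2}, the two overlapping biclusters are recovered, overlapping in document 1 and term 1, which is exactly the kind of nested, overlapping structure the thesis visualizes.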
124

Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data

Hazarika, Subhashis January 2019 (has links)
No description available.
125

[en] BONNIE: BUILDING ONLINE NARRATIVES FROM NOTEWORTHY INTERACTION EVENTS / [pt] BONNIE: CONSTRUINDO NARRATIVAS ONLINE A PARTIR DE EVENTOS DE INTERAÇÃO RELEVANTES

VINICIUS COSTA VILLAS BOAS SEGURA 12 January 2017 (has links)
[en] Nowadays, we have access to data of unprecedentedly large size, high dimensionality, and complexity. To extract unknown and unexpected information from such complex and dynamic data, we need effective and efficient strategies. One such strategy is to combine data analysis and visualization techniques, which is the essence of visual analytics applications. After the knowledge discovery process, a major challenge is to filter the essential information that led to a discovery and to communicate the findings to other people. We propose to take advantage of the trace left by the exploratory data analysis, in the form of the user interaction history, to aid in this process. With the trace, the user can choose the desired interaction steps and create a narrative, sharing the acquired knowledge with readers. To achieve our goal, we have developed the BONNIE (Building Online Narratives from Noteworthy Interaction Events) framework. The framework comprises a log model to register the interaction events, auxiliary code to help developers instrument their own code, and an environment to view the user's own interaction history and build narratives. This thesis presents our proposal for communicating discoveries in visual analytics applications, the BONNIE framework, and a few empirical studies we conducted to evaluate our solution.
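A log model for interaction events, as BONNIE's description suggests, can be sketched minimally as a timestamped event record plus a narrative built from the steps the user picks. The class names and fields below are hypothetical illustrations, not BONNIE's actual schema.

```python
from dataclasses import dataclass


@dataclass
class InteractionEvent:
    """One logged user interaction (hypothetical log model)."""
    timestamp: float
    action: str   # e.g. "filter", "zoom", "select"
    target: str   # the view or widget acted upon
    params: dict  # action-specific details


class InteractionLog:
    """Records events during exploration; the user later selects the
    noteworthy ones to assemble a narrative."""

    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)

    def build_narrative(self, indices):
        # the user picks the noteworthy steps; return them in time order
        return [self.events[i] for i in sorted(indices)]
```

The point of the sketch is the separation the abstract describes: logging is automatic and exhaustive, while the narrative is a curated, ordered subset of that trace.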
126

Visual Analytics for the Exploratory Analysis and Labeling of Cultural Data

Meinecke, Christofer 20 October 2023 (has links)
Cultural data can come in various forms and modalities, such as text traditions, artworks, music, crafted objects, or even intangible heritage such as biographies of people, performing arts, and cultural customs and rites. The assignment of metadata to such cultural heritage objects is an important task that people working in galleries, libraries, archives, and museums (GLAM) perform on a daily basis. These rich metadata collections are used to categorize, structure, and study collections, but can also be used to apply computational methods. Such computational methods are the focus of Computational and Digital Humanities projects and research. For the longest time, the digital humanities community has focused on textual corpora, including text mining and other natural language processing techniques, although some disciplines of the humanities, such as art history and archaeology, have a long history of using visualizations. In recent years, the digital humanities community has started to shift its focus to include other modalities, such as audio-visual data. In turn, methods in machine learning and computer vision have been proposed for the specificities of such corpora. Over the last decade, the visualization community has engaged in several collaborations with the digital humanities, often with a focus on exploratory or comparative analysis of the data at hand. This includes both methods and systems that support classical Close Reading of the material and Distant Reading methods that give an overview of larger collections, as well as methods in between, such as Meso Reading. Furthermore, a wider application of machine learning methods can be observed on cultural heritage collections, but they are rarely applied together with visualizations to allow for further perspectives on the collections in a visual analytics or human-in-the-loop setting. Visual analytics can help in the decision-making process by guiding domain experts through the collection of interest.
However, state-of-the-art supervised machine learning methods are often not applicable to the collection of interest due to missing ground truth. One form of ground truth is class labels, e.g., of entities depicted in an image collection, assigned to the individual images. Labeling all objects in a collection is an arduous task when performed manually, because cultural heritage collections contain a wide variety of different objects with plenty of detail. A problem that arises with collections curated in different institutions is that a specific standard is not always followed, so the vocabularies used can drift apart from one another, making it difficult to combine the data from these institutions for large-scale analysis. This thesis presents a series of projects that combine machine learning methods with interactive visualizations for the exploratory analysis and labeling of cultural data. First, we define cultural data with regard to heritage and contemporary data; then we look at the state of the art of existing visualization, computer vision, and visual analytics methods and projects focusing on cultural data collections. After this, we present the problems addressed in this thesis and their solutions, starting with a series of visualizations to explore different facets of rap lyrics and rap artists with a focus on text reuse. Next, we engage in a more complex case of text reuse, the collation of medieval vernacular text editions. For this, a human-in-the-loop process is presented that applies word embeddings and interactive visualizations to perform textual alignments on under-resourced languages, supported by labeling of the relations between lines and between words. We then switch the focus from textual data to another modality of cultural data by presenting a Virtual Museum that combines interactive visualizations and computer vision in order to explore a collection of artworks.
With the lessons learned from the previous projects, we engage in the labeling and analysis of medieval illuminated manuscripts and so combine some of the machine learning methods and visualizations that were used for textual data with computer vision methods. Finally, we give reflections on the interdisciplinary projects and the lessons learned, before we discuss existing challenges when working with cultural heritage data from the computer science perspective to outline potential research directions for machine learning and visual analytics of cultural heritage data.
127

Data Visualization of Software Test Results : A Financial Technology Case Study / Datavisualisering av Mjukvarutestresultat : En Fallstudie av Finansiell Teknologi

Dzidic, Elvira January 2023 (has links)
With the increasing pace of development, the process of interpreting software test results data has become more challenging and time-consuming. While test results provide valuable insights into the software product, the increasing complexity of software systems and the growing volume of test data pose challenges in effectively analyzing this data to ensure quality. To address these challenges, organizations are adopting various tools. Visualization dashboards are a common approach used to streamline the analysis process. By aggregating and visualizing test results data, these dashboards enable easier identification of patterns and trends, facilitating informed decision-making. This study proposes a management dashboard with visualizations of test results data as a decision support system. A case study was conducted involving eleven quality assurance experts in a variety of roles, including managers, directors, testers, and project managers. User interviews were conducted to evaluate the need for a dashboard and identify relevant test results data to visualize. The participants expressed the need for a dashboard, which would benefit both newcomers and experienced employees. A low-fidelity prototype of the dashboard was created, and A/B testing was performed through a survey to prioritize features and choose the preferred version of the prototype. The results of the user interviews highlighted pass rate, executed test cases, and failed test cases as the most important features. However, different professions showed interest in different test result metrics, leading to the creation of multiple views in the prototype to accommodate varying needs. A high-fidelity prototype was implemented based on feedback and underwent user testing, leading to iterative improvements. Despite the numerous advantages of a dashboard, integrating it into an organization can pose challenges due to variations in testing processes and guidelines across companies and teams. Hence, such dashboards require customization. The main contribution of this study is twofold. Firstly, it provides recommendations for relevant test result metrics and suitable visualizations to effectively communicate test results. Secondly, it offers insights into the visualization preferences of different professions within a quality assurance team, which were missing from previous studies.
128

Applied Visual Analytics in Molecular, Cellular, and Microbiology

Dabdoub, Shareef Majed 19 December 2011 (has links)
No description available.
129

Visual analytics for detection and assessment of process-related patterns in geoscientific spatiotemporal data

Köthur, Patrick 04 January 2016 (has links)
This thesis studied how visual analytics can facilitate the analysis of processes in geoscientific spatiotemporal data. Three novel visual analytics solutions were developed, each addressing an important analysis perspective. The first solution addresses the analysis of prominent spatial situations in the data and their occurrence over time. Hierarchical clustering is used to arrange all spatial situations in the data in a hierarchy of clusters. The combination with interactive visual analysis enables geoscientists to explore and alter the resulting hierarchy, to extract different sets of representative spatial situations, and to interpret and assess the corresponding spatiotemporal patterns. The second solution supports geoscientists in the analysis of prominent types of temporal behavior and their location in geographic space. Cluster ensembles are integrated with interactive visual exploration to enable users to systematically detect and interpret various types of temporal behavior in different data sets and to use this information for assessment of simulation model output. The third solution enables geoscientists to detect and analyze interrelations of temporal behavior in the data. Windowed cross-correlation, a technique for comparison of two individual time series, was extended to the comparison of entire ensembles of time series through visual analytics. This not only allows scientists to study interrelations, but also to assess how much these interrelations vary between two ensembles. All visual analytics solutions were developed following a rigorous user- and task-centered methodology and successfully applied to use cases in Earth system modeling, ocean modeling, paleoclimatology, and even cognitive science. The results of this thesis demonstrate that visual analytics successfully addresses important analysis perspectives and that it is a valuable approach to the analysis of process-related patterns in geoscientific spatiotemporal data.
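The windowed cross-correlation that the third solution starts from can be sketched for two individual time series as follows. This is a zero-lag toy version under the assumption of a plain Pearson correlation per window; the thesis's contribution is extending such comparison to entire ensembles of time series, which this sketch does not attempt.

```python
import numpy as np


def windowed_cross_correlation(x, y, window, step):
    """Pearson correlation of two equal-length time series computed over
    sliding windows (zero lag). Returns one correlation per window."""
    corrs = []
    for start in range(0, len(x) - window + 1, step):
        xs = x[start:start + window]
        ys = y[start:start + window]
        # corrcoef returns the 2x2 correlation matrix; take the off-diagonal
        corrs.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(corrs)
```

For two perfectly linearly related series, every window yields a correlation of 1; real geoscientific series would instead show how the interrelation strengthens or weakens over time.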
130

Analysis, structure and organization of complex networks / Analyse, structure et organisation des réseaux complexes

Zaidi, Faraz 25 November 2010 (has links)
Network science has emerged as a fundamental field of study to model many physical and real-world systems around us. The discovery of small-world and scale-free properties in these real-world networks has revolutionized the way we study, analyze, model, and process them. In this thesis, we are interested in the study of networks having these properties, often termed complex networks. In our opinion, research conducted in this field can be grouped into four categories: Analysis, Structure, Processes/Organization, and Visualization. We address problems pertaining to each of these categories throughout this thesis. (...)
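The scale-free property mentioned in this abstract is classically reproduced by preferential-attachment growth, where new nodes link to existing nodes with probability proportional to their degree, yielding a heavy-tailed degree distribution with a few hubs. The sketch below is a hypothetical illustration in the style of the Barabási-Albert model, not code from the thesis.

```python
import random


def preferential_attachment(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node links to m
    distinct existing nodes chosen proportionally to their degree.
    Returns the edge list."""
    rng = random.Random(seed)
    # start from a small complete core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # degree-weighted pool: each node appears once per incident edge
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))  # degree-proportional pick
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))  # keep the pool degree-weighted
    return edges
```

Counting degrees over the resulting edge list shows the expected signature: every node keeps at least its m attachment edges, while early nodes accumulate far higher degrees than the average.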
