1

Narrative Maps: A Computational Model to Support Analysts in Narrative Sensemaking

Keith Norambuena, Brian Felipe 08 August 2023 (has links)
Narratives are fundamental to our understanding of the world, and they are pervasive in all activities that involve representing events in time. Narrative analysis has a series of applications in computational journalism, intelligence analysis, and misinformation modeling. In particular, narratives are a key element of the sensemaking process of analysts. In this work, we propose a narrative model and visualization method to aid analysts with this process. In particular, we propose the narrative maps framework—an event-based representation that uses a directed acyclic graph to represent the narrative structure—and a series of empirically defined design guidelines for map construction obtained from a user study. Furthermore, our narrative extraction pipeline is based on maximizing coherence—modeled as a function of surface text similarity and topical similarity—subject to coverage—modeled through topical clusters—and structural constraints through the use of linear programming optimization. For the purposes of our evaluation, we focus on the news narrative domain and showcase the capabilities of our model through several case studies and user evaluations. Moreover, we augment the narrative maps framework with interactive AI techniques—using semantic interaction and explainable AI—to create an interactive narrative model that is capable of learning from user interactions to customize the narrative model based on the user's needs and providing explanations for each core component of the narrative model. Throughout this process, we propose a general framework for interactive AI that can handle similar models to narrative maps—that is, models that mix continuous low-level representations (e.g., dimensionality reduction) with more abstract high-level discrete structures (e.g., graphs). Finally, we evaluate our proposed framework through an insight-based user study. In particular, we perform a quantitative and qualitative assessment of the behavior of users and explore their cognitive strategies, including how they use the explainable AI and semantic interaction capabilities of our system. Our evaluation shows that our proposed interactive AI framework for narrative maps is capable of aiding users in finding more insights from data when compared to the baseline. / Doctor of Philosophy / Narratives are essential to how we understand the world. They help us make sense of events that happen over time. This research focuses on developing a method to assist people, like journalists and analysts, in understanding complex information. To do this, we introduce a new approach called narrative maps. This model allows us to extract and visualize stories from text data. To improve our model, we use interactive artificial intelligence techniques. These techniques allow our model to learn from user feedback and be customized to fit different needs. We also use these methods to explain how the model works, so users can understand it better. We evaluate our approach by studying how users interact with it when doing a task with news stories. We consider how useful the system is in helping users gain insights. Our results show that our method aids users in finding important insights compared to traditional methods.
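To make the extraction step more concrete, the sketch below shows one way to select narrative-map edges with a linear program that maximizes coherence under simple structural constraints, using time-ordered events so the resulting graph stays acyclic. It is a minimal sketch only: the coherence matrix is assumed to be given (e.g., some blend of surface-text and topical similarity), and the edge budget and out-degree bound are placeholder constraints, not the dissertation's exact coverage and structure formulation.

```python
# Minimal sketch: pick narrative-map edges by maximizing total coherence with
# an LP relaxation. The coherence matrix, edge budget, and degree bound are
# illustrative assumptions, not the dissertation's exact formulation.
import numpy as np
from scipy.optimize import linprog

def extract_narrative_edges(coherence, max_edges=6, max_out_degree=2):
    """coherence: (n, n) matrix over time-ordered events; entry (i, j) with
    i < j scores connecting event i to the later event j."""
    n = coherence.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    c = -np.array([coherence[i, j] for i, j in pairs])  # maximize => negate

    # Coverage-style budget: at most max_edges edges in the whole map.
    A_ub = [np.ones(len(pairs))]
    b_ub = [float(max_edges)]
    # Structural constraint: bound each event's out-degree (limits branching).
    for i in range(n):
        A_ub.append(np.array([1.0 if p[0] == i else 0.0 for p in pairs]))
        b_ub.append(float(max_out_degree))

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * len(pairs), method="highs")
    # Keep edges with a (near-)integral LP value; i < j keeps the graph acyclic.
    return [pairs[k] for k, val in enumerate(res.x) if val > 0.5]
```

In the actual framework the objective and constraints are richer (coherence as a function of surface-text and topical similarity, coverage via topical clusters), but the LP structure above conveys the general shape of the optimization.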
2

Explainable Interactive Projections for Image Data

Han, Huimin 12 January 2023 (has links)
Making sense of large collections of images is difficult. Dimension reduction (DR) techniques assist by organizing images in a 2D space based on similarities, but provide little support for explaining why images were placed together or apart in the 2D space. Additionally, they do not provide support for modifying and updating the 2D space to explore new relationships and organizations of images. To address these problems, we present an interactive DR method for images that uses visual features extracted by a deep neural network to project the images into 2D space and provides visual explanations of the image features that contributed to the 2D location. In addition, it allows people to directly manipulate the 2D projection space to define alternative relationships and explore subsequent projections of the images. With an iterative cycle of semantic interaction and explainable-AI feedback, people can explore complex visual relationships in image data. Our approach to human-AI interaction integrates visual knowledge from both human mental models and pre-trained deep neural models to explore image data. Two usage scenarios are provided to demonstrate that our method is able to capture human feedback and incorporate it into the model. Our visual explanations help bridge the gap between the feature space and the original images to illustrate the knowledge learned by the model, creating a synergy between human and machine that facilitates a more complete analysis experience. / Master of Science / High-dimensional data is everywhere: spreadsheets with many columns, text documents, images, and more. Exploring and visualizing high-dimensional data can be challenging. Dimension reduction (DR) techniques can help: high-dimensional data can be projected into 3D or 2D space and visualized as a scatter plot. Additionally, a DR tool can be interactive, helping users better explore the data and understand the underlying algorithms. Designing such an interactive DR tool is challenging for images. To address this problem, this thesis presents a tool that visualizes images in a 2D plot, where images considered similar are placed close to each other and dissimilar images far apart. Users can manipulate images directly on this scatterplot-like visualization based on their own knowledge to update the display, and saliency maps are provided to explain the model's re-projection reasoning.
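As a rough sketch of one semantic-interaction loop for such a tool, the code below assumes each image is already described by a deep-feature vector from a pre-trained network; when the user drags a few images, non-negative feature weights are refit so that weighted distances better match the user's layout, and the collection is re-projected. The function names, the weight-update scheme, and the omission of the saliency-map explanations are simplifying assumptions, not the thesis's implementation.

```python
# Sketch of semantic interaction over deep image features (assumed given):
# learn per-dimension weights from the user's 2D layout, then re-project.
import numpy as np
from scipy.optimize import nnls
from sklearn.manifold import MDS

def reproject(features, weights):
    """Project images to 2D with a weighted Euclidean distance via MDS."""
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((weights * diff ** 2).sum(axis=-1))
    return MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)

def learn_weights(features, user_xy, moved):
    """Fit non-negative feature weights so that squared feature differences
    reproduce the squared 2D distances among the images the user moved
    (assumes at least two moved images)."""
    idx = list(moved)
    A, b = [], []
    for a in range(len(idx)):
        for c in range(a + 1, len(idx)):
            i, j = idx[a], idx[c]
            A.append((features[i] - features[j]) ** 2)
            b.append(np.sum((user_xy[i] - user_xy[j]) ** 2))
    w, _ = nnls(np.array(A), np.array(b))
    return w + 1e-6  # keep every feature dimension minimally active
```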
3

Dimension Reduction and Clustering for Interactive Visual Analytics

Wenskovitch Jr, John Edward 06 September 2019 (has links)
When exploring large, high-dimensional datasets, analysts often utilize two techniques for reducing the data to make exploration more tractable. The first technique, dimension reduction, reduces the high-dimensional dataset into a low-dimensional space while preserving high-dimensional structures. The second, clustering, groups similar observations while simultaneously separating dissimilar observations. Existing work presents a number of systems and approaches that utilize these techniques; however, these techniques can cooperate or conflict in unexpected ways. The core contribution of this work is the systematic examination of the design space at the intersection of dimension reduction and clustering when building intelligent, interactive tools in visual analytics. I survey existing dimension reduction and clustering algorithms in visual analytics tools, and I explore the design space for creating projections and interactions that include dimension reduction and clustering algorithms in the same visual interface. Further, I implement and evaluate three prototype tools that occupy specific points within this design space. Finally, I run a cognitive study to understand how analysts perform dimension reduction (spatialization) and clustering (grouping) operations. Contributions of this work include surveys of existing techniques, three interactive tools and usage cases demonstrating their utility, design decisions for implementing future tools, and a presentation of complex human organizational behaviors. / Doctor of Philosophy / When an analyst is exploring a dataset, they seek to gain insight from the data. With data sets growing larger, analysts require techniques to help them reduce the size of the data while still maintaining its meaning. Two commonly-utilized techniques are dimension reduction and clustering. Dimension reduction seeks to eliminate unnecessary features from the data, reducing the number of columns to a smaller number. Clustering seeks to group similar objects together, reducing the number of rows to a smaller number. The contribution of this work is to explore how dimension reduction and clustering are currently being used in interactive visual analytics systems, as well as to explore how they could be used to address challenges faced by analysts in the future. To do so, I survey existing techniques and explore the design space for creating visualizations that incorporate both types of computations. I look at methods by which an analyst could interact with those projections in order to communicate their interests to the system, thereby producing visualizations that better match the needs of the analyst. I develop and evaluate three tools that incorporate both dimension reduction and clustering in separate computational pipelines. Finally, I conduct a cognitive study to better understand how users think about these operations, in order to create guidelines for better systems in the future.
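As one concrete illustration of this design space, the sketch below contrasts two simple pipeline orderings: clustering in the full-dimensional space before projecting, versus clustering on the 2D layout after projecting. PCA and k-means are illustrative stand-ins; the dissertation's prototypes cover a broader set of algorithms and interactions.

```python
# Two illustrative points in the DR + clustering design space. The specific
# algorithms (PCA, k-means) are stand-ins, not the dissertation's only choices.
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_then_project(X, k=4):
    """Group observations in the full-dimensional space, then lay them out in 2D."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    xy = PCA(n_components=2).fit_transform(X)
    return xy, labels

def project_then_cluster(X, k=4):
    """Lay observations out in 2D first, then group them based on that layout."""
    xy = PCA(n_components=2).fit_transform(X)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(xy)
    return xy, labels
```

The two orderings can disagree about which observations belong together, which is one source of the cooperation-versus-conflict tension described above.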
4

Semantic Interaction for Symmetrical Analysis and Automated Foraging of Documents and Terms

Dowling, Michelle Veronica 23 April 2020 (has links)
Sensemaking tasks, such as reading many news articles to determine the truthfulness of a given claim, are difficult. These tasks require a series of iterative steps to first forage for relevant information and then synthesize this information into a final hypothesis. To assist with such tasks, visual analytics systems provide interactive visualizations of data to enable faster, more accurate, or more thorough analyses. For example, semantic interaction techniques leverage natural or intuitive interactions, like highlighting text, to automatically update the visualization parameters using machine learning. However, this process of using machine learning based on user interaction is not yet well defined. We began our research efforts by developing a computational pipeline that models and captures how a system processes semantic interactions. We then expanded this model to denote specifically how each component of the pipeline supports steps of the Sensemaking Process. Additionally, we recognized a cognitive symmetry in how analysts consider data items (like news articles) and their attributes (such as terms that appear within the articles). To support this symmetry, we also modeled how to visualize and interact with data items and their attributes simultaneously. We built a testbed system and conducted a user study to determine which analytic tasks are best supported by such symmetry. Then, we augmented the testbed system to scale up to large data using semantic interaction foraging, a method for automated foraging based on user interaction. This experience enabled our development of design challenges and a corresponding future research agenda centered on semantic interaction foraging. We began investigating this research agenda by conducting a second user study on when to apply semantic interaction foraging to better match the analyst's Sensemaking Process. / Doctor of Philosophy / Sensemaking tasks such as determining the truthfulness of a claim using news articles are complex, requiring a series of steps in which the relevance of each piece of information within the articles is first determined. Relevant pieces of information are then combined together until a conclusion may be reached regarding the truthfulness of the claim. To help with these tasks, interactive visualizations of data can make it easier or faster to find or combine information together. In this research, we focus on leveraging natural or intuitive interactions, such as organizing documents in a 2-D space, which the system uses to perform machine learning to automatically adjust the visualization to better support the given task. We first model how systems perform such machine learning based on interaction as well as model how each component of the system supports the user's sensemaking task. Additionally, we developed a model and accompanying testbed system for simultaneously evaluating both data items (like news articles) and their attributes (such as terms within the articles) through symmetrical visualization and interaction methods. With this testbed system, we devised and conducted a user study to determine which types of tasks are supported or hindered by such symmetry. We then combined these models to build an additional testbed system that implemented a searching technique to automatically add previously unseen, relevant pieces of information to the visualization.
Using our experience in implementing this automated searching technique, we defined design challenges to guide future implementations, along with a research agenda to refine the technique. We also devised and conducted another user study to determine when such automated searching should be triggered to best support the user's sensemaking task.
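To give a concrete sense of the document/term symmetry, the sketch below factors a TF-IDF matrix so that documents (rows) and terms (columns) receive coordinates on the same 2D axes and can be visualized side by side. LSA via truncated SVD is an illustrative choice, not necessarily the projection model used in the testbed systems.

```python
# Sketch: place documents and terms in one shared 2D space by factoring a
# TF-IDF matrix (LSA). The choice of model is an illustrative assumption.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def symmetric_layout(documents):
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)        # documents x terms
    svd = TruncatedSVD(n_components=2, random_state=0)
    doc_xy = svd.fit_transform(X)                  # document coordinates
    term_xy = svd.components_.T                    # term coordinates (same axes, up to scale)
    return doc_xy, term_xy, vectorizer.get_feature_names_out()
```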
5

Human-AI Sensemaking with Semantic Interaction and Deep Learning

Bian, Yali 07 March 2022 (has links)
Human-AI interaction can improve overall performance, exceeding the performance that either humans or AI could achieve separately, thus producing a whole greater than the sum of the parts. Visual analytics enables collaboration between humans and AI through interactive visual interfaces. Semantic interaction is a design methodology to enhance visual analytics systems for sensemaking tasks. It is widely applied for sensemaking in high-stakes domains such as intelligence analysis and academic research. However, existing semantic interaction systems support collaboration between humans and traditional machine learning models only; they do not apply state-of-the-art deep learning techniques. The contribution of this work is the effective integration of deep neural networks into visual analytics systems with semantic interaction. More specifically, I explore how to redesign the semantic interaction pipeline to enable collaboration between humans and deep learning models for sensemaking tasks. First, I validate that semantic interaction systems with pre-trained deep learning better support sensemaking than existing semantic interaction systems with traditional machine learning. Second, I integrate interactive deep learning into the semantic interaction pipeline to enhance its inference ability in capturing analysts' precise intents, thereby promoting sensemaking. Third, I add semantic explanation into the pipeline to interpret the interactively steered deep learning model. With a clear understanding of the deep learning model, analysts can make better decisions. Finally, I present a neural design of the semantic interaction pipeline to further boost collaboration between humans and deep learning for sensemaking. / Doctor of Philosophy / Human-AI interaction can harness the separate strengths of human and machine intelligence to accomplish tasks neither can solve alone. Analysts are good at making high-level hypotheses and reasoning from their domain knowledge. AI models are better at data computation based on low-level input features. Successful human-AI interactions can perform real-world, high-stakes tasks, such as issuing medical diagnoses, making credit assessments, and determining cases of discrimination. Semantic interaction is a visual methodology providing intuitive communication between analysts and traditional machine learning models. It is commonly utilized to enhance visual analytics systems for sensemaking tasks, such as intelligence analysis and scientific research. The contribution of this work is to explore how to use semantic interaction to achieve collaboration between humans and state-of-the-art deep learning models for complex sensemaking tasks. To do this, I first evaluate the straightforward solution of integrating a pre-trained deep learning model into the traditional semantic interaction pipeline. Results show that, via semantic interaction, the deep learning representation matches human cognition better than hand-engineered features. Next, I look at methods for supporting semantic interaction systems with interactive and interpretable deep learning. The new pipeline provides effective communication between humans and deep learning models. Interactive deep learning enables the system to better capture users' intents. Interpretable deep learning lets users have a clear understanding of the models. Finally, I improve the pipeline to better support collaboration using a neural design.
I hope this work can contribute to future designs for human-in-the-loop analysis with deep learning and visual analytics techniques.
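A rough sketch of the interactive-deep-learning idea appears below: embeddings from a frozen pre-trained model pass through a small trainable head, and the head is nudged so that documents the analyst dragged together move closer in the projection. The architecture, loss, and function names are illustrative assumptions rather than the dissertation's pipeline.

```python
# Sketch: steer a projection head on top of frozen deep embeddings using the
# pairs an analyst placed close together. Names and loss are illustrative.
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    def __init__(self, dim_in, dim_out=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                 nn.Linear(64, dim_out))

    def forward(self, x):
        return self.net(x)

def update_from_interaction(head, embeddings, near_pairs, steps=50, lr=1e-2):
    """embeddings: (n, d) tensor from a frozen deep model;
    near_pairs: list of (i, j) indices the analyst dragged together."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(steps):
        z = head(embeddings)
        # Pull each user-specified pair together in the 2D projection.
        loss = torch.stack([((z[i] - z[j]) ** 2).sum() for i, j in near_pairs]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head(embeddings).detach()
```

A full system would also regularize against collapsing the layout and add the explanation components; the point here is only the shape of the interaction-to-update loop.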
6

Music complexity: a multi-faceted description of audio content

Streich, Sebastian 21 February 2007 (has links)
This thesis proposes a set of algorithms that can be used to compute estimates of different facets of music complexity from musical audio signals. The algorithms focus on the aspects of acoustics, rhythm, timbre, and tonality. Music complexity is thereby considered at the coarse level of common agreement among human listeners: the goal is to obtain complexity judgments through automatic computation that resemble a naive listener's point of view. The motivation for the presented research lies in enhancing human interaction with digital music collections. As discussed in the thesis, there is a variety of tasks to consider, such as collection visualization, playlist generation, or the automatic recommendation of music. Through the music complexity estimates provided by the described algorithms, we gain access to a level of semantic music description that allows for novel and interesting solutions to these tasks.
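To illustrate what facet-wise complexity estimation can look like in code, the sketch below computes one stand-in measure per facet from an audio file using librosa. The specific features (spectral flatness, onset-strength entropy, MFCC variance, chroma entropy) are assumptions chosen for illustration; they are not the algorithms developed in the thesis.

```python
# Sketch: one illustrative complexity estimate per facet (acoustic, rhythm,
# timbre, tonality). These features are stand-ins, not the thesis's methods.
import numpy as np
import librosa

def _entropy(p):
    """Shannon entropy of a non-negative vector (normalized internally)."""
    p = np.asarray(p, dtype=float) + 1e-12
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def complexity_facets(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    # Acoustic facet: average spectral flatness (noisiness of the signal).
    acoustic = float(np.mean(librosa.feature.spectral_flatness(y=y)))
    # Rhythmic facet: entropy of the onset-strength distribution.
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    hist, _ = np.histogram(onset_env, bins=32)
    rhythm = _entropy(hist)
    # Timbral facet: variance of MFCC trajectories over time.
    timbre = float(np.mean(np.var(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13), axis=1)))
    # Tonal facet: average per-frame entropy of the chroma distribution.
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    tonal = float(np.mean([_entropy(frame) for frame in chroma.T]))
    return {"acoustic": acoustic, "rhythm": rhythm, "timbre": timbre, "tonal": tonal}
```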
7

La plasticité sonore : la création visuelle et sonore, une interaction sensorielle, émotionnelle et sémantique / Sound plasticity : visual and sound creation, sensory, emotional and semantic interaction

Le Fur, Iris 12 May 2017 (has links)
On the basis of an artistic practice exploring the vibratory phenomenon of matter, this thesis proposes a reflection on the interactions between sound materials and visual materials within a single plastic production. It analyzes the act of creation as a sensitive arrangement of various auditory and visual elements that react to one another reciprocally and cause a mutation of their sensory, emotional, and semantic perception. A survey of some of the major figures in the history of sound practices in the 20th and 21st centuries addresses the question of the interaction between hearing and sight in an artistic production. The notions of sound plasticity, of vibratory movement through alteration, and of plastic hybridization arising from cultural hybridization are discussed. In a second phase, the study focuses on the process of creating a sound installation based on vibration, through listening, the process of sound writing, and the characteristics of public spaces as venues for hosting a work. Finally, the study examines the specific capacity of sound vibration to generate emotions, highlighting the cerebral mechanisms engaged by bi-sensory perception both in the body of the artist creating the work and in that of the public experiencing it.
