231

Visualization of data from transportation simulation systems

Hajinasab, Banafsheh January 2011
With the increasing importance of information in all sectors, presenting data in a communicative format helps decision makers understand and analyze large amounts of information quickly and effectively. Information visualization, as a way of presenting different data types in a more understandable form, is increasingly used in many areas. This thesis investigates how information visualization can be used to increase the readability and usability of transportation simulation data. Most previous studies in this area have focused on visualizing transportation infrastructure such as roads and bridges, whereas the main focus of this thesis is visualizing the outputs of transportation simulation systems. To study the role of information visualization in transportation systems, we investigated visualization applications in TAPAS, a real, implemented agent-based transportation simulator, as a case study. In the case study, users' visualization-related requirements were analyzed, and a visualization tool was designed and developed based on the identified requirements.
232

Understanding The Effects of Incorporating Scientific Knowledge on Neural Network Outputs and Loss Landscapes

Elhamod, Mohannad 06 June 2023
While machine learning (ML) methods have achieved considerable success on several mainstream problems in vision and language modeling, they are still challenged by their lack of interpretable decision-making consistent with scientific knowledge, limiting their applicability for scientific discovery. Recently, a new field of machine learning that infuses domain knowledge into data-driven ML approaches, termed Knowledge-Guided Machine Learning (KGML), has gained traction to address the challenges of traditional ML. Nonetheless, the inner workings of KGML models and algorithms are still not fully understood, and a better comprehension of their advantages and pitfalls across a suite of scientific applications is yet to be realized. In this thesis, I first tackle the task of understanding the role KGML plays in shaping the outputs of a neural network, including its latent space, and how such influence can be harnessed to achieve desirable properties, including robustness, generalizability beyond training data, and capturing knowledge priors that are of importance to experts. Second, I use and further develop loss landscape visualization tools to better understand ML model optimization at the network parameter level. Such an understanding has proven effective for evaluating and diagnosing different model architectures and loss functions in the field of KGML, with potential applications to a broad class of ML problems. / Doctor of Philosophy / My research aims to address some of the major shortcomings of machine learning, namely its opaque decision-making process and the inadequate understanding of its inner workings when applied to scientific problems. In this thesis, I address some of these shortcomings by investigating the effect of supplementing the traditionally data-centric method with human knowledge. This includes developing visualization tools that make it easier to understand and further advance this practice.
Conducting this research is critical to achieving wider adoption of machine learning in scientific fields as it builds up the community's confidence not only in the accuracy of the framework's results, but also in its ability to provide satisfactory rationale.
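Loss landscape visualization of the kind this abstract describes is commonly done by evaluating the loss along a normalized random direction around the trained weights. The following minimal sketch (not code from the thesis) uses a toy quadratic loss standing in for a real network's loss:

```python
import numpy as np

def loss_landscape_1d(loss_fn, theta, direction, alphas):
    """Evaluate loss_fn along the line theta + alpha * direction."""
    return np.array([loss_fn(theta + a * direction) for a in alphas])

# Toy quadratic "loss" standing in for a neural network's loss surface.
rng = np.random.default_rng(0)
theta_star = rng.normal(size=10)            # pretend minimizer
loss = lambda w: float(np.sum((w - theta_star) ** 2))

direction = rng.normal(size=10)
direction /= np.linalg.norm(direction)      # normalize the probe direction
alphas = np.linspace(-1.0, 1.0, 21)
curve = loss_landscape_1d(loss, theta_star, direction, alphas)

print(curve[10])   # loss at alpha = 0, i.e. at the minimizer itself
```

Plotting `curve` against `alphas` (or a 2D grid over two directions) gives the kind of landscape slice used to compare architectures and loss functions.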
233

Intended Use Evaluation Approach for Information Visualization

Park, Albert 15 February 2007
Information visualization is applied in many fields to gain faster insights with lighter cognitive loads when analyzing large sets of data. As more products are introduced each year, how can one select the most effective tool or representation form for the task? A number of information visualization evaluation methods are currently available. However, these methods are often limited in judging the appropriateness of a tool for a given domain, since they do not evaluate according to the tool's intended use. Current methods conduct evaluations in a laboratory environment with "benchmark" tasks and often with data sets not aligned with the intended use of the tools. The absence of realistic data sets and routine tasks reduces the effectiveness of the evaluation in terms of the appropriateness of the tool for a given domain. The intended-use evaluation approach captures the key activities that will employ the visual technology and calibrates the evaluation criteria toward these first-order needs. This thesis presents the results of an investigation into an intended-use evaluation approach and its effectiveness in evaluating domain-specific information visualization tools. To investigate the approach, criteria for the intelligence analysis community were developed for demonstration purposes. While the observations from this research are most compelling for the intelligence community, the principles of the evaluation approach should apply to a wider range of visualization technologies. The full design rationale and process are captured in this thesis, which presents the development of criteria and the evaluation of five intelligence-analysis visual analytic tools. The study suggests that, in selecting and/or evaluating visual analytic tools, a small up-front effort to analyze the key activities of the domain field is beneficial: such analysis can substantially reduce evaluation time and effort over the long term. / Master of Science
234

Black hole visualization and animation

Krawisz, Daniel Gregory 25 October 2010
Black hole visualization is a problem of raytracing over curved spacetimes. This paper discusses the physics of light in curved spacetimes, the geometry of black holes, and the appearance of objects as viewed through a relativistic camera (the Penrose-Terrell effect). It then discusses computational issues of how to generate images of black holes with a computer. A method of determining the most efficient series of steps to calculate the value of a mathematical expression is described and used to improve the speed of the program. The details of raytracing over curved spaces not covered by a single chart are described. A method of generating images of several black holes in the same spacetime is discussed. Finally, a series of images generated by these methods is given and interpreted. / text
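Raytracing over a curved spacetime reduces to integrating null geodesics. As an illustrative sketch (not the paper's code), the Schwarzschild photon orbit equation d²u/dφ² = 3Mu² − u, with u = 1/r and geometric units G = c = 1, can be integrated with RK4; a ray started on the photon sphere at r = 3M should remain circular:

```python
import numpy as np

M = 1.0  # black hole mass in geometric units; an arbitrary choice for the demo

def du2(u):
    # Schwarzschild null-geodesic orbit equation: d^2u/dphi^2 = 3*M*u^2 - u
    return 3.0 * M * u**2 - u

def trace_photon(u0, du0, dphi=1e-3, steps=5000):
    """RK4 integration of the 2nd-order ODE as a first-order system (u, du/dphi)."""
    u, du = u0, du0
    us = [u]
    for _ in range(steps):
        k1u, k1v = du, du2(u)
        k2u, k2v = du + 0.5*dphi*k1v, du2(u + 0.5*dphi*k1u)
        k3u, k3v = du + 0.5*dphi*k2v, du2(u + 0.5*dphi*k2u)
        k4u, k4v = du + dphi*k3v,     du2(u + dphi*k3u)
        u  += dphi * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        du += dphi * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        us.append(u)
        if u <= 0:           # the ray has escaped to infinity
            break
    return np.array(us)

# Sanity check: on the photon sphere (r = 3M), d^2u/dphi^2 = 0, so the orbit is circular.
us = trace_photon(u0=1.0 / (3.0 * M), du0=0.0, steps=1000)
```

A full visualizer would trace one such geodesic per image pixel backward from the camera, which is where the expression-evaluation and multi-chart machinery the abstract mentions comes in.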
235

Visualization of multivariate process data for fault detection and diagnosis

Wang, Ray Chen 02 October 2014
This report introduces the concept of three-dimensional (3D) radial plots for the visualization of multivariate, large-scale datasets in plant operations. A key concept of this representation is the introduction of time as the third dimension in a two-dimensional radial plot, which allows the display of time-series data for any number of process variables. The report shows the ability of 3D radial plots to conduct systemic fault detection and classification in chemical processes through the use of confidence ellipses, which capture the desired operating region of process variables during a defined period of steady-state operation. Principal component analysis (PCA) is incorporated into the method to reduce multivariate interactions and the dimensionality of the data. The method is applied to two case studies with systemic faults present (compressor surge and column flooding), as well as to data obtained from the Tennessee Eastman simulator, which contained localized faults. Fault classification using the interior angles of the radial plots is also demonstrated. / text
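A confidence-ellipse test of the kind this abstract describes amounts to thresholding the Mahalanobis distance of a new observation against a chi-squared bound fitted from steady-state data. A minimal two-variable sketch (hypothetical variable names and synthetic data, not from the report):

```python
import numpy as np

def fit_ellipse(steady):
    """Fit ellipse center and inverse covariance from steady-state samples (n x 2)."""
    mu = steady.mean(axis=0)
    cov = np.cov(steady, rowvar=False)
    return mu, np.linalg.inv(cov)

def outside_ellipse(x, mu, cov_inv, thresh=5.991):
    """Mahalanobis test; 5.991 is the chi-squared 95% quantile for 2 dof."""
    d = x - mu
    return float(d @ cov_inv @ d) > thresh

# Synthetic steady-state operation of two process variables (e.g. flow, pressure).
rng = np.random.default_rng(1)
steady = rng.normal([50.0, 1.2], [0.5, 0.05], size=(500, 2))
mu, cov_inv = fit_ellipse(steady)

print(outside_ellipse(np.array([50.1, 1.21]), mu, cov_inv))  # near center -> in ellipse
print(outside_ellipse(np.array([55.0, 1.60]), mu, cov_inv))  # far off -> fault flagged
```

In the report's method this test would run per time slice of the 3D radial plot, after PCA has decorrelated the variables.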
236

Analysis and visualization of historical traffic data collected on the Stockholm highway system

Reim, Erich January 2013
Traffic congestion is a worldwide phenomenon in major cities, where most of the human population lives. To control and oversee ongoing traffic developments, traffic operators use various methods to observe current conditions, collecting data from sources ranging from stationary sensors to mobile sensors such as floating car data. The data collected from stationary sensors is stored in a central database. This historical traffic data is used to analyze traffic behavior along the main roadway network in Stockholm: highly congested areas can be located, as well as areas where traffic flows without problems. This thesis deals with methods to analyze and visualize traffic behavior based on historical traffic data measured in the city of Stockholm. To this end, a toolbox is implemented to identify bottlenecks and typical speed and flow patterns along the Stockholm highway system. Based on the typical speed and flow patterns, it is possible to identify areas affected by congestion and to determine whether congestion arises from an incident or from a bottleneck.
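A typical speed pattern of the kind such a toolbox computes can be sketched as the median speed per time-of-day bin over many days of sensor data, flagging bins where the typical speed falls well below free flow. The numbers below are synthetic, not Stockholm data:

```python
import numpy as np

def typical_pattern(timestamps_min, speeds, bin_min=15):
    """Median speed per time-of-day bin, pooled over all days of data.
    timestamps_min are minutes since the start of the recording."""
    bins = (np.asarray(timestamps_min) // bin_min) % (24 * 60 // bin_min)
    return {int(b): float(np.median(speeds[bins == b])) for b in np.unique(bins)}

def congested_bins(pattern, free_flow=80.0, ratio=0.5):
    """Bins whose typical speed drops below half of free-flow speed."""
    return [b for b, v in pattern.items() if v < ratio * free_flow]

# Two synthetic days (day 2 offset by 1440 min) with a morning-peak slowdown.
minutes = np.array([480, 485, 500, 600, 1920, 1925, 1940, 2040])
speeds  = np.array([30.0, 28.0, 35.0, 82.0, 32.0, 27.0, 33.0, 80.0])
pat = typical_pattern(minutes, speeds)
print(congested_bins(pat))  # the 08:00-08:30 bins
```

Comparing a single day against this typical pattern is one way to separate recurrent bottleneck congestion from one-off incidents, as the thesis sets out to do.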
237

3T MRI in the Evaluation of Acute Appendicitis in the Pediatric Population

Carotenuto, Giuseppe 24 April 2017
A Thesis submitted to The University of Arizona College of Medicine - Phoenix in partial fulfillment of the requirements for the Degree of Doctor of Medicine. / Computed tomography (CT) is commonly used to evaluate suspected acute appendicitis; however, ionizing radiation limits its use in children. This study assesses 3T magnetic resonance imaging (MRI) as an imaging modality in the evaluation of suspected acute appendicitis in the pediatric population. This study is a retrospective review of prospectively‐collected data from 155 pediatric subjects who underwent MRI and 197 pediatric subjects who underwent CT for suspected acute appendicitis. Sensitivity, specificity, appendix visualization rate, positive appendicitis rate, and alternative diagnosis rate are determined. Sensitivity and specificity are 100% and 98% for MRI, and 99% and 97% for CT (p = 0.61 and 0.53, respectively). Appendix visualization rate is 77% for MRI and 90% for CT (p = 0.0002), positive appendicitis rate is 25% for MRI and 34% for CT (p = 0.175), and alternative diagnosis rate is 3% for both MRI and CT (p = 0.175). This study supports 3T MRI as a modality comparable to CT in the evaluation of suspected acute appendicitis in the pediatric population. Although MRI visualizes the appendix at a lower rate than CT, our protocol maintains 100% sensitivity with no false negatives. Our appendix visualization rate with 3T MRI (77%) is an improvement over published data from both 1.5T and 3T MRI systems. The exam time differential is clinically insignificant, and use of MRI spares the patient the ionizing radiation and intravenous contrast of CT.
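The reported sensitivity and specificity follow directly from confusion counts. A small sketch of the definitions (the counts below are hypothetical illustrations, not the study's data):

```python
def diagnostic_stats(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: every true appendicitis caught (no false negatives),
# two healthy patients incorrectly flagged.
sens, spec = diagnostic_stats(tp=38, fn=0, tn=115, fp=2)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 1.00 and 0.98
```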
238

Dissimilarity Plots. A Visual Exploration Tool for Partitional Clustering.

Hahsler, Michael; Hornik, Kurt January 2009
For hierarchical clustering, dendrograms provide a convenient and powerful visualization. Although many visualization methods have been suggested for partitional clustering, their usefulness deteriorates quickly with increasing dimensionality of the data, and/or they fail to represent structure between and within clusters simultaneously. In this paper we extend (dissimilarity) matrix shading with several reordering steps based on seriation. Both methods, matrix shading and seriation, have been well known for a long time; however, only recent algorithmic improvements make it possible to use seriation for larger problems. Furthermore, seriation is used in a novel stepwise process (within each cluster and between clusters), which leads to a visualization technique that is independent of the dimensionality of the data. A big advantage is that it presents the structure between clusters and the micro-structure within clusters in one concise plot. This not only allows for judging cluster quality but also makes mis-specification of the number of clusters apparent. We give a detailed discussion of the construction of dissimilarity plots and demonstrate their usefulness with several examples. / Series: Research Report Series / Department of Statistics and Mathematics
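The stepwise reordering idea — reorder points within each cluster, then concatenate the clusters — can be sketched as follows. This toy version sorts by distance to the cluster centroid rather than running a full seriation algorithm, which is a simplification of the paper's method:

```python
import numpy as np

def dissimilarity_order(X, labels):
    """Stepwise reordering: group points by cluster, then sort each cluster
    by distance to its centroid (a crude stand-in for seriation)."""
    order = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centroid, axis=1)
        order.extend(idx[np.argsort(d)])
    return np.array(order)

# Two well-separated synthetic clusters in 2D.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

order = dissimilarity_order(X, labels)
D = np.linalg.norm(X[order, None, :] - X[None, order, :], axis=-1)
# After reordering, within-cluster blocks are dark (small dissimilarities) and
# between-cluster blocks are light; D would be rendered with e.g. plt.imshow(D).
print(D[:20, :20].mean() < D[:20, 20:].mean())  # True: block structure emerges
```

Because the ordering and shading operate only on the dissimilarity matrix, the display is independent of the data's dimensionality, which is the paper's central point.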
239

Data Visualization for the Benchmarking Engine

Joish, Sudha 16 May 2003
In today's information age, data collection is not the ultimate goal; it is simply the first step in extracting knowledge-rich information to shape future decisions. In this thesis, we present ChartVisio - a simple web-based visual data-mining system that lets users quickly explore databases and transform raw data into processed visuals. It is highly interactive, easy to use and hides the underlying complexity of querying from its users. Data from tables is internally mapped into charts using aggregate functions across tables. The tool thus integrates querying and charting into a single general-purpose application. ChartVisio has been designed as a component of the Benchmark data engine, being developed at the Computer Science department, University of New Orleans. The data engine is an intelligent website generator and users who create websites using the Data Engine are the site owners. Using ChartVisio, owners may generate new charts and save them as XML templates for prospective website surfers. Everyday Internet users may view saved charts with the touch of a button and get real-time data, since charts are generated dynamically. Website surfers may also generate new charts, but may not save them as templates. As a result, even non-technical users can design and generate charts with minimal time and effort.
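The query-to-chart mapping described above — an aggregate function grouped over a label column, feeding a chart — can be sketched with a hypothetical table (this is illustrative, not ChartVisio's actual code):

```python
import sqlite3

# A tiny in-memory table standing in for a site owner's database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits (page TEXT, hits INTEGER)")
con.executemany("INSERT INTO visits VALUES (?, ?)",
                [("home", 120), ("docs", 45), ("home", 30), ("docs", 5)])

def chart_data(con, table, label_col, value_col, agg="SUM"):
    """Aggregate value_col per label_col; the (label, value) pairs feed a
    bar or pie chart. Charts stay current because the query runs on demand."""
    q = f"SELECT {label_col}, {agg}({value_col}) FROM {table} GROUP BY {label_col}"
    return dict(con.execute(q).fetchall())

print(chart_data(con, "visits", "page", "hits"))  # docs -> 50, home -> 150
```

Serializing the arguments of `chart_data` (table, columns, aggregate) is one way to realize the XML chart templates the abstract mentions, since re-running the stored query yields real-time data.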
240

Quantifying, Modeling and Managing How People Interact with Visualizations on the Web

Feng, Mi 16 April 2019
The growing number of interactive visualizations on the web has made it possible for the general public to access data and insights that were once only available to domain experts. At the same time, this rise has yielded new challenges for visualization creators, who must now understand and engage a growing and diverse audience. To bridge this gap between creators and audiences, we explore and evaluate components of a design-feedback loop that would enable visualization creators to better accommodate their audiences as they explore the visualizations. In this dissertation, we approach this goal by quantifying, modeling and creating tools that manage people's open-ended explorations of visualizations on the web. In particular, we: 1. Quantify the effects of design alternatives on people's interaction patterns in visualizations. We define and evaluate two techniques: HindSight (encoding a user's interaction history) and text-based search, where controlled experiments suggest that design details can significantly modulate the interaction patterns we observe from participants using a given visualization. 2. Develop new metrics that characterize facets of people's exploration processes. Specifically, we derive expressive metrics describing interaction patterns such as exploration uniqueness, and use Bayesian inference to model distributional effects on interaction behavior. Our results show that these metrics capture novel patterns in people's interactions with visualizations. 3. Create tools that manage and analyze an audience's interaction data for a given visualization. We develop a prototype tool, ReVisIt, that visualizes an audience's interactions with a given visualization. Through an interview study with visualization creators, we found that ReVisIt makes creators aware of individual and overall trends in their audiences' interaction patterns.
By establishing some of the core elements of a design-feedback loop for visualization creators, the results of this research may have a tangible impact on the future of publishing interactive visualizations on the web. Equipped with the techniques, metrics, and tools that realize an initial feedback loop, creators are better able to understand audience behavior and user needs, and thus create visualizations that make data and insights more accessible to the diverse audiences of the web.
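An exploration-uniqueness metric of the kind mentioned above can be sketched as the fraction of a session's visited states that no other session reached. This is one illustrative definition, not necessarily the dissertation's exact formula:

```python
def exploration_uniqueness(sessions):
    """For each session (a set of visited visualization states), compute the
    fraction of its states that no other session visited."""
    scores = {}
    for user, states in sessions.items():
        others = set().union(*(s for u, s in sessions.items() if u != user))
        scores[user] = len(states - others) / len(states) if states else 0.0
    return scores

# Hypothetical interaction logs: each state is a chart configuration reached.
sessions = {
    "a": {"overview", "filter:2019", "detail:NY"},
    "b": {"overview", "filter:2019"},
    "c": {"overview", "zoom:west"},
}
print(exploration_uniqueness(sessions))
# a: only "detail:NY" is unique -> 1/3; b: nothing unique -> 0.0; c: "zoom:west" -> 1/2
```

Aggregating such per-session scores is one way a tool like ReVisIt could surface which audience members explored off the beaten path.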
