41

Visual Analytics Tool for the Global Change Assessment Model

January 2015
abstract: The Global Change Assessment Model (GCAM) is an integrated assessment tool for exploring the consequences of and responses to global change. However, the current iteration of GCAM relies on NetCDF file outputs, which need to be exported for visualization and analysis. Such a requirement limits the uptake of this modeling platform for analysts who may wish to explore future scenarios. This work focuses on a web-based geovisual analytics interface for GCAM. Its challenges include enabling both domain experts and model experts to functionally explore the model. Furthermore, scenario analysis has been widely applied in climate science to understand the impact of climate change on the future human environment, and the inter-comparison of scenarios remains a major challenge in both the climate science and visualization communities. In close collaboration with the Global Change Assessment Model team, I developed the first visual analytics interface for GCAM, with a series of interactive functions that help users understand the simulated impact of climate change on sectors of the global economy while allowing them to explore inter-comparisons of scenario analyses with GCAM models. The tool implements a hierarchical clustering approach to support inter-comparison and similarity analysis among multiple scenarios over space, time, and multiple attributes through a set of coordinated multiple views. After working with the tool, scientists from the GCAM team agreed that it can facilitate scenario exploration and support the process of gaining scientific insight into scenario comparison. To demonstrate this work, I present two case studies: the first explores the potential impact of China's South-North Water Transfer Project in the Yangtze River basin on projected water demands; the second demonstrates how spatial variation and scale affect the similarity analysis of climate scenarios at world, continental, and country levels. / Dissertation/Thesis / Masters Thesis Computer Science 2015
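The hierarchical-clustering idea behind the scenario comparison can be illustrated in a few lines. The sketch below is not the tool's actual code: the scenario names and synthetic values are hypothetical stand-ins for GCAM output, and each feature vector stands in for a flattened region-by-time attribute matrix.

```python
# Minimal sketch: group climate scenarios by similarity with hierarchical
# clustering (SciPy). Synthetic data; real input would come from GCAM's
# exported NetCDF output.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# One row per scenario: a flattened (region x time) matrix of, e.g.,
# projected water demand. Names and values are invented.
scenarios = {
    "reference":      rng.normal(1.00, 0.05, size=40),
    "high_warming":   rng.normal(1.20, 0.05, size=40),
    "water_transfer": rng.normal(0.90, 0.05, size=40),
}
names = list(scenarios)
X = np.vstack([scenarios[n] for n in names])

Z = linkage(X, method="ward")                    # agglomerative clustering tree
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 groups

for name, lab in zip(names, labels):
    print(f"{name}: cluster {lab}")
```

The tree encoded in `Z` is what a coordinated dendrogram view would render, letting an analyst see which scenarios behave alike over space and time.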
42

Interpretations of Data in Ethical vs. Unethical Data Visualizations

January 2017
abstract: This paper presents the results of an empirical analysis of deceptive data visualizations paired with explanatory text. Data visualizations are used to communicate information about important social issues to large audiences and are found in the news, on social media, and across the Internet (Kirk, 2012). Modern technology and software allow people and organizations to easily produce and publish data visualizations, making them an increasingly prevalent means of communicating important information (Sue & Griffin, 2016). Ethical transgressions in data visualizations are the intentional or unintentional use of deceptive techniques with the potential to alter the audience's understanding of the information being presented (Pandey et al., 2015). While many have discussed the importance of ethics in data visualization, scientists have only recently begun to study how deceptive data visualizations affect the reader. This study was administered as an online user survey designed to test the deceptive potential of data visualizations when they are accompanied by a paragraph of text. It consisted of a demographic questionnaire, a chart-familiarity assessment, and a data visualization survey. A total of 256 participants completed the survey and were evenly divided between a control (non-deceptive) survey and a test (deceptive) survey, in which participants were asked to observe a paragraph of text and a data visualization paired together. Participants then answered a question about the observed information to measure how they perceived it. Differences between demographic groups and their responses were analyzed to understand how these groups reacted to deceptive data visualizations compared to the control group. The results confirmed that deceptive techniques caused participants to misinterpret the information in the deceptive data visualizations even when accompanied by a paragraph of explanatory text. Furthermore, certain demographic groups and comfort levels with chart types were more susceptible to certain deceptive techniques. These results highlight the importance of education and practice in data visualization to ensure deceptive practices are not used and to avoid potential misinformation, especially when information can be called into question. / Dissertation/Thesis / Masters Thesis Technical Communication 2017
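The core statistical question in such a study is whether the deceptive group misreads the chart more often than the control group. Below is a minimal sketch of one way to test that, using invented counts (not the study's data) and a two-proportion z-test from statsmodels.

```python
# Minimal sketch: compare misinterpretation rates between the control
# (non-deceptive) and test (deceptive) groups. Counts are hypothetical;
# the study's 256 participants were split evenly between groups.
from statsmodels.stats.proportion import proportions_ztest

misread = [30, 74]    # hypothetical incorrect answers: control, deceptive
n       = [128, 128]  # group sizes

# alternative="smaller" tests H1: control misread rate < deceptive rate.
stat, pvalue = proportions_ztest(misread, n, alternative="smaller")
print(f"z = {stat:.2f}, p = {pvalue:.4f}")
```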
43

VRMol - a distributed virtual environment for the visualization and analysis of protein molecules

Ildeberto Aparecido Rodello 12 February 2003
This work uses Virtual Reality and Distributed Systems concepts to develop a Distributed Virtual Environment for visualizing and analyzing protein molecules, called VRMol. The system was implemented in the Java programming language, including the Java 3D and Java RMI APIs, to allow geographically dispersed researchers to exchange information quickly and efficiently, speeding up remote research and discussion. A graphical interface was developed with Java 3D, along with a set of message-exchange methods following the client/server communication model with Java RMI. Furthermore, the system also supports some non-conventional input devices, such as joysticks and gloves.
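VRMol itself is implemented in Java with Java 3D and Java RMI. As a language-agnostic sketch of the same client/server message-exchange pattern, here is a minimal remote-procedure server using Python's standard library; the method name and payload are illustrative, not VRMol's actual protocol.

```python
# Minimal sketch of an RMI-style server: a client invokes a remote method
# and the server records/relays a molecule-manipulation event.
from xmlrpc.server import SimpleXMLRPCServer

def broadcast_rotation(molecule_id, angles):
    """Stand-in for a VRMol-style message: one researcher rotates a
    protein and the server relays the new orientation to collaborators."""
    print(f"molecule {molecule_id} rotated to {angles}")
    return True

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(broadcast_rotation)
server.serve_forever()
```

A client would call `xmlrpc.client.ServerProxy("http://localhost:8000").broadcast_rotation("1abc", [30, 0, 45])`, mirroring how an RMI stub forwards a method call to the remote object.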
44

A visual training based approach to surface inspection

Niskanen, M. (Matti) 18 June 2003
Abstract Training a visual inspection device is not straightforward; it suffers from the high variation in the material to be inspected. This variation causes major difficulties for a human, and this is directly reflected in classifier training. Many inspection devices utilize rule-based classifiers whose building and training rely mainly on human expertise. While designing such a classifier, a human tries to find the questions that would provide proper categorization. In training, an operator tunes the classifier parameters, aiming to achieve as good classification accuracy as possible. Such classifiers require a lot of time and expertise before they can be fully utilized. Supervised classifiers form another common category. These learn automatically from training material but rely on labels that a human has set for them. However, these labels tend to be inconsistent and thus reduce the classification accuracy achieved. Furthermore, as class boundaries are learnt from training samples, they cannot in practice be adjusted later if needed. In this thesis, a visual training based method is presented. It avoids the problems related to traditional training methods by combining a classifier and a user interface. The method relies on unsupervised projection and provides an intuitive way to directly set and tune the class boundaries of high-dimensional data. As the method groups the data only by the similarities of its features, it is not affected by erroneous and inconsistent labelling of training samples. Furthermore, it requires neither knowledge of the internal structure of the classifier nor iterative parameter tuning, in which a combination of parameter values leading to the desired class boundaries is sought. On the contrary, the class boundaries can be set directly, changing the classification parameters. The time needed to take such a classifier into use is small, and the class boundaries can be tuned even on-line, if needed. The proposed method is tested with various experiments in this thesis. Different projection methods are evaluated from the point of view of visual training. The method is further evaluated using a self-organizing map (SOM) as the projection method and wood as the test material. Measures such as accuracy, map size, and speed are reported and discussed, and overall the method is found to be an advantageous training and classification scheme.
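The thesis's central idea, projecting high-dimensional features onto a 2-D map and letting a person set class boundaries there, can be sketched with a SOM. The example below uses the third-party `minisom` package and synthetic data; the hard-coded split of the grid into two label regions stands in for boundaries a user would paint interactively.

```python
# Minimal sketch: unsupervised SOM projection, then classification by
# assigning labels to regions of the map rather than tuning parameters.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
features = rng.random((500, 16))   # e.g., texture features per surface patch

som = MiniSom(10, 10, 16, sigma=1.5, learning_rate=0.5, random_seed=1)
som.train(features, num_iteration=2000)

def classify(x):
    """In visual training a user paints labels onto the map; here the
    grid is simply split in half as a stand-in for those boundaries."""
    i, j = som.winner(x)           # best-matching unit on the 10x10 grid
    return "defect" if j >= 5 else "sound"

print(classify(features[0]))
```

Because labels attach to map regions rather than to classifier internals, moving a boundary is a direct edit of the map partition, which is why tuning can happen on-line.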
45

Cartogram Visualization: Methods, Applications, and Effectiveness

Nusrat, Sabrina January 2017
Cartograms are value-by-area maps that modify geographic regions, such as countries, in proportion to some variable of interest, such as population. They are popular georeferenced data visualizations that have been used for over a century to illustrate patterns and trends in the world around us. A wide variety of cartogram types exists, designed to optimize different cartogram dimensions, such as geographic accuracy and statistical accuracy. This work surveys cartogram research in visualization, cartography, and geometry, covering a broad spectrum of cartogram types: from traditional rectangular cartograms to Dorling and diffusion cartograms. Based on prior work in visualization and cartography, I propose a task taxonomy for cartograms and describe a study of cartograms based on quantitative metric-based comparisons, task-based time-and-error evaluation, and subjective preference and feedback analysis. For these evaluations, I considered four major types of cartograms, which allowed me to compare and analyze the evaluation strategies and discuss the implications of the surprising outcomes. In the context of maps, the ability to recall information shown in the map is an important factor in determining effectiveness. In spite of some early studies involving cartograms, the memorability of different cartogram types has not been investigated. To create effective data presentations, we first need to understand what makes a visualization memorable. I investigate the memorability of contiguous and Dorling cartograms, both in terms of recognition of the map and recall of data. Finally, I describe bivariate cartograms, a technique specifically designed to allow the simultaneous comparison of two geo-statistical variables. Traditional cartograms are designed to show only a single statistical variable, but in practice it is often useful to show two variables (e.g., the total sales for two competing companies) simultaneously. Bivariate cartograms make it easy to find geographic patterns and outliers in a pre-attentive way. They are most effective for showing two variables from the same domain (e.g., population in two different years, sales for two different companies), although they can also be used for variables from different domains (e.g., population and income). I also describe a small-scale evaluation of the proposed techniques indicating that bivariate cartograms are especially effective for finding geo-statistical patterns, trends, and outliers.
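The value-by-area principle all these cartogram types share is easy to state in code: a region's displayed area is proportional to its variable, so its radius (for a Dorling-style circle) scales with the square root of the value. Below is a minimal sketch with invented populations; the inner circle illustrates the bivariate variant described above.

```python
# Minimal sketch: Dorling-style circle sizing, plus an inner circle for a
# second variable as in a bivariate cartogram. All values are invented.
import math

# region -> (first variable, second variable), e.g. population in two years
regions = {"A": (8.0e6, 3.2e6), "B": (2.0e6, 1.5e6), "C": (5.0e5, 4.0e5)}

max_value = max(v1 for v1, _ in regions.values())
max_radius = 50.0                # display units for the largest circle

for name, (v1, v2) in regions.items():
    # Area proportional to value  =>  radius proportional to sqrt(value).
    outer = max_radius * math.sqrt(v1 / max_value)
    inner = max_radius * math.sqrt(v2 / max_value)   # second variable
    print(f"{name}: outer r = {outer:.1f}, inner r = {inner:.1f}")
```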
46

Towards a Cloud-based Data Analysis and Visualization System

Li, Zhongli January 2016
In recent years, increasing attention has been paid to developing technologies for efficiently processing the massive collections of heterogeneous data generated by different kinds of sensors. While big data has enabled many innovative applications, the need to integrate information poses new challenges caused by the heterogeneity of the data. In this thesis, we target geo-tagged data and propose a cloud-based platform named City Digital Pulse (CDP), which provides a unified mechanism and an extensible architecture to facilitate the various stages of big data analysis, ranging from data acquisition to data visualization. We instantiate the proposed system using multi-modal data collected from two social platforms, Twitter and Instagram, both of which include plenty of geo-tagged messages. Data analysis is performed to detect human affect in user-uploaded content. The emotional information in big social data can be uncovered through a multi-dimensional visualization interface, with which users can easily grasp the evolution of human affective status within a given geographical area and interact with the system. This offers inexpensive opportunities to improve decision making in many critical areas. Both the proposed architecture and the algorithm are empirically shown to achieve real-time big data analysis.
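A minimal sketch of the analysis stage described above: score the affect of geo-tagged messages and aggregate by grid cell. The messages, coordinates, and tiny word lists are illustrative stand-ins for the real CDP pipeline, which works on Twitter and Instagram streams.

```python
# Minimal sketch: lexicon-based affect scoring of geo-tagged messages,
# aggregated into ~0.1-degree grid cells for map visualization.
from collections import defaultdict

POSITIVE = {"love", "great", "happy"}
NEGATIVE = {"hate", "awful", "sad"}

messages = [
    {"text": "I love this city", "lat": 45.42, "lon": -75.70},
    {"text": "awful traffic today", "lat": 45.43, "lon": -75.69},
]

def affect_score(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

cells = defaultdict(list)
for m in messages:
    cell = (round(m["lat"], 1), round(m["lon"], 1))   # coarse spatial bucket
    cells[cell].append(affect_score(m["text"]))

for cell, scores in cells.items():
    print(cell, sum(scores) / len(scores))            # mean affect per cell
```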
47

A Flexible Service-Oriented Approach to Address Hydroinformatic Challenges in Large-Scale Hydrologic Predictions

Souffront Alcantara, Michael Antonio 01 December 2018
Water security is defined as the combination of sufficient water for achieving our goals as a society and an acceptable level of water-related risks. Hydrologic modeling can be used to predict streamflow and aid the decision-making process with the goal of attaining water security. Developed countries usually have their own hydrologic models; however, developing countries often lack them due to factors such as the maintenance, computational costs, and technical capacity needed to run models. A global streamflow prediction system (GSPS) would help decrease vulnerabilities in developing countries and fill gaps in areas where no local models exist by providing extensive results that can be filtered for specific locations. The development of a GSPS has been deemed a grand challenge of the hydrologic community. To this end, many scientists and engineers have started to develop large-scale systems with an acceptable degree of success. Renowned models like the Global Flood Awareness System (GloFAS), the US National Water Model (NWM), and NASA's Land Data Assimilation System (LDAS) are proof that our ability to model large areas has improved remarkably. Even so, during this evolution the hydrologic community has begun to realize that having a large-scale forecasting system does not make it immediately useful. New hydroinformatic challenges have surfaced that prevent these models from reaching their full potential. I have divided these challenges into four main categories: big data, data communication, adoption, and validation. I describe the background leading to the development of a GSPS, including existing models and the components needed to create an operational system. A case study with the NWM is also presented, in which I address the big data and data communication challenges by developing cyberinfrastructure and accessibility tools such as web applications and services. Finally, I used the GloFAS-RAPID model to create a forecasting system covering Africa, North America, South America, and South Asia, using a service-oriented approach that includes web applications and services for improved data accessibility and for addressing the adoption and validation challenges. I have developed customized services in collaboration with countries including Argentina, Bangladesh, Colombia, Peru, Nepal, and the Dominican Republic, and conducted validation tests to ensure that results are acceptable. Overall, this work provides a model-agnostic approach to operationalizing a GSPS and delivering meaningful results at the local level, with the potential to let decision makers focus on solving some of the most pressing water-related issues we face as a society.
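The service-oriented approach amounts to putting a thin web service between the model's bulky output and the person who needs a single river reach. Below is a minimal sketch with Flask; the endpoint path, reach IDs, and values are hypothetical, not the actual GloFAS-RAPID or NWM API.

```python
# Minimal sketch: expose streamflow forecasts per river reach over HTTP so
# clients never touch the raw large-scale model output.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for querying the model's output store (e.g., NetCDF archives).
FORECASTS = {1001: [120.5, 133.2, 151.8]}   # reach id -> daily flow (m^3/s)

@app.get("/api/forecast/<int:reach_id>")
def forecast(reach_id):
    flows = FORECASTS.get(reach_id)
    if flows is None:
        return jsonify(error="unknown reach"), 404
    return jsonify(reach_id=reach_id, flow_cms=flows)

if __name__ == "__main__":
    app.run()
```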
48

JupyterLab_Voyager: A Data Visualization Enhancement in JupyterLab

Zhang, Ji 01 June 2018
With the emergence of big data, scientific data analysis and visualization (DAV) tools have become critical components of the data science software ecosystem, and their usability is increasingly important for facilitating next-generation scientific discoveries. JupyterLab is considered one of the best polyglot, web-based, open-source data science tools. As the extensible next-generation interface for the classic IPython/Jupyter Notebook, it supports interactive data science and scientific computing across multiple programming languages with strong performance. Despite these advantages, previous heuristic evaluation studies have shown that JupyterLab has significant flaws on the data visualization side. The current DAV system in JupyterLab relies heavily on users' understanding of and familiarity with particular visualization libraries, and it does not support the golden visual-information-seeking mantra of "overview first, zoom and filter, then details-on-demand". These limitations often create a workflow bottleneck at the start of a project. In this thesis, we present JupyterLab_Voyager, an extension for JupyterLab that provides a graphical user interface (GUI) for data visualization operations and couples faceted browsing with visualization recommendation to support exploration of multivariate tabular data, as a solution to improve the usability of the DAV system. The new plugin works with various types of datasets in the JupyterLab ecosystem; using it, you can perform a high-level graphical analysis of the fields within your dataset without writing code and without leaving the JupyterLab environment. It helps analysts learn about a dataset and engage both in open-ended exploration and in targeting specific answers from the data. User testing and evaluations demonstrated that this implementation has good usability and significantly improves the DAV system in JupyterLab.
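The "overview first" step the plugin automates can be approximated in a few lines of pandas: summarize every field and note what chart each would support, which is roughly the input a visualization recommender works from. The CSV filename below is a placeholder.

```python
# Minimal sketch: a per-field overview of any tabular dataset, the kind of
# summary from which chart recommendations can be generated.
import pandas as pd

df = pd.read_csv("dataset.csv")   # placeholder: any tabular dataset

for col in df.columns:
    if pd.api.types.is_numeric_dtype(df[col]):
        # Numeric field: a histogram gives the overview.
        print(f"{col}: numeric, range [{df[col].min()}, {df[col].max()}] -> histogram")
    else:
        # Categorical field: a bar chart of counts gives the overview.
        print(f"{col}: categorical, {df[col].nunique()} levels -> bar chart")
```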
49

Formalization of molecular interaction maps in systems biology; Application to simulations of the relationship between DNA damage response and circadian rhythms

Luna, Augustin 22 January 2016
Quantitative exploration of biological pathway networks must begin with a qualitative understanding of them. Researchers often aggregate and disseminate experimental data using regulatory diagrams with ad hoc notations, leading to ambiguous interpretations of the presented results. This thesis has two main aims. First, it develops software that allows researchers to aggregate pathway data diagrammatically using the Molecular Interaction Map (MIM) notation in order to gain a better qualitative understanding of biological systems. Second, it develops a quantitative biological model to study the effect of DNA damage on circadian rhythms. The second aim benefits from the first by using visual representations to identify potential system boundaries for the quantitative model. I focus first on software for the MIM notation, a notation that concisely visualizes bioregulatory complexity and reduces ambiguity for readers. The thesis provides a formalized MIM specification for software implementation, along with a base layer of software components for including the MIM notation in other software packages. It also provides an implementation of the specification as a user-friendly tool, PathVisio-MIM, for creating and editing MIM diagrams, along with software to validate the diagrams and overlay external data onto them. I focus second on the application of the MIM software to the quantitative exploration of the poorly understood role of SIRT1 and PARP1, two NAD+-dependent enzymes, in the regulation of circadian rhythms during the DNA damage response. SIRT1 and PARP1 participate in the regulation of several key DNA damage-repair proteins and are under study as potential cancer therapeutic targets. In this part of the thesis, I present an ordinary differential equation (ODE) model that simulates the core circadian clock and the involvement of SIRT1 in both the positive and negative arms of circadian regulation. This model is then used to predict a potential role for competition for NAD+ supplies between SIRT1 and PARP1, leading to the observed behavior of primarily phase advancement of circadian oscillations during the DNA damage response. The model further predicts a potential mechanism by which multiple forms of post-transcriptional modification may cooperate to produce a primarily phase-advancing effect.
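To illustrate the kind of model the thesis builds (though not its actual equations), here is a Goodwin-type negative-feedback loop, the canonical skeleton of core-clock ODE models, integrated with SciPy; all parameters are illustrative.

```python
# Minimal sketch: a three-variable negative-feedback oscillator. With a
# steep Hill coefficient (n > 8) the loop sustains oscillations, a classic
# stand-in for the transcription-translation feedback of the core clock.
from scipy.integrate import solve_ivp

def goodwin(t, y, n=10):
    x1, x2, x3 = y                            # mRNA, protein, repressor
    dx1 = 1.0 / (1.0 + x3**n) - 0.2 * x1      # transcription repressed by x3
    dx2 = 0.5 * x1 - 0.2 * x2                 # translation and degradation
    dx3 = 0.5 * x2 - 0.2 * x3                 # activation and degradation
    return [dx1, dx2, dx3]

sol = solve_ivp(goodwin, (0, 200), [0.1, 0.1, 0.1], max_step=0.5)
print(sol.y[0, -5:])                          # late-time mRNA levels
```

In this framework, a predicted phase advancement would appear as a shift in the timing of the oscillation's peaks after perturbing a rate constant (e.g., one tied to NAD+ availability).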
50

Real-Time Feedback for In-Class Introductory Computer Programming Exercises

Sellers, Ariana Dawn 01 June 2018
Computer programming is a difficult subject to master. Introductory programming courses often have low retention and high failure rates. Part of the problem is identifying whether students understand the lecture material. In a traditional classroom, a professor can gauge a class's understanding from questions asked during lecture. However, many struggling students are unlikely to speak up in class. To address this problem, recent research has focused on gathering compiler data from programming exercises to identify at-risk students in these courses. These data allow professors to intervene with individual students who are at risk, and, after analyzing the data for a given time period, a professor can also re-evaluate how certain topics are taught to improve understanding. However, current implementations do not provide information in real time. They may improve a professor's teaching long term, but they do not provide insight into how an individual student is understanding a specific topic during the lecture in time for the professor to make adjustments. This research explores a system that combines compiler data analytics with in-class exercises. The system incorporates the in-class exercise into a web-based text editor with data analytics. While the students are programming in their own browsers, the website analyzes their compiler errors and console output to determine where the students are struggling. A real-time summary is presented to the professor during the lecture. This system allows a professor to receive immediate feedback on student understanding, which enables him or her to clarify areas of confusion immediately. As a result, this dynamic learning environment allows course material to better evolve to meet the needs of the students. Results show that students in a simulated programming course performed slightly better on quizzes when the instructor had access to real-time feedback during a programming exercise. Instructors were able to determine what students were struggling with from the real-time feedback. Overall, both the student and instructor test subjects found the experimental website useful. Case studies performed in an actual programming lecture allowed the professor to address errors that are not considered in the curriculum of the course. Many students appreciated that the professor was able to immediately answer questions based on the feedback. Students' primary issues were with bugs present in the alpha version of the software.
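The real-time summary rests on a simple aggregation: parse each student's compiler output, normalize the error messages, and tally the most common ones for the instructor. Below is a minimal sketch with invented error lines; the real system streams these from the browser-based editor.

```python
# Minimal sketch: tally compiler error categories across many submissions.
import re
from collections import Counter

# One compiler-output line per entry (normally streamed in live).
outputs = [
    "main.c:5: error: expected ';' before 'return'",
    "main.c:9: error: 'x' undeclared (first use in this function)",
    "main.c:7: error: expected ';' before '}'",
]

counts = Counter()
for line in outputs:
    m = re.search(r"error: (.+)", line)
    if m:
        # Collapse quoted tokens so similar errors group together.
        counts[re.sub(r"'[^']*'", "'...'", m.group(1))] += 1

for message, n in counts.most_common():
    print(f"{n}x {message}")
```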
