1

Systematising glyph design for visualization

Maguire, Eamonn James January 2014 (has links)
The digitalisation of information now affects most fields of human activity. From the social sciences to biology to physics, the volume, velocity, and variety of data exhibit exponential growth trends. With such rates of expansion, efforts to understand and make sense of datasets of such scale, however driven and directed, progress only at an incremental pace. The challenges are significant. For instance, the ability to display an ever-growing amount of data is physically bound by the dimensions of the average-sized display. A synergistic interplay between statistical analysis and visualisation approaches outlines a path for significant advances in the field of data exploration. We can turn to statistics to provide principled guidance for prioritisation of information to display. Using statistical results, and combining knowledge from the cognitive sciences, visual techniques can be used to highlight salient data attributes. The purpose of this thesis is to explore the link between computer science, statistics, visualization, and the cognitive sciences, to define and develop more systematic approaches towards the design of glyphs. Glyphs represent the variables of multivariate data records by mapping those variables to one or more visual channels (e.g., colour, shape, and texture). They offer a unique, compact solution to the presentation of a large amount of multivariate information. However, composing a meaningful, interpretable, and learnable glyph can pose a number of problems. The first of these problems lies in the subjectivity involved in the process of data-to-visual-channel mapping, and in the organisation of those visual channels to form the overall glyph. Our first contribution outlines a computational technique to help systematise many of these otherwise subjective elements of the glyph design process. For visual information compression, common patterns (motifs) in time series or graph data, for example, may be replaced with more compact visual representations. Glyph-based techniques can provide such representations, helping users find common patterns more quickly while bringing attention to anomalous areas of the data. However, replacing data with a glyph does not, by itself, make tasks such as visual search easier. A key problem is the selection of semantically meaningful motifs with the potential to compress large amounts of information. A second contribution of this thesis is a computational process for the systematic design of such glyph libraries and the glyphs derived from them. A further problem in the glyph design process is evaluation. Evaluation is typically a time-consuming, highly subjective process, and domain experts are not always plentiful, so obtaining statistically significant evaluation results is often difficult. A final contribution of this work is to investigate whether there are areas of evaluation that can be performed computationally.
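To make the data-to-visual-channel mapping concrete, here is a minimal sketch (not taken from the thesis; the records and channel assignments are invented for illustration) in which each variable of a multivariate record drives one visual channel of a glyph: shape for a categorical variable, size for a magnitude, and colour for an error value.

```python
# A minimal sketch of the data-to-visual-channel mapping that glyph design systematises.
# The records and channel assignments below are hypothetical, chosen only to illustrate.
import matplotlib.pyplot as plt

records = [  # hypothetical multivariate records: (x, y, magnitude, category, error)
    (0.2, 0.8, 0.9, 0, 0.1),
    (0.5, 0.4, 0.3, 1, 0.6),
    (0.8, 0.6, 0.7, 2, 0.3),
]
markers = ["o", "s", "^"]  # shape channel encodes the categorical variable

fig, ax = plt.subplots()
for x, y, mag, cat, err in records:
    ax.scatter(x, y,
               s=200 + 800 * mag,          # size channel encodes magnitude
               c=[[err, 0.2, 1 - err]],    # colour channel encodes error (blue -> red)
               marker=markers[cat],
               edgecolors="black")
ax.set_xlabel("x"); ax.set_ylabel("y")
plt.show()
```

Systematising the design would mean choosing such assignments from principled, perceptually motivated rankings of channels rather than ad hoc, which is the kind of subjectivity the first contribution targets.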
2

Alternative Epigraphic Interpretations Of The Maya Snake Emblem Glyph

Savage, Christopher Tyra 01 January 2007 (has links)
This thesis seeks to demonstrate that the Maya snake emblem glyph is associated with religious specialists, instead of geographic locations, as emblem glyphs are typically understood to be. The inscriptions and the media on which the snake emblem glyph occurs will be analyzed to determine the role or function of the "Lord of the Snake." Temporal and spatial data have also been collected to aid in understanding the enigmatic glyph. The snake emblem glyph has recently been identified as originating from a broad area containing the sites of El Peru and La Corona in Guatemala, and Dzibanche, Mexico, a departure from the longstanding choice of Calakmul, Mexico. Unprovenanced snake emblem glyph texts have been cataloged under a "Site Q" designation ('Q' for the Spanish word que, meaning "which") by Peter Mathews. Site Q is thus not securely identified geographically, which confounds efforts to designate a particular site as the snake emblem glyph site. Other problems with the snake emblem glyph, such as its geographically wide dispersal, hint that it is not a title of a particular city or region. Yet another problem is "a proper fit" between the individuals listed on unprovenanced material and the individuals named at sites associated with the snake emblem glyph. It is argued that the interpretation of the snake emblem glyph differs from how emblem glyphs are presently understood. Rather than representing a physical location, the snake emblem glyph represents a mythological place or "state," whose members legitimize their lineage (association) through ritual events such as communication with supernaturals via the vision serpent. The specialists perform rituals and scatterings, play ball, and witness events. They are rarely associated with accession, which by current interpretation is implicitly tied to emblem glyphs.
3

Development of a Virtual Scientific Visualization Environment for the Analysis of Complex Flows

Etebari, Ali 27 March 2003 (has links)
This project offers a multidisciplinary approach towards the acquisition, analysis, and visualization of experimental data that pertain to cardiovascular applications. First and foremost, the capabilities of our Time-Resolved Digital Particle Image Velocimetry (TRDPIV) system were improved, allowing near-wall TRDPIV on compliant, dynamically moving boundaries. As a result, false flow-field vectors due to reflections from the boundary walls were eliminated, allowing measurement of wall shear stress, wall shear rate, and oscillating shear index within as little as fifty microns of the boundary. Similar in-vitro measurements have not been reported to date by any other group. Second, an immersive virtual environment (VE) was developed for the investigation and analysis of vortical, spatio-temporally developing flows with complex fluid-structure interactions. This VE was used to study flows in the cardiovascular system, particularly flow through mechanical heart valves and inside the left ventricle (LV) of the heart. The simulation provides three-dimensional (3-D) visualization of in-vitro heart flow mechanics, allowing global, volumetric flow analysis, and a useful environment for comparison with in-vivo MRI velocimetry data. 3-D glyphs (symbols representing informational parameters) are used to visually represent the flow parameters in the form of an ellipse attached to a cone, where the ellipse represents the second-order Reynolds stress tensor, the cone represents the velocity magnitude and direction at a particular point in space, and the color corresponds to the out-of-plane vorticity. This new system has a major advantage over conventional 2-D systems in that it doubles the number of visualized parameters and allows for visualization of a time-dependent series of flow data in the Virginia Tech CAVE™ immersive VE. The user controls his or her viewpoint, and can thus navigate through the simulation and view the flow field from any perspective in the immersive VE. Finally, an edge detection algorithm was developed to determine the inner and outer myocardial boundaries, and from this information calculate the local thickness distribution of the myocardium and an approximation of the myocardial area. This information is important in validating our in-vitro system, and is integral to the evaluation and diagnosis of congestive heart disease and its progression. / Master of Science
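The ellipse-and-cone glyph described above can be read as a small computation at each measurement point. The sketch below is an illustration under assumed sample values, not the project's code: the cone length and axis come from the velocity vector, the ellipsoid radii and axes from an eigendecomposition of the symmetric Reynolds stress tensor, and a normalised colour value from the out-of-plane vorticity.

```python
# A minimal sketch (assumed values, not the thesis code) of deriving the
# ellipse-cone glyph parameters from local flow quantities at one point.
import numpy as np

velocity = np.array([0.12, -0.05, 0.30])          # m/s, hypothetical sample
reynolds_stress = np.array([[4.0, 1.0, 0.0],      # symmetric second-order tensor
                            [1.0, 2.0, 0.5],
                            [0.0, 0.5, 1.0]])
vorticity_z = -35.0                                # out-of-plane component, 1/s

# Cone: length from the speed, axis from the unit velocity direction.
speed = np.linalg.norm(velocity)
cone_axis = velocity / speed

# Ellipse (ellipsoid): principal radii and axes from the eigendecomposition
# of the symmetric Reynolds stress tensor.
radii, axes = np.linalg.eigh(reynolds_stress)

# Colour: normalise vorticity into [0, 1] for a diverging colour map.
vort_limit = 50.0                                  # assumed display range
colour_value = 0.5 + 0.5 * np.clip(vorticity_z / vort_limit, -1.0, 1.0)

print(f"cone length {speed:.3f}, axis {cone_axis}")
print(f"ellipsoid radii {radii}, colour value {colour_value:.2f}")
```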
4

Anagnosis

Damiani, Vincenzo 20 April 2016 (has links) (PDF)
In recent years many institutions holding papyri have put images of their collections online, while transcriptions previously published in print are now hosted in the Digital Corpus of Literary Papyri. Anagnosis aims to provide an intuitive and easy-to-use web interface between those images and the related digitized texts. The main goal lies in automatic data processing and text-recognition accuracy: through a dedicated OCR algorithm, letters on the image are identified with individual boxes and thus linked to the transcription. A coordinate system of the glyphs on the image can then be transferred and applied to each new image uploaded for the same text section. Once all character boxes are generated, Anagnosis can extract a sample alphabet that users may rearrange to virtually restore lost parts of the text directly on the image.
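A minimal sketch of the linking idea follows, with hypothetical data structures (the real Anagnosis interface and its OCR pipeline are not shown): once each non-space character of the transcription is paired with a detected box, the box coordinates can be rescaled onto another image of the same text section.

```python
# A minimal sketch (hypothetical data, not the Anagnosis code) of pairing OCR
# character boxes with a transcription and transferring them to a new image.
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

def link_boxes(transcription: str, boxes: list[Box]) -> dict[int, tuple[str, Box]]:
    """Pair each transcribed character with its detected box, in reading order."""
    chars = [c for c in transcription if not c.isspace()]
    assert len(chars) == len(boxes), "one box per non-space character is assumed"
    return {i: (c, b) for i, (c, b) in enumerate(zip(chars, boxes))}

def transfer(linked: dict[int, tuple[str, Box]], sx: float, sy: float):
    """Rescale the box coordinates onto a new image of the same text section."""
    return {i: (c, Box(round(b.x * sx), round(b.y * sy),
                       round(b.w * sx), round(b.h * sy)))
            for i, (c, b) in linked.items()}

linked = link_boxes("και", [Box(10, 5, 14, 18), Box(26, 5, 12, 18), Box(40, 5, 13, 18)])
print(transfer(linked, sx=2.0, sy=2.0))
```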
5

Dynamic and Static Approaches for Glyph-Based Visualization of Software Metrics

Majid, Raja January 2008 (has links)
This project presents research on software visualization techniques. We introduce the concepts of software visualization and software metrics, together with our proposed visualization techniques: Static Visualization (glyph objects with static textures) and Dynamic Visualization (glyph objects with moving objects). Our intent is to study existing techniques for visualizing software metrics and then to propose a new approach that is more time-efficient and easier for viewers to perceive. In this project, we focus on the practical aspects of visualizing multivariate datasets, and we provide an implementation of the proposed techniques for software metrics. We also compare the proposed visualization approaches in practice, discuss the software development life cycle of the proposed visualization system, and describe its complete implementation.
6

Dynamic and Static Approaches for Glyph-Based Visualization of Software Metrics

Majid, Raja January 2008 (has links)
This project presents research on software visualization techniques. We introduce the concepts of software visualization and software metrics, together with our proposed visualization techniques: Static Visualization (glyph objects with static textures) and Dynamic Visualization (glyph objects with moving objects). Our intent is to study existing techniques for visualizing software metrics and then to propose a new approach that is more time-efficient and easier for viewers to perceive. In this project, we focus on the practical aspects of visualizing multivariate datasets, and we provide an implementation of the proposed techniques for software metrics. We also compare the proposed visualization approaches in practice, discuss the software development life cycle of the proposed visualization system, and describe its complete implementation.
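As a rough illustration of the static variant, the sketch below (hypothetical metrics and channel assignments, not the thesis implementation) draws one glyph per software module and lets lines of code, cyclomatic complexity, and coupling drive the glyph's size, colour, and outline weight.

```python
# A minimal sketch (hypothetical metrics, not the thesis implementation) of a static
# glyph mapping: one glyph per module, its appearance driven by software metrics.
import matplotlib.pyplot as plt

modules = {  # hypothetical metrics: (lines of code, cyclomatic complexity, coupling)
    "parser": (1200, 18, 4),
    "ui":     (400, 6, 9),
    "core":   (2500, 31, 2),
}

fig, ax = plt.subplots()
for i, (name, (loc, complexity, coupling)) in enumerate(modules.items()):
    ax.scatter(i, 0,
               s=loc,                                       # size: lines of code
               c=[[min(complexity / 40, 1.0), 0.2, 0.2]],   # colour: complexity
               linewidths=coupling,                         # outline weight: coupling
               edgecolors="black")
    ax.annotate(name, (i, 0.02), ha="center")
ax.set_xlim(-1, len(modules)); ax.set_yticks([])
plt.show()
```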
7

Anagnosis: automated letter-by-letter linking of transcript and papyrus image

Damiani, Vincenzo January 2016 (has links)
In recent years many institutions holding papyri have put images of their collections online, while transcriptions previously published in print are now hosted in the Digital Corpus of Literary Papyri. Anagnosis aims to provide an intuitive and easy-to-use web interface between those images and the related digitized texts. The main goal lies in automatic data processing and text-recognition accuracy: through a dedicated OCR algorithm, letters on the image are identified with individual boxes and thus linked to the transcription. A coordinate system of the glyphs on the image can then be transferred and applied to each new image uploaded for the same text section. Once all character boxes are generated, Anagnosis can extract a sample alphabet that users may rearrange to virtually restore lost parts of the text directly on the image.
8

Uncertainty visualization of ensemble simulations

Sanyal, Jibonananda 09 December 2011 (has links)
Ensemble simulation is a commonly used technique in operational forecasting of weather and floods. Multi-member ensemble output is usually large, multivariate, and challenging to interpret interactively. Forecast meteorologists and hydrologists are interested in understanding the uncertainties associated with the simulation, specifically the variability between the ensemble members. The visualization of ensemble members is currently accomplished through spaghetti plots or hydrographs. To improve visualization techniques and tools for forecasters, we conducted a user study to evaluate the effectiveness of existing uncertainty visualization techniques on 1D and 2D synthetic datasets. We designed an uncertainty evaluation framework to enable easier design of such studies for scientific visualization. The techniques evaluated are error bars, scaled size of glyphs, color-mapping on glyphs, and color-mapping of uncertainty on the data surface. Although we did not find a consistent ordering of the four techniques across all tasks, we found that the efficiency of a technique depended strongly on the task being performed. Error bars consistently underperformed throughout the experiment, while scaling the size of glyphs and color-mapping of the surface performed reasonably well. With results from the user study, we iteratively developed a tool named ‘Noodles’ to interactively explore ensemble uncertainty in weather simulations. Uncertainty was quantified using standard deviation, inter-quartile range, width of the 95% confidence interval, and by bootstrapping the data. A coordinated view of ribbon- and glyph-based uncertainty visualization, spaghetti plots, and data transect plots was provided to two meteorologists for expert evaluation. They found it useful in assessing uncertainty in the data, especially in finding outliers and avoiding the parametrizations leading to these outliers. Additionally, they could identify spatial regions with high uncertainty, thereby determining poorly simulated storm environments and deriving physical interpretations of these model issues. We also describe uncertainty visualization capabilities developed for a tool named ‘FloodViz’ for visualization and analysis of flood simulation ensembles. Simple member and trend plots and composited inundation maps with uncertainty are described, along with different types of glyph-based uncertainty representations. We also provide feedback from a hydrologist using various features of the tool from an operational perspective.
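The four uncertainty measures mentioned above are straightforward to compute per grid point across ensemble members. The sketch below uses synthetic data and one common reading of the measures (the 95% interval is taken as the central 2.5–97.5 percentile range of member values, and the bootstrap resamples members to estimate the spread of the ensemble mean); it is an illustration, not the Noodles code.

```python
# A minimal sketch (synthetic data, not the Noodles code) of per-grid-point
# uncertainty measures computed across ensemble members.
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(loc=280.0, scale=2.0, size=(20, 50, 50))  # members x lat x lon

std_dev = ensemble.std(axis=0)
iqr = np.percentile(ensemble, 75, axis=0) - np.percentile(ensemble, 25, axis=0)
ci_width = np.percentile(ensemble, 97.5, axis=0) - np.percentile(ensemble, 2.5, axis=0)

# Bootstrap the ensemble mean: resample members with replacement and take the
# spread of the resampled means as an uncertainty estimate.
boot_means = np.stack([ensemble[rng.integers(0, 20, size=20)].mean(axis=0)
                       for _ in range(200)])
boot_spread = boot_means.std(axis=0)

print(std_dev.mean(), iqr.mean(), ci_width.mean(), boot_spread.mean())
```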
9

Development of Visual Tools for Analyzing Ensemble Error and Uncertainty

Anreddy, Sujan Ranjan Reddy 04 May 2018 (has links)
Climate analysts use Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations to make sense of model performance in predicting extreme events such as heavy precipitation. Similarly, weather analysts use numerical weather prediction (NWP) models to simulate weather conditions either by perturbing initial conditions or by changing multiple input parameterization schemes, e.g., cumulus and microphysics schemes. These simulations are used in operational weather forecasting and for studying the role of parameterization schemes in synoptic weather events like storms. This work addresses the need for visualizing the differences in both CMIP5 and NWP model output. It proposes three glyph designs for communicating CMIP5 model error, and describes the Ensemble Visual eXplorer tool, which provides multiple ways of visualizing NWP model output and the related input parameter space. The proposed interactive dendrogram provides an effective way to relate multiple input parameterization schemes to the spatial characteristics of model uncertainty features. The glyphs designed to communicate CMIP5 model error are extended to encode both parameterization schemes and graduated uncertainty, to provide related insights at specific locations such as the storm center and the areas surrounding it. The work analyzes different ways of using glyphs to represent parametric uncertainty through visual variables such as color and size, in conjunction with Gestalt visual properties. It demonstrates the use of visual analytics in resolving issues such as visual scalability. As part of this dissertation, we evaluated the three glyph designs using average precipitation rate predicted by CMIP5 simulations, and the Ensemble Visual eXplorer tool using the WRF dataset for the March 4th, 1999 North American storm track.
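One way to read the dendrogram idea is as hierarchical clustering of ensemble members by their spatial fields, with leaves labelled by the parameterization schemes that produced them. The sketch below is an illustration with synthetic fields and assumed scheme labels, not the Ensemble Visual eXplorer implementation.

```python
# A minimal sketch (synthetic fields and assumed scheme labels, not the tool's code)
# of clustering members by their spatial output and labelling the dendrogram leaves
# with the input parameterization schemes that produced them.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
schemes = ["KF+WSM6", "KF+Thompson", "BMJ+WSM6", "BMJ+Thompson"]  # assumed labels
members = np.stack([rng.normal(loc=i % 2, scale=1.0, size=(40, 40)).ravel()
                    for i in range(len(schemes))])

Z = linkage(members, method="ward")   # hierarchical clustering of member fields
dendrogram(Z, labels=schemes)
plt.ylabel("distance between member fields")
plt.show()
```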
10

Document image analysis of Balinese palm leaf manuscripts / Analyse d'images de documents des manuscrits balinais sur feuilles de palmier

Kesiman, Made Windu Antara 05 July 2018 (has links)
Les collections de manuscrits sur feuilles de palmier sont devenues une partie intégrante de la culture et de la vie des peuples de l'Asie du Sud-Est. Avec l’augmentation des projets de numérisation des documents patrimoniaux à travers le monde, les collections de manuscrits sur feuilles de palmier ont finalement attiré l'attention des chercheurs en analyse d'images de documents (AID). Les travaux de recherche menés dans le cadre de cette thèse ont porté sur les manuscrits d'Indonésie, et en particulier sur les manuscrits de Bali. Nos travaux visent à proposer des méthodes d’analyse pour les manuscrits sur feuilles de palmier. En effet, ces collections offrent de nouveaux défis car elles utilisent, d’une part, un support spécifique : les feuilles de palmier, et d’autre part, un langage et un script qui n'ont jamais été analysés auparavant. Prenant en compte le contexte et les conditions de stockage des collections de manuscrits sur feuilles de palmier à Bali, nos travaux ont pour objectif d’apporter une valeur ajoutée aux manuscrits numérisés en développant des outils pour analyser, translittérer et indexer le contenu des manuscrits sur feuilles de palmier. Ces systèmes rendront ces manuscrits plus accessibles, lisibles et compréhensibles à un public plus large ainsi que pour les chercheurs et les étudiants du monde entier. Cette thèse a permis de développer un système d’AID pour les images de documents sur feuilles de palmier, comprenant plusieurs tâches de traitement d'images : numérisation du document, construction de la vérité terrain, binarisation, segmentation des lignes de texte et des glyphes, la reconnaissance des glyphes et des mots, translittération et l’indexation de document. Nous avons ainsi créé le premier corpus et jeu de données de manuscrits balinais sur feuilles de palmier. Ce corpus est actuellement disponible pour les chercheurs en AID. Nous avons également développé un système de reconnaissance des glyphes et un système de translittération automatique des manuscrits balinais. Cette thèse propose un schéma complet de reconnaissance de glyphes spatialement catégorisé pour la translittération des manuscrits balinais sur feuilles de palmier. Le schéma proposé comprend six tâches : la segmentation de lignes de texte et de glyphes, un processus de classification de glyphes, la détection de la position spatiale pour la catégorisation des glyphes, une reconnaissance globale et catégorisée des glyphes, la sélection des glyphes et la translittération basée sur des règles phonologiques. La translittération automatique de l'écriture balinaise nécessite de mettre en œuvre des mécanismes de représentation des connaissances et des règles phonologiques. Nous proposons un système de translittération sans segmentation basée sur la méthode LSTM. Celui-ci a été testé sur des données réelles et synthétiques. Il comprend un schéma d'apprentissage à deux niveaux pouvant s’appliquer au niveau du mot et au niveau de la ligne de texte. / The collection of palm leaf manuscripts is an important part of Southeast Asian people’s culture and life. Following the increase in digitization projects of heritage documents around the world, the collection of palm leaf manuscripts in Southeast Asia has finally attracted the attention of researchers in document image analysis (DIA). The research work conducted for this dissertation focused on the heritage documents of the collection of palm leaf manuscripts from Indonesia, especially the palm leaf manuscripts from Bali. 
This dissertation explores DIA research for palm leaf manuscript collections. These collections offer new challenges for DIA research because they use palm leaves as the writing medium, and a language and script that have never been analyzed before. Motivated by the contextual situation and real conditions of the palm leaf manuscript collections in Bali, this research aims to bring added value to digitized palm leaf manuscripts by developing tools to analyze, transliterate and index the content of palm leaf manuscripts. These systems aim at making palm leaf manuscripts more accessible, readable and understandable to a wider audience, including scholars and students all over the world. This research developed a DIA system for document images of palm leaf manuscripts that includes several image processing tasks, beginning with digitization of the document, ground truth construction, binarization, and text line and glyph segmentation, and ending with glyph and word recognition, transliteration, and document indexing and retrieval. In this research, we created the first corpus and dataset of Balinese palm leaf manuscripts for the DIA research community. We also developed a glyph recognition system and an automatic transliteration system for the Balinese palm leaf manuscripts. This dissertation proposes a complete scheme of spatially categorized glyph recognition for the transliteration of Balinese palm leaf manuscripts. The proposed scheme consists of six tasks: text line and glyph segmentation, the glyph ordering process, detection of the spatial position for glyph categorization, global and categorized glyph recognition, option selection for glyph recognition, and transliteration with a phonological rule-based engine. An implementation of knowledge representation and phonological rules for the automatic transliteration of Balinese script on palm leaf manuscripts is proposed. The adaptation of a segmentation-free LSTM-based transliteration system with a generated synthetic dataset and training schemes at two different levels (word level and text line level) is also proposed.
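As an illustration of the phonological-rule step, the sketch below is a simplified, hypothetical example rather than the system described in the thesis: Brahmic-derived scripts such as Balinese give each consonant an inherent vowel unless a dependent vowel sign or a vowel-killer sign follows, so a rule-based transliterator can walk the recognized glyph sequence and apply those rules. The label names here are assumptions for illustration.

```python
# A minimal sketch (hypothetical labels and simplified rules, not the thesis system) of
# rule-based transliteration: each consonant carries an inherent 'a' unless a dependent
# vowel sign or the vowel-killer sign follows it.
CONSONANTS = {"ka": "k", "ga": "g", "na": "n", "sa": "s", "ta": "t"}
VOWEL_SIGNS = {"ulu": "i", "suku": "u", "taling": "e"}   # assumed label names
VOWEL_KILLER = "adeg-adeg"

def transliterate(glyph_labels: list[str]) -> str:
    out = []
    i = 0
    while i < len(glyph_labels):
        g = glyph_labels[i]
        if g in CONSONANTS:
            nxt = glyph_labels[i + 1] if i + 1 < len(glyph_labels) else None
            if nxt in VOWEL_SIGNS:              # consonant + dependent vowel sign
                out.append(CONSONANTS[g] + VOWEL_SIGNS[nxt]); i += 2
            elif nxt == VOWEL_KILLER:           # consonant with suppressed vowel
                out.append(CONSONANTS[g]); i += 2
            else:                               # inherent vowel
                out.append(CONSONANTS[g] + "a"); i += 1
        else:
            out.append(g); i += 1
    return "".join(out)

print(transliterate(["sa", "ulu", "ta", "adeg-adeg"]))   # -> "sit"
```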
