51

LEVIA'22: Leipzig Symposium on Visualization in Applications 2022

Gillmann, Christina, Schmidt, Johanna, Jänicke, Stefan, Wiegreffe, Daniel 07 July 2022 (has links)
No description available.
52

Visual analytics of arsenic in various foods

Johnson, Matilda Olubunmi 06 1900 (has links)
Arsenic is a naturally occurring toxic metal, and its presence in food composites could be a potential risk to the health of both humans and animals. Arsenic-contaminated groundwater is often used for human and animal consumption and for irrigation of soils, which can lead to arsenic entering the human food chain. Its health effects include multiple organ damage, cancers, heart disease, diabetes mellitus, hypertension, lung disease and peripheral vascular disease. Research investigations, epidemiologic surveys and total diet studies (market baskets) provide datasets, information and knowledge on arsenic content in foods. The determination of the concentration of arsenic in rice varieties is an active area of research. With the increasing capability to measure the concentration of arsenic in foods, there are volumes of varied and continuously generated datasets on arsenic in food groups. Visual analytics, which integrates techniques from information visualization and computational data analysis via interactive visual interfaces, presents an approach to enable data on arsenic concentrations to be visually represented. The goal of this doctoral research in Environmental Science is to address the need for visual analytical decision support tools on arsenic content in various foods, with special emphasis on rice. The hypothesis of this doctoral thesis research is that software-enabled visual representation and user interaction facilitated by visual interfaces will help discover hidden relationships between arsenic content and food categories. The specific objectives investigated were: (1) provide insightful visual analytic views of compiled data on arsenic in food categories; (2) categorize table-ready foods by arsenic content; (3) compare arsenic content in rice product categories; and (4) identify informative sentences on arsenic concentrations in rice. The overall research method is secondary data analysis using visual analytics techniques implemented through Tableau Software. Several datasets were utilized to conduct visual analytical representations of data on arsenic concentrations in foods. These consisted of (i) arsenic concentrations in 459 crop samples; (ii) arsenic concentrations in 328 table-ready foods from multi-year total diet studies; (iii) estimates of daily inorganic arsenic intake for 49 food groups from multi-country total diet studies; (iv) arsenic content in rice product categories for 193 samples of rice and rice products; and (v) 758 sentences extracted from PubMed abstracts on arsenic in rice. Several key insights emerged from this doctoral research. The concentration of inorganic arsenic in instant rice was lower than in other rice types. The concentration of dimethylarsinic acid (DMA) in wild rice, an aquatic grass, was notably lower than in rice varieties (e.g., 0.0099 ppm versus 0.182 ppm for a long-grain white rice). The categorization of 328 table-ready foods into 12 categories enhances communication about arsenic concentrations. Outlier concentrations of arsenic in rice were observed in views constructed to integrate data from four total diet studies. The 193 rice samples were grouped into two groups using a cut-off level of 3 mcg of inorganic arsenic per serving. The visual analytics views constructed allow users to specify their desired cut-off levels. A total of 86 sentences from 53 PubMed abstracts were identified as informative for arsenic concentrations.
The sentences enabled literature curation for arsenic concentration, together with additional supporting information such as the location of the research. One informative sentence provided a global "normal" range of 0.08 to 0.20 mg/kg for arsenic in rice. A visual analytics resource developed was a dashboard that facilitates interaction with text and a connection to the knowledge base of the PubMed literature database. The research reported provides a foundation for additional investigations on visual analytics of data on arsenic concentrations in foods. Considering the massive and complex data associated with contaminants in foods, the development of visual analytics tools is needed to facilitate diverse human cognitive tasks. Visual analytics tools can provide the integrated automated analysis, interaction with data, and data visualization critically needed to enhance decision making. Stakeholders that would benefit include consumers, food and health safety personnel, farmers, and food producers. The arsenic content of baby foods warrants attention because early-life exposures could have lifetime adverse health consequences. The action of microorganisms in the soil is associated with the availability of arsenic species for uptake by plants. Genomic data on microbial communities presents a wealth of data for identifying mitigation strategies for arsenic uptake by plants. Arsenic metabolism pathways encoded in microbial genomes warrant further research. Visual analytics tasks could facilitate the discovery of biological processes for mitigating arsenic uptake from soil. The increasing availability of central resources on data from total diet studies and research investigations presents a need for personnel with diverse levels of skill in data management and analysis. Training workshops and courses on the foundations and applications of visual analytics can contribute to global workforce development in food safety and environmental health. Research investigations could determine the learning gains accomplished through hardware and software for visual analytics. Finally, there is a need to develop and evaluate informatics tools with visual analytics capabilities in the domain of contaminants in foods. / Environmental Sciences / P. Phil. (Environmental Science)
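The grouping logic described above is easy to prototype outside Tableau. Below is a minimal pandas sketch (not the thesis's Tableau workbook) that labels rice product samples against a user-adjustable cut-off for inorganic arsenic per serving; the sample records are illustrative, not data from the study.

```python
# Minimal sketch: grouping rice product samples by a user-adjustable cut-off
# for inorganic arsenic per serving, mirroring the 3 mcg/serving threshold
# described in the abstract. The records below are hypothetical.
import pandas as pd

samples = pd.DataFrame({
    "product": ["instant rice", "long grain white", "wild rice", "rice cereal"],
    "inorganic_as_mcg_per_serving": [1.8, 4.2, 0.7, 3.4],
})

def group_by_cutoff(df: pd.DataFrame, cutoff_mcg: float = 3.0) -> pd.DataFrame:
    """Label each sample as at/above or below the chosen cut-off."""
    out = df.copy()
    out["group"] = out["inorganic_as_mcg_per_serving"].apply(
        lambda v: f">= {cutoff_mcg} mcg/serving" if v >= cutoff_mcg
        else f"< {cutoff_mcg} mcg/serving")
    return out

print(group_by_cutoff(samples, cutoff_mcg=3.0))
```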
53

Close and Distant Reading Visualizations for the Comparative Analysis of Digital Humanities Data

Jänicke, Stefan 19 July 2016 (has links) (PDF)
Traditionally, humanities scholars carrying out research on one or multiple literary works are interested in the analysis of related texts or text passages. But the digital age has opened possibilities for scholars to enhance their traditional workflows. Enabled by digitization projects, humanities scholars can nowadays reach a large number of digitized texts through web portals such as Google Books or Internet Archive. Digital editions exist also for ancient texts; notable examples are PHI Latin Texts and the Perseus Digital Library. This shift from reading a single book "on paper" to the possibility of browsing many digital texts is one of the origins and principal pillars of the digital humanities domain, which helps develop solutions for handling vast amounts of cultural heritage data – text being the main data type. In contrast to traditional methods, the digital humanities allow scholars to pose new research questions on cultural heritage datasets. Some of these questions can be answered with existing algorithms and tools provided by the computer science domain, but for other humanities questions scholars need to formulate new methods in collaboration with computer scientists. Developed in the late 1980s, the digital humanities primarily focused on designing standards to represent cultural heritage data, such as the Text Encoding Initiative (TEI) for texts, and on aggregating, digitizing and delivering data. In recent years, visualization techniques have gained more and more importance when it comes to analyzing data. For example, Saito introduced her 2010 digital humanities conference paper with: "In recent years, people have tended to be overwhelmed by a vast amount of information in various contexts. Therefore, arguments about 'Information Visualization' as a method to make information easy to comprehend are more than understandable." A major impulse for this trend was given by Franco Moretti. In 2005, he published the book "Graphs, Maps, Trees", in which he proposes so-called distant reading approaches for textual data that steer the traditional way of approaching literature in a completely new direction. Instead of reading texts in the traditional way – so-called close reading – he invites us to count, graph and map them. In other words, to visualize them. This dissertation presents novel close and distant reading visualization techniques for hitherto unsolved problems. Appropriate visualization techniques have been applied to support basic tasks, e.g., visualizing geospatial metadata to analyze the geographical distribution of cultural heritage data items or using tag clouds to illustrate textual statistics of a historical corpus. In contrast, this dissertation focuses on developing information visualization and visual analytics methods that support investigating research questions requiring the comparative analysis of various digital humanities datasets. We first take a look at the state of the art of existing close and distant reading visualizations that have been developed to support humanities scholars working with literary texts. We thereby provide a taxonomy of visualization methods applied to show various aspects of the underlying digital humanities data. We point out open challenges, and we present our visualizations designed to support humanities scholars in comparatively analyzing historical datasets.
In short, we present (1) GeoTemCo for the comparative visualization of geospatial-temporal data, (2) the two tag cloud designs TagPies and TagSpheres that comparatively visualize faceted textual summaries, (3) TextReuseGrid and TextReuseBrowser to explore re-used text passages among the texts of a corpus, (4) TRAViz for the visualization of textual variation between multiple text editions, and (5) the visual analytics system MusikerProfiling to detect musicians similar to a given musician of interest. Finally, we summarize our own collaboration experiences and those of other visualization researchers to emphasize the ingredients required for a successful project in the digital humanities, and we take a look at future challenges in that research field.
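As a rough illustration of the idea behind text reuse exploration tools such as TextReuseGrid and TextReuseBrowser (though not their actual implementation), re-used passages can be surfaced by comparing the overlapping word n-grams, or "shingles", of two texts:

```python
# A generic sketch of n-gram "shingling" for spotting re-used passages
# between two texts; a simplified stand-in for the thesis's text reuse tools.
def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of overlapping word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two texts' n-gram sets."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical usage with two near-parallel lines:
a = "in the beginning god created the heaven and the earth"
b = "in the beginning god made the heavens and the earth"
print(f"3-gram reuse score: {reuse_score(a, b, n=3):.2f}")
```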
54

Detailed understanding of the behaviour of metro, RER and tramway network lines for the realization of operating studies

Dimanche, Vincent 11 June 2018 (has links)
Dense railway networks face significant saturation, and the balance between the theoretical offer and growing demand imposes strong operability constraints. An imbalance generates conflict points, such as bottlenecks, whose effect is to delay the following trains. Since the human factor, among many others, influences operational performance, taking it into account more accurately should improve the understanding and modeling of railway lines, so as to increase capacity without sacrificing passenger comfort. To fulfill this objective, our work relies on an adapted visualization of operating data and on its automated mining. These two solutions have been adapted and applied to the railway sector, particularly to the lines of the rail networks operated by RATP. The "Visual Analytics" process, implemented in our work to meet these needs, encompasses the steps required to add value to the data, from their preparation to expert analysis, by way of graphic representation and the use of data mining algorithms. Among these algorithms, CorEx and Sieve allowed us, through unsupervised learning based on a multivariate mutual information measure, to analyze operating data and extract characteristics of human behavior. Finally, we also propose an intuitive visualization of a large amount of data that supports its integration and facilitates an overall diagnosis of railway line behavior.
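CorEx and Sieve build on multivariate mutual information. As a simpler, hedged illustration of the kind of dependency measure involved (not the thesis's models), the sketch below computes the mutual information between two discretized, hypothetical operating variables with scikit-learn:

```python
# Illustrative only: pairwise mutual information between two discretized
# operating variables. The variables are synthetic stand-ins, not RATP data.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
dwell_time = rng.integers(0, 5, size=1000)             # binned station dwell times
delay = dwell_time + rng.integers(0, 3, size=1000)     # delays partly driven by dwell

print(f"I(dwell; delay) = {mutual_info_score(dwell_time, delay):.3f} nats")
```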
55

Integration of computational methods and visual analytics for large-scale high-dimensional data

Choo, Jae gul 20 September 2013 (has links)
With the increasing amount of collected data, large-scale high-dimensional data analysis is becoming essential in many areas. These data can be analyzed either by fully computational methods or by leveraging human capabilities via interactive visualization. However, each approach has its drawbacks. While a fully computational method can deal with large amounts of data, it lacks the deeper understanding of the data that is critical to the analysis. With interactive visualization, the user can gain deeper insight into the data but struggles when large amounts of data need to be analyzed. Even with an apparent need for these two approaches to be integrated, little progress has been made. To tackle this problem, computational methods have to be re-designed both theoretically and algorithmically, and the visual analytics system has to expose these computational methods to users so that they can choose the proper algorithms and settings. To achieve an appropriate integration between computational methods and visual analytics, this thesis focuses on essential computational methods for visualization, such as dimension reduction and clustering, and presents fundamental developments of computational methods as well as visual analytics systems involving the newly developed methods. The contributions of the thesis include (1) a two-stage dimension reduction framework that better handles the significant information loss in visualization of high-dimensional data, (2) efficient parametric updating of computational methods for fast and smooth user interactions, and (3) an iteration-wise integration framework for computational methods in real-time visual analytics. The latter parts of the thesis focus on the development of visual analytics systems involving the presented computational methods: (1) Testbed, an interactive visual testbed system for various dimension reduction and clustering methods; (2) iVisClassifier, an interactive visual classification system using supervised dimension reduction; and (3) VisIRR, an interactive visual information retrieval and recommender system for large-scale document data.
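As a hedged illustration of the general two-stage idea (not the exact framework contributed by the thesis), a common pattern is a fast linear reduction to a moderate dimension followed by a nonlinear method for the final 2-D view, which keeps interaction responsive on large data:

```python
# Sketch of a generic two-stage reduction: a cheap linear pass (PCA) to a
# moderate dimension, then a nonlinear method (t-SNE) for the 2-D layout.
# The data matrix here is a random placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.random.default_rng(1).normal(size=(2000, 200))  # placeholder data

X_mid = PCA(n_components=50).fit_transform(X)                  # stage 1
X_2d = TSNE(n_components=2, init="pca").fit_transform(X_mid)   # stage 2
print(X_2d.shape)  # (2000, 2)
```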
58

Visualization as a brake on returns in e-commerce: A qualitative study of users' needs for the design of Visual Analytics in decision support systems

Björner, Olivia January 2022 (has links)
Visual Analytics is a powerful tool for decision makers to gather new insights from data. Since Visual Analytics can be hard to get into at first, previous studies have sought to bridge the gap between industry experts and these tools. However, few studies have examined users' needs regarding how Visual Analytics can generate these valuable insights. To examine these needs, returns in e-commerce were selected as the application area, since returns are harmful both to companies and to society. Companies collect a great deal of data as goods are returned, and this data can be visualized. To identify e-tailers' needs for visualization tools for their return data, a qualitative empirical study was conducted. A prototype was developed to support the semi-structured interviews visually. Six e-tailers were interviewed and tested the prototype, in order to analyze their needs for visualization tools. The results show that some graphic elements performed better than others, and that return data needs to be presented in comparison with sales data to be relevant. The study's findings suggest that predefined graphs helped the e-tailers get into the Visual Analytics mindset and may serve as a way to introduce more users to Visual Analytics.
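The finding that return data only becomes meaningful next to sales data is simple to make concrete. A minimal sketch with hypothetical figures:

```python
# Return counts alone mislead; normalizing by units sold gives the comparison
# the interviewed e-tailers asked for. All figures below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "category": ["shoes", "dresses", "accessories"],
    "units_sold": [1200, 800, 1500],
    "units_returned": [420, 360, 90],
})
df["return_rate"] = df["units_returned"] / df["units_sold"]
print(df.sort_values("return_rate", ascending=False))
```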
59

Visual Analytics for the Exploratory Analysis and Labeling of Cultural Data

Meinecke, Christofer 20 October 2023 (has links)
Cultural data can come in various forms and modalities, such as text traditions, artworks, music, crafted objects, or even intangible heritage such as biographies of people, performing arts, and cultural customs and rites. The assignment of metadata to such cultural heritage objects is an important task that people working in galleries, libraries, archives, and museums (GLAM) perform on a daily basis. These rich metadata collections are used to categorize, structure, and study collections, but can also be used to apply computational methods. Such computational methods are the focus of Computational and Digital Humanities projects and research. For a long time, the digital humanities community focused on textual corpora, including text mining and other natural language processing techniques, although some humanities disciplines, such as art history and archaeology, have a long history of using visualizations. In recent years, the digital humanities community has started to shift its focus to include other modalities, such as audio-visual data. In turn, methods in machine learning and computer vision have been proposed for the specificities of such corpora. Over the last decade, the visualization community has engaged in several collaborations with the digital humanities, often with a focus on exploratory or comparative analysis of the data at hand. This includes methods and systems that support classical Close Reading of the material, Distant Reading methods that give an overview of larger collections, and methods in between, such as Meso Reading. Furthermore, a wider application of machine learning methods can be observed on cultural heritage collections, but they are rarely applied together with visualizations that would allow for further perspectives on the collections in a visual analytics or human-in-the-loop setting. Visual analytics can help in the decision-making process by guiding domain experts through the collection of interest. However, state-of-the-art supervised machine learning methods are often not applicable to the collection of interest due to missing ground truth. One form of ground truth is class labels, e.g., of entities depicted in an image collection, assigned to the individual images. Labeling all objects in a collection is an arduous task when performed manually, because cultural heritage collections contain a wide variety of objects with plenty of details. A further problem with collections curated in different institutions is that a common standard is not always followed, so the vocabularies used can drift apart from one another, making it difficult to combine the data from these institutions for large-scale analysis. This thesis presents a series of projects that combine machine learning methods with interactive visualizations for the exploratory analysis and labeling of cultural data. First, we define cultural data with regard to heritage and contemporary data; then we survey the state of the art of existing visualization, computer vision, and visual analytics methods and projects focusing on cultural data collections. After this, we present the problems addressed in this thesis and their solutions, starting with a series of visualizations to explore different facets of rap lyrics and rap artists with a focus on text reuse. Next, we engage with a more complex case of text reuse, the collation of medieval vernacular text editions.
For this, a human-in-the-loop process is presented that applies word embeddings and interactive visualizations to perform textual alignments on under-resourced languages, supported by labeling of the relations between lines and between words. We then switch the focus from textual data to another modality of cultural data by presenting a Virtual Museum that combines interactive visualizations and computer vision in order to explore a collection of artworks. With the lessons learned from the previous projects, we engage in the labeling and analysis of medieval illuminated manuscripts, combining some of the machine learning methods and visualizations used for textual data with computer vision methods. Finally, we reflect on the interdisciplinary projects and the lessons learned, before discussing existing challenges when working with cultural heritage data from the computer science perspective, to outline potential research directions for machine learning and visual analytics of cultural heritage data.
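As a hedged sketch of the embedding-based alignment idea (not the system built in the thesis), each line of one edition can be matched to its most similar line in another by comparing embedding vectors; the `embed` function here is a stand-in for any real word or sentence embedding model:

```python
# Greedy embedding-based line alignment between two text editions.
# `embed` is a toy hashing stand-in for a real embedding model.
import numpy as np

def embed(line: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hash words into a fixed-size normalized vector."""
    v = np.zeros(dim)
    for w in line.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def align(edition_a: list[str], edition_b: list[str]) -> list[tuple[int, int]]:
    """Pair each line in A with its most similar line in B (cosine similarity)."""
    B = np.stack([embed(l) for l in edition_b])
    return [(i, int(np.argmax(B @ embed(la)))) for i, la in enumerate(edition_a)]

# Hypothetical usage on variant spellings across two editions:
a = ["the kyng rod forth with his men", "and com to the castel gate"]
b = ["and cam unto the castel yate", "the king rood forth with al his men"]
print(align(a, b))  # pairs lines across spellings, e.g. [(0, 1), (1, 0)]
```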
60

Systematising glyph design for visualization

Maguire, Eamonn James January 2014 (has links)
The digitalisation of information now affects most fields of human activity. From the social sciences to biology to physics, the volume, velocity, and variety of data exhibit exponential growth trends. With such rates of expansion, efforts to understand and make sense of datasets of this scale, however driven and directed, progress only at an incremental pace. The challenges are significant. For instance, the ability to display an ever-growing amount of data is physically bound by the dimensions of the average-sized display. A synergistic interplay between statistical analysis and visualisation approaches outlines a path towards significant advances in the field of data exploration. We can turn to statistics to provide principled guidance on prioritising which information to display. Using statistical results, and combining knowledge from the cognitive sciences, visual techniques can be used to highlight salient data attributes. The purpose of this thesis is to explore the link between computer science, statistics, visualisation, and the cognitive sciences, to define and develop more systematic approaches towards the design of glyphs. Glyphs represent the variables of multivariate data records by mapping those variables to one or more visual channels (e.g., colour, shape, and texture). They offer a unique, compact solution to the presentation of a large amount of multivariate information. However, composing a meaningful, interpretable, and learnable glyph can pose a number of problems. The first of these problems lies in the subjectivity involved in the process of mapping data to visual channels, and in the organisation of those visual channels to form the overall glyph. Our first contribution outlines a computational technique to help systematise many of these otherwise subjective elements of the glyph design process. For visual information compression, common patterns (motifs) in time series or graph data, for example, may be replaced with more compact visual representations. Glyph-based techniques can provide such representations, helping users find common patterns more quickly while bringing attention to anomalous areas of the data. However, replacing any data with a glyph will not by itself make tasks such as visual search easier. A key problem is the selection of semantically meaningful motifs with the potential to compress large amounts of information. A second contribution of this thesis is a computational process for the systematic design of such glyph libraries and their subsequent glyphs. A further problem in the glyph design process is evaluation. Evaluation is typically a time-consuming, highly subjective process. Moreover, domain experts are not always plentiful; therefore, obtaining statistically significant evaluation results is often difficult. A final contribution of this work is to investigate whether there are areas of evaluation that can be performed computationally.
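As a minimal illustration of the data-to-channel mapping that the thesis seeks to systematise (the hand-chosen mapping below is exactly the kind of subjective step in question), each record's variables can drive the position, size, and colour channels of a simple glyph; the records are hypothetical:

```python
# Hand-written variable-to-channel mapping for simple glyphs:
# two variables drive position, one drives size, one drives colour.
# This subjective mapping step is what systematic glyph design formalises.
import matplotlib.pyplot as plt

records = [  # (x, y, magnitude, risk) per hypothetical record
    (1.0, 2.0, 30, 0.2), (2.5, 1.0, 80, 0.9), (3.0, 3.5, 55, 0.5),
]
xs, ys, mag, risk = zip(*records)
plt.scatter(xs, ys, s=mag, c=risk, cmap="viridis", marker="o")  # size/colour channels
plt.colorbar(label="risk (colour channel)")
plt.show()
```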
