51

In Situ Summarization and Visual Exploration of Large-scale Simulation Data Sets

Dutta, Soumya 17 September 2018 (has links)
No description available.
52

Design of a system for visualizing trends and behaviors based on customer data / Design av ett system för visualisering av trender och beteenden baserat på kunddata.

Andersson, Oskar January 2021 (has links)
Companies produce large amounts of data every day, and analyzing and visualizing that data can yield many insights. The company Solution Xperts wanted a system that could import and visualize Big Data. In this work, such a system was designed and evaluated. The report shows that visualizing Big Data can be difficult, but that once a system is in place it can easily be adapted to data from different companies and provide considerable value to companies and organizations.
53

文獻關聯之視覺化瀏覽平台建構研究 / Building a Visualization Platform for Browsing Academic Paper Relationships

趙逢毅, Chao, August Unknown Date (has links)
Every academic study must build its theoretical foundation on earlier research, so finding and reviewing the literature is a crucial step in the research process. The synergy of the digital age and the Internet has freed researchers from having to physically track down reference documents, but it also leaves them buried in large volumes of digital literature. Google Scholar, a search tool built on web-analysis techniques, ranks the retrieved papers by a precomputed citation-based weight (PaperRank), helping users make sense of digital literature according to how often each paper is cited. A serially ranked list, however, still does not reveal the relationships within the retrieved set of papers, including the keywords they use, their authors, and their references. In bibliometric research, scholars use co-citation, co-authorship, and author co-citation analyses, together with concepts extended from social network analysis such as social distance, degree, and cliques, to arrange complex bibliographic data in a meaningful context. Although this work is difficult to mechanize and time-consuming (Börner, Chen & Boyack, 2003), it can intuitively reveal how a particular field has developed. The best way to handle such multidimensional, complex relationships among documents is to rely on human visual information processing, especially when the data volume is large and decisions must be made quickly. Combining these analysis methods with visual presentation would therefore let researchers analyze and explore large collections of literature. This study proposes a new platform architecture for literature exploration that turns traditional text search into visual data exploration. Users can work with three layers of data, an ontology and keyword layer, a citation network layer, and a personnel network layer, and interact with the displayed data to better understand how items are related. A prototype visualization platform was implemented using thesis and dissertation records from the electronic theses and dissertations system provided by the National Central Library, offering researchers a tool for exploring new research directions within a specific knowledge domain and a guide to its key literature for those not yet familiar with the field.
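As a rough illustration of the co-citation analysis mentioned in the abstract above, the following Python sketch counts how often pairs of papers are cited together; the citation data is made up for the example, and this is not the platform's actual implementation.

    from collections import Counter
    from itertools import combinations

    # Toy citation data: each citing paper maps to the papers it references.
    references = {
        "paperA": ["p1", "p2", "p3"],
        "paperB": ["p2", "p3"],
        "paperC": ["p1", "p3"],
    }

    # Two papers are co-cited whenever the same citing paper lists both of them.
    co_citation = Counter()
    for citing, cited in references.items():
        for a, b in combinations(sorted(set(cited)), 2):
            co_citation[(a, b)] += 1

    # The weighted pairs form the edges of a co-citation network to visualize.
    for (a, b), weight in co_citation.most_common():
        print(a, b, weight)

Edge weights like these, together with co-author links, are the kind of relationships a citation-network layer can expose that a ranked result list cannot.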
54

Mineração visual de imagens aliada a consultas pelos k-vizinhos diversos mais próximos: flexibilizando e maximizando o entendimento de consultas por conteúdo de imagens / Visual image mining combined with queries for the k diverse nearest neighbors: adding flexibility and maximizing the understanding of content-based image queries

Dias, Rafael Loosli 23 August 2013 (has links)
Content-Based Image Retrieval (CBIR) systems use visual information such as color, shape, and texture to represent images as feature vectors. This numerical representation is then used at query time, with a metric evaluating the distance between vectors. In general, there is an inconsistency between how humans judge the similarity of images and the results computed by CBIR systems, known as the semantic gap. One way to mitigate this problem is to add a diversity factor to query execution, allowing the user to specify a degree of dissimilarity between the resulting images and thereby change the query result. Adding diversity to a query, however, has a high computational cost, and the way the space of candidate result sets is narrowed down is difficult for the user to understand. This master's thesis applies Visual Data Mining techniques to queries in CBIR systems, improving the interpretability of the similarity and diversity measures as well as the relevance of the result according to the user's judgment and prior knowledge. The user takes an active role in content-based image retrieval, guiding the process, bringing the result closer to what human cognition expects, and consequently reducing the semantic gap. In addition, visualization and interaction techniques support a better understanding of the diversity and similarity factors involved in the query.
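The "k diverse nearest neighbors" idea can be illustrated with a generic greedy trade-off between closeness to the query and distance to the results already chosen (an MMR-style heuristic); the sketch below uses random vectors as stand-ins for image features and is not the specific algorithm evaluated in the thesis.

    import numpy as np

    def k_diverse_nearest_neighbors(query, features, k=5, trade_off=0.5, candidates=50):
        # Greedy selection: balance closeness to the query against distance
        # to the results picked so far. trade_off=0 ignores diversity entirely.
        dist_to_query = np.linalg.norm(features - query, axis=1)
        pool = list(np.argsort(dist_to_query)[:candidates])  # restrict to nearest candidates
        selected = [pool.pop(0)]                              # the nearest neighbor is always kept
        while pool and len(selected) < k:
            best, best_score = None, -np.inf
            for idx in pool:
                min_dist_to_selected = min(
                    np.linalg.norm(features[idx] - features[s]) for s in selected
                )
                # Higher score: close to the query and far from what is already chosen.
                score = -(1 - trade_off) * dist_to_query[idx] + trade_off * min_dist_to_selected
                if score > best_score:
                    best, best_score = idx, score
            selected.append(best)
            pool.remove(best)
        return selected

    rng = np.random.default_rng(0)
    features = rng.random((200, 8))   # stand-in for CBIR feature vectors
    query = rng.random(8)
    print(k_diverse_nearest_neighbors(query, features, k=5, trade_off=0.5))

With trade_off set to 0 the query degenerates to an ordinary k-nearest-neighbor search, which makes it easy to see what the diversity term changes in the result set.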
55

Visual Data Analysis in Device Ecologies

Horak, Tom 07 September 2021 (has links)
With the continued development towards a digitalized and data-driven world, the importance of visual data analysis is increasing as well. Visual data analysis enables people to interactively explore and reason about data through the combined use of multiple visualizations. This is relevant for a wide range of application domains, including personal, professional, and public ones. In parallel, modern devices with very heterogeneous characteristics have become ubiquitous. These devices, such as smartphones, tablets, or digital whiteboards, can enable more flexible workflows during our daily work, for example, while on-the-go, in meetings, or at home. One way to enable flexible workflows is the combination of multiple devices in so-called device ecologies. This thesis investigates how such a combined usage of devices can facilitate the visual data analysis of multivariate data sets. For that, new approaches for both visualization and interaction are presented here, making full use of the dynamic nature of device ecologies. So far, the literature on these aspects is limited and lacks a broader consideration of data analysis in device ecologies. The investigations in this doctoral thesis fall into three main parts, each addressing one research question: (i) how visualizations can be adapted for heterogeneous devices, (ii) how device pairings can be used to support data exploration workflows, and (iii) how visual data analysis can be supported in fully dynamic device ecologies. For the first part, an extended analytical investigation of the notion of responsive visualization is contributed. This investigation is then complemented by the introduction of a novel matrix-based visualization approach that incorporates such responsive visualizations as local focus regions. For the other two parts, multiple conceptual frameworks are presented that are innovative combinations of visualization and interaction techniques. In the second part, such work is conducted for two selected display pairings: the extension of smartwatches with display-equipped watchstraps, and the converse combination of a smartwatch and a large display. For these device ensembles, it is investigated how analysis workflows can be facilitated. Then, in the third part, it is explored how interactive mechanisms can be used for flexibly combining and coordinating devices by utilizing spatial arrangements, as well as how the view distribution process can be supported through automated optimization processes. This thesis’s extensive conceptual work is accompanied by the design of prototypical systems, qualitative evaluations, and reviews of existing literature.
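To give a concrete, if simplified, impression of what responsive visualization involves, the following Python sketch picks a presentation of a multivariate data set based on device characteristics; the device classes and thresholds are illustrative assumptions, not the adaptation rules developed in the thesis.

    from dataclasses import dataclass

    @dataclass
    class Device:
        width_px: int
        height_px: int
        touch: bool

    def choose_view(device: Device, n_columns: int) -> dict:
        # Pick a presentation that fits the device; thresholds are illustrative only.
        if device.width_px < 500:        # smartwatch or small phone
            return {"chart": "sparkline_list", "columns_shown": min(n_columns, 2),
                    "labels": "abbreviated"}
        if device.width_px < 1400:       # tablet or laptop
            return {"chart": "scatterplot_matrix", "columns_shown": min(n_columns, 5),
                    "labels": "full", "handles": "large" if device.touch else "small"}
        return {"chart": "overview_matrix", "columns_shown": n_columns,   # wall-sized display
                "labels": "full", "handles": "large"}

    print(choose_view(Device(396, 484, touch=True), n_columns=12))
    print(choose_view(Device(3840, 2160, touch=True), n_columns=12))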
56

Exploring Mobile Device Interactions for Information Visualization

Langner, Ricardo 14 January 2025 (has links)
Information visualization (InfoVis) makes data accessible in a graphical form, enables visual and interactive data exploration, and is becoming increasingly important in our data-driven world: InfoVis empowers people from various domains to truly benefit from abstract and vast amounts of data. Although data visualizations often target desktop environments, nowadays they are also used on omnipresent mobile devices such as smartphones and tablets. However, most mobile devices are personal digital companions, typically visualizing moderately complex data (e.g., fitness, health, finance, weather, or public transport data) on a single, very compact display, making it inherently hard to show the full range of the data or several perspectives on it at once. The research in this thesis engages with these aspects by striving for novel mobile device interactions that enable data analysis with more than a single device, more than a single visualization view, and more than a single user. At the core of this dissertation are four realized projects connected by the following research objectives: (i) facilitating data visualization beyond the casual exploration of personal data, (ii) integrating mobile devices in multi-device settings for InfoVis, and (iii) exploiting the mobility and spatiality of mobile devices for InfoVis. To address the first objective, my research concentrates mainly on interactions with multivariate data represented in multiple coordinated views (MCV). To address the second objective, I consider two different device settings: one part investigates scenarios where one or more people sit at a regular table and analyze data in MCV distributed across several mobile devices (mobile devices on a table); the other part focuses on scenarios in which a wall-sized display shows large-scale MCV and mobile devices enable interaction with the visualizations from varying positions and distances (mobile devices in 3D space). These settings also make it possible to examine the different purposes and roles of mobile devices during data exploration. To address the third objective, I examine different spatial device interactions, including placing and organizing multiple mobile devices in meaningful spatial arrangements as well as pointing interaction that combines touch and spatial device input. Overall, my research applies an exploratory approach and develops a range of techniques and studies that contribute to the understanding of how mobile devices can be used not only for typical personal visualization but also in more professional settings as part of novel, beyond-the-desktop InfoVis environments.
Table of contents: Publications ... ix List of Figures ... xix List of Tables ... xx 1. Introduction ... 1 1.1. Research Objectives and Questions ... 5 1.2. Methodological Approach ... 8 1.3. Scope of the Thesis ... 10 1.4. Thesis Outline & Contributions ... 13 2. Background & Related Work ... 15 2.1. Data Visualization on a Mobile Device ... 16 2.1.1. Revisiting Differences of Data Visualization for Desktops and Mobiles ... 16 2.1.2. Visualization on Handheld Devices: PDAs to Smartphones ... 18 2.1.3. Visualization on Tablet Computers ... 20 2.1.4. Visualization on Smartwatches and Fitness Trackers ... 21 2.1.5. Mobile Data Visualization and Adjacent Topics ... 22 2.2. Cross-Device Data Visualization ... 24 2.2.1. General Components of Cross-Device Interaction ... 24 2.2.2. Cross-Device Settings with Large Displays ... 26 2.2.3. Cross-Device Settings with Several Mobile Devices ... 27 2.2.4. Augmented Displays ... 29 2.2.5.
Collaborative Data Analysis ... 30 2.2.6. Technological Aspects ... 31 2.3. Interaction for Visualization ... 32 2.3.1. Touch Interaction for InfoVis ... 33 2.3.2. Spatial Interaction for InfoVis ... 36 2.4. Summary ... 38 3. VisTiles: Combination & Spatial Arrangement of Mobile Devices ... 41 3.1. Introduction ... 43 3.2. Dynamic Layout and Coordination ... 45 3.2.1. Design Space: Input and Output ... 46 3.2.2. Tiles: View Types and Distribution ... 46 3.2.3. Workspaces: Coordination of Visualizations ... 47 3.2.4. User-defined View Layout ... 49 3.3. Smart Adaptations and Combinations ... 49 3.3.1. Expanded Input Design Space ... 50 3.3.2. Use of Side-by-Side Arrangements ... 50 3.3.3. Use of Continuous Device Movements ... 53 3.3.4. Managing Adaptations and Combinations ... 54 3.4. Realizing a Working Prototype of VisTiles ... 55 3.4.1. Phase I: Proof of Concept ... 55 3.4.2. Phase II: Preliminary User Study ... 56 3.4.3. Phase III: Framework Revision and Final Prototype ... 59 3.5. Discussion ... 63 3.5.1. Limitations of the Technical Realization ... 63 3.5.2. Understanding the Use of Space and User Behavior ... 64 3.5.3. Divide and Conquer: Single-Display or Multi-Display? ... 64 3.5.4. Space to Think: Physical Tiles or Virtual Tiles? ... 65 3.6. Chapter Summary & Conclusion ... 66 4. Marvis: Mobile Devices and Augmented Reality ... 69 4.1. Introduction ... 71 4.2. Related Work: Augmented Reality for Information Visualization ... 74 4.3. Design Process & Design Rationale ... 75 4.3.1. Overview of the Development Process ... 75 4.3.2. Expert Interviews in the Design Phase ... 76 4.3.3. Design Choices & Rationales ... 78 4.4. Visualization and Interaction Concepts ... 79 4.4.1. Single Mobile Device with Augmented Reality ... 79 4.4.2. Two and More Mobile Devices with Augmented Reality ... 83 4.5. Prototype Realization ... 86 4.5.1. Technical Implementation and Setup ... 87 4.5.2. Implemented Example Use Cases ... 88 4.6. Discussion ... 94 4.6.1. Expert Reviews ... 94 4.6.2. Lessons Learned ... 95 4.7. Chapter Summary & Conclusion ... 98 5. FlowTransfer: Content Sharing Between Phones and a Large Display ... 101 5.1. Introduction ... 103 5.2. Related Work ... 104 5.2.1. Interaction with Large Displays ... 104 5.2.2. Interactive Cross-Device Data Transfer ... 105 5.2.3. Distal Pointing ... 106 5.3. Development Process and Design Goals ... 106 5.4. FlowTransfer’s Pointing Cursor and Transfer Techniques ... 108 5.4.1. Distance-dependent Pointing Cursor ... 109 5.4.2. Description of Individual Transfer Techniques ... 110 5.5. Technical Implementation and Setup ... 115 5.6. User Study ... 115 5.6.1. Study Design and Methodology ... 115 5.6.2. General Results ... 117 5.6.3. Results for Individual Techniques ... 117 5.7. Design Space for Content Sharing Techniques ... 119 5.8. Discussion ... 120 5.8.1. Design Space Parameters and Consequences ... 121 5.8.2. Interaction Design ... 121 5.8.3. Content Sharing-inspired Techniques for Information Visual- ization ... 122 5.9. Chapter Summary & Conclusion ... 123 6. Divico: Touch and Pointing Interaction for Multiple Coordinated Views ... 125 6.1. Introduction ... 127 6.2. Bringing Large-Scale MCV to Wall-Sized Displays ... 129 6.3. Interaction Design for Large-Scale MCV ... 130 6.3.1. Interaction Style and Vocabulary ... 131 6.3.2. Interaction with Visual Elements of Views ... 132 6.3.3. Control of Analysis Tools ... 134 6.3.4. Interaction with Visualization Views ... 134 6.4. Data Set and Prototype Implementation ... 135 6.5. 
User Study: Goals and Methodology ... 136 6.5.1. Participants ... 137 6.5.2. Apparatus ... 137 6.5.3. Procedure and Tasks ... 138 6.5.4. Collected and Derived Data ... 139 6.6. Results: User Behavior and Usage Patterns ... 140 6.6.1. Data Analysis Method ... 140 6.6.2. Analysis of User Behavior and Movement ... 140 6.6.3. Analysis of Collaboration Aspects ... 142 6.6.4. Analysis of Application Usage ... 145 6.7. Discussion ... 146 6.7.1. Setup ... 146 6.7.2. Movement ... 147 6.7.3. Distance and Interaction Modality ... 147 6.7.4. Device Usage ... 148 6.7.5. MCV Aspects ... 149 6.8. Chapter Summary & Conclusion ... 149 7. Discussion and Conclusion ... 151 7.1. Summary of the Chapters ... 151 7.2. Contributions ... 152 7.2.1. Beyond Casual Exploration of Personal Data ... 153 7.2.2. Multi-Device Settings ... 154 7.2.3. Spatial Interaction ... 156 7.3. Facets of Mobile Device Interaction for InfoVis ... 157 7.3.1. Mobile Devices ... 158 7.3.2. Interaction ... 160 7.3.3. Data Visualization ... 161 7.3.4. Situation ... 162 7.4. Limitations, Open Questions, and Future Work ... 162 7.4.1. Technical Realization ... 163 7.4.2. Extent of Visual Data Analysis ... 164 7.4.3. Natural Movement in the Spectrum of Explicit and Implicit User Input ... 165 7.4.4. Novel Setups & Future Devices ... 166 7.5. Closing Remarks ... 167 Bibliography ... 169 A. Appendix for ViTiles ... 219 A.1. Examples of Early Sketches and Notes ... 219 A.2. Color Scheme for Visualizations ... 220 A.3. Notes Sheet with Interview Procedure ... 221 A.4. Demographic Questionaire ... 222 A.5. Examplary MCV Images for Explanation ... 223 B. Appendix for Marvis ... 225 B.1. Participants’ Expertise ... 225 B.2. Notes Sheet with Interview Procedure ... 226 B.3. Sketches of Ideas by the Participants ... 227 B.4. Grouped Comments from Expert Interviews (Design Phase) ... 228 C. Appendix for FlowTransfer ... 229 C.1. State Diagram for the LayoutTransfer Technique ... 229 C.2. User Study: Demographic Questionnaire ... 230 C.3. User Study: Techniques Questionnaire ... 231 D. Appendix for Divico ... 235 D.1. User Study: Demographic Information ... 235 D.2. User Study: Expertise Information ... 237 D.3. User Study: Training Questionnaire ... 239 D.4. User Study: Final Questionnaire ... 241 D.5. Study Tasks ... 245 D.5.1. Themed Exploration Phase ... 245 D.5.2. Open Exploration Phase ... 246 D.6. Grouping and Categorization of Protocol Data ... 246 D.7. Usage of Open-Source Tool GIAnT for Video Coding Analysis ... 248 D.8. Movement of Participants (Themed Exploration Phase) ... 250 D.9. Movement of Participants (Open Exploration Phase) ... 254 E. List of Co-supervised Student Theses ... 259
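The multiple-coordinated-views idea at the heart of these settings can be sketched with a minimal publish/subscribe coordinator: a selection brushed on one device is broadcast so that every other registered view highlights the same items. This is a toy, single-process Python illustration with invented view names, not the prototypes (VisTiles, Marvis, FlowTransfer, Divico) built in the thesis.

    class CoordinationHub:
        # In-process stand-in for a cross-device message channel: every view
        # registered with the hub receives selection changes made on any device.
        def __init__(self):
            self.views = []

        def register(self, view):
            self.views.append(view)

        def broadcast(self, sender, selected_ids):
            for view in self.views:
                if view is not sender:
                    view.on_selection(selected_ids)

    class View:
        def __init__(self, name, hub):
            self.name, self.hub = name, hub
            hub.register(self)

        def brush(self, selected_ids):
            print(f"{self.name}: user brushed {sorted(selected_ids)}")
            self.hub.broadcast(self, selected_ids)

        def on_selection(self, selected_ids):
            # A real client would re-render and highlight the matching marks here.
            print(f"{self.name}: highlighting {sorted(selected_ids)}")

    hub = CoordinationHub()
    tablet_scatter = View("tablet scatterplot", hub)
    phone_bars = View("phone bar chart", hub)
    wall_map = View("wall display map", hub)
    tablet_scatter.brush({3, 7, 12})

In an actual device ecology the hub would run over a network channel (for example a WebSocket server), and the spatial arrangement of devices could additionally decide which views get coordinated at all.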
57

Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA / with Applications for QuantNet 2.0 and GitHub

Borke, Lukas 08 September 2017 (has links)
With its growing popularity, GitHub, the largest host of source code and the largest collaboration platform in the world, has evolved into a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers, promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic-driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
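A minimal sketch of such an LSA-based "smart clusterization" pipeline, assuming scikit-learn and a handful of made-up repository descriptions in place of text fetched via the GitHub REST API; the real work calibrates far more metadata configurations, TM models, and quality indices than this.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import Normalizer

    # Stand-ins for repository READMEs / descriptions fetched via the GitHub API.
    docs = [
        "interactive d3 charts for web based data visualization",
        "three.js 3d scene rendering and animation demos",
        "R package for econometric time series analysis",
        "quantile regression and risk analytics in R",
        "javascript force directed graph layout with d3",
        "statistical learning course material with R examples",
    ]

    tfidf = TfidfVectorizer(stop_words="english")            # VSM term weighting
    lsa = make_pipeline(TruncatedSVD(n_components=2, random_state=0), Normalizer(copy=False))
    X = lsa.fit_transform(tfidf.fit_transform(docs))          # LSA: SVD of the TF-IDF matrix

    # Pick the number of clusters with a simple quality index (silhouette).
    best_k = max(range(2, 5), key=lambda k: silhouette_score(
        X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)))
    labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
    print("chosen k:", best_k, "labels:", labels.tolist())

The silhouette index here stands in for the clustering quality indices used to choose a configuration; the low-dimensional coordinates and cluster labels are what a D3/Three.js front end would then render as an interactive 2D or 3D scatter of repositories.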
