1 |
Designing Discoverable Digital Tabletop Menus for Public Settings. Seto, Amanda Mindy. January 2012.
Ease of use with digital tabletops in public settings is contingent on how well the system invites and guides interaction. The same holds for the interface design and the individual graphical user interface elements of these systems, one of which is menus. Before a menu can be used, however, it must first be discovered within the interface. Existing research on digital tabletop menu design does not address this issue of discovering or opening a menu. This thesis investigates how the interface and interaction of digital tabletops can be designed to encourage menu discoverability in the context of public settings.
A set of menu invocation designs, varying in invocation element and use of animation, is proposed. These designs are then evaluated in an observational study at a museum, capturing users' interactions in a realistic public setting. Findings from this study suggest that discernible and recognizable interface elements – buttons – supported by animation to attract and guide users make for a discoverable menu invocation design. Additionally, findings posit that when engaging with a public digital tabletop display, users transition through exploration and discovery states before becoming competent with the system. Finally, insights from this study point to a set of design recommendations for improving menu discoverability.
|
2 |
Group reaching over digital tabletops with digital arm embodiments. August 2014.
In almost all collaborative tabletop tasks, groups require coordinated access to the shared objects on the table's surface. The physical social norms of close-proximity interaction, built up over years of interacting around other physical bodies, lead people to avoid interfering with one another (e.g., not grabbing the same object simultaneously). However, some digital tabletop situations require indirect input (e.g., when using mice, or when supporting remote users). With indirect input, people are no longer physically embodied during their reaching gestures, so most systems provide digital embodiments – visual representations of each person – to give feedback both to the person who is reaching and to the other group members. Tabletop arm embodiments have been shown to support group interactions better than simple visual designs, providing the group with awareness of each other's actions. However, researchers and digital tabletop designers know little about how the design of digital arm embodiments affects the fundamental group tabletop interaction of reaching for objects. Therefore, in this thesis, we evaluate how people coordinate their interactions over digital tabletops when using different types of embodiments. Specifically, in a series of studies, we investigate how the visual design (what they look like) and interaction design (how they work) of digital arm embodiments affect a group's coordinative behaviours in an open-ended parallel tabletop task. We evaluated the visual factors of size, transparency, and realism (through pictures and videos of physical arms), as well as the interaction factors of input and augmentations (feedback of interactions), in both co-located and distributed environments. We found that the visual design had little effect on a group's ability to coordinate access to shared tabletop items, that embodiment augmentations are useful for supporting group coordination, and that there are large differences when a person is not physically co-present. Our results provide an initial exploration of the design of digital arm embodiments and offer design guidelines for future researchers and designers of the next generation of shared digital spaces.
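The abstract above characterizes arm embodiments through a few visual parameters (size, transparency, realism) and interaction augmentations. As a rough, hypothetical illustration of that design space (not the thesis's actual implementation), a minimal Python sketch might bundle those parameters together with the line segment a renderer would draw from a user's seat to their touch point:

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float]  # tabletop coordinates in pixels

@dataclass
class ArmEmbodimentStyle:
    """Hypothetical visual parameters for a digital arm embodiment."""
    width_px: float = 40.0            # "size" factor
    opacity: float = 0.6              # "transparency" factor (0 = invisible, 1 = opaque)
    realistic_texture: bool = False   # simple graphic vs. picture/video of a physical arm
    highlight_touch: bool = True      # an interaction augmentation: feedback at the touch point

def embodiment_segment(seat: Point, touch: Point) -> Tuple[Point, Point]:
    """Return the segment a renderer would draw from the user's seat edge to the touch point."""
    return seat, touch

# Example: a user seated at the bottom edge reaching toward the table centre.
style = ArmEmbodimentStyle(width_px=60, opacity=0.4)
segment = embodiment_segment(seat=(640.0, 1080.0), touch=(700.0, 400.0))
print(style, segment)
```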
|
3 |
Designing for Collaborative Turn-Taking at the Digital Tabletop / Design för kollaborativt turtagande runt det digitala arbetsbordet. Rybing, Jonas. January 2011.
Collaboration technologies are difficult to design due to the complex myriad of social, cognitive, and communicative aspects of group interaction. New interaction technologies such as multitouch shareable interfaces, including digital tabletops, have led to renewed interest in designing collaborative technologies. This thesis focuses on turn-taking protocols as a coordinating mechanism during collaborative work with digital tabletops. The goal was to develop new conceptual designs and interactive mechanisms to support face-to-face collaboration in small groups. Inspired by ethnographic studies of collaborative work, theories of distributed cognition, and related theories of language and action, a model of collaborative turn-taking was developed. Moreover, the thesis presents five design concepts and interaction components for the digital tabletop that exemplify the different properties of the model.
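The thesis develops its own turn-taking model and design concepts; purely as a hypothetical illustration of what a turn-taking protocol as a coordinating mechanism can look like in code (not Rybing's design), a minimal sketch might grant each shared tabletop object to one participant at a time and queue the rest:

```python
from collections import deque

class TurnTakenObject:
    """Toy turn-taking protocol for a shared tabletop object: one holder at a time,
    other requesters wait in a first-come, first-served queue."""

    def __init__(self, name: str):
        self.name = name
        self.holder = None          # participant currently holding the turn
        self.waiting = deque()      # participants waiting for their turn

    def request(self, participant: str) -> bool:
        """Return True if the participant gets the turn immediately, otherwise queue them."""
        if self.holder is None:
            self.holder = participant
            return True
        if participant != self.holder and participant not in self.waiting:
            self.waiting.append(participant)
        return False

    def release(self, participant: str) -> None:
        """Pass the turn to the next waiting participant, if any."""
        if self.holder == participant:
            self.holder = self.waiting.popleft() if self.waiting else None

# Example: two people reaching for the same shared document.
doc = TurnTakenObject("shared document")
print(doc.request("Alice"))  # True  - Alice gets the turn
print(doc.request("Bob"))    # False - Bob is queued
doc.release("Alice")
print(doc.holder)            # Bob
```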
|
4 |
The Effects of Digitization and Automation on Board Games for Digital Tabletops. Pape, Joseph A. 09 January 2012.
Digital tabletop computers are an ideal platform for games that combine the social advantages of traditional tabletop games, such as board games and card games, with the more streamlined and automated gameplay of video games. Implementing a board game digitally allows aspects of the game, such as routine in-game activities, rule enforcement, and game progression, to be automated. However, the effect of this automation on players' social experience and enjoyment is poorly understood. To explore this question, a mixed-method study was carried out in which 24 groups of participants played either the abstract strategy board game Checkers or the cooperative board game Pandemic using three different interfaces: the original physical game; a digital tabletop interface that provided minimal automation in an attempt to replicate play of the original game; and a digital tabletop interface that automated many in-game activities, enforced the rules, and managed the progression of the game. The study revealed that while automation has the potential to reduce the overhead of playing the game, it can lead to player frustration in several ways. Automating routine in-game activities and game progression can cause severe awareness deficits. Automating rule enforcement and management of the game state can streamline gameplay, but can lead to scenarios in which players would prefer more control over the game. The negative space around the active game area is also important to consider, both for storing digital artifacts and for accommodating physical objects above the table. Finally, the digitization and automation of the games did not reduce social interaction, making digital tabletops a promising platform for social games. / Thesis (Master, Computing) -- Queen's University, 2012-01-08.
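To make the notion of automated rule enforcement concrete, the following toy sketch (a deliberate simplification for illustration only, not code from the study) checks whether an ordinary, non-capturing Checkers move is legal:

```python
EMPTY, WHITE, BLACK = ".", "w", "b"

def is_legal_simple_move(board, player, src, dst):
    """Toy rule check for an ordinary (non-capturing, non-king) Checkers move.
    board: dict mapping (row, col) -> EMPTY/WHITE/BLACK; row 0 is White's back rank."""
    if board.get(src) != player or board.get(dst) != EMPTY:
        return False
    d_row, d_col = dst[0] - src[0], dst[1] - src[1]
    forward = 1 if player == WHITE else -1       # each side moves toward the other
    return d_row == forward and abs(d_col) == 1  # one square diagonally forward

# Example on a small fragment of the board:
board = {(2, 1): WHITE, (3, 2): EMPTY, (3, 0): BLACK}
print(is_legal_simple_move(board, WHITE, (2, 1), (3, 2)))  # True
print(is_legal_simple_move(board, WHITE, (2, 1), (3, 0)))  # False: destination occupied
```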
|
5 |
Tangible Displays: Interacting with Spatially Aware Handheld Displays above a Tabletop. Spindler, Martin. 18 February 2019.
The success of smartphones and tablets clearly shows that fusing input and output within one device can lead to more direct and natural interaction. While much of previous research has been devoted to developing techniques for such touch-sensitive displays, this dissertation goes beyond the limitations of an interactive surface and extends the interaction to the physical space above a digital table by means of handheld spatially aware displays. By incorporating their spatial position and orientation, these displays add a further major input channel to the interaction. Even though this idea is not entirely new, the potential of spatially aware displays (Tangible Displays) above a digital tabletop has rarely been exploited and requires systematic examination. To help close this gap, this dissertation makes three major contributions:
(1) The conceptual framework has been developed as a guide for the design of Tangible Display applications. It offers a systematic description and analysis of the design space under investigation and its basic interaction principles. This includes a detailed overview of the general system components and underlying types of input as well as a categorization of common interaction and usage patterns. Based on that, a classification of four common types of information spaces is provided along with a set of novel techniques for their spatial exploration in midair above a tabletop. On an empirical level, the framework is supported by two comprehensive studies that investigate key aspects of spatial interaction.
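One of the information-space classes explored in the dissertation, layered information spaces navigated by lifting a handheld display through a multi-layer stack above the table, lends itself to a small illustration. The following hypothetical sketch (the base height and layer thickness are invented values, not parameters from the dissertation's studies) maps a display's tracked height to the layer it should show:

```python
def layer_for_height(height_mm: float, num_layers: int,
                     base_mm: float = 100.0, layer_thickness_mm: float = 60.0) -> int:
    """Map a display's tracked height above the tabletop to a layer index.

    Heights below `base_mm` stay on layer 0; each additional `layer_thickness_mm`
    of lift selects the next layer, clamped to the top layer.  All values are
    illustrative assumptions only.
    """
    if height_mm <= base_mm:
        return 0
    index = int((height_mm - base_mm) // layer_thickness_mm) + 1
    return min(index, num_layers - 1)

# Example: a handheld display lifted through a 4-layer stack (e.g. map layers).
for h in (80, 130, 210, 400):
    print(h, "mm -> layer", layer_for_height(h, num_layers=4))
```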
(2) To facilitate the rapid prototyping of interactive Tangible Display applications, a unifying technological framework has been designed and implemented that integrates the necessary sensor and display hardware and provides simple access to it through an easy-to-use API. Thanks to a modular architectural design, the API not only encapsulates the complexity of the underlying input and output technologies but also allows them to be replaced seamlessly by alternative approaches.
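As a hypothetical sketch of the kind of encapsulation described here (the names below are invented for illustration and are not the toolkit's actual API), a tracking backend for spatially aware displays could sit behind a small abstract interface so that, for example, an infrared marker system and a depth-sensor system become interchangeable:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class DisplayPose:
    """Position (mm, tabletop coordinates) and orientation (degrees) of a handheld display."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

class TrackingBackend(ABC):
    """Hypothetical abstraction over spatial-input hardware, so backends can be swapped."""
    @abstractmethod
    def poll_pose(self, display_id: str) -> DisplayPose: ...

class FakeMarkerTracker(TrackingBackend):
    """Stand-in for an infrared-marker tracker; returns a fixed pose for demonstration."""
    def poll_pose(self, display_id: str) -> DisplayPose:
        return DisplayPose(x=320.0, y=240.0, z=150.0, yaw=0.0, pitch=10.0, roll=0.0)

def render_frame(tracker: TrackingBackend, display_id: str) -> None:
    """Application code depends only on the abstraction, not on the concrete sensor."""
    pose = tracker.poll_pose(display_id)
    print(f"{display_id}: rendering view for height {pose.z} mm above the table")

render_frame(FakeMarkerTracker(), "lens-1")
```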
(3) On a practical level, the conceptual and technological frameworks have been validated by four comprehensive interactive systems. These systems served as a testbed for the iterative development and formative assessment of various novel interaction techniques tailored to basic tasks in common fields of application. The insights gathered helped refine the conceptual and technological frameworks and provide a valuable starting point for the development of future systems.
Abstract
Zusammenfassung
Acknowledgements
Publications
Supervised Student Theses
Acronyms
Contents
1 Introduction
1.1 Goals of this Thesis
1.1.1 Research Challenges
1.1.2 Research Objectives
1.2 Scope of this Thesis
1.3 Methodological Approach
1.4 Contributions and Thesis Outline
PART I: Conceptual Framework
2 Research Background
2.1 General Research Context
2.1.1 Post-WIMP & Reality-Based Interaction
2.1.2 Ubiquitous Computing
2.1.3 Augmented Reality and Environments
2.2 Interactive Surfaces
2.2.1 Advanced Form Factors
2.2.2 Interaction Beyond the Surface
2.2.3 Multi-display Environments
2.3 Tangible User Interfaces
2.3.1 Basic TUI Genres
2.3.2 Contributions and Qualities of TUI
2.4 Spatially Aware Displays
2.4.1 Potential Benefits
2.4.2 Spatially Aware Displays in Multi-Display Environments
2.4.3 A TUI Genre of its Own: Tangible Displays
2.5 Summary
3 Studying Spatial Input-based Zoom & Pan on Handheld Displays
3.1 Goals and Scope of the Study
3.1.1 Factors of Influence
3.1.2 Hypotheses
3.1.3 Scope of the Study
3.2 Design Rationale
3.2.1 Mapping the Physical to the Virtual World
3.2.2 Clutching and Relative Mode
3.2.3 Zooming and Panning
3.2.4 Responsiveness of the Prototype
3.3 Method
3.3.1 Study Design
3.3.2 Participants
3.3.3 Apparatus
3.3.4 Scenario and Task Design
3.3.5 Procedure
3.4 Results
3.4.1 Statistical Methodology & Collected Performance Data
3.4.2 Analysis of Completion Times
3.4.3 Analysis of Discrete Actions
3.4.4 Utilized Motor Space (Spatial Condition Only)
3.4.5 User Feedback & Fatigue
3.5 Discussion
3.5.1 Verification of Hypotheses
3.5.2 Further Observations
3.5.3 Explaining the Effects
3.5.4 Limitations
3.6 Future Generations of Mobile Displays
3.6.1 Device-Intrinsic Spatial Tracking
3.6.2 A Built-in Tactile Clutch
3.7 Summary
4 Design Space & Interaction Framework
4.1 Design Dimensions
4.1.1 Principle Setup & System Components
4.1.2 Basic Types of Input
4.1.3 Spatial Zones
4.2 Interaction Vocabulary
4.2.1 Vocabulary Based on Spatial Input
4.2.2 Vocabulary Based on Head Input
4.2.3 Vocabulary Based on Surface Input
4.2.4 Vocabulary Inspired by the Representation Aspect
4.3 Topologies for Representing Virtual Spaces
4.3.1 3D Volumes
4.3.2 Zoom Pyramids
4.3.3 Multi-layer Stacks
4.4 Classes of Explorable Information Spaces
4.4.1 Volumetric Information Spaces
4.4.2 Zoomable Information Spaces
4.4.3 Layered Information Spaces
4.4.4 Temporal Information Spaces
4.5 Summary
5 Studying Multi-layer Interaction Above a Tabletop
5.1 Goals and Scope of the Study
5.1.1 Basic Interaction Tasks
5.1.2 Previous Evaluations
5.1.3 Scope of the Study
5.2 Method
5.2.1 Participants
5.2.2 Study Design & Tasks
5.2.3 Procedure
5.2.4 Apparatus
5.3 Results
5.3.1 Collected Performance Data & Statistical Methodology
5.3.2 Basic Analysis Concerning the Three Interaction Tasks
5.3.3 Further Analysis Regarding Interaction Zones
5.3.4 Questionnaires & User Preferences
5.4 Discussion
5.4.1 Layer Thicknesses & Accuracy
5.4.2 Physical Interaction Space & Number of Layers
5.4.3 Design Recommendations & Further Observations
5.5 Summary
PART II: Technological Framework
6 Technological Background
6.1 Basic Display Approaches
6.1.1 Projective Displays
6.1.2 Active Displays
6.2 Tracking Technologies for Handheld Displays
6.2.1 Infrared Marker Systems
6.2.2 Depth Sensor Systems
6.3 Technologies for Sensing Surface Input
6.3.1 Sensing Touch Input
6.3.2 Digital Pens and Paper
6.4 Summary
7 A Tangible Display Toolkit for Research Labs
7.1 Toolkit Architecture
7.1.1 History and General Overview
7.1.2 Lab and Mobile Setup
7.2 Toolkit Subsystems
7.2.1 Projection Subsystem
7.2.2 Spatial Input Subsystem
7.2.3 Surface Input Subsystem
7.3 Toolkit Interfaces
7.3.1 Inter-process Communication
7.3.2 Servers & UI Tools
7.3.3 Application Programming Interface
7.4 Summary
8 Towards a Tangible Display Ecosystem for Everyone
8.1 Motivation and Vision
8.1.1 Envisioned Hardware Setup
8.1.2 Proactive Cooperation Among Devices
8.2 Revision of the Previous Lab Setup
8.3 Case Study: Tracking via a Low-cost Depth Sensor
8.3.1 Implemented Tracking Algorithm
8.3.2 Evaluation
8.3.3 Areas of Improvement
8.4 Summary
PART III: Tangible Display Systems
9 Tangible Lenses for Multimedia Information Spaces
9.1 Case Studies
9.1.1 Volume Slicer
9.1.2 Picture Zoomer
9.1.3 Layer Explorer
9.1.4 Video Browser
9.2 Evaluation
9.2.1 Study Design
9.2.2 Findings
9.3 Improved Navigation Techniques
9.3.1 Navigating through Information Layers
9.3.2 Navigational Aids
9.4 Annotating the Space Above the Tabletop
9.4.1 Creation of Annotations
9.4.2 Guided Exploration of Annotations
9.5 Summary
10 Tangible Views for Information Visualization
10.1 Background and Motivation
10.1.1 Conventional Interactive Visualization
10.1.2 Towards More Direct Interaction in InfoVis
10.1.3 Narrowing the Gap
10.2 The Tangible Views Concept
10.3 Case Studies
10.3.1 Graph Visualization
10.3.2 Scatter Plot
10.3.3 Parallel Coordinates Plot
10.3.4 Matrix Visualization
10.3.5 Space-Time-Cube Visualization
10.4 Initial User Experience & Discussion
10.4.1 Observations
10.4.2 Limitations
10.5 Potential Future Directions
10.5.1 Technology Gap
10.5.2 Integration Gap
10.5.3 Guidelines Gap
10.6 Summary
11 Tangible Palettes for Graphical Applications
11.1 Background and Motivation
11.2 The Tangible Palettes Concept
11.2.1 Spatial Work Zones
11.2.2 Previous Techniques Revisited
11.2.3 Allocating GUI Palettes to Tangible Displays
11.3 Interactive Prototype
11.3.1 Basic Functionality: Drawing and Document Navigation
11.3.2 Inter-Display Transfer of Palettes
11.3.3 Temporary Fade-out of Tool Palettes
11.3.4 Quick Access to Tool Palettes via Spatial Work Zones
11.3.5 Handling of Stacked Graphics Layers
11.4 Initial User Experience & Discussion
11.4.1 General Impression and Limitations
11.4.2 Document Navigation with Handheld Displays
11.4.3 Tool Organization with Spatial Work Zones
11.5 Summary
12 Tangible Windows for 3D Virtual Reality
12.1 Background and Motivation
12.1.1 Basic 3D Display Approaches
12.1.2 Seminal 3D User Interfaces
12.2 The Tangible Windows Concept
12.2.1 Windows into Virtuality
12.2.2 Head-coupled Perspectives
12.2.3 Tangible Windows Above a Digital Workbench
12.3 Interaction Techniques
12.3.1 Global Viewpoint Control on the Tabletop
12.3.2 Scene Exploration
12.3.3 Object Selection
12.3.4 Object Manipulation
12.3.5 Object Inspection
12.3.6 Global Scene Navigation on the Tabletop
12.4 Application Scenarios & Case Studies
12.4.1 Virtual Sandbox
12.4.2 Interior Designer
12.4.3 Medical Visualization
12.5 Initial User Experience & Discussion
12.5.1 Limitations
12.5.2 Precision and Constraints
12.5.3 More Permanent Representations
12.5.4 Head-coupled Perspectives and Head Input
12.6 Summary
13 Conclusion
13.1 Summary of Contributions
13.1.1 Major Contributions
13.1.2 Minor Contributions
13.2 Critical Reflection
13.2.1 General Limitations due to the Dissertation Scope
13.2.2 Limitations of the Techniques
13.2.3 Limitations of the Studies
13.3 Directions for Future Work
13.3.1 Adaptation to other Settings and Domains
13.3.2 Further Development of the Techniques
13.3.3 Current Developments
13.4 Closing Remarks
A Appendix
A.1 Materials for Chapter 3 (Zoom & Pan Study)
A.1.1 List of the 128 Zoom and Pan Tasks
A.1.2 Usability Questionnaires
A.2 Questionnaires for Chapter 5 (Multi-layer Stack Study)
A.3 Materials for Section 9.2 (Evaluation of Tangible Lenses)
A.3.1 Scratchpad for Study Leader
A.3.2 Usability Questionnaires
Bibliography
List of Figures
List of Tables / The success of smartphones and tablets has clearly shown that merging input and output in the same device can lead to more direct interaction that is perceived as more natural. While a large part of previous research has been devoted to developing touch techniques for such touch-sensitive displays, this dissertation breaks away from the constraints of on-surface interaction and extends it into the physical space above a digital table by means of handheld, spatially aware displays. Incorporating the spatial position and orientation of such "Tangible Displays" provides a promising additional input channel. Although this idea is not entirely new, the many possibilities that arise from their use above a digital table have so far seen little use. In an effort to narrow this gap, this dissertation makes three major contributions:
(1) The conceptual framework was developed as a guide for the design of Tangible Display applications. It offers a systematic description and analysis of the design space under investigation and its basic interaction principles. In addition to a detailed overview of all system components and types of input, this includes above all a categorization of typical interaction and usage patterns. Based on this, a novel unifying approach to spatial interaction with several common classes of information spaces above a tabletop is presented. On an empirical level, the conceptual framework is supported by two comprehensive studies in which key aspects of spatial interaction with handheld displays were investigated.
(2) To enable the development of interactive applications, a technological framework was designed and implemented that integrates the sensor and display hardware and provides simple access to it through an API. In combination with the modular software architecture, the API not only encapsulates the complexity of the input and output technologies used but also allows them to be replaced seamlessly by alternative approaches.
(3) The suitability of the conceptual and technological frameworks is demonstrated through four comprehensive interactive systems. These systems served as a test environment for the iterative development and formative assessment of a series of novel interaction techniques addressing common basic tasks in various fields of application. The insights gained helped refine the conceptual and technological frameworks, which form a valuable starting point for the development of future interactive Tangible Display systems.
|