1 |
A Symmetric Interaction Model for Bimanual Input. Latulipe, Celine. January 2006.
People use both hands cooperatively in many everyday activities. With the exception of the keyboard, the modern computer interface fails to take advantage of this basic human ability. The keyboard, however, is limited in that it does not afford continuous spatial input. The computer mouse is perfectly suited to the point-and-click tasks that are the dominant method of manipulation within graphical user interfaces, but standard computers have a single mouse, and a single mouse does not afford spatial coordination between the two hands. Although the advent of the Universal Serial Bus has made it easy to plug in many peripheral devices, including a second mouse, modern operating systems assume a single spatial input stream: if a second mouse is plugged into a Macintosh, Windows, or UNIX computer, the two mice control the same cursor.

Previous work on two-handed, or bimanual, interaction techniques has often followed the asymmetric interaction guidelines set out by Yves Guiard's Kinematic Chain Model. In asymmetric interaction, the hands are assigned different tasks based on hand dominance. I show that there is an interesting class of desktop user interface tasks that can be classified as symmetric: a symmetric task is one in which the two hands contribute equally to the completion of a unified task. I show that dual-mouse symmetric interaction techniques outperform both traditional single-mouse techniques and dual-mouse asymmetric techniques for these symmetric tasks, and that users prefer the symmetric techniques for these naturally symmetric tasks.
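One way to picture a symmetric task is two cursors jointly manipulating a single object, with each hand contributing equally. The sketch below is purely illustrative (it is not code from the thesis): it derives a translation, rotation, and scale from two cursor positions before and after a movement, with the midpoint driving translation and the inter-cursor vector driving rotation and scale.

```python
import math

def symmetric_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive (translation, rotation, scale) from two cursors moving
    from old to new positions. Both cursors contribute equally: the
    midpoint drives translation; the inter-cursor vector drives
    rotation and scale. Illustrative sketch only."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    m_old, m_new = mid(p1_old, p2_old), mid(p1_new, p2_new)
    v_old, v_new = vec(p1_old, p2_old), vec(p1_new, p2_new)

    translation = (m_new[0] - m_old[0], m_new[1] - m_old[1])
    rotation = math.atan2(v_new[1], v_new[0]) - math.atan2(v_old[1], v_old[0])
    scale = math.hypot(*v_new) / math.hypot(*v_old)
    return translation, rotation, scale
```

With a single mouse, the same manipulation would require separate modes or sequential operations; two mice let both degrees of freedom be controlled at once.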
|
3 |
Immersive Virtual Reality and 3D Interaction for Volume Data Analysis. Laha, Bireswar. 4 September 2014.
This dissertation provides empirical evidence for the effects of the fidelity of VR system components on volume data analysis, and presents novel 3D interaction techniques for analyzing volume datasets. It provides domain-independent results based on an abstract task taxonomy for visual analysis of scientific datasets. Scientific data generated through various modalities, e.g., computed tomography (CT) and magnetic resonance imaging (MRI), are in 3D spatial or volumetric format, and scientists from domains such as geophysics and medical biology use visualizations to analyze these data. This dissertation seeks to improve the effectiveness of such scientific visualizations.
Traditional volume data analysis is performed on desktop computers with mouse-and-keyboard interfaces. Previous research and anecdotal experience indicate that volume data analysis improves in systems with very high display and interaction fidelity (e.g., a CAVE) compared with desktop environments. However, prior results are not generalizable beyond specific hardware platforms or specific scientific domains, and do not examine the effectiveness of 3D interaction techniques.
We ran three controlled experiments to study the effects of a few components of VR system fidelity (field of regard, stereo and head tracking) on volume data analysis. We used volume data from paleontology, medical biology and biomechanics. Our results indicate that different components of system fidelity have different effects on the analysis of volume visualizations. One of our experiments provides evidence for validating the concept of Mixed Reality (MR) simulation.
Our approach of controlled experimentation with MR simulation provides a methodology to generalize the effects of immersive virtual reality (VR) beyond individual systems. To generalize our (and other researchers') findings across disparate domains, we developed and evaluated a taxonomy of visual analysis tasks with volume visualizations. We report our empirical results tied to this taxonomy.
We developed the Volume Cracker (VC), a novel free-hand, gesture-based 3D interaction (3DI) technique for improving the effectiveness of volume visualizations. We describe the design decisions behind the Volume Cracker (with a list of usability criteria) and report the results of an evaluation study. Based on those results, we further demonstrate a bare-hand version of the VC built with the Leap Motion controller. Our evaluations of the VC show the benefits of 3DI over standard 2DI techniques.
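The geometric core of "cracking" a volume open can be sketched as splitting voxels along a plane between the two hands. This is only a toy illustration under assumed geometry, not the thesis's implementation, which tracks continuous hand gestures:

```python
def crack_volume(voxels, hand_l, hand_r):
    """Partition voxel coordinates by the plane that perpendicularly
    bisects the segment between the two hand positions. Illustrative
    sketch only; the real Volume Cracker is gesture-driven."""
    # Plane through the midpoint of the hands, normal along the
    # hand-to-hand vector.
    mid = tuple((a + b) / 2 for a, b in zip(hand_l, hand_r))
    normal = tuple(b - a for a, b in zip(hand_l, hand_r))
    near, far = [], []
    for v in voxels:
        # Signed side of the plane: dot(normal, v - mid).
        side = sum(n * (c - m) for n, c, m in zip(normal, v, mid))
        (far if side > 0 else near).append(v)
    return near, far
```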
This body of work provides the building blocks for a three-way many-to-many mapping between VR system fidelity components, interaction techniques, and visual analysis tasks with volume visualizations. Such a comprehensive mapping can inform the design of next-generation VR systems and improve the effectiveness of scientific data analysis.
|
4 |
A body-centric framework for generating and evaluating novel interaction techniques. Wagner, Julie. 6 December 2012.
This thesis introduces BodyScape, a body-centric framework that accounts for how users coordinate their movements within and across their own limbs in order to interact with a wide range of devices, across multiple surfaces. It introduces a graphical notation that describes interaction techniques in terms of (1) motor assemblies responsible for performing a control task (input motor assemblies) or for bringing the body into a position to visually perceive output (output motor assemblies), and (2) the movement coordination of motor assemblies, relative to the body or fixed in the world, with respect to the interactive environment. This thesis applies BodyScape to (1) investigate the role of support in a set of novel bimanual interaction techniques for hand-held devices, (2) analyze competing effects across multiple input movements, and (3) compare twelve pan-and-zoom techniques on a wall-sized display to determine the roles of guidance and interference on performance. Using BodyScape to characterize interaction clarifies the role of device support on the user's balance and, in turn, on comfort and performance. It allows designers to identify situations in which multiple body movements interfere with each other, with a corresponding decrease in performance. Finally, it highlights the trade-offs among different combinations of techniques, enabling the analysis and generation of a variety of multi-surface interaction techniques. I argue that including a body-centric perspective when defining interaction techniques is essential for addressing the combinatorial explosion of interactive devices in multi-surface environments.
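The notation's two ingredients, motor assemblies and their coordination frames, could be encoded as simple records. The sketch below is a hypothetical encoding (the field names and the `interferes` heuristic are my illustration, not the thesis's notation): it flags techniques in which two assemblies recruit the same limb, the interference situation the framework predicts may hurt performance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotorAssembly:
    limbs: tuple   # body parts acting together, e.g. ("right hand",)
    role: str      # "input" (control task) or "output" (perceive output)
    frame: str     # coordination: "body-relative" or "world-fixed"

@dataclass(frozen=True)
class Technique:
    name: str
    assemblies: tuple

    def interferes(self):
        """True if two assemblies recruit the same limb, e.g. a thumb
        interacting on the device the same hand must also hold."""
        used = [limb for a in self.assemblies for limb in a.limbs]
        return len(used) != len(set(used))
```

For example, one-handed interaction on a hand-held device recruits the same hand for both support and input, whereas a bimanual technique with separate limbs does not:

```python
one_handed = Technique("thumb input on held device", (
    MotorAssembly(("right hand",), "input", "body-relative"),   # holds device
    MotorAssembly(("right thumb", "right hand"), "input", "body-relative"),
))
two_limbed = Technique("bimanual, separate limbs", (
    MotorAssembly(("left hand",), "input", "world-fixed"),
    MotorAssembly(("head",), "output", "body-relative"),
))
```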
|