About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Increasing Selection Accuracy and Speed through Progressive Refinement

Bacim de Araujo e Silva, Felipe 21 July 2015 (has links)
Although many selection techniques have been proposed and developed over the years, selection by pointing is perhaps the most popular approach. In 3D interfaces, the laser-pointer metaphor is commonly used, since users only have to point at their target from a distance. However, selecting objects that have a small visible area or that sit in highly cluttered environments is hard with pointing techniques. With both indirect and direct pointing techniques in 3D interfaces, smaller targets require higher levels of pointing precision from the user. In addition, issues such as target occlusion and hand and tracker jitter negatively affect user performance. Requiring the user to perform selection in a single precise step may therefore lead users to spend more time selecting targets so that they can be more accurate (an effect known as the speed-accuracy trade-off). We describe an approach to address this issue, called Progressive Refinement: instead of performing a single precise selection, users gradually reduce the set of selectable objects, lowering the precision the task requires. This approach, however, has an inherent trade-off when compared to immediate selection techniques. Progressive refinement requires a gradual process of selection, often using multiple steps, although each step can be fast, accurate, and nearly effortless. Immediate techniques, on the other hand, involve a single-step selection that requires effort and may be slower and more error-prone. The goal of this work was therefore to explore this trade-off. The research includes the design and evaluation of progressive refinement techniques for 3D interfaces, using both pointing- and gesture-based interfaces for single-object selection and volume selection. Our technique designs, together with other existing selection techniques that can be classified as progressive refinement, were used to create a design space. 
We designed eight progressive refinement techniques and compared them to the most commonly used techniques (for a baseline comparison) and to other state-of-the-art selection techniques in a total of four empirical studies. Based on the results of the studies, we developed a set of design guidelines that will help other researchers design and use progressive refinement techniques. / Ph. D.
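The core loop of progressive refinement lends itself to a compact sketch: each coarse, low-precision pointing step keeps only the objects inside its selection cone, so no single step demands precise aiming. The Python below is an illustrative sketch, not one of the eight techniques from the dissertation; the cone test and the per-ray cone angles are assumptions.

```python
import math

def within_cone(origin, direction, target, half_angle_deg):
    """Return True if `target` lies inside the pointing cone defined by
    `origin`, a unit `direction` vector, and `half_angle_deg`."""
    dx = [t - o for t, o in zip(target, origin)]
    norm = math.sqrt(sum(c * c for c in dx))
    if norm == 0.0:
        return True  # target coincides with the ray origin
    cos_angle = sum(a * b for a, b in zip(direction, dx)) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= half_angle_deg

def refine(candidates, rays):
    """Progressively shrink the candidate set: each coarse ray
    (origin, direction, half_angle_deg) keeps only the objects inside
    its cone, stopping once a single object remains."""
    remaining = list(candidates)
    for origin, direction, half_angle in rays:
        kept = [c for c in remaining
                if within_cone(origin, direction, c, half_angle)]
        if kept:                 # ignore rays that would empty the set
            remaining = kept
        if len(remaining) == 1:
            break
    return remaining
```

A wide first cone removes distant clutter; a narrower second cone finishes the selection, so each individual step tolerates jitter.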
2

How Does Interaction Fidelity Influence User Experience in VR Locomotion?

Nabiyouni, Mahdi 06 February 2017 (has links)
It is often assumed that more realism is always desirable. In particular, many techniques for locomotion in Virtual Reality (VR) attempt to approximate real-world walking. However, it is not yet fully understood how the design of more realistic locomotion techniques influences effectiveness and user experience. In previous VR studies, the effects of interaction fidelity have been examined at a coarse grain, treating interaction fidelity as a single construct. We argue that interaction fidelity consists of various independent components, and each component can have a different effect on the effectiveness of the interface. Moreover, the designer's intent can influence the effectiveness of an interface and needs to be considered in the design. Semi-natural locomotion interfaces can be difficult to use at first, due to a lack of interaction fidelity, and effective training would help users understand the forces they feel and better control their movements. Another way to improve locomotion interaction is to develop a more effective interface or to improve existing techniques. A detailed taxonomy of walking-based locomotion techniques would be beneficial for understanding, analyzing, and designing walking techniques for VR. We conducted four user studies and performed a meta-analysis of the literature to gain a more in-depth understanding of the effects of interaction fidelity on effectiveness. We found that for measures dependent on proprioceptive sensory information, such as orientation estimation, cognitive load, and sense of presence, effectiveness increases with increasing levels of interaction fidelity. Other measures, which depend more on ease of learning and ease of use, such as completion time, movement accuracy, and subjective evaluation, follow a u-shaped "uncanny valley": for these measures, moderate-fidelity interfaces are often outperformed by both low- and high-fidelity interfaces. 
In our third user study, we further investigated the effects of two components of interaction fidelity, biomechanics and transfer function, as well as designers' intent. We learned that the biomechanics of walking are more sensitive to changes and that the effects of these changes were mostly negative for hyper-natural techniques. Changes in the transfer function component were easier for users to learn and adapt to. Suitable transfer functions were able to improve some locomotion features, but at the cost of accuracy. To improve effectiveness in moderate-fidelity locomotion interfaces, we employed an effective training method. We learned that providing a visual cue during the acclimation phase can help users better understand their walking in moderate-fidelity interfaces and improve their effectiveness. To develop a design space and classification of locomotion techniques, we designed a taxonomy of walking-based locomotion techniques. With this taxonomy, we extract and discuss various characteristics of locomotion interaction. Researchers can create novel locomotion techniques by making choices from the components of this taxonomy, analyze and improve existing techniques, or run detailed evaluation experiments using the presented organization. As an example of using this taxonomy, we developed a novel locomotion interface by choosing a new combination of characteristics from the taxonomy. / Ph. D.
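A locomotion transfer function of the kind discussed above maps tracked physical motion to virtual travel. The sketch below is a generic gain-plus-dead-zone mapping, not one of the functions evaluated in the studies; the gain, dead-zone, and speed-cap values are illustrative assumptions.

```python
def locomotion_transfer(physical_speed_mps, gain=2.0,
                        dead_zone=0.05, max_speed=4.0):
    """Map tracked physical walking speed (m/s) to virtual travel speed.
    The dead zone suppresses tracker jitter; gain == 1 approximates
    real walking, while gain > 1 yields a hyper-natural technique.
    All parameter values here are illustrative, not from the studies."""
    if physical_speed_mps < dead_zone:
        return 0.0
    virtual = gain * (physical_speed_mps - dead_zone)
    return min(virtual, max_speed)
```

Because the transfer function is a separate component from biomechanics, a designer can raise the gain without altering how the user physically steps, which is exactly the kind of component-level change the abstract reports as easier to learn.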
3

Bridging The Gap Between Fun And Fitness: Instructional Techniques And Real-world Applications For Full-body Dance Games

Charbonneau, Emiko 01 January 2013 (has links)
Full-body controlled games offer the opportunity for not only entertainment, but education and exercise as well. Refined gameplay mechanics and content can boost intrinsic motivation and keep people playing over a long period of time, which is desirable for individuals who struggle with maintaining a regular exercise program. Within this gameplay genre, dance rhythm games have proven to be popular with game console owners. Yet, while other types of games utilize story mechanics that keep players engaged for dozens of hours, motion-controlled dance games are just beginning to incorporate these elements. In addition, this control scheme is still young, only becoming commercially available in the last few years. Instructional displays and clear real-time feedback remain difficult challenges. This thesis investigates the potential for full-body dance games to be used as tools for entertainment, education, and fitness. We built several game prototypes to investigate visual, aural, and tactile methods for instruction and feedback. We also evaluated the fitness potential of the game Dance Central 2 both by itself and with extra game content which unlocked based on performance. Significant contributions include a framework for running a longitudinal video game study, results indicating high engagement with some fitness potential, and informed discussion of how dance games could make exertion a more enjoyable experience.
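Real-time feedback in a dance rhythm game ultimately reduces to grading each move against its target beat. The sketch below shows a generic timing-window judge common in the genre; the window sizes and rating labels are invented for illustration and are not the scoring scheme of Dance Central 2 or our prototypes.

```python
def judge(hit_time, beat_time,
          windows=((0.05, "Flawless"), (0.10, "Great"), (0.20, "OK"))):
    """Grade a move against its target beat: the smaller the timing
    error (in seconds), the better the rating. Windows must be sorted
    from tightest to loosest; anything outside the loosest is a miss."""
    error = abs(hit_time - beat_time)
    for window, rating in windows:
        if error <= window:
            return rating
    return "Miss"
```

Tightening or loosening the windows is one lever for balancing challenge against the sustained motivation that long-term exercise play depends on.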
4

Realnav: Exploring Natural User Interfaces For Locomotion In Video Games

Williamson, Brian 01 January 2009 (has links)
We present an exploration of realistic locomotion interfaces in video games using spatially convenient input hardware. In particular, we use Nintendo Wii Remotes to create natural mappings between user actions and their representation in a video game. Targeting American football video games, we used the role of the quarterback as an exemplar, since the game player needs to maneuver effectively in a small area, run down the field, and perform evasive gestures such as spinning, jumping, or the "juke". In our study, we developed three locomotion techniques. The first used a single Wii Remote, placed anywhere on the user's body, using only its acceleration data. The second used only the Wii Remote's infrared sensor and required the remote to be placed on the user's head. The third combined a Wii Remote's acceleration and infrared data using a Kalman filter. The Wii Motion Plus was also integrated to bring the user's orientation into the video game. To evaluate the techniques, we compared each of them against a baseline consisting of a cost-effective six-degrees-of-freedom (6DOF) optical tracker and two Wii Remotes placed on the user's feet. Finally, a user study was performed to determine whether players preferred any of these techniques. The results showed that the second and third techniques matched the location accuracy of the 6DOF tracker baseline, but the first was too inaccurate for video game players. Furthermore, the range of the Wii Remote's infrared sensor and Motion Plus exceeded that of the baseline's optical tracker. The user study showed that video game players preferred the third method over the second, but were split on the use of the Motion Plus when the tasks did not require it.
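The third technique's sensor fusion can be illustrated with a textbook position-velocity Kalman filter: the accelerometer drives the prediction and the infrared position measurement corrects it. This 1D sketch is a generic formulation, not the filter actually implemented for RealNav; the process and measurement noise parameters `q` and `r` are assumed values.

```python
import numpy as np

def fuse_step(x, P, accel, z_ir, dt=0.01, q=0.5, r=0.02):
    """One predict/update cycle of a 1D position-velocity Kalman filter.
    `x` is the state [position, velocity], `P` its covariance.
    `accel` drives the prediction (accelerometer input); `z_ir` is the
    position measurement (IR sensor), or None when no IR dot is visible."""
    # Predict: constant-velocity model with acceleration as control input.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Update: correct with the IR position measurement when available.
    if z_ir is not None:
        H = np.array([[1.0, 0.0]])
        y = z_ir - (H @ x)[0]             # innovation
        S = (H @ P @ H.T)[0, 0] + r       # innovation variance
        K = (P @ H.T)[:, 0] / S           # Kalman gain
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P
```

Passing `z_ir=None` when the sensor bar leaves the Remote's field of view lets the filter coast on acceleration alone, which is the practical reason to fuse the two sources rather than use either one by itself.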
5

Exploring and Evaluating Task Sequences for System Control Interfaces in Immersive Virtual Environments

McMahan, Ryan Patrick 17 June 2007 (has links)
System control, the issuing of commands, is a critical but largely unexplored task in 3D user interfaces (3DUIs) for immersive virtual environments (IVEs). System control techniques are normally encompassed by complex interfaces that define how these interaction techniques fit together, which we call system control interfaces (SCIs). Creating a testbed to evaluate SCIs would benefit researchers and would lead to guidelines for choosing an SCI for particular application scenarios. Unfortunately, a major obstacle to creating such a testbed is the lack of a standard task sequence, the order of operations in a system control task. In this research, we identify various task sequences, such as the Action-Object and Object-Action task sequences, and evaluate the effects that these sequences have on usability, in hopes of establishing a standard task sequence. Two studies were used to estimate the cognitive effort induced by task sequences and, hence, the effects that these sequences have on user performance. We found that sequences similar to the Object-Action task sequence incur less cognitive processing time than sequences similar to the Action-Object task sequence. A longitudinal study was then used to analyze user preferences for task sequences as novices became experienced users of an interior design application. We found that novices and experienced users alike prefer Object-Action-style sequences over Action-Object-style sequences. / Master of Science
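One practical consequence of the Object-Action ordering is that once an object is selected, the interface can filter the command menu down to the actions that are valid for it, a plausible source of the reduced cognitive cost described above. A toy sketch (the action table and object types are invented for illustration):

```python
# Hypothetical action table: which object types each command applies to.
ACTIONS = {
    "paint": {"wall", "chair"},
    "open": {"door"},
    "move": {"wall", "chair", "door"},
}

def valid_actions(selected_object):
    """Object-Action ordering: with the object already chosen, present
    only the commands that can legally apply to it, so the user never
    holds a pending command in working memory."""
    return sorted(cmd for cmd, targets in ACTIONS.items()
                  if selected_object in targets)
```

Under the Action-Object ordering the menu cannot be filtered this way, since no object is known when the command is chosen.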
6

3D User Interfaces, from Mobile Devices to Immersive Virtual Environments

Hachet, Martin 03 December 2010 (has links) (PDF)
Improving the interaction between a user and a 3D environment is a key research challenge for the successful development of interactive 3D technologies in many areas of society, such as education. In this document, I present 3D user interfaces that we have developed and that contribute to this general quest. The first chapter focuses on 3D interaction for mobile devices. In particular, I present techniques dedicated to interaction based on keys and on gestures performed on the touchscreens of mobile devices. I then present two multi-degree-of-freedom prototypes based on the use of video streams. In the second chapter, I focus on 3D interaction with touchscreens in general (tabletops, interactive displays). I present Navidget, an example of an interaction technique dedicated to virtual camera control from 2D gestures, and I discuss the challenges of 3D interaction on multi-touch screens. Finally, the third chapter of this document is dedicated to immersive virtual environments, with a special emphasis on musical interfaces. I present the new directions we have explored to improve the interaction between musicians, the audience, sound, and interactive 3D environments. I conclude by discussing the future of 3D user interfaces.
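Navidget-style camera control turns a 2D gesture into a full camera placement: the circled point of interest becomes the look-at target, and the camera is placed on a sphere around it. The sketch below is a minimal geometric illustration, not Navidget's actual implementation; the yaw/pitch parameterization of the viewing direction is an assumption.

```python
import math

def camera_from_gesture(target, distance, yaw_deg, pitch_deg):
    """Place the camera on a sphere of radius `distance` around the
    point of interest selected by a 2D gesture, looking back at it.
    Returns (eye_position, unit_forward_vector)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # Offset from the target along the chosen viewing direction.
    offset = (distance * math.cos(pitch) * math.sin(yaw),
              distance * math.sin(pitch),
              distance * math.cos(pitch) * math.cos(yaw))
    eye = tuple(t + o for t, o in zip(target, offset))
    # Forward vector points from the camera back at the target.
    forward = tuple((t - e) / distance for t, e in zip(target, eye))
    return eye, forward
```

Separating "what to look at" (one quick circling gesture) from "where to look from" (two angles and a radius) is what lets a 2D input fully determine a 3D camera pose.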
7

Augmented virtuality: transforming real human activity into virtual environments

Pouke, M. (Matti) 11 August 2015 (has links)
The topic of this work is the transformation of real-world human activity into virtual environments: identifying various aspects of visible human activity with sensor networks and studying the different ways the identified activity can be visualized in a virtual environment. The transformation of human activities into virtual environments is a rather new research area. While there is existing research on sensing and visualizing human activity in virtual environments, that research usually focuses on a specific type of human activity, such as basic actions or locomotion. However, different types of sensors can provide very different human activity data and lend themselves to very different use cases. This work is among the first to study the transformation of human activities on a larger scale, comparing various types of transformations from multiple theoretical viewpoints. The work utilizes constructs built for use cases that require the transformation of human activity for various purposes. Each construct is a mixed reality application that utilizes a different type of source data and visualizes human activity in a different way. The constructs are evaluated from practical as well as theoretical viewpoints. The results imply that different types of activity transformations have significantly different characteristics. The most distinct theoretical finding is that there is a relationship between the level of detail of the transformed activity, the specificity of the sensors involved, and the extent of world knowledge required to transform the activity. The results also provide novel insights into using human activity transformations for various practical purposes. Transformations are evaluated as control devices for virtual environments, as well as in the context of visualization and simulation tools in elderly home care and urban studies. 
8

3D Navigation with Six Degrees-of-Freedom using a Multi-Touch Display

Ortega, Francisco Raul 07 November 2014 (has links)
With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch, 3D navigation, and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, where each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The design of the experiment also included an equation that determined each subject's level of video game console expertise, which was used to divide users into two groups: casual users and experienced users. 
The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display, compared to the GamePad. Additional results are found in this dissertation.
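Ambiguity resolution of the kind Yield performs can be illustrated with a minimal policy: accept the top-ranked gesture only when its confidence clearly beats the runner-up, and otherwise defer until more touch samples arrive. This is a hedged sketch of one plausible policy; the abstract does not specify Yield's actual algorithm, and the margin value is invented.

```python
def resolve(candidates, margin=0.2):
    """Resolve an ambiguous multi-touch classification. `candidates`
    maps gesture names to confidence scores. Return the winning gesture
    once its confidence beats the runner-up by at least `margin`;
    return None to signal that the caller should wait for more input."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0]          # unambiguous: only one candidate
    best, second = ranked[0], ranked[1]
    return best[0] if best[1] - second[1] >= margin else None
```

Deferring instead of guessing trades a little latency for fewer misclassified gestures, which matters when a wrong gesture triggers an unintended 3D navigation action.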
