1

Integral Perception in Augmented Reality

McGee, Michael K. 24 April 2000 (has links)
Augmented reality, the superimposing of graphics or text onto an actual visual scene, is intended to enhance a user's understanding of the real world. This research examines the perceptual, cognitive, and human factors implications of combining integrally designed computer-generated imagery with real-world scenes. Three experiments were conducted to test the theoretical and practical consequences of integral perception in augmented reality. The first experiment was a psychophysical study in which participants subjectively assessed the integrality of 32 scenes comprising four different augmented reality object environments (computer, brain, text, and liquid dynamic model), projected at two transparency levels (opaque and semi-transparent) and presented with four different graphic textures (color, grayscale, white, and wireframe). The second experiment expanded the psychophysical integrality assessment of augmented scenes to 32 different images composed of four new environments (housing development, computer lab, planetary photo, and trees in countryside), with multiple computer-generated graphics (two, four, six, and eight), at two levels of integrality as defined by experiment one (high, low). The third experiment was an applied study with two phases: 1) learning tasks using three augmented environments; and 2) assembly tasks using eight augmented video instructions. The computer-generated graphics for each phase of experiment three were presented at two levels of integrality (high, low) as defined by experiment one. The primary results of the three experiments show that augmented reality scenes with computer-generated imagery presented transparently and in color were perceived most integrally; that increasing the number of graphics from two to eight decreased integral perception; and that high-integrality graphics aided performance in learning and real assembly tasks.
From the statistical results and experimenter observation across the three experiments, guidelines for designing integrally perceived graphics in augmented environments were compiled, based on principles of human factors, perception, and graphic design. The key themes of the design guidelines were: 1) maintaining true shape information in the computer-generated graphics; 2) using highly realistic graphics for naturalistic augmented settings; 3) considering the hardware limitations of the augmented system, particularly the display; and 4) designing appropriately for the task (simple, complex, hands-on, cognitive, dynamic, static, etc.). / Ph. D.
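The opaque versus semi-transparent presentation studied above corresponds to standard alpha compositing of a graphic over the real scene. A minimal sketch of that blend (the pixel values and alpha levels below are illustrative, not from the dissertation):

```python
def alpha_blend(graphic_rgb, scene_rgb, alpha):
    """Blend a computer-generated pixel over a real-world scene pixel.

    alpha = 1.0 renders the graphic opaque; alpha < 1.0 lets the
    real scene show through (semi-transparent presentation).
    """
    return tuple(alpha * g + (1.0 - alpha) * s
                 for g, s in zip(graphic_rgb, scene_rgb))

# Opaque overlay: the scene contributes nothing.
print(alpha_blend((255, 0, 0), (10, 10, 10), 1.0))   # (255.0, 0.0, 0.0)
# Semi-transparent overlay: graphic and scene are mixed.
print(alpha_blend((255, 0, 0), (10, 10, 10), 0.5))   # (132.5, 5.0, 5.0)
```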
2

Modelo abrangente e reconhecimento de gestos com as mãos livres para ambientes 3D. / Comprehensive model and gesture recognition with free hands for 3D environments.

Bernardes Júnior, João Luiz 18 November 2010 (has links)
O principal objetivo deste trabalho é possibilitar o reconhecimento de gestos com as mãos livres, para uso em interação em ambientes 3D, permitindo que gestos sejam selecionados, para cada contexto de interação, dentre um grande conjunto de gestos possíveis. Esse grande conjunto deve aumentar a probabilidade de que se possa selecionar gestos já existentes no domínio de cada aplicação ou com associações lógicas claras com as ações que comandam e, assim, facilitar o aprendizado, memorização e uso dos gestos. Estes são requisitos importantes para aplicações em entretenimento e educação, que são os principais alvos deste trabalho. Propõe-se um modelo de gestos que, baseado em uma abordagem linguística, os divide em três componentes: postura e movimento da mão e local onde se inicia. Combinando números pequenos de cada um destes componentes, este modelo permite a definição de dezenas de milhares de gestos, de diferentes tipos. O reconhecimento de gestos assim modelados é implementado por uma máquina de estados finitos com regras explícitas que combina o reconhecimento de cada um de seus componentes. Essa máquina só utiliza a hipótese que os gestos são segmentados no tempo por posturas conhecidas e nenhuma outra relacionada à forma como cada componente é reconhecido, permitindo seu uso com diferentes algoritmos e em diferentes contextos. Enquanto este modelo e esta máquina de estados são as principais contribuições do trabalho, ele inclui também o desenvolvimento de algoritmos simples mas inéditos para reconhecimento de doze movimentos básicos e de uma grande variedade de posturas usando equipamento bastante acessível e pouca preparação. Inclui ainda um framework modular para reconhecimento de gestos manuais em geral, que também pode ser aplicado a outros domínios e com outros algoritmos. Além disso, testes realizados com usuários levantam diversas questões relativas a essa forma de interação. Mostram também que o sistema satisfaz os requisitos estabelecidos. 
/ This work's main goal is to make possible the recognition of free-hand gestures for interaction in 3D environments, allowing gestures to be selected, for each interaction context, from a large set of possible gestures. This large set should increase the probability of selecting gestures that already exist in the application's domain or that have clear logical associations with the actions they command, and thus facilitate the learning, memorization, and use of these gestures. These requirements are important for entertainment and education applications, this work's main targets. A gesture model is proposed that, based on a linguistic approach, divides gestures into three components: hand posture, hand movement, and the location where the gesture starts. By combining small numbers of each of these components, the model allows the definition of tens of thousands of gestures of different types. Recognition of gestures so modeled is implemented by a finite state machine with explicit rules that combines the recognition of each component. This machine relies only on the hypothesis that gestures are segmented in time by known postures, and on no other assumption about how each component is recognized, allowing its use with different algorithms and in different contexts. While this model and this finite state machine are the work's main contributions, it also includes the development of simple but novel algorithms for recognizing twelve basic movements and a large variety of postures using highly accessible equipment and little setup. It likewise includes a modular framework for the recognition of hand gestures in general, which may also be applied to other domains and with other algorithms. Beyond that, tests with users raise several questions about this form of interaction, and show that the system satisfies the requirements set for it.
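The segmentation hypothesis above (gestures delimited in time by known postures, combined with a movement component) can be sketched as a small finite state machine. The states, postures, and gesture rules below are illustrative assumptions, not the dissertation's actual rule set:

```python
# Minimal sketch of a posture-segmented gesture FSM: a gesture is a
# (start posture, movement, end posture) triple; the machine waits in
# IDLE until a known posture opens a segment, tracks the movement, and
# emits the gesture when another known posture closes the segment.
GESTURES = {  # illustrative rules, not from the dissertation
    ("fist", "swipe_right", "open_hand"): "next_item",
    ("fist", "swipe_left", "open_hand"): "previous_item",
}

def recognize(events):
    """Consume (kind, value) events; yield recognized gesture names."""
    state, start, movement = "IDLE", None, None
    for kind, value in events:
        if state == "IDLE" and kind == "posture":
            state, start = "IN_GESTURE", value        # posture opens segment
        elif state == "IN_GESTURE" and kind == "movement":
            movement = value                          # track the movement
        elif state == "IN_GESTURE" and kind == "posture":
            action = GESTURES.get((start, movement, value))
            if action:
                yield action                          # posture closes segment
            state, start, movement = "IDLE", None, None

events = [("posture", "fist"), ("movement", "swipe_right"),
          ("posture", "open_hand")]
print(list(recognize(events)))  # ['next_item']
```

Because the machine only sees abstract posture and movement events, any posture or movement recognizer can feed it, which is the decoupling the abstract emphasizes.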
4

Using Graphical Context to Reduce the Effects of Registration Error in Augmented Reality

Robertson, Cindy Marie 09 November 2007 (has links)
An ongoing research focus in Augmented Reality (AR) is to improve tracking and display technology in order to minimize registration errors between the graphical display and the physical world. However, registration is not always necessary for users to understand the intent of an augmentation, especially in situations where the user and the system have shared semantic knowledge of the environment. I hypothesize that adding appropriate graphical context to an augmentation can ameliorate the effects of registration errors. I establish a theoretical basis supporting the use of context based on perceptual and cognitive psychology. I introduce the notion of Adaptive Intent-Based Augmented Reality (i.e., augmented reality systems that adapt their augmentations to convey the correct intent in a scene, based on an estimate of the registration error in the system). I extend the idea of communicative intent, developed for desktop graphical explanation systems by Seligmann and Feiner (Seligmann & Feiner, 1991), to include graphical context cues, and use this as the basis for the design of a series of example augmentations demonstrating the concept. I show how semantic knowledge of a scene and the intent of an augmentation can be used to generate appropriate graphical context that counters the effects of registration error. I evaluate my hypothesis in two user studies based on a Lego block-placement task. In both studies, a virtual block rendered on a head-worn display shows where to place the next physical block. In the first study, I demonstrate that a user can perform the task effectively in the presence of registration error when graphical context is included. In the second, I demonstrate that a variety of approaches to displaying graphics outside the task space are possible when sufficient graphical context is added.
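The adaptive idea above, choosing how much graphical context to render from an estimate of registration error, can be sketched as a simple policy. The thresholds and augmentation styles here are illustrative assumptions, not the dissertation's actual design:

```python
# Illustrative sketch of an adaptive intent-based policy: as estimated
# registration error grows, render more surrounding context so the
# augmentation's intent survives misalignment.
def choose_augmentation(registration_error_cm):
    if registration_error_cm < 0.5:
        return "highlight_target"        # precise: point directly at target
    if registration_error_cm < 3.0:
        return "target_plus_neighbors"   # moderate: add nearby context blocks
    return "full_scene_context"          # large: render whole surroundings

print(choose_augmentation(0.2))  # highlight_target
print(choose_augmentation(5.0))  # full_scene_context
```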
5

Peripheral visual cues and their effect on the perception of egocentric depth in virtual and augmented environments

Jones, James Adam 09 December 2011 (has links)
The underestimation of depth in virtual environments at medium-field distances is a well-studied phenomenon. However, the degree to which underestimation occurs varies widely from one study to the next, with some studies reporting as much as 68% underestimation in distance and others as little as 6% (Thompson et al. [38] and Jones et al. [14]). In particular, the study detailed in Jones et al. [14] found a surprisingly small underestimation effect in a virtual environment (VE) and no effect in an augmented environment (AE). These are highly unusual results when compared to the large body of existing work in virtual and augmented distance judgments [16, 31, 36–38, 40–43]. The series of experiments described in this document attempted to determine the cause of these unusual results. Specifically, Experiment I aimed to determine whether the experimental design was a factor and whether participants were improving their performance over the course of the experiment. Experiment II analyzed two possible sources of implicit feedback in the experimental procedures and identified visual information available in the lower periphery as a key source of feedback. Experiment III analyzed distance estimation when all peripheral visual information was eliminated. Experiment IV then illustrated that optical flow in a participant's periphery is a key factor in facilitating improved depth judgments in both virtual and augmented environments. Experiment V attempted to further reduce cues in the periphery by removing a strongly contrasting white surveyor's tape from the center of the hallway, and found that participants continued to significantly adapt even when given very sparse peripheral cues. The final experiment, Experiment VI, found that when participants' views were restricted to the field of view of the screen area on the return walk, adaptation still occurred in both virtual and augmented environments.
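The underestimation percentages quoted above express how far a judged distance falls short of the actual distance. A small sketch of that computation (the distances used are illustrative, not the thesis's data):

```python
# Percent underestimation: shortfall of judged distance relative to
# actual distance, as quoted in depth-perception studies.
def underestimation_pct(judged, actual):
    return 100.0 * (actual - judged) / actual

# A judgment of 32 units against an actual 100 units is 68% underestimation,
# matching the upper end of the range reported above.
print(underestimation_pct(32, 100))  # 68.0
```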
6

Creating Good User Experience in a Hand-Gesture-Based Augmented Reality Game / Användbarhet i ett handgestbaserat AR-spel

Lam, Benny, Nilsson, Jakob January 2019 (has links)
The dissemination of new, innovative technology requires feasibility and simplicity. The problem with marker-based augmented reality is similar to that of glove-based hand gesture recognition: both require an additional component to function. This thesis investigates the possibility of combining markerless augmented reality with appearance-based hand gesture recognition by implementing a game with good user experience. The methods employed in this research consist of a game implementation and a pre-study meant for measuring interactive accuracy and precision, and for deciding which gestures should be utilized in the game. A test environment was realized in Unity using ARKit and the Manomotion SDK; the game was implemented with the same development tools, while Blender was used for creating the 3D models. The results from 15 testers showed that the pinching gesture was the most favorable one. The game was evaluated with the System Usability Scale (SUS) and received a score of 70.77 among 12 game testers, which indicates that an augmented reality game whose interaction method is based solely on bare hands can be quite enjoyable.
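The SUS score reported above comes from the standard System Usability Scale computation: ten 1–5 Likert responses, where odd-numbered items contribute (response − 1) and even-numbered items contribute (5 − response), with the sum scaled by 2.5 onto 0–100. A sketch with illustrative responses (not the thesis's data):

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(responses):
    """responses: ten Likert ratings, each 1-5, in questionnaire order."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so even i = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative single-respondent example:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

The reported 70.77 would be the mean of such per-respondent scores across the 12 testers; a SUS score near 70 is conventionally read as above-average usability.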
