  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Understanding and Improving Distal Pointing Interaction

Kopper, Regis Augusto Poli 04 August 2011 (has links)
Distal pointing is the interaction style defined by pointing directly at targets from a distance. It follows a laser-pointer metaphor: the position of the cursor is determined by the intersection of a vector extending from the pointing device with the display surface. As a basic interaction style, distal pointing poses several challenges for the user, mainly because of the lack of precision humans have when using it. The focus of this thesis is to understand and improve distal pointing, making it a viable interaction metaphor for a wide variety of applications. We achieve this by proposing and validating a predictive model of distal pointing that is inspired by Fitts' law but contains some unique features. The difficulty of a distal pointing task is best described by the angular size of the target and the angular distance the cursor must travel to reach it, both measured from the input device's perspective. The practical impact of this is that the user's position relative to the target should be taken into account. Based on the model we derived, we proposed a set of design guidelines for high-precision distal pointing techniques. The main guideline from the model is that increasing the target size is much more important than reducing the distance to the target. Following these guidelines, we designed interaction techniques that aim to improve the precision of distal pointing tasks. Absolute and Relative Mapping (ARM) distal pointing increases precision by offering the user a toggle that changes the control/display (CD) ratio so that a large movement of the input device is mapped to a small movement of the cursor. Dynamic Control Display Ratio (DyCoDiR) automatically increases distal pointing precision as the user needs it.
DyCoDiR takes into account the user's distance to the interaction area and the speed at which the user moves the input device to dynamically calculate an increased CD ratio, making the action more precise the steadier the user tries to be. We evaluated ARM and DyCoDiR against basic distal pointing in a realistic context. In this experiment, we also provided variations of the techniques that increased the visual perception of targets by zooming in on the area around the cursor when precision was needed. Results from the study show that ARM and DyCoDiR are significantly faster and more accurate than basic distal pointing for tasks that require very high precision. We analyzed user navigation strategies and found that the high-precision techniques allow users to remain stationary while performing interactions. However, we also found that individual differences have a strong impact on the decision to walk or not, and that this sometimes matters more than what the technique affords. We validated the distal pointing model by analyzing the expected difficulty of distal pointing tasks under each technique tested. We also propose selection by progressive refinement, a new design concept for distal pointing selection techniques whose goal is near-perfect selection accuracy in very cluttered environments. The idea of selection by progressive refinement is to gradually eliminate possible targets from the set of selectable objects until only one object is available for selection. We implemented SQUAD, a progressive-refinement distal pointing technique, and performed a controlled experiment comparing it to basic distal pointing. We found a clear tradeoff between immediate selections, which require high precision, and selections by progressive refinement, which always require low precision.
We validated the model by fitting the distal pointing data, and proposed a new model, linear in time, for SQUAD selection. / Ph. D.
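The angular formulation and the DyCoDiR behavior described above can be sketched roughly as follows. This is an illustrative reading, not the thesis's actual model or code: the Fitts-style combination of angular amplitude and angular width, and all constants in the CD-ratio function, are assumptions.

```python
import math

def angular_index_of_difficulty(amplitude_m, width_m, user_distance_m):
    """Fitts-style index of difficulty computed from angular quantities:
    the amplitude the cursor must travel and the target width are both
    converted to angles subtended at the input device, so the user's
    relative position enters the model through these angles."""
    alpha = 2 * math.atan(amplitude_m / (2 * user_distance_m))  # angular distance
    omega = 2 * math.atan(width_m / (2 * user_distance_m))      # angular size
    return math.log2(alpha / omega + 1)

def dycodir_cd_ratio(user_distance_m, device_speed_deg_s,
                     base=1.0, k_dist=0.5, k_speed=0.2):
    """Hypothetical DyCoDiR-style control/display ratio: the ratio (and
    hence precision) grows with the user's distance to the interaction
    area and with how steadily (slowly) the device is moved."""
    steadiness = 1.0 / (1.0 + k_speed * device_speed_deg_s)
    return base * (1.0 + k_dist * user_distance_m) * (1.0 + steadiness)
```

Doubling the physical target width lowers the index of difficulty, and a steady (slow-moving) hand yields a higher CD ratio, i.e. smaller cursor motion per unit of device motion.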
22

Walk-Centric User Interfaces for Mixed Reality

Santos Lages, Wallace 31 July 2018 (has links)
Walking is a natural part of our lives and is becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to navigate real and virtual environments on foot with ease. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of its cognitive and motor demands so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analyses of the role of walking in different mixed reality applications. We evaluated the difference in performance between users walking and users manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both the user's experience with controllers and the user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and no significant game experience. In augmented reality, specifying points in space is an essential step in creating content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between viewpoints provides higher accuracy than purely perceptual methods. However, precision may be affected by head-pointing tremor.
To increase precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position; it can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Looking ahead, augmented reality will eventually replace our mobile devices as the main method of accessing information. To achieve its full potential, however, augmented reality interfaces must support the fluid way we move through the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, grounded in a study of the design space and in contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can guide the design of next-generation walking-centered workspaces. / Ph. D. / Until recently, walking with virtual and augmented reality headsets was restricted by issues such as excessive weight, cables, and tracking limitations. As those limits disappear, walking is becoming more common, making the user experience closer to the real world. If well exploited, walking can also make some tasks easier and more efficient. Unfortunately, walking reduces our mental and motor performance, and its consequences for interface design are not fully understood. In this dissertation, we present studies of the role of walking in three areas: scientific visualization in virtual reality, marking points in augmented reality, and accessing information in augmented reality. We show that although walking reduces our ability to perform those tasks, careful design can reduce its impact in a meaningful way.
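The walking-based triangulation idea above — intersecting pointing rays cast from different standing positions, and averaging many noisy samples — can be sketched as a least-squares ray intersection. This is a sketch under stated assumptions; the function name and solver choice are ours, not the dissertation's.

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares 3D point closest to a set of rays (origin o, unit
    direction d).  Each ray contributes the normal equations of the
    point-to-line distance: (I - d d^T) x = (I - d d^T) o.  Feeding in
    many head-pose samples taken while walking averages out pointing
    tremor."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the ray's normal plane
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

With two or more non-parallel rays the system is full rank; exact rays recover the intersection point, noisy rays yield its least-squares estimate.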
23

[en] A 3D INTERACTION TOOL FOR ENGINEERING VIRTUAL ENVIRONMENTS USING MOBILE DEVICES / [pt] UMA FERRAMENTA DE INTERAÇÃO 3D PARA AMBIENTES VIRTUAIS DE ENGENHARIA UTILIZANDO DISPOSITIVOS MÓVEIS

DANIEL PIRES DE SA MEDEIROS 24 June 2014 (has links)
[en] Interaction in engineering virtual environments is characterized by the high degree of precision required for typical tasks in this kind of environment. Such tasks generally use specific interaction devices with 4 or more degrees of freedom (DOF). Current applications involving 3D interaction use these devices for object modelling or for implementing navigation, selection and manipulation techniques in a virtual environment. A related problem is the need to control naturally non-immersive tasks, such as symbolic input (e.g., text, photos). Another problem is the steep learning curve required to handle such non-conventional devices. The addition of sensors and the popularization of smartphones and tablets have made it possible to use these devices in engineering virtual environments. Besides their popularity and built-in sensors, these devices stand out for the possibility of displaying additional information and performing naturally non-immersive tasks. This work presents a tablet-based 3D interaction tool that aggregates all major 3D interaction tasks: navigation, selection, manipulation, system control and symbolic input. To evaluate the proposed tool we used the SimUEP-Ambsim application, a training simulator for oil and gas platforms that has the necessary complexity and allows the use of all the implemented techniques.
24

Collaboration interactive 3D en réalité virtuelle pour supporter des scénarios aérospatiaux / 3D collaborative interaction in virtual reality for aerospace scenarii

Clergeaud, Damien 17 October 2017 (has links)
The aerospace industry is no longer made up of local, individual businesses. Due to the complexity of its products (their size, the number of components, the variety of systems, etc.), the design of an aircraft or a launcher involves a considerable number of engineers with diverse fields of expertise. Furthermore, aerospace companies often have industrial facilities all over the world. In such a complex setting, it is necessary to build virtual experiments that can be shared between remote sites. Specific problems then arise, particularly regarding the perception of other immersed users. This thesis, in partnership with Airbus Group, focuses on designing efficient collaborative interaction techniques. These collaborative sessions allow multiple sites to be connected within the same virtual experiment and enable experts from different fields to be immersed simultaneously. For instance, if a problem occurs during the final stages of launcher assembly, it may be necessary to bring together experts from different sites who were involved in earlier steps (initial design, manufacturing). In this thesis, we propose various interaction techniques to ease collaboration at different moments of an industrial process. We contribute techniques for communication and mutual perception between immersed users, for taking notes in the virtual environment and circulating them outside virtual reality, and for asymmetric collaboration between a physical meeting room and a virtual environment using mixed-reality tools.
25

Immunology Virtual Reality (VR): Exploring Educational VR Experience Design for Science Learning

Zhang, Lei 14 May 2018 (has links)
The Immunology Virtual Reality (VR) project is an immersive educational virtual reality experience intended to provide an informal learning experience of specific immunology concepts to college freshmen in the Department of Biological Sciences at Virginia Tech (VT). The project is an interdisciplinary endeavor, a collaboration among people from different domains at VT: Creative Technologies, Education, Biological Sciences, and Computer Science. This thesis elaborates on the design process through which I created a working prototype of the project demo and shares insights from my design experience. / Master of Fine Arts / Immunology Virtual Reality is an immersive educational virtual reality experience in which a user takes on the role of an immune cell and migrates to fight off pathogen invasions at an infection site in the human body. It explores levels of interactivity and storytelling in educational VR and their impact on learning.
26

Interfaces utilisateur 3D, des terminaux mobiles aux environnements virtuels immersifs / 3D user interfaces, from mobile devices to immersive virtual environments

Hachet, Martin 03 December 2010 (has links) (PDF)
Improving the interaction between a user and a 3D environment is a key research challenge for the successful development of interactive 3D technologies in many areas of society, such as education. In this document, I present 3D user interfaces we have developed that contribute to this general quest. The first chapter focuses on 3D interaction on mobile devices. In particular, I present techniques dedicated to interaction via keys and via gestures on the touchscreens of mobile devices. I then present two multi-degree-of-freedom prototypes based on the use of video streams. In the second chapter, I focus on 3D interaction with touchscreens in general (tabletops, interactive displays). I present Navidget, an example of an interaction technique dedicated to virtual camera control from 2D gestures, and I discuss the challenges of 3D interaction on multi-touch screens. Finally, the third chapter of this document is devoted to immersive virtual environments, with a special emphasis on musical interfaces. I present the new directions we have explored to improve interaction between musicians, the audience, sound, and interactive 3D environments. I conclude by discussing the future of 3D user interfaces.
27

REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

Su, Po-Chang 01 January 2017 (has links)
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical spaces can enhance immersive experiences for users. To maximize coverage and minimize cost, practical applications often use a small number of RGB-D cameras placed sparsely around the environment for data capture. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras in a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, the correspondences between different camera views are first established using a spherical calibration object.
We show that this approach outperforms techniques based on planar calibration objects. Second, instead of modeling camera extrinsics with a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector identifies the viewer's 3D location and the reflective scene is rendered accordingly. The limited field of view of a single camera is overcome by our calibrated RGB-D camera network, which scales to capture arbitrarily large environments. Rendering is accomplished by ray-tracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
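The rigid-transformation baseline mentioned above — mapping corresponding sphere-centre detections from one RGB-D view into another — is commonly estimated with the Kabsch/Procrustes algorithm. The following is a sketch of that standard baseline under our own naming, not the dissertation's code.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimates R, t minimizing sum ||(R @ src_i + t) - dst_i||^2 over
    corresponding 3D points (e.g. sphere-centre detections shared by two
    cameras).  SVD of the cross-covariance gives the optimal rotation;
    the sign correction guards against a reflection."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T  # optimal proper rotation
    t = cd - R @ cs
    return R, t
```

Exact correspondences recover the transform to machine precision; with noisy detections the estimate is least-squares optimal, which is why richer mappings (polynomial, manifold regression) are worth testing against it for non-pinhole depth distortions.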
28

Contribution aux techniques pour enrichir l'espace moteur et l'espace visuel des dispositifs d'interaction bureautique

Almeida, Rodrigo Andrade Botelho de 06 November 2009 (has links)
Past research has suggested that among the reasons for the limitations of the present desktop interaction style is a lack of both motor space and visual space. The goal of this thesis is to optimize the use of those spaces. Based on the fact that one can control an object's position and orientation through a natural movement, the first main contribution of this thesis is to explore the advantages of enhancing the sensing of the standard mouse with a rotation sensor. This "rotary mouse" allows one to easily control three continuous variables of a computer task. A survey presents the perceptual and motor issues of rotary manipulation and the technical and ergonomic requirements of such a device. Two interaction techniques, aimed at simplifying repetitive tasks, are proposed: "nearly-integral selection" and the "satellite palette". Furthermore, an experimental evaluation compares the performance of the rotary mouse with that of a standard one. The other main contribution of this work is to investigate document visualization issues in the context of digital libraries. First, it analyses the advantages and technical feasibility of integrating an immersive display into an interface supporting navigation in a virtual catalog. Second, in order to inspect the quality of a batch of digitized pages, it explores zoomable and multi-focal visualization techniques. The overview and panoramic detail browsing enabled by these techniques help users who must identify flaws introduced by the digitization process to quickly grasp the visual characteristics of a few hundred pages.
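As an illustration of the kind of third continuous variable a rotation-sensing mouse adds alongside x/y, accumulated rotation can be mapped to a bounded parameter. The sensitivity, bounds, and function name here are illustrative assumptions, not from the thesis.

```python
def rotary_value(angle_deg, sensitivity=0.5, v_min=0.0, v_max=1.0):
    """Maps accumulated mouse rotation (in degrees) to a bounded
    continuous parameter.  A full turn at sensitivity 0.5 sweeps half of
    the parameter's range; the result is clamped to [v_min, v_max]."""
    value = v_min + sensitivity * (angle_deg / 360.0) * (v_max - v_min)
    return max(v_min, min(v_max, value))
```

The sensitivity constant plays the same role as a control/display ratio: lowering it trades speed of adjustment for finer precision.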
29

Interação com objetos digitais 3D em estúdios virtuais / Interaction with 3D objects in virtual studios

Pedroso, Rafael Guimarães [UNESP] 05 February 2016 (has links)
The virtual studio is a system for creating scenes in which virtual objects are digitally integrated, in real time, with images captured in the studio. Its use makes audiovisual production more flexible, allowing the use of objects and effects that would be difficult to recreate physically. Unlike the classic production process, in which effects are inserted only in post-production, the virtual studio inserts digital content during the on-set phase, facilitating direction and photography. In this context, the object of this research is the interaction of actors with virtual objects using Augmented Reality techniques in virtual studios. The main goal is to examine the interaction of actors with 3D virtual elements under two aspects: the first concerns techniques based on tangible interfaces associated with fiducial markers; the second focuses on gesture interaction via a depth-sensing device (Kinect). To assess the second aspect, an Interaction Module was implemented, as a prototype, for ARSTUDIO, a virtual studio environment under development at Unesp/Bauru that generates Augmented Reality scenes and associates virtual objects through fiducial markers.
30

Range imaging based obstacle detection for virtual environment systems and interactive metaphor based signalization / Détection d'obstacles basée sur l'imagerie de distance pour systèmes d'environnement virtuel et signalisation interactive basée sur des métaphores

Wozniak, Peter 27 June 2019 (has links)
With the current generation of devices, virtual reality (VR) has finally made it into the living rooms of end users. These devices feature 6-degree-of-freedom tracking, allowing users to move naturally through virtual worlds. However, natural locomotion in the virtual world requires a corresponding free space in the real environment, and the available space is often limited. Objects of daily life can quickly become obstacles for VR users if they are not cleared away. Currently available systems offer only rudimentary assistance for this problem, with no detection of potentially dangerous objects. This thesis shows how obstacles can be detected automatically with range-imaging cameras and how users can be effectively warned about them in the virtual environment. Four visual metaphors were evaluated with the help of a user study.
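The detection step — finding real objects that intrude into the tracked play space from a range image — can be sketched by back-projecting each depth pixel through a pinhole model and thresholding against the floor and a range limit. The camera convention (x right, y down, z forward, mounted at a known height) and all thresholds are assumptions for illustration, not the thesis's method.

```python
import numpy as np

def obstacle_mask(depth, fx, fy, cx, cy, cam_height=1.0,
                  floor_margin=0.1, max_range=3.0, half_width=2.0):
    """Returns a boolean mask of depth pixels (metres) whose 3D point
    lies inside the play space: valid, closer than max_range, within
    half_width laterally, and more than floor_margin above the floor."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    z = depth
    x = (u - cx) * z / fx             # metres right of the optical axis
    y_down = (v - cy) * z / fy        # metres below the optical axis
    height = cam_height - y_down      # height of the point above the floor
    valid = (z > 0) & (z < max_range)
    return valid & (np.abs(x) < half_width) & (height > floor_margin)
```

Connected regions of the resulting mask can then be attached to warning metaphors in the virtual environment.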
