1 |
AN INITIAL PROTOTYPE FOR CURVED LIGHT IN AUGMENTED REALITY. Zhong, Ning, 23 April 2015 (has links)
No description available.
|
2 |
Hybrid and Coordinated 3D Interaction in Immersive Virtual Environments. Wang, Jia, 29 April 2015 (has links)
Through immersive stereoscopic displays and natural user interfaces, virtual reality (VR) is capable of offering the user a sense of presence in the virtual space, and has long been expected to revolutionize how people interact with virtual content in various application scenarios. However, with many technical challenges solved over the last three decades to bring low cost and high fidelity to VR experiences, we still do not see VR technology used frequently in many seemingly suitable applications. Part of this is due to the lack of expressiveness and efficiency of traditional “simple and reality-based” 3D user interfaces (3DUIs). The challenge is especially obvious when complex interaction tasks with diverse requirements are involved, such as editing virtual objects from multiple scales, angles, perspectives, reference frames, and dimensions. A common approach to overcoming such problems is through hybrid user interface (HUI) systems that combine complementary interface elements to leverage their strengths. Based on this method, the first contribution of this dissertation is the proposal of Force Extension, an interaction technique that seamlessly integrates position-controlled touch and rate-controlled force input for efficient multi-touch interaction in virtual environments. Using carefully designed mapping functions, it offers fluid transitions between the two contexts, as well as realistic simulation of shear force input for multi-touch gestures. The second contribution extends the HUI concept into immersive VR by introducing a Hybrid Virtual Environment (HVE) level editing system that combines a tablet and a Head-Mounted Display (HMD). The HVE system improves user performance and experience in complex high-level world editing tasks by using a “World-In-Miniature” and a 2D GUI rendered on a multi-touch tablet device to compensate for the interaction limitations of a traditional HMD- and wand-based VR system.
The concept of Interaction Context (IC) is introduced to explain the relationship between the tablet interaction and the immersive interaction, and four coordination mechanisms are proposed to keep the perceptual, functional, and cognitive flow continuous during IC transitions. To offer intuitive and realistic interaction experiences, most immersive 3DUIs are centered on the user’s virtual avatar and obey the same physics rules as the real world. However, this design paradigm also imposes unnecessary limitations that hinder performance in certain tasks, such as selecting objects in cluttered space, manipulating objects in six degrees of freedom, and inspecting remote spaces. The third contribution of this dissertation proposes the Object Impersonation technique, which breaks the common assumption that one can be immersed in the VE only through a single avatar, and allows the user to impersonate objects in the VE and interact from their perspectives and reference frames. This hybrid solution of avatar- and object-based interaction blurs the line between travel and object selection, creating a unique cross-task interaction experience in the immersive environment. Many traditional 3DUIs in immersive VR use simple and intuitive interaction paradigms derived from real-world metaphors, but these can be just as limiting and ineffective as in the real world. Using the coordinated HUI or HVE systems presented in this dissertation, one can benefit from the complementary advantages of multiple heterogeneous interfaces (Force Extension), VE representations (HVE Level Editor), and interaction techniques (Object Impersonation). This advances traditional 3D interaction into the more powerful hybrid space, and allows future VR systems to be applied in more application scenarios to provide not only presence, but also improved productivity in people’s everyday tasks.
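As a rough illustration of the idea behind Force Extension, a hybrid controller can switch between position control and rate control depending on the applied force. The Python sketch below uses a hypothetical threshold and gain, not the mapping functions from the dissertation:

```python
def force_extension_step(cursor, touch, force, dt,
                         force_threshold=0.3, rate_gain=5.0):
    """One update of a hybrid position/rate-controlled 2D cursor.

    Below the force threshold the cursor follows the touch point
    directly (position control); above it, the excess force drives a
    velocity toward the touch point (rate control). The threshold and
    gain values are illustrative, not those of the dissertation.
    """
    if force <= force_threshold:
        return touch                          # position control: direct mapping
    excess = force - force_threshold          # rate control: force sets speed
    return (cursor[0] + rate_gain * excess * (touch[0] - cursor[0]) * dt,
            cursor[1] + rate_gain * excess * (touch[1] - cursor[1]) * dt)
```

A light press keeps the cursor glued to the finger, while pressing harder makes the cursor drift continuously, which is the kind of fluid transition between the two contexts the abstract describes.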
|
3 |
Feed Me: an in-situ Augmented Reality Annotation Tool for Computer Vision. Ilo, Cedrick K., 02 July 2019 (has links)
The power of today's technology has enabled the combination of Computer Vision (CV) and Augmented Reality (AR), allowing users to interface with digital artifacts in both indoor and outdoor activities. For example, AR systems can feed images of the local environment to a trained neural network for object detection. However, these algorithms sometimes misclassify an object. In such cases, users want to correct the model's misclassification by adding labels to unrecognized objects or re-classifying recognized objects. Depending on the number of corrections, in-situ annotation can be a tedious activity for the user. This research focuses on how in-situ AR annotation can aid CV classification and on which combinations of voice and gesture techniques are efficient and usable for this task. / Master of Science / The power of today's technology has allowed new technologies such as Computer Vision and Augmented Reality to work together seamlessly. The reason computer scientists rave about computer vision is that it can enable a computer to see the world as humans do. With the rising popularity of Niantic's Pokemon Go, Augmented Reality has become a research area that researchers around the globe are working on to make it more stable and as useful as its next of kin, virtual reality. For example, Augmented Reality can help users gain a better understanding of their environment by overlaying digital content into their field of view. Combining Computer Vision with Augmented Reality could aid the user further by detecting, registering, and tracking objects in the environment. However, a Computer Vision algorithm can sometimes falsely detect an object in a scene. In such cases, we wish to use Augmented Reality as a medium to update the Computer Vision object detection algorithm in-situ, meaning in place.
With this idea, a user will be able to annotate all the objects within the camera's view that were not detected by the object detection model and correct any inaccurate classifications. This research primarily focuses on visual feedback for in-situ annotation and the user experience of the Feed Me voice and gesture interface.
|
4 |
Isometric versus Elastic Surfboard Interfaces for 3D Travel in Virtual Reality. Wang, Jia, 31 May 2011 (has links)
"
Three-dimensional travel in immersive virtual environments (IVEs) has been a difficult problem since the beginning of virtual reality (VR), largely due to the difficulty of designing an intuitive, efficient, and precise three degrees of freedom (DOF) interface that can map the user's finite local movements in the real world to a potentially infinite virtual space. Inspired by the Silver Surfer sci-fi movie and the popularity of the Nintendo Wii Balance Board, a surfboard interface appears to be a good solution to this problem. Based on this idea, I designed and developed a VR Silver Surfer system that allows a user to surf in the sky of an infinite virtual environment, using either an isometric balance board or an elastic tilt board. Although the balance board is the industry standard for board interfaces, the tilt board seems to provide a more intuitive, realistic, and enjoyable experience, without any sacrifice of efficiency or precision.
To validate this hypothesis we designed and conducted a user study that compared the two board interfaces in three independent experiments, each isolating separate DOFs of the travel procedure. The results showed that in all experiments the tilt board was not only as efficient and precise as the balance board, but also more intuitive, realistic, and fun. In addition, despite the popularity of the balance board in the game industry, most subjects in the study preferred the tilt board in general, and in fact complained that the balance board might have caused motion sickness.
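For illustration, an elastic tilt board of this kind is typically mapped to travel velocity through a deadzone and an easing curve. The sketch below is a generic example of such a transfer function, with made-up tuning values rather than those of the VR Silver Surfer system:

```python
def tilt_to_velocity(tilt_deg, deadzone_deg=3.0, max_tilt_deg=25.0,
                     max_speed=10.0):
    """Map a signed board tilt angle (degrees) to a travel speed.

    Small tilts inside the deadzone are ignored so the rider can stand
    still; beyond it, a quadratic easing curve gives fine control at
    low speeds. All tuning values are made up for illustration.
    """
    mag = abs(tilt_deg)
    if mag <= deadzone_deg:
        return 0.0
    t = min((mag - deadzone_deg) / (max_tilt_deg - deadzone_deg), 1.0)
    speed = max_speed * t * t            # quadratic easing
    return speed if tilt_deg > 0 else -speed
```

An isometric balance board would use the same kind of transfer function, but with the measured weight-shift force as the input instead of a tilt angle.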
|
5 |
Trimatės vartotojo sąsajos, kuriamos atkuriamosios grafikos priemonėmis / Establishing of three-dimensional connections using graphic communications. Vižinienė, Asta, 16 August 2007 (has links)
Rendering graphics tools make it possible to create a three-dimensional user interface using various 3D models. Objectives of the work: analysis of languages for describing 3D models, and creation of a three-dimensional user interface model. Tasks: review the literature on building three-dimensional user interfaces; analyze rendering graphics tools for creating three-dimensional objects as well as the main virtual reality description languages; and, using the investigated tools, create a three-dimensional user interface model, a 3D web site. Reference base: different tools for construction, modeling, and data processing of three-dimensional environments, and literature references. A three-dimensional user interface can be built with X3D tools, or by first using separate programs for creating three-dimensional objects (for example CAD, 3ds max, Maya) and combining them later. Virtual reality is described in the VRML and X3D languages.
|
6 |
Tangible User Interface for CAVE based on Augmented Reality Technique. Kim, Ji-Sun, 20 January 2006 (has links)
This thesis presents a new three-dimensional (3D) user interface system for a Cave Automated Virtual Environment (CAVE) application, based on Virtual Reality (VR), Augmented Reality (AR), and Tangible User Interface (TUI) concepts. We explore fundamental 3D interaction tasks with our user interface for the CAVE system. A user interface (UI) comprises a specific set of components, including input/output devices and interaction techniques. Our approach is based on TUIs using ARToolKit, currently the most popular toolkit for use in AR projects. Physical objects (props) are used as input devices instead of tethered electromagnetic trackers. An off-the-shelf webcam is used to capture tracking input data. A unique pattern marker is attached to each prop, which is easily and simply tracked by ARToolKit. Our interface system is developed on CAVE infrastructure, which is a semi-immersive environment. All virtual objects are directly manipulated with props, each of which corresponds to a certain virtual object. To navigate, the user moves the background itself while the virtual objects remain in place, so the user can actually feel the prop's movement through the virtual space. Thus, fundamental 3D interaction tasks such as object selection, object manipulation, and navigation are performed with our interface. For immersion, the user wears stereoscopic glasses with a head tracker; this is the only tethered device in our work. Since our interface is based on tangible input tools, seamless transition between one- and two-handed operation is provided. We went through three design phases to achieve better task performance. In the first phase, we conducted a pilot study focusing on whether this approach is applicable to 3D immersive environments. After the pilot study, we redesigned the props and developed ARBox, which is used as the interaction space while the CAVE system is used only as the display space.
In this phase, we also developed interaction techniques for fundamental 3D interaction tasks. Our summative user evaluation was conducted with ARDesk, which was redesigned after our formative user evaluation. The two user studies aimed to gather user feedback and to improve the interaction techniques as well as the design of the interface tools. The results from our user studies show that our interface can be applied intuitively and naturally to 3D immersive environments, even though some issues remain with our system design. This thesis shows that effective interactions in a CAVE system can be generated using AR techniques and tangible objects. / Master of Science
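The prop-to-object coupling described above can be sketched as a simple lookup from tracked marker poses to the virtual objects they drive. The names and data shapes below are illustrative, not the thesis's actual ARToolKit integration:

```python
def update_props(detections, bindings):
    """Drive virtual objects from tracked prop markers.

    `detections` maps a marker id to its tracked pose, here a
    (position, rotation) pair as a fiducial tracker such as ARToolKit
    would report; `bindings` maps marker ids to virtual object names.
    Both shapes are hypothetical stand-ins for the real data model.
    """
    scene = {}
    for marker_id, pose in detections.items():
        obj = bindings.get(marker_id)
        if obj is not None:
            scene[obj] = pose     # the prop's pose drives the object directly
    return scene
```

Because each prop is bound one-to-one to a virtual object, moving a prop in the interaction space moves exactly one object in the display space, which is what makes the interface feel tangible.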
|
7 |
NeuroGaze in Virtual Reality: Assessing an EEG and Eye Tracking Interface against Traditional Virtual Reality Input Devices. Barbel, Wanyea, 01 January 2024 (has links) (PDF)
NeuroGaze is a novel Virtual Reality (VR) interface that integrates electroencephalogram (EEG) and eye tracking technologies to enhance user interaction within virtual environments (VEs). Diverging from traditional VR input devices, NeuroGaze allows users to select objects in a VE through gaze direction and cognitive intent captured via EEG signals. The research assesses the performance of the NeuroGaze system against conventional input devices such as VR controllers and eye gaze combined with hand gestures. The experiment, conducted with 20 participants, evaluates task completion time, accuracy, cognitive load through NASA-TLX surveys, and user preference through a post-evaluation survey. Results indicate that while NeuroGaze presents a learning curve, evidenced by longer average task durations, it potentially offers a more accurate selection method with lower cognitive load, as suggested by its lower error rate and significant differences in the physical demand and temporal demand NASA-TLX subscale scores. This study highlights the viability of incorporating biometric inputs for more accessible and less demanding VR interactions. Future work aims to explore a multimodal EEG-Functional near-infrared spectroscopy (fNIRS) input device, further develop machine learning models for EEG signal classification, and extend system capabilities to dynamic object selection, highlighting the progressive direction for the use of Brain-Computer Interfaces (BCI) in virtual environments.
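The gaze-plus-EEG selection scheme can be sketched as follows: gaze direction nominates the candidate object, and an EEG-derived confidence confirms the selection. This Python sketch assumes a unit-length gaze direction vector and treats the EEG classifier output as a single probability; the thresholds and interface shape are hypothetical, not NeuroGaze's actual pipeline:

```python
import math

def neurogaze_select(gaze_origin, gaze_dir, objects, eeg_confidence,
                     angle_threshold_deg=5.0, eeg_threshold=0.8):
    """Select the object nearest the gaze ray, confirmed by EEG intent.

    `gaze_dir` is assumed to be a unit vector; `eeg_confidence` stands
    in for a classifier's probability that the user intends a selection.
    All thresholds are made up for illustration.
    """
    best, best_angle = None, angle_threshold_deg
    for name, pos in objects.items():
        v = [p - o for p, o in zip(pos, gaze_origin)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm == 0.0:
            continue
        cos = sum(a * b for a, b in zip(v, gaze_dir)) / norm
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
        if angle < best_angle:               # keep the object closest to the ray
            best, best_angle = name, angle
    return best if best is not None and eeg_confidence >= eeg_threshold else None
```

Separating the "what" (gaze) from the "when" (EEG intent) is what lets such an interface avoid the Midas-touch problem of pure gaze selection.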
|
8 |
A Natural User Interface for Virtual Object Modeling for Immersive Gaming. Xu, Siyuan, 01 October 2013 (has links)
"
We designed an interactive 3D user interface system to perform object modeling in virtual environments. Expanding on existing 3D user interface techniques, we integrate low-cost human gesture recognition that endows the user with powerful abilities to perform complex virtual object modeling tasks in an immersive game setting.
Much research has been done to explore the possibilities of developing biosensors for Virtual Reality (VR) use. In the game industry, even though full-body interaction techniques are available on modern game consoles, most uses, in terms of game control, remain simple. In this project, we extended the use of motion tracking and gesture recognition techniques to create a new 3D UI system to support immersive gaming. We set virtual object modeling as the target task and developed a game application to test the system's performance.
|
9 |
Uživatelské rozhraní pro práci s počítačem ve virtuální realitě / User Interface for Work with Computer in Virtual Reality. Pazdera, Michal, January 2016 (has links)
This thesis explores various ways of controlling a computer in virtual reality. The aim of this thesis is to create a user interface that allows the user to control a computer in virtual reality. First, it explores the possibilities of detecting the user's actions with sensors, their usability for control, and various interaction techniques in virtual reality. Based on the information gathered about these topics, it focuses on various ways of interacting using the hands as controllers. The thesis tackles the issue of selecting and manipulating virtual objects, introducing the design of three interaction techniques, which are then tested and evaluated.
|
10 |
Manipulation de contenu 3D sur des surfaces tactiles / Manipulation of 3D Content on Touch Surfaces. Cohé, Aurélie, 13 December 2012 (has links)
Since the emergence of tactile surfaces in recent years, the general public uses them daily for many tasks, such as checking e-mail or manipulating photos. However, very few 3D applications exist on these devices, although such applications could have great potential in various fields, such as culture, architecture, or archeology. The major difficulty for this type of application is interacting with a space defined in three dimensions through an interaction modality defined in two dimensions. The work in this thesis explores the combination of tactile surfaces and manipulation of 3D content for the general public. The first studies were conducted to understand how users tend to manipulate a 3D virtual object with a touch surface when no particular interaction technique is imposed. Based on the knowledge gained about users, the subsequent work presents the design of new interaction techniques and their evaluation.
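One common way to map two-dimensional touch input to 3D manipulation, in the spirit of the techniques studied here, is to derive a scale factor and a rotation angle from the motion of two fingers. The following sketch is a generic pinch/twist mapping shown for illustration, not one of the thesis's own techniques:

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive a uniform scale factor and a screen-plane rotation angle
    (radians) from the motion of two touch points.

    This pinch/twist idiom is a common touch-to-3D mapping; the third
    rotation axes would need additional gestures or widgets.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    # Change in finger spacing gives scale; change in the segment's
    # orientation gives rotation about the axis normal to the screen.
    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation
```

The hard part, which motivates this thesis, is extending such 2D mappings to full 3D: the remaining rotation axes and depth translation have no direct two-finger analogue and require dedicated interaction techniques.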
|