71

Taxonomía de aplicaciones y videojuegos de realidad mixta / Taxonomy of applications and video games of mixed reality

Sánchez Requejo, Luis Felipe, Ramirez Reyes, Jam Carlo 22 September 2020 (has links)
Mixed reality, the union of virtual reality and augmented reality, carries high expectations owing to the major trends that have emerged since its inception and the way it has almost entirely merged the real world with the digital world: the projection of digital objects that stimulate the senses achieves a perception similar to that of objects in the real environment and opens its use to multiple possibilities.
The research identified the need to delve into the properties of mixed reality and the objectives to be set in order to meet that need. During the project, information was collected to create a catalog of the different types of applications, devices, and technological solutions implemented with this technology. The theoretical side of mixed reality was studied through the definitions given by the seminal author and by experts in the field, along with definitions of technologies with similar functionality. The information obtained was then processed to identify the business sectors in which mixed reality operates. Finally, a mixed reality taxonomy was created together with a statistical chart of each business sector's share of the market, providing a clear overview of the adoption and commercial value of each sector in which the technology is applied and serving as a reference for future technology projects.
72

Interaction Design Meets Marine Sustainability – Mixed Reality Tour Near the Oresund Bridge

Ustinova, Valentina January 2020 (has links)
This study investigates an Interaction Design approach as a tool for unfolding complex topics. By bridging IxD and AR/MR, it addresses the field of Marine Sustainability to explore how IxD can contribute to it while aligning with its values and goals. The design process results in the development of a Mixed Reality tour in which, following the narrative, users move across three locations investigating the past, present, and possible futures of the Oresund strait. The final concept contributes to the discussion of the role of IxD in addressing the experiential qualities of AR/MR and demonstrates how Interaction Design can contribute to Marine Sustainability in a way that differs from a technology-driven approach.
73

COMPUTATIONAL THINKING FOR ADULTS – DESIGNING AN IMMERSIVE MULTI-MODAL LEARNING EXPERIENCE USING MIXED REALITY

George, Lenard January 2018 (has links)
No description available.
74

From TeachLivE™ to the Classroom: Building Preservice Special Educators’ Proficiency with Essential Teaching Skills

Dawson, Melanie Rees 01 May 2016 (has links)
Preservice special education teachers need to develop essential teaching skills to competently address student academics and behavior in the classroom. TeachLivE™ is a sophisticated virtual simulation that has recently emerged in teacher preparation programs to supplement traditional didactic instruction and field experiences. Teacher educators can engineer scenarios in TeachLivE™ to cumulatively build in complexity, allowing preservice teachers to incrementally interleave target skills in increasingly difficult situations. The purpose of this study was to investigate the effectiveness of TeachLivE™ on preservice special education teachers' delivery of error correction, specific praise, and praise around in the virtual environment and in authentic classroom settings. Four preservice special educators who were teaching on provisional licenses in upper elementary language arts classrooms participated in this multiple baseline study across target skills. Participants attended weekly TeachLivE™ sessions as a group, where they engaged in three short teaching turns followed by structured feedback. Participants' proficiency with the target skills was analyzed on three weekly assessments. First, participants' mastery of current and previous target skills was measured during their third teaching turn of the intervention session (i.e., the TeachLivE™ training assessment). Next, participants' proficiency with all skills, including those that had not yet been targeted in intervention, was measured immediately following intervention sessions (i.e., the TeachLivE™ comprehensive assessment). Finally, teachers submitted a weekly video recording of a lesson in their real classroom (i.e., the classroom generalization assessment). Repeated practice and feedback in TeachLivE™ promoted participants' mastery of essential target skills.
Specifically, all participants demonstrated proficiency with error correction, specific praise, and praise around on both the TeachLivE™ training assessment and the more complex TeachLivE™ comprehensive assessment, with a strong pattern of generalized performance to authentic classroom settings. Participants maintained proficiency with the majority of the target skills in both environments when assessed approximately one month after intervention was discontinued. Implications of the study are discussed, including the power of interleaved practice in TeachLivE™ and how generalization and maintenance may be impacted by the degree of alignment between virtual and real teaching scenarios.
75

IIoT based Augmented Reality for Factory Data Collection and Visualization

Rosales Vizuete, Jonathan P. 15 June 2020 (has links)
No description available.
76

HUMAN POINT-TO-POINT REACHING AND SWARM-TEAMING PERFORMANCE IN MIXED REALITY

Zhao, Chen 22 January 2021 (has links)
No description available.
77

Mixed Reality for Gripen Flight Simulators

Olsson, Tobias, Ullberg, Oscar January 2021 (has links)
This thesis aims to evaluate how different mixed reality solutions can be built and whether or not they could be used for flight simulators. A simulator prototype was implemented using Unreal Engine 4 with Varjo's Unreal Engine plugin, providing the foundation for the evaluations done through user studies. Three user studies were performed: to test subjective latency with the Varjo XR-1 in a mixed reality environment, to test hand-eye coordination with the Varjo XR-1 in a video see-through environment, and to test the sense of immersion between an IR depth sensor and a chroma key flight simulator prototype. The evaluation covered several perspectives: how a mixed reality solution compares, in terms of latency, to an existing dome projector solution; how well the masking can be done using either chroma keying or IR depth sensors; and which of the two evaluated mixed reality techniques is preferred in terms of immersion and usability.
The investigation conducted during the thesis showed that using a mixed reality environment had a minimal impact on system latency compared to using a monitor setup. However, hand-eye coordination while using VST mode was evaluated to have decreased interaction accuracy when conducting tasks. The comparison between the two mixed reality techniques showed in which areas each technique excels and where it is lacking; a decision therefore needs to be made about what matters more for each individual use case when developing a mixed reality simulator.
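The chroma-keying technique compared in this abstract rests on a simple per-pixel classification: pixels dominated by the key color (typically green) are treated as background and replaced by the rendered virtual scene. The sketch below is a minimal NumPy illustration of that idea, not code from the thesis; the `dominance` threshold and function names are assumptions for illustration, and a production see-through pipeline would typically work in a perceptual color space and soften the mask edges.

```python
import numpy as np

def chroma_key_mask(frame_rgb, dominance=1.3):
    """Return a boolean mask marking green-screen (background) pixels.

    A pixel counts as background when its green channel exceeds both
    the red and blue channels by the `dominance` factor. This is a
    deliberately simplified rule for illustration only.
    """
    frame = frame_rgb.astype(np.float32)
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (g > dominance * r) & (g > dominance * b)

def composite(real_rgb, virtual_rgb, mask):
    """Keep real-world pixels; show the virtual scene where mask is True."""
    out = real_rgb.copy()
    out[mask] = virtual_rgb[mask]
    return out
```

An IR-depth-sensor mask would instead compare a per-pixel depth map against a cutoff distance rather than classifying colors, which is one reason the two masking techniques trade off differently in the user studies.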
78

Near-Field Depth Perception in Optical See-Through Augmented Reality

Singh, Gurjot 17 August 2013 (has links)
Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, only four previous studies have specifically examined distance perception in AR within reaching distance. Distance perception in augmented reality therefore remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects and that matching is a more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments, with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments.
This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the target object's brightness on depth judgments. This experiment found that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments help us understand how depth perception operates in augmented reality.
79

ManiLoco: A Locomotion Method to Aid Concurrent Object Manipulation in Virtual Reality

Dayu Wan (13104111) 15 July 2022 (has links)
In Virtual Reality (VR), users often need to explore a large virtual space within a limited physical space. However, teleport, one of the most popular and commonly used methods for such room-scale problems, relies on hand-based controllers. In applications that require consistent hand interaction, such teleport methods may conflict with the users' hand operations and make them uncomfortable, affecting their experience.
To alleviate these limitations, this research designs and implements a new interactive object-based VR locomotion method, ManiLoco, as a low-cost eye- and foot-based method. The research also evaluates ManiLoco against the state-of-the-art Point & Teleport and Gaze Teleport methods in a within-subject experiment with 14 participants.
The results confirm the viability of the method and its suitability for such applications. ManiLoco lets users keep their hands comfortable and focused on the hand interaction in the application while maintaining efficiency and presence. Further, the users' trajectory maps indicate that ManiLoco, despite introducing walking, is applicable to room-scale tracking spaces. Finally, as a locomotion method that relies only on the VR head-mounted display (HMD) and software detection, ManiLoco can easily be added to any VR application as a plugin.
80

Teleoperation Interfaces in Human-Robot Teams / Benutzerschnittstellen für Teleoperation in Mensch-Roboter Teams

Driewer, Frauke January 2008 (has links) (PDF)
This work deals with improving human-robot interaction in human-robot teams for teleoperation scenarios, such as robot-supported fire-fighting operations. A concept and an architecture for a system supporting the teleoperation of human-robot teams are presented. The requirements for information exchange and processing, in particular for the rescue-operation application, are worked out. Furthermore, the design of the user interfaces for human-robot teams is presented, and principles for teleoperation systems and user interfaces are developed. All studies and approaches are implemented in a prototype system and validated in several user tests. Extension possibilities for integrating 3D sensor data and for display on stereo visualization systems are shown. / This work deals with teams in teleoperation scenarios, where one human team partner (supervisor) guides and controls multiple remote entities (either robotic or human) and coordinates their tasks. Such a team needs an appropriate infrastructure for sharing information and commands. The robots need to have a level of autonomy that matches the assigned task. The humans in the team have to be provided with autonomous support, e.g. for information integration. The design and capabilities of the human-robot interfaces will strongly influence the performance of the team as well as the subjective feeling of the human team partners. Here, it is important to elaborate the information demand as well as how information is presented. Such human-robot systems need to allow the supervisor to gain an understanding of what is going on in the remote environment (situation awareness) by providing the necessary information. This includes achieving fast assessment of the robot's or remote human's state.
Processing, integration and organization of data, as well as suitable autonomous functions, support decision making and task allocation and help to decrease the workload in this multi-entity teleoperation task. Interaction between humans and robots is improved by a common world model and a responsive system and robots. The remote human profits from a simplified user interface providing exactly the information needed for the actual task at hand. The topic of this thesis is the investigation of such teleoperation interfaces in human-robot teams, especially for high-risk, time-critical, and dangerous tasks. The aim is to provide a suitable human-robot team structure as well as to analyze the demands on the user interfaces. On one side, the theoretical background (model, interactions, and information demand) is examined; on the other, real implementations of the system, robots, and user interfaces are presented and evaluated as testbeds for the claimed requirements. Rescue operations, more precisely fire-fighting, were chosen as an exemplary application scenario for this work. The challenges in such scenarios are high (highly dynamic environments, high risk, time criticality, etc.), and it can be expected that the results transfer to other applications with less strict requirements. The present work contributes to the introduction of human-robot teams in task-oriented scenarios, such as working in high-risk domains, e.g. fire-fighting. It covers the theoretical background of the required system, the analysis of related human-factors concepts, as well as discussions of implementation. An emphasis is placed on user interfaces, their design, requirements, and user testing, as well as on the techniques used (three-dimensional sensor data representation, mixed reality, and user interface design guidelines). Further, the potential integration of 3D sensor data as well as visualization on stereo visualization systems is introduced.
