1

Video See-Through Augmented Reality Application on a Mobile Computing Platform Using Position Based Visual POSE Estimation

Fischer, Daniel, 22 August 2013
A technique for real-time object tracking in a mobile computing environment and its application to video see-through Augmented Reality (AR) has been designed, verified through simulation, and implemented and validated on a mobile computing device. Using position-based visual position and orientation (POSE) methods and the Extended Kalman Filter (EKF), it is shown how this technique flexibly supports tracking multiple objects and multiple object models using a single monocular camera on different mobile computing devices. Using the monocular camera of the mobile computing device, feature points of the object(s) are located through image processing of the camera image. The relative position and orientation between the device and the object(s) is determined recursively by an EKF process. Once the relative position and orientation is determined for each object, three-dimensional AR image(s) are rendered onto the display as if the device were looking at the virtual object(s) in the real world. This application and the framework presented could be used in the future to overlay additional information onto the displays of mobile computing devices. Example applications include robot-aided surgery, where animations could be overlaid to assist the surgeon; training applications that could aid in the operation of equipment; and search and rescue operations, where critical information such as floor plans and directions could be virtually placed onto the display. Current approaches in the field of real-time object tracking are discussed along with the methods used for video see-through AR applications on mobile computing devices. The mathematical framework for the real-time object tracking and video see-through AR rendering is discussed in detail, with some consideration of extending it to handle multiple AR objects. A physical implementation for a mobile computing device is proposed, detailing the algorithmic approach along with design decisions. The proposed real-time object tracking and video see-through AR system is verified through simulation, and details of its accuracy, robustness, constraints, and an extension to multiple object tracking are presented. The system is then validated using a ground truth measurement system, and its accuracy, robustness, and limitations are reviewed. A detailed validation analysis is also presented showing the feasibility of extending this approach to multiple objects. Finally, conclusions from this research are presented based on the findings of this work, and further areas of study are proposed.
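As a minimal sketch of the kind of recursive EKF pose update this abstract describes — assuming a pinhole camera with known intrinsics and known 3-D feature locations on the tracked object; the 6-DOF state layout, the numerical Jacobian, and all noise parameters are illustrative assumptions, not the author's implementation:

```python
import numpy as np

def project(pose, points_obj, K):
    """Project 3-D object feature points into pixel coordinates for a
    6-DOF pose = [tx, ty, tz, rx, ry, rz] (translation + Rodrigues rotation)."""
    t, rvec = pose[:3], pose[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-9:
        R = np.eye(3)
    else:
        k = rvec / theta
        Kx = np.array([[0, -k[2], k[1]],
                       [k[2], 0, -k[0]],
                       [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)
    cam = points_obj @ R.T + t          # object frame -> camera frame
    uv = cam @ K.T                      # apply pinhole intrinsics
    return (uv[:, :2] / uv[:, 2:3]).ravel()  # [u1, v1, u2, v2, ...]

def ekf_update(x, P, z, points_obj, K, R_meas):
    """One EKF measurement update of the device-to-object pose estimate x,
    given measured feature pixel locations z (same layout as project())."""
    h = project(x, points_obj, K)
    # Numerical Jacobian of the projection with respect to the pose
    H = np.zeros((h.size, 6))
    eps = 1e-6
    for i in range(6):
        dx = np.zeros(6)
        dx[i] = eps
        H[:, i] = (project(x + dx, points_obj, K) - h) / eps
    S = H @ P @ H.T + R_meas            # innovation covariance
    Kg = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + Kg @ (z - h)            # corrected pose
    P_new = (np.eye(6) - Kg @ H) @ P    # corrected covariance
    return x_new, P_new
```

In a full tracker this update would run once per frame after a prediction step (a constant-pose motion model would simply inflate P by process noise), and the corrected pose would drive the AR rendering for that object.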
2

Night Vision Goggle Simulation in a Mixed Reality Flight Simulator with Seamless Integrated Real World

Sproge, Sofia, January 2024
Night vision goggles (NVGs) are optical devices used to enhance human vision in low-light conditions such as nighttime. The image seen through the goggles is brightened, but at the cost of introducing visual limitations and illusions. Because of this, fighter pilots need to undergo proper training with such equipment before operating with it in real life. An NVG simulation within a Mixed Reality (MR) flight simulator can in theory be used to build the required skills and translate them directly to real life. In this thesis, an NVG effect was added to a video see-through (VST) camera feed so that a complete NVG simulation could be experienced in an MR flight simulator. Furthermore, a method to seamlessly integrate the VST into the nocturnal virtual world was proposed. Through a semi-structured interview with an NVG expert, the experienced realism, presence, and training value of the implemented effects were assessed. A thematic analysis of the gathered interview data provided insight into the most important themes regarding NVG simulations within an MR flight simulator. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
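For illustration, a rough per-frame sketch of how an NVG look of this kind can be approximated on a VST camera image — luminance gain, green phosphor tint, sensor noise, and a circular tube mask; the function name and all constants are illustrative guesses, not the parameters used in the thesis:

```python
import numpy as np

def nvg_effect(frame, gain=4.0, noise_sigma=8.0):
    """Apply a simple night-vision-goggle look to one VST camera frame
    (H x W x 3, uint8)."""
    lum = frame.astype(np.float32).mean(axis=2)          # grey intensity
    lum = np.clip(lum * gain, 0, 255)                    # brighten the scene
    lum += np.random.normal(0.0, noise_sigma, lum.shape) # photon/sensor noise
    lum = np.clip(lum, 0, 255)
    out = np.zeros_like(frame, dtype=np.float32)
    out[..., 1] = lum                                    # green phosphor channel
    out[..., 0] = lum * 0.15                             # faint leakage into
    out[..., 2] = lum * 0.15                             # the other channels
    # Circular mask emulating the limited field of view of the NVG tube
    h, w = lum.shape
    yy, xx = np.ogrid[:h, :w]
    r = min(h, w) * 0.48
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= r ** 2
    out *= mask[..., None]
    return out.astype(np.uint8)
```

A real implementation would run this on the GPU as a shader pass over the camera texture rather than per pixel on the CPU, but the stages are the same.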
3

Local collaboration in a Mixed Reality environment: Adding virtual heads to improve social presence

Detto, Lucas, January 2023
This thesis investigates the use of virtual avatar heads to enhance local video see-through collaboration in a mixed reality environment. When users engage with each other through a head-mounted display, the device sits atop their heads and obstructs their view, concealing their gaze and facial expressions while they collaborate. This makes communication harder by removing non-verbal cues. The proposed solution aims to restore them by rendering virtual avatar heads on top of users' heads. The study examines the impact of different avatar styles, including the potential effects of the uncanny valley, as well as the use of lip syncing versus facial tracking to animate avatar mouths. An application was developed in Unity3D to implement this solution, allowing two users to collaborate in a mixed reality environment with avatars on their heads. An experiment was conducted with 56 participants, who collaborated on two tasks: the twenty questions game and a collaborative object placement task. A between-subjects design was used to compare the presence or absence of avatars, the avatar rendering style, and the avatar lip animation method. During the experiment, social presence, user experience, and performance were measured through questionnaires (the Networked Minds measure of social presence, NASA TLX, and the User Experience Questionnaire) and eye gaze data. The study found that although there was not always a strong difference between no avatar and avatars, the use of cartoon avatars with lip syncing was the most favorable option: it enhanced users' comfort, facilitated interpretation of their partner's emotions and feelings, and drew more attention from the partner, which could be due to the uncanny valley. However, no evidence of performance improvement was found. The findings of this study have important implications for the design of collaborative mixed reality environments, highlighting the potential benefits of using virtual avatars to enhance communication and social presence. The study also underscores the importance of avatar style and facial animation, as well as the potential impact of the uncanny valley on user experience.
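As one concrete reading of the lip-syncing alternative mentioned above, a small sketch that maps microphone loudness to a per-render-frame "mouth open" blendshape weight; the thresholds, frame rate, and smoothing factor are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def lip_sync_weights(audio, sample_rate=48000, fps=72,
                     smoothing=0.6, floor=0.01, ceil=0.12):
    """Map a mono audio signal (floats in [-1, 1]) to one 'mouth open'
    blendshape weight in [0, 1] per render frame."""
    samples_per_frame = sample_rate // fps
    weights, prev = [], 0.0
    for start in range(0, len(audio) - samples_per_frame, samples_per_frame):
        chunk = audio[start:start + samples_per_frame]
        rms = float(np.sqrt(np.mean(chunk ** 2)))          # frame loudness
        raw = min(max((rms - floor) / (ceil - floor), 0.0), 1.0)
        prev = smoothing * prev + (1.0 - smoothing) * raw  # suppress jitter
        weights.append(prev)
    return weights
```

Amplitude-driven animation of this kind needs no extra hardware, which is what makes it a natural baseline against camera-based facial tracking in a comparison like the one described.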
