1

A Model-Based Approach to Hands Overlay for Augmented Reality

Adolfsson, Fredrik January 2021 (has links)
Augmented Reality is a technology where the user sees the real environment mixed with virtual content such as text, animations, pictures, and videos. Remote guidance is a sub-field of Augmented Reality in which guidance is given remotely to identify and solve problems without being there in person. Using hands overlay, the guide can use his or her hand to point and show gestures in real time. To do this, one needs to track the hands and create a video stream that represents them. The video stream of the hands is then overlaid on top of the video from the individual receiving help. A solution currently used in industry is image segmentation, which separates an image into foreground and background to decide what to include. For this to work correctly, there must be distinct differences between the pixels that should be included and the ones that should be discarded. This thesis instead investigates a model-based approach to hand tracking, in which points of interest on the hands are tracked to build a 3D model of them. A model-based solution is based on sensor data, meaning that it would not share the limitations of image segmentation. A prototype is developed and integrated into the existing solution: the hand modeling is done in a Unity application and then transferred into the existing application. The results show a clear but modest overhead, so the prototype can run on a normal computer. It works as a proof of concept and shows the potential of a model-based approach.
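The model-based idea described above can be sketched minimally (all names, values, and the camera model are our illustrative assumptions, not the thesis prototype's): instead of segmenting pixels, take 3D points of interest on the hand, as a sensor-based hand tracker would supply them, and project them into the 2D video frame where a renderer would draw the overlay.

```python
import numpy as np

def project_landmarks(points_3d, focal_length, frame_w, frame_h):
    """Pinhole projection of Nx3 hand landmarks (camera coordinates,
    metres) into pixel coordinates of a frame of size (frame_w, frame_h)."""
    pts = np.asarray(points_3d, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    u = focal_length * x / z + frame_w / 2   # principal point assumed at centre
    v = focal_length * y / z + frame_h / 2
    return np.stack([u, v], axis=1)

# Three illustrative landmark points 0.5 m in front of the camera.
landmarks = [(0.0, 0.0, 0.5), (0.05, 0.0, 0.5), (0.0, -0.05, 0.5)]
pixels = project_landmarks(landmarks, focal_length=800, frame_w=1280, frame_h=720)
```

Because the input is tracked geometry rather than pixel colour, this step is unaffected by how similar the hand looks to the background.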
2

Augmented Remote Guidance in Final Assembly of Military Aircraft

Säll, Jakob January 2018 (has links)
With today’s smartphones and smart glasses, and the progression of augmented reality, the possibilities to interact over distance have made it feasible to guide one another in an intuitive and effective way. This combination of technology and software principles allows a local operator to record a scenario from his or her point of view and show it to a remotely located expert. The expert can, in turn, help the operator by interacting through that video feed, highlighting aspects and overlaying information for the operator to see. The aim of this study was to investigate how such a system should be configured if it were to be implemented in the context of final assembly of military aircraft. An understanding of the context and the situations where external help might be needed was established through an ethnographic study. User tests were then conducted with an existing system in comparable cases, inspired by results from the first study, in order to evaluate the configuration of the hardware and the interactivity. Results indicate that it is useful to implement a remote guidance system that allows augmented overlays in the context of final assembly. A greater need for such a system was found in situations in which a subject matter expert must investigate and assess issues and errors that have occurred. These scenarios are characterized by varying environments, from cases with a good overview to situations in which mirrors must be used to see beyond one’s field of view from the right angle. A remote guidance system should support both cases and must, therefore, be modular, so that an external camera can be used to reach in while the screen is viewed simultaneously. The need for interaction between the participants in such situations appears limited: the user studies indicate that simple referential gestures on frozen images of a video feed might be enough. / Global Assembly Instruction Strategies (GAIS) 2
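The finding that "simple referential gestures on frozen images of a video feed might be enough" can be illustrated with a minimal sketch (the function name, marker shape, and sizes are ours, not the study's): freeze one frame of the feed and stamp a pointer marker on it for the remote expert to reference.

```python
import numpy as np

def freeze_and_point(frame, x, y, radius=5):
    """Return a frozen copy of `frame` (HxWx3 uint8) with a filled red
    disc of the given radius stamped at pixel (x, y)."""
    frozen = frame.copy()                      # freeze: later frames leave it unchanged
    h, w = frozen.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    frozen[mask] = (255, 0, 0)                 # red pointer marker
    return frozen

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a captured frame
annotated = freeze_and_point(frame, 320, 240)
```

Working on a copy of the frame keeps the annotation stable even while the live feed continues to move, which is exactly what makes frozen-image gestures simple to follow.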
3

Robust Background Segmentation For Use in Real-time Application : A study on using available foreground-background segmentation research for real-world application / Robust bakgrundssegmentering för använding i realtids-applikation

Brynielsson, Emil January 2023 (has links)
In a world reliant on big industries to produce large quantities of more or less every product in use, it is of utmost importance that the machines in those industries keep running with a minimum of downtime. One way that more and more providers of industrial machines try to help their customers reduce downtime when a machine stops working or needs maintenance is remote guidance: a form of knowledge transfer in which a technician guides a regular employee in real time to solve the task, so that the technician does not need to travel to the factory. One technology that comes to mind for such a guidance system is augmented reality: a technician records his or her hand and, in real time, this is overlaid on the video stream the on-site employee sees. Such systems exist today; however, separating the technician's hand from the background can be a complex task, especially if the background is not a single colour or the hand has a colour similar to the background. These limitations of background separation are what this thesis aims to address. It does so by creating a test dataset containing five background scenarios deemed representative of what a user of the product could find without going out of their way. In each of the five scenarios, two videos are recorded: one with a white hand and one with a hand wearing a black glove. A machine learning model is then trained in a couple of different configurations and evaluated on the test scenarios, and the best of the models is also run directly on a mobile phone. The model achieved rather promising background segmentation, and real-time performance was achievable on a computer with a dedicated GPU; on the mobile device, however, processing times proved insufficient.
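The colour-threshold style of segmentation whose limits this thesis targets can be sketched as follows (the threshold and colours are illustrative, not taken from the thesis): pixels close to a known background colour are discarded, and everything else is kept as "hand". The sketch also shows the failure mode the abstract describes, where a hand pixel with a colour similar to the background is wrongly discarded.

```python
import numpy as np

def segment_foreground(frame, bg_color, tol=30):
    """Return a boolean HxW mask, True where the pixel differs from
    bg_color by more than `tol` in at least one channel (foreground)."""
    diff = np.abs(frame.astype(int) - np.asarray(bg_color, dtype=int))
    return (diff > tol).any(axis=-1)

frame = np.full((4, 4, 3), (0, 200, 0), dtype=np.uint8)   # uniform green background
frame[1, 1] = (210, 180, 160)   # skin-toned pixel: clearly different, kept
frame[2, 2] = (20, 190, 10)     # hand pixel close to the background colour
mask = segment_foreground(frame, bg_color=(0, 200, 0))
```

Here `mask[1, 1]` is correctly True while `mask[2, 2]` is False: the pixel falls inside the tolerance and is misclassified as background, which is the limitation a learned segmentation model aims to overcome.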
4

Improving 3D Remote Guidance using Shared AR Spaces : Separating responsibility of tracking and rendering 3D AR‐objects / Förbättrande av avståndssamarbete i 3D via delade AR‐rymder

Mansén, Erik January 2022 (has links)
Two common problems in Remote Guidance applications are the remote guide's lack of direct control over their view into the worker's physical environment, and the difficulty of placing virtual 3D objects in a real 3D environment via a moving, shaky 2D image. The first issue can be called a lack of remote spatial awareness: the guide can see only what the worker enables them to see, and in the worst case the guide is rendered blind to the task environment while the worker is unable to use their device, a common occurrence in tasks that require both hands. The second issue arises from the inherent difficulty of correctly placing a 3D object using only a limited perspective; camera shake and unreliable tracking of the physical environment only add to this problem. Studies show that 3D annotations make for a much more effective means of communication, especially in 3D task environments, and allowing the guide some measure of control over their own view has been shown to improve the guide's ability to aid their partner. This paper investigates a method of Remote Guidance in which the tasks of environment tracking and object placement are separated. A prototype application is developed and tested against a baseline 2D-annotation Remote Guidance tool. The study finds the prototype to be an effective way of placing virtual 3D objects in a remote environment, and experimental results show that communication is indeed improved by the inclusion of 3D objects in Remote Guidance. This comes at the cost of a slight increase in the time taken to complete a task, as the complexity of the 3D tool is greater than that of the 2D one. Unfortunately, the experiment performed fails to properly account for remote spatial awareness.
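The separation of tracking and rendering described above can be sketched minimally (the camera model, poses, and values are our illustrative assumptions, not the prototype's): the 3D annotation is stored once in world coordinates, and the renderer re-projects it for each new camera pose, so the object stays anchored in the scene even while the 2D view moves or shakes.

```python
import numpy as np

def project(world_point, cam_pos, focal=800, cx=640, cy=360):
    """Project a world-space point into pixel coordinates for a camera at
    cam_pos looking down +Z with no rotation (a simplifying assumption)."""
    p = np.asarray(world_point, float) - np.asarray(cam_pos, float)
    return (focal * p[0] / p[2] + cx, focal * p[1] / p[2] + cy)

anchor = (0.1, 0.0, 2.0)    # annotation placed once, in world coordinates
view_a = project(anchor, cam_pos=(0.0, 0.0, 0.0))
view_b = project(anchor, cam_pos=(0.1, 0.0, 0.0))   # camera shifted 10 cm right
```

The anchor itself never changes between the two frames; only its projected screen position does, which is what lets the annotation survive camera shake that would defeat a purely 2D image annotation.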
