About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Teleoperation Interfaces in Human-Robot Teams

Driewer, Frauke January 2009
Also published as: Würzburg, Univ., Diss., 2009.
22

Facilitating Human-Robot Collaboration Using a Mixed-Reality Projection System

January 2017
Human-robot collaboration can be challenging, especially when the human and the robot want to work simultaneously on a given task: it becomes difficult for the human to understand the intentions of the robot and vice versa. To overcome this problem, a novel approach using the concept of Mixed Reality has been proposed, which uses the surrounding space as the canvas on which to augment projected information on and around 3D objects. A vision-based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware and context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries. An objective and subjective user study was conducted to test the hypothesis that using this system to execute a human-robot collaborative task results in higher performance than traditional methods such as printed instructions or mobile devices. Multiple measuring tools were devised to analyze the data, leading to the conclusion that the proposed mixed-reality projection system does improve the human-robot team's efficiency and effectiveness, and hence is a promising alternative for the future. / Dissertation/Thesis / Masters Thesis Computer Science 2017
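The projection step described above hinges on mapping positions detected by the camera onto the projector's image plane. A minimal sketch of that mapping as a planar homography with OpenCV; the calibration point values below are hypothetical, not from the thesis:

```python
import cv2
import numpy as np

# Correspondences between the camera view of the work surface and the
# projector's pixel grid (all values hypothetical, from a one-off calibration).
camera_pts = np.float32([[102, 88], [518, 95], [530, 410], [96, 402]])
projector_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

# Planar homography from camera coordinates to projector coordinates.
H = cv2.getPerspectiveTransform(camera_pts, projector_pts)

def to_projector(points_cam: np.ndarray) -> np.ndarray:
    """Map Nx2 camera-space points onto the projector's image plane."""
    pts = points_cam.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: project a warning marker at a detected object's camera position.
object_in_camera = np.array([[310.0, 245.0]])
print(to_projector(object_in_camera))
```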
23

Improving Performance of a Mixed Reality Application on the Edge with Hardware Acceleration

Eriksson, Jesper, Akouri, Christoffer January 2020
Using specialized hardware to accelerate workloads has the potential to bring great performance lifts in various applications. Speeding up the slowest component of an application makes the whole application faster, since it can be no faster than its slowest part. This work investigates two modifications that add hardware support to an existing virtual reality application. The existing application uses a server that handles virtual object rendering; the rendered results are then sent to the end user's mobile phone. In this project, the server-side Simultaneous Localization And Mapping (SLAM) library was modified to use a Compute Unified Device Architecture (CUDA) accelerated variant, and the software encoder and decoder used for video streaming were modified to use specialized hardware. Small changes were made to the client-side application so that the latency measurement still works when the server-side encoder is changed. Accelerating SLAM with CUDA increased the number of frames processed each second and reduced frame processing time, at the cost of added latency between the end device and the edge device. Using the hardware encoder and decoder brought no improvement in latency or processed frames; in fact, the hardware encoder and decoder performed worse than the baseline configuration. The reduced frame processing time indicates that the CUDA platform is beneficial, provided that the additional latency introduced by the implementation is reduced or removed.
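The abstract distinguishes throughput (frames processed per second) from per-frame processing time and end-to-end latency, which are easy to conflate. A minimal instrumentation sketch of that distinction; `process_frame` is a hypothetical stand-in for the real SLAM/encode step:

```python
import time

def process_frame(frame):
    # Hypothetical stand-in for the real SLAM tracking / encode step.
    time.sleep(0.01)
    return frame

def benchmark(frames):
    latencies = []
    start = time.perf_counter()
    for frame in frames:
        t0 = time.perf_counter()
        process_frame(frame)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    fps = len(frames) / elapsed                      # throughput
    mean_latency = sum(latencies) / len(latencies)   # per-frame processing time
    return fps, mean_latency

fps, lat = benchmark(range(100))
print(f"{fps:.1f} fps, {lat * 1000:.1f} ms/frame")
```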
24

Exploring the Efficacy of Using Augmented Reality to Alleviate Common Misconceptions about Natural Selection

January 2019
Evidence suggests that Augmented Reality (AR) may be a powerful tool for alleviating certain lightly held scientific misconceptions. However, many misconceptions surrounding the theory of evolution are deeply held and resistant to change. This study examines whether AR can serve as an effective tool for alleviating these misconceptions by comparing the change in the number of misconceptions expressed by users of a tablet-based version of a well-established classroom simulation to the change expressed by users of AR versions of the simulation. The use of realistic representations of objects is common among AR developers; however, this contradicts well-tested practices of multimedia design that argue against the addition of unnecessary elements. This study therefore also compared representational visualizations in AR, in this case models of ladybug beetles, to symbolic representations, in this case colored circles. To address both research questions, a one-factor, between-subjects experiment was conducted with 189 participants randomly assigned to one of three conditions: non-AR, symbolic AR, and representational AR. Measures of change in the number and types of misconceptions expressed, motivation, and time on task were examined using a pair of planned orthogonal contrasts designed to test the study's two research questions. Participants in the AR-based conditions showed a significantly smaller change in the number of total misconceptions expressed after the treatment, as well as in the number of misconceptions related to intentionality; none of the other misconceptions examined showed a significant difference. No significant differences were found between the representational and symbolic AR-based conditions in the total number of misconceptions expressed, or in motivation. Contrary to the expectation that the simulation would alleviate misconceptions, the average number of misconceptions expressed by participants increased after the treatment. This is theorized to be due to the juxtaposition of virtual and real-world entities resulting in a reduction in assumed intentionality. / Dissertation/Thesis / Doctoral Dissertation Educational Technology 2019
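The analysis rests on a pair of planned orthogonal contrasts over three conditions of 63 participants each. A sketch of how such contrasts could be coded and tested, with synthetic scores standing in for the real misconception-change data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic misconception-change scores per condition (hypothetical values).
groups = {
    "non_AR": rng.normal(1.2, 1.0, 63),
    "symbolic_AR": rng.normal(0.6, 1.0, 63),
    "representational_AR": rng.normal(0.5, 1.0, 63),
}
means = np.array([g.mean() for g in groups.values()])
ns = np.array([len(g) for g in groups.values()])
# Pooled error variance from the one-way ANOVA decomposition.
ms_error = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (sum(ns) - 3)

# Contrast 1: non-AR vs. the two AR conditions.
# Contrast 2: symbolic AR vs. representational AR. Dot product is 0: orthogonal.
for weights in ([2, -1, -1], [0, 1, -1]):
    w = np.array(weights, dtype=float)
    estimate = float((w * means).sum())
    se = float(np.sqrt(ms_error * (w ** 2 / ns).sum()))
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df=sum(ns) - 3)
    print(f"weights={weights}: t={t:.2f}, p={p:.4f}")
```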
25

Visualizing and Interacting with Work Orders in Mixed Reality

Wallentin, Viktor January 2021
Mixed Reality (MR) allows a visual representation of virtual objects in the human field of vision. By visualizing virtual objects, existing working methods can be simplified and even extended in ways that would not otherwise be possible. This project looks at how Mixed Reality can be applied to a work order: MR is used to visualize the position of a work order based on spherical (latitude/longitude) coordinates, and an existing workflow for a work order is represented and interacted with. To achieve this, it must in particular be investigated how to derive the user's position relative to a given work-order position and place the object relative to both the user and the work order. The interaction possibilities must also be investigated: a user should be able to read a work order and update its status.
To obtain a measurable result, an application was developed for a HoloLens 2 device, where the implemented methods demonstrate whether the goals have been achieved. The application is created in Unity using the Mixed Reality Toolkit. The HoloLens application obtains position data from an Android client, since the HoloLens itself cannot acquire position data of this kind. A module that reads QR codes to create an anchor point was also implemented; the anchor point is then used to place and save marker objects relative to its position. A checklist was created to show how the status of a work order can be updated in Mixed Reality without requiring text input. The results show that it is possible to feed GPS data to a HoloLens and render objects based on their coordinates relative to the user. Using GPS data alone is not considered appropriate if the goal is to represent the work order's position more precisely than a rough estimate; a method combining GPS data with Azure Spatial Anchors is proposed for more precise positioning. The resulting application also shows that the HoloLens can display a work order and its contents while a checklist is used to mark steps as completed, creating an example of a workflow without time-consuming input. A marker menu was also produced, showing that objects can be marked with high precision relative to the work order's QR code.
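The central geometric step here, placing a work-order marker relative to the user from two geographic positions, amounts to converting a latitude/longitude difference into a local east/north offset. A minimal sketch using an equirectangular approximation (adequate over the short distances involved); the coordinates are hypothetical:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def local_offset_m(user_lat, user_lon, target_lat, target_lon):
    """East/north offset in metres from the user to the target, using an
    equirectangular approximation (fine for short ranges)."""
    lat0 = math.radians(user_lat)
    d_lat = math.radians(target_lat - user_lat)
    d_lon = math.radians(target_lon - user_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(lat0)
    north = EARTH_RADIUS_M * d_lat
    return east, north

# Hypothetical positions: the user and a work order a short walk away.
east, north = local_offset_m(59.3293, 18.0686, 59.3301, 18.0702)
print(f"place marker {east:.1f} m east, {north:.1f} m north of the user")
```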
26

The Effect of Dynamic Rim Lighting on Users' Visual Attention in the Virtual Environment

Siqi Guo 24 April 2023
We conducted a study in a virtual environment to explore the influence of three types of lighting (dynamic rim lighting vs. static rim lighting vs. no rim lighting) on users' visual attention, and the lighting's potential effects on users' preference and choice-making. We recruited 40 participants to complete a virtual grocery shopping task, and after the experiment the participants were given a survey to self-report their experience. We found that (1) users do not prefer to collect virtual objects with dynamic rim lighting over virtual objects with static rim lighting; (2) users do not prefer to collect virtual objects with rim lighting over virtual objects without lighting; (3) if a virtual object has a warm-colored texture, it is more likely to be chosen when it has dynamic rim lighting than with static rim lighting or no rim lighting; and (4) properties of the dominant color of a virtual object's texture matter: the B value is a good predictor of whether the user tends to choose the object with or without rim lighting, while the R, B, and lightness values are plausible predictors of whether the user tends to choose objects with dynamic or static rim lighting.
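Finding (4) treats the color channels of an object's dominant texture color as predictors of lighting-conditioned choice. One plausible form of such an analysis is a logistic regression; the sketch below uses synthetic stand-in data and hypothetical feature names, not the study's own analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic stand-in data: dominant-color features per object (R, B, lightness,
# each scaled to 0..1) and whether the participant chose the rim-lit variant.
X = rng.uniform(0, 1, size=(200, 3))   # columns: R, B, lightness (hypothetical)
y = (0.8 * X[:, 1] - 0.3 * X[:, 0] + rng.normal(0, 0.3, 200) > 0.2).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients (R, B, lightness):", model.coef_[0])
print("P(choose rim-lit) for a cool, bright object:",
      model.predict_proba([[0.2, 0.9, 0.8]])[0, 1])
```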
27

Training Wayfinding: Natural Movement In Mixed Reality

Savage, Ruthann 01 January 2006
The Army needs a distributed training environment that can be accessed whenever and wherever required for training and mission rehearsal. This paper describes an exploratory experiment designed to investigate the effectiveness of a prototype of such a system in training a navigation task. A wearable computer, an acoustic tracking system, and a see-through head-mounted display (HMD) were used to wirelessly track users' head position and orientation while presenting a graphic representation of their virtual surroundings, through which the user walked using natural movement. As previous studies have shown that virtual environments can be used to train navigation, adding natural movement to a virtual environment may enhance that training through the proprioceptive feedback gained by walking through the environment. Sixty participants were randomly assigned to one of three conditions: route drawing on a printed floor plan, rehearsal in the actual facility, and rehearsal in a mixed reality (MR) environment. Participants, divided equally between male and female in each group, studied verbal directions for the route and then performed three rehearsals: those in the map condition drew the route onto three separate printed floor plans, those in the practice condition walked through the actual facility, and those in the MR condition walked through a three-dimensional virtual environment with landmarks, waypoints, and virtual footprints. A scaling factor was used, with each step in the MR environment equal to three steps in the real environment, and the MR environment was broken into "tiles", like pages in an atlas, through which participants progressed, entering each tile in succession until they completed the entire route. Transfer-of-training testing, consisting of a timed traversal of the route through the actual facility, showed a significant difference in route knowledge based on the total time to complete the route and the number of errors committed while doing so, with "walkers" performing better than participants in the paper map or MR conditions, although the effect was weak. Survey knowledge showed little difference among the three rehearsal conditions. Three standardized tests of spatial abilities did not correlate with route traversal time or errors, or with three of the four orientation localization tasks. Within the MR rehearsal condition there was a clear performance improvement over the three rehearsal trials, as measured by the time required to complete the route in the MR environment, which was accepted as an indication that learning occurred. As measured by the Simulator Sickness Questionnaire, there were no incidents of simulator sickness in the MR environment. Rehearsal in the actual facility was the most effective training condition; however, it is often not an acceptable form of rehearsal given an inaccessible or hostile environment. Performance between participants in the other two conditions was indistinguishable, pointing toward continued experimentation that should include the combined effect of paper map rehearsal with mixed reality, especially as this is likely to be the more realistic case for mission rehearsal, since there is no indication that maps should be eliminated.
Walking through the environment beforehand can enhance Soldiers' understanding of their surroundings, as was evident from participants' comments as they moved from MR to the actual space: "This looks like I was just here", and "There's that pole I kept having trouble with". Such comments suggest that this is a tool worth continuing to explore and apply. While additional research on the scaling and tiling factors is warranted to determine whether the effect applies to other environments or tasks, working with scaled representations is not a new task for most adults who have interacted with maps, where a scaling factor of 1:15,000 is common in orienteering maps and 1:25,000 in military maps. Rehearsal time spent in the MR condition varied widely, some of which could be attributed to an issue referred to as "avatar excursions", a system anomaly that should be addressed in future research. The proprioceptive feedback in MR was expected to positively impact performance scores, and it is very likely what led to the lack of simulator sickness among these participants. The design of the HMD may also have contributed to the minimal reported symptoms, as it allowed participants some peripheral vision that provided orientation cues about their body position and movement. Future research might include a direct comparison between this MR system and a virtual environment in which users move by manipulating an input device such as a mouse or joystick while physically remaining stationary. The exploration and confirmation of the training capabilities of MR is an important step in the development and application of the system to the U.S. Army training mission. This experiment examined one potential training area in a small controlled environment, and it can serve as the foundation for experimentation with more complex tasks such as wayfinding through an urban environment, and/or direct comparison with more established virtual environments to determine strengths as well as areas for improvement, to make MR an effective addition to the Army training mission.
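Two mechanics of this setup, the step scaling and the atlas-style tiling, are easy to make concrete. A sketch under the simplifying assumption of one-dimensional movement along the route; the tile length is hypothetical:

```python
SCALE = 3.0          # one real step advances three steps in the MR environment
TILE_LENGTH = 30.0   # virtual length of one "atlas page" (hypothetical)

def virtual_position(real_distance_m: float) -> tuple[int, float]:
    """Map real walked distance to (tile index, position within that tile)."""
    v = real_distance_m * SCALE
    tile = int(v // TILE_LENGTH)
    return tile, v % TILE_LENGTH

# Walking 25 m in the real room covers 75 virtual metres: tile 2, 15 m in.
print(virtual_position(25.0))
```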
28

Augmentation in Visual Reality (AVR)

Zhang, Yunjun 01 January 2007
Human eyes, as the organs for sensing light and processing visual information, enable us to see the real world. Though invaluable, they give us no way to "edit" the received visual stream or to "switch" to a different channel. The invention of motion pictures and computer technologies in the last century enables us to add an extra layer of modifications between the real world and our eyes. We consider two major approaches to such modification: offline augmentation and online augmentation. The movie industry has pushed offline augmentation to an extreme level; audiences can experience visual surprises that they have never seen in their real lives, even though producing the special visual effects may take months or years. Online augmentation, on the other hand, requires that modifications be performed in real time. This dissertation addresses problems in both offline and online augmentation. The first offline problem addressed here is the generation of plausible video sequences after removing relatively large objects from the original videos. In order to maintain temporal coherence among the frames, a motion layer segmentation method is applied. From this, a set of synthesized layers is generated by applying motion compensation and a region completion algorithm. Finally, a plausibly realistic new video, in which the selected object is removed, is rendered from the synthesized layers and the motion parameters. The second problem we address is the construction of a blue screen key for video synthesis or blending in Mixed Reality (MR) applications. As a well-researched area, blue screen keying extracts a range of colors, typically in the blue spectrum, from a captured video sequence to enable the compositing of multiple image sources. Under ideal conditions with uniform lighting and background color, a high-quality key can be generated by commercial products, even in real time. However, a Mixed Reality application typically involves a head-mounted display (HMD) with poor camera quality, which requires the keying algorithm to be robust in the presence of noise. We use a three-stage keying algorithm to reduce noise in the key output: first, a standard blue screen keying algorithm is applied to the input to get a noisy key; second, the image gradient information and the corresponding region are compared with the result of the first step to remove noise in the blue screen area; and finally, a matting approach is applied to the boundary of the key to improve its quality. Another offline problem we address in this dissertation is the acquisition of the correct transformations between the different coordinate frames in a Mixed Reality (MR) application. Typically an MR system includes at least one tracking system, so the 3D coordinate frames that need to be considered include the cameras, the tracker, the tracker system, and the world. Accurately deriving the transformation between the head-mounted display camera and the affixed 6-DOF tracker is critical for mixed reality applications: this transformation brings the HMD cameras into the tracking coordinate frame, which in turn overlaps with a virtual coordinate frame to create a plausible mixed visual experience. We use a non-linear optimization method to recover the camera-tracker transformation by minimizing the image reprojection error. For online applications, we address the problem of extending the luminance range in mixed reality environments.
We achieve this by introducing Enhanced Dynamic Range Video, a technique based on differing brightness settings for each eye of a video see-through head-mounted display (HMD). We first construct a Video-Driven Time-Stamped Ball Cloud (VDTSBC), which serves as a guideline and a means to store temporal color information for stereo image registration. With the assistance of the VDTSBC, we register each pair of stereo images, taking into account confounding issues of occlusion occurring in one eye but not the other. Finally, we apply luminance enhancement to the registered image pairs to generate an Enhanced Dynamic Range Video.
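The camera-tracker calibration described reduces to minimizing image reprojection error over the six parameters of a rigid transform. A compressed, self-checking sketch with SciPy, assuming a pinhole camera without lens distortion and a single synthetic view; the dissertation's actual formulation may differ:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

fx = fy = 800.0
cx, cy = 320.0, 240.0   # pinhole intrinsics (hypothetical)

def project(pts_cam):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    return np.stack([fx * pts_cam[:, 0] / pts_cam[:, 2] + cx,
                     fy * pts_cam[:, 1] / pts_cam[:, 2] + cy], axis=1)

# Synthetic ground truth: reference points in the tracker frame and the true
# tracker-to-camera transform we pretend not to know (all values hypothetical).
rng = np.random.default_rng(2)
pts_tracker = rng.uniform(-0.5, 0.5, (20, 3)) + np.array([0.0, 0.0, 2.0])
true_rotvec = np.array([0.05, -0.02, 0.01])
true_t = np.array([0.03, 0.10, -0.02])
observed = project(Rotation.from_rotvec(true_rotvec).apply(pts_tracker) + true_t)

def residuals(params):
    """Reprojection error of a guessed tracker-to-camera transform
    (3 rotation-vector components + 3 translation components)."""
    guess = Rotation.from_rotvec(params[:3]).apply(pts_tracker) + params[3:]
    return (project(guess) - observed).ravel()

result = least_squares(residuals, x0=np.zeros(6))
print("recovered rotation vector:", result.x[:3])   # should approximate true_rotvec
print("recovered translation:   ", result.x[3:])    # should approximate true_t
```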
29

Developing an Augmented Reality Visual Clutter Score Through Establishing the Applicability of Image Analysis Measures of Clutter and the Analysis of Augmented Reality User Interface Properties

Flittner, Jonathan Garth 05 September 2023
Augmented reality (AR) is seeing a rapid expansion into several domains due to the proliferation of more accessible and powerful hardware. While augmented reality user interfaces (AR UIs) allow the presentation of information atop the real world, this extra visual data potentially comes at the cost of increasing the visual clutter of the user's field of view, which can increase visual search time and error rates and have an overall negative effect on performance. Visual clutter has been studied for existing display technologies, but there are no established measures of visual clutter for AR UIs, which precludes the study of the effects of clutter on performance in AR UIs. The first objective of this research is to determine the applicability of extant image analysis measures of feature congestion, edge density, and sub-band entropy for measuring visual clutter in head-worn optical see-through AR, and to establish a relationship between image analysis measures of clutter and visual search time. These image analysis measures were chosen to quantify clutter because they can be applied to complex and naturalistic scenes, such as those commonly experienced while using an optical see-through AR UI. The second objective is to examine the effects of AR UIs comprised of multiple apparent depths on user performance through the metric of visual search time. The third objective is to determine the effects of other AR UI properties, such as target clutter, target eccentricity, target apparent depth, and target total distance, on performance as measured through visual search time. These results are then used to develop a visual clutter score, which rates different AR UIs against each other. The feature congestion, edge density, and sub-band entropy measures were correlated with visual search time when taken for the overall AR UI and when taken for the target object a participant was searching for. In the case of an AR UI comprised of both projected and AR parts, the image analysis measures were not correlated with visual search time for the constituent AR UI parts (projected or AR) but were still correlated with the overall AR UI clutter. Target eccentricity also had an effect on visual search time, while target apparent depth and target total distance from center did not. Target type and AR object percentage also had an effect on visual search time. These results were synthesized, using multiple regression, into a general model known as the "AR UI Visual Clutter Score Algorithm". This model can be used to compare different AR UIs to each other in order to identify the AR UI that is projected to have lower target visual search times. / Doctor of Philosophy / Augmented reality is a novel but growing technology. The ability to project visual information into the real world comes with many benefits, but at the cost of increasing visual clutter. Visual clutter in existing displays has been shown to negatively affect visual search time, error rates, and general performance, but there are no established measures of visual clutter for augmented reality displays, so it is unknown whether visual clutter will have the same effects there. The first objective of this research is to establish measures of visual clutter for augmented reality displays. The second objective is to better understand the unique properties of augmented reality displays and how they may affect ease of use.
Measures of visual clutter were correlated with visual search time when taken for the whole augmented reality user interface and when taken for a given target object that a participant was searching for. It was also found that visual search time increased as targets got farther from the center of the field of view, while the target's apparent depth and its total distance from the user had no such effect. Study 1 also showed that target type and AR object percentage had an effect on visual search time. Combining these results gives a model that can be used to compare different augmented reality user interfaces to each other.
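Of the three image analysis measures, edge density is the simplest to state precisely: the fraction of pixels an edge detector marks in the region of interest. A sketch with OpenCV; the Canny thresholds and the random test frame are hypothetical stand-ins:

```python
import cv2
import numpy as np

def edge_density(image_bgr: np.ndarray, low: int = 100, high: int = 200) -> float:
    """Fraction of pixels marked as edges by the Canny detector --
    one of the clutter measures discussed (thresholds hypothetical)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    return float(np.count_nonzero(edges)) / edges.size

# Synthetic stand-in for a captured AR UI frame.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(f"edge density: {edge_density(frame):.3f}")
```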
30

A Modern Method for Manual, Collision-Free Telemanipulation of Industrial Robots Based on a Digital Twin

Pospiech, Th., Gysin, M. 12 February 2024
This paper presents an implemented overall concept for manual, collision-free telemanipulation of industrial robots based on a digital twin. A demonstrator setup replicates a manual pick-and-place application: the manually controlled industrial robot is to pick up small glass vials filled with liquid, transport them, and set them down at a defined deposit location without collision. All necessary steps are presented and explained in a way that can be followed. The focus is on the definition and realization of the digital twin, the design options for the workspace and its monitoring, and the collision checking. The system requirements for implementing the concept are also presented. The actual motion control of the industrial robot is verified with different manipulators.
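Collision checking against a digital twin typically means testing a commanded motion against the twin's workspace geometry before the real robot executes it. A heavily simplified sketch of that idea using sphere approximations; all geometry values are hypothetical:

```python
import numpy as np

# Workspace obstacles from the digital twin, as bounding spheres:
# (centre xyz in metres, radius). All values hypothetical.
obstacles = [(np.array([0.6, 0.2, 0.3]), 0.10),
             (np.array([0.4, -0.1, 0.5]), 0.15)]

def path_is_collision_free(waypoints, tool_radius=0.05, steps=50):
    """Densely sample the straight-line path between waypoints and verify the
    tool sphere keeps clear of every obstacle sphere in the twin."""
    for a, b in zip(waypoints, waypoints[1:]):
        for s in np.linspace(0.0, 1.0, steps):
            p = (1 - s) * a + s * b
            for centre, radius in obstacles:
                if np.linalg.norm(p - centre) < radius + tool_radius:
                    return False   # commanded motion would collide: block it
    return True

pick = np.array([0.2, 0.0, 0.4])
place = np.array([0.8, 0.3, 0.4])
print("safe to execute:", path_is_collision_free([pick, place]))
```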
