21

Exploring the Efficacy of Using Augmented Reality to Alleviate Common Misconceptions about Natural Selection

January 2019
abstract: Evidence suggests that Augmented Reality (AR) may be a powerful tool for alleviating certain lightly held scientific misconceptions. However, many misconceptions surrounding the theory of evolution are deeply held and resistant to change. This study examines whether AR can serve as an effective tool for alleviating these misconceptions by comparing the change in the number of misconceptions expressed by users of a tablet-based version of a well-established classroom simulation to the change expressed by users of AR versions of the simulation. The use of realistic representations of objects is common among AR developers; however, this contradicts well-tested practices of multimedia design that argue against the addition of unnecessary elements. This study therefore also compared representational visualizations in AR, in this case models of ladybug beetles, to symbolic representations, in this case colored circles. To address both research questions, a one-factor, between-subjects experiment was conducted with 189 participants randomly assigned to one of three conditions: non-AR, symbolic AR, and representational AR. Measures of the change in the number and types of misconceptions expressed, motivation, and time on task were examined using a pair of planned orthogonal contrasts designed to test the study's two research questions. Participants in the AR-based conditions showed a significantly smaller change in the number of total misconceptions expressed after the treatment, as well as in the number of misconceptions related to intentionality; none of the other misconceptions examined showed a significant difference. No significant differences were found between the representational and symbolic AR-based conditions in the total number of misconceptions expressed, or in motivation.
Contrary to the expectation that the simulation would alleviate misconceptions, the number of misconceptions expressed by participants increased on average. This is theorized to be due to the juxtaposition of virtual and real-world entities, resulting in a reduction in assumed intentionality. / Dissertation/Thesis / Doctoral Dissertation Educational Technology 2019
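The pair of planned orthogonal contrasts described above can be sketched as follows. The contrast weights use the standard coding for one control condition versus two pooled treatment conditions; the group means are purely illustrative placeholders, not the study's data.

```python
# Sketch of the study's two planned orthogonal contrasts over the three
# conditions: non-AR, symbolic AR, representational AR.
# The group means below are illustrative, not from the dissertation.

conditions = ["non-AR", "symbolic AR", "representational AR"]

# Contrast 1: non-AR vs. the two pooled AR conditions (research question 1).
c1 = [2, -1, -1]
# Contrast 2: symbolic AR vs. representational AR (research question 2).
c2 = [0, 1, -1]

# Orthogonality check: the products of the weights sum to zero.
assert sum(w1 * w2 for w1, w2 in zip(c1, c2)) == 0

def contrast_estimate(weights, means):
    """Linear combination of group means defined by the contrast weights."""
    return sum(w * m for w, m in zip(weights, means))

# Illustrative group means of 'change in misconceptions expressed'.
means = [1.9, 1.2, 1.3]
print(contrast_estimate(c1, means))  # non-AR vs. pooled AR
print(contrast_estimate(c2, means))  # symbolic vs. representational
```

Because the two contrasts are orthogonal, they partition the between-groups variance into independent tests, one per research question.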
22

Visualisera och interagera med arbetsordrar i Mixed Reality

Wallentin, Viktor January 2021
Mixed Reality (MR) allows a visual representation of virtual objects in the human field of vision. By visualizing virtual objects, existing working methods can be simplified and even expanded in ways that would not otherwise be possible. The project looks at how Mixed Reality can be applied to a work order, visualizing the work order's position based on spherical coordinates, and at how an existing workflow for a work order can be represented and interacted with. To achieve this, it must be investigated in particular how to derive the user's position relative to a given work-order position and place the object relative to both the user and the work order. The interaction possibilities must also be investigated: a work order should be presented to the user, who should also be able to update its status.
To achieve a measurable result, an application is developed for a HoloLens 2 device, where the implemented methods demonstrate whether the goals have been achieved. The application is created in Unity using the Mixed Reality Toolkit. The HoloLens application obtains position data from an Android client, as the HoloLens itself cannot acquire position data based on spherical coordinates. A module that reads QR codes to create an anchor point is applied; the anchor point is then used to place and save marker objects relative to its position. A checklist is created to show how status reporting can be performed in Mixed Reality without requiring text input. The results show that it is fully possible to obtain GPS data on a HoloLens and render objects based on their coordinates relative to the user. Using GPS data alone is not considered appropriate if the goal is to represent the work order's position more precisely than a rough estimate, and a method combining GPS data with Azure Spatial Anchors is proposed for more precise positioning. The resulting application also shows that the HoloLens can display a work order and its contents while a checklist is used to mark steps of the work order as completed, creating an example of a workflow without time-consuming input. A menu for markers is produced, showing that it is possible to mark objects with high precision relative to the work order's QR code.
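The positioning step described above — placing a work-order marker relative to the user from GPS-style spherical coordinates — can be sketched as a conversion of the latitude/longitude difference into a local east/north offset. The function name and coordinate values are illustrative assumptions, not code from the thesis.

```python
import math

# Illustrative sketch (not the thesis implementation): convert the GPS
# position of a work order into a flat east/north offset, in metres,
# relative to the user's GPS position, using an equirectangular
# approximation that is adequate over short distances.

EARTH_RADIUS_M = 6_371_000.0

def local_offset_m(user_lat, user_lon, target_lat, target_lon):
    """Return (east, north) offset in metres from user to target."""
    lat0 = math.radians(user_lat)
    d_lat = math.radians(target_lat - user_lat)
    d_lon = math.radians(target_lon - user_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(lat0)
    north = EARTH_RADIUS_M * d_lat
    return east, north

# A target roughly 111 m due north of the user.
east, north = local_offset_m(59.3290, 18.0680, 59.3300, 18.0680)
print(round(east, 1), round(north, 1))
```

The resulting offset could then be applied to the user's pose in the scene; the thesis's observation that raw GPS is only a rough estimate is why an anchor-based refinement (QR codes, Azure Spatial Anchors) is layered on top.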
23

THE EFFECT OF DYNAMIC RIM LIGHTING ON USERS' VISUAL ATTENTION IN THE VIRTUAL ENVIRONMENT

Siqi Guo, 24 April 2023
<p>We conducted a study in a virtual environment to explore the influence of three types of lighting (dynamic rim lighting vs. static rim lighting vs. no rim lighting) on users’ visual attention, and the lighting’s potential effects on users’ preferences and choice-making. We recruited 40 participants to complete a virtual grocery shopping task, and after the experiment the participants were given a survey to self-report their experience. We found that (1) users do not prefer to collect virtual objects with dynamic rim lighting over virtual objects with static rim lighting; (2) users do not prefer to collect virtual objects with rim lighting over virtual objects without lighting; (3) if a virtual object has a warm-colored texture, it is more likely to be chosen when it has dynamic rim lighting than with static rim lighting or no rim lighting; and (4) properties of the dominant color of a virtual object’s texture matter: the B value is a good predictor of whether the user tends to choose the object with or without rim lighting, while the R, B, and Lightness values are plausible predictors of whether the user tends to choose virtual objects with dynamic or static rim lighting. </p>
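Finding (4) concerns color properties of a texture's dominant color. A minimal sketch of how such features might be computed follows; the pixel data and helper names are invented for illustration and are not from the thesis.

```python
from collections import Counter

# Hedged sketch of the kind of colour features the study describes:
# a texture's dominant colour and that colour's HSL lightness.

def dominant_color(pixels):
    """Most frequent (r, g, b) tuple in a flat list of pixels."""
    return Counter(pixels).most_common(1)[0][0]

def lightness(rgb):
    """HSL lightness of an 8-bit (r, g, b) colour, in [0, 1]."""
    mx, mn = max(rgb) / 255.0, min(rgb) / 255.0
    return (mx + mn) / 2.0

# A mostly warm (reddish) toy texture.
pixels = [(200, 80, 60)] * 5 + [(40, 40, 200)] * 2
dom = dominant_color(pixels)
print(dom, round(lightness(dom), 3))
```

Features such as the dominant color's R, B, and lightness values could then feed a classifier predicting which lighting variant a user is likely to choose.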
24

Training Wayfinding: Natural Movement In Mixed Reality

Savage, Ruthann 01 January 2006
The Army needs a distributed training environment that can be accessed whenever and wherever required for training and mission rehearsal. This paper describes an exploratory experiment designed to investigate the effectiveness of a prototype of such a system in training a navigation task. A wearable computer, acoustic tracking system, and see-through head mounted display (HMD) were used to wirelessly track users' head position and orientation while presenting a graphic representation of their virtual surroundings, through which the user walked using natural movement. As previous studies have shown that virtual environments can be used to train navigation, the ability to add natural movement to a type of virtual environment may enhance that training, based on the proprioceptive feedback gained by walking through the environment. Sixty participants were randomly assigned to one of three conditions: route drawing on a printed floor plan, rehearsal in the actual facility, and rehearsal in a mixed reality (MR) environment. Participants, divided equally between male and female in each group, studied verbal directions for the route, then performed three rehearsals of it, with those in the map condition drawing it onto three separate printed floor plans, those in the practice condition walking through the actual facility, and participants in the MR condition walking through a three-dimensional virtual environment with landmarks, waypoints, and virtual footprints. A scaling factor was used, with each step in the MR environment equal to three steps in the real environment; the MR environment was also broken into "tiles", like pages in an atlas, through which participants progressed, entering each tile in succession until they completed the entire route.
Transfer-of-training testing, which consisted of a timed traversal of the route through the actual facility, showed a significant difference in route knowledge based on the total time to complete the route and the number of errors committed while doing so, with "walkers" performing better than participants in the paper map or MR condition, although the effect was weak. Survey knowledge showed little difference among the three rehearsal conditions. Three standardized tests of spatial abilities did not correlate with route traversal time, with errors, or with 3 of the 4 orientation localization tasks. Within the MR rehearsal condition there was a clear performance improvement over the three rehearsal trials, as measured by the time required to complete the route in the MR environment, which was accepted as an indication that learning occurred. As measured using the Simulator Sickness Questionnaire, there were no incidents of simulator sickness in the MR environment. Rehearsal in the actual facility was the most effective training condition; however, it is often not an acceptable form of rehearsal given an inaccessible or hostile environment. Performance between participants in the other two conditions was indistinguishable, pointing toward continued experimentation that should include the combined effect of paper map rehearsal with mixed reality, especially as it is likely to be the more realistic case for mission rehearsal, since there is no indication that maps should be eliminated. Walking through the environment beforehand can enhance Soldiers' understanding of their surroundings, as was evident through the comments from participants as they moved from MR to the actual space: "This looks like I was just here", and "There's that pole I kept having trouble with". Such comments lead one to believe that this is a tool to continue to explore and apply.
While additional research on the scaling and tiling factors is likely warranted, to determine if the effect can be applied to other environments or tasks, it should be pointed out that this is not a new task for most adults who have interacted with maps, where a scaling factor of 1 to 15,000 is common in orienteering maps and 1 to 25,000 in military maps. Rehearsal time spent in the MR condition varied widely, some of which could be blamed on an issue referred to as "avatar excursions", a system anomaly that should be addressed in future research. The proprioceptive feedback in MR was expected to positively impact performance scores. It is very likely that proprioceptive feedback is what led to the lack of simulator sickness among these participants. The design of the HMD may have aided in the minimal reported symptoms, as it allowed participants some peripheral vision that provided orientation cues as to their body position and movement. Future research might include a direct comparison between this MR and a virtual environment system through which users move by manipulating an input device such as a mouse or joystick while physically remaining stationary. The exploration and confirmation of the training capabilities of MR is an important step in the development and application of the system to the U.S. Army training mission. This experiment was designed to examine one potential training area in a small controlled environment, which can be used as the foundation for experimentation with more complex tasks such as wayfinding through an urban environment, and/or direct comparison to more established virtual environments to determine strengths, as well as areas for improvement, to make MR an effective addition to the Army training mission.
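The scaling and tiling scheme described above can be sketched as a simple coordinate mapping. The tile size and positions are hypothetical, since the experiment's actual dimensions are not given here; only the 1:3 step scaling comes from the abstract.

```python
# Illustrative sketch of the rehearsal geometry described above: a 1:3
# scaling factor between MR steps and real steps, and an atlas-like
# tiling of the route. Tile size and coordinates are hypothetical.

SCALE = 3.0          # one MR step covers three real-world steps
TILE_SIZE_M = 10.0   # hypothetical side length of one real-world tile

def mr_to_real(mr_x, mr_y):
    """Map an MR-environment position to real-world metres."""
    return mr_x * SCALE, mr_y * SCALE

def tile_index(real_x, real_y):
    """Atlas 'page' (column, row) containing a real-world position."""
    return int(real_x // TILE_SIZE_M), int(real_y // TILE_SIZE_M)

x, y = mr_to_real(4.0, 7.0)       # 12.0, 21.0 real metres
print((x, y), tile_index(x, y))   # tile (1, 2)
```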
25

Developing an Augmented Reality Visual Clutter Score Through Establishing the Applicability of Image Analysis Measures of Clutter and the Analysis of Augmented Reality User Interface Properties

Flittner, Jonathan Garth 05 September 2023
Augmented reality (AR) is seeing a rapid expansion into several domains due to the proliferation of more accessible and powerful hardware. While augmented reality user interfaces (AR UIs) allow the presentation of information atop the real world, this extra visual data potentially comes at the cost of increasing the visual clutter of the user's field of view, which can increase visual search time and error rates and have an overall negative effect on performance. Visual clutter has been studied for existing display technologies, but there are no established measures of visual clutter for AR UIs, which precludes the study of the effects of clutter on performance in AR UIs. The first objective of this research is to determine the applicability of the extant image analysis measures of feature congestion, edge density, and sub-band entropy for measuring visual clutter in the head-worn optical see-through AR space, and to establish a relationship between image analysis measures of clutter and visual search time. These image analysis measures are specifically chosen to quantify clutter because they can be applied to the complex and naturalistic scenes commonly experienced while using an optical see-through AR UI. The second objective is to examine the effects of AR UIs comprised of multiple apparent depths on user performance through the metric of visual search time. The third objective is to determine the effects of other AR UI properties, such as target clutter, target eccentricity, target apparent depth, and target total distance, on performance as measured through visual search time. These results are then used to develop a visual clutter score, which rates different AR UIs against each other. Image analysis measures of clutter (feature congestion, edge density, and sub-band entropy) were correlated to visual search time when they were taken for the overall AR UI and when they were taken for a target object that a participant was searching for.
In the case of an AR UI comprised of both projected and AR parts, image analysis measures were not correlated to visual search time for the constituent AR UI parts (projected or AR) but were still correlated to the overall AR UI clutter. Target eccentricity also had an effect on visual search time, while target apparent depth and target total distance from center did not. Target type and AR object percentage also had an effect on visual search time. These results were synthesized into a general model, the "AR UI Visual Clutter Score Algorithm", using a multiple regression. This model can be used to compare different AR UIs to each other in order to identify the AR UI that is projected to have lower target visual search times. / Doctor of Philosophy / Augmented reality is a novel but growing technology. The ability to project visual information into the real world comes with many benefits, but at the cost of increasing visual clutter. Visual clutter in existing displays has been shown to negatively affect visual search time, error rates, and general performance, but there are no established measures of visual clutter for augmented reality displays, so it is unknown whether visual clutter will have the same effects. The first objective of this research is to establish measures of visual clutter for augmented reality displays. The second objective is to better understand the unique properties of augmented reality displays and how they may affect ease of use. Measures of visual clutter were correlated to visual search time when they were taken for the augmented reality user interface and when they were taken for a target object that a participant was searching for. It was also found that as targets got farther from the center of the field of view, visual search time increased, while a target's apparent depth and its total distance from the user had no such effect.
Study 1 also showed that target type and AR object percentage also had an effect on visual search time. Combining these results gives a model that can be used to compare different augmented reality user interfaces to each other.
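The shape of such a clutter-score model — a linear combination of the three image-analysis measures fitted by multiple regression — can be sketched as follows. The weights and intercept are placeholders, not the dissertation's fitted coefficients.

```python
# Hypothetical sketch of an 'AR UI visual clutter score' as a linear
# model over the three image-analysis measures named above. The weights
# and intercept are placeholders, not fitted regression coefficients.

WEIGHTS = {
    "feature_congestion": 0.45,
    "edge_density": 0.35,
    "subband_entropy": 0.20,
}
INTERCEPT = 0.1

def clutter_score(measures):
    """Weighted sum of normalised clutter measures (each in [0, 1])."""
    return INTERCEPT + sum(WEIGHTS[k] * measures[k] for k in WEIGHTS)

ui_a = {"feature_congestion": 0.8, "edge_density": 0.6, "subband_entropy": 0.5}
ui_b = {"feature_congestion": 0.3, "edge_density": 0.4, "subband_entropy": 0.2}

# A lower score predicts lower target visual search time.
print(clutter_score(ui_a) > clutter_score(ui_b))  # ui_a is more cluttered
```

In practice the real model also includes UI properties such as target eccentricity and AR object percentage, which the abstract reports as significant predictors.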
26

Moderne Methode zur manuellen und kollisionsfreien Telemanipulation von Industrierobotern basierend auf einem digitalen Zwilling

Pospiech, Th., Gysin, M. 12 February 2024
This contribution presents an implemented overall concept for manual, collision-free telemanipulation of industrial robots based on a digital twin. A demonstrator setup is used to replicate a manual pick-and-place application: the manually controlled industrial robot is to pick up small glass vials filled with liquid, transport them, and set them down collision-free at a defined deposit location. All necessary steps are presented and explained in a traceable way. The main focus is on the definition and realization of the digital twin, the design options for the workspace and its monitoring, and the collision checking. The system requirements necessary for implementing the concept are also described. The actual motion control of the industrial robot is verified with different manipulators.
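The collision check at the heart of such a digital-twin safeguard can be illustrated with bounding-sphere tests: approximate the gripper and each obstacle (for example, a vial rack) by spheres and reject commanded positions that would intersect one. The geometry values are invented for the example and are not the paper's implementation.

```python
import math

# Illustrative digital-twin-style pre-check: before a telemanipulated
# motion is executed, test the commanded gripper position against
# bounding spheres of the workspace obstacles.

def spheres_collide(c1, r1, c2, r2):
    """True if two bounding spheres overlap."""
    return math.dist(c1, c2) < (r1 + r2)

def motion_is_safe(gripper_pos, gripper_r, obstacles):
    """Check a commanded gripper position against all obstacle spheres."""
    return not any(spheres_collide(gripper_pos, gripper_r, c, r)
                   for c, r in obstacles)

# Hypothetical obstacles: (centre in metres, bounding radius).
obstacles = [((0.5, 0.0, 0.2), 0.10), ((0.2, 0.3, 0.1), 0.05)]
print(motion_is_safe((0.5, 0.0, 0.45), 0.12, obstacles))  # clear of both
print(motion_is_safe((0.5, 0.0, 0.25), 0.12, obstacles))  # would collide
```

A production system would use proper mesh-based distance queries, but the sphere test captures the decision the twin makes before the real robot moves.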
27

Impact of Interactive Holographic Learning Environment for bridging Technical Skill Gaps of Future Smart Construction Engineering and Management Students

Ogunseiju, Omobolanle Ruth 25 July 2022
The growth in the adoption of sensing technologies in the construction industry has triggered the need for graduating construction engineering students equipped with the necessary skills for deploying the technologies. For construction engineering students to acquire technical skills for implementing sensing technologies, it is pertinent to engage them in hands-on learning with the technologies. However, limited opportunities for hands-on learning experiences on construction sites and in some cases, high upfront costs of acquiring sensing technologies are encumbrances to equipping construction engineering students with the required technical skills. Inspired by opportunities offered by mixed reality, this study presents an interactive holographic learning environment that can afford learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. Firstly, this study explores the required competencies for deploying sensing technologies on construction projects. The current state of sensing technologies in the industry and sensing technology education in construction engineering and management programs were investigated. The learning contents of the holographic learning environment were then driven by the identified competencies. Afterwards, a learnability study was conducted with industry practitioners already adopting sensing technologies to assess the learning environment. Feedback from the learnability study was implemented to further improve the learning environment after which a usability evaluation was conducted. To investigate the pedagogical value of the learning environment in construction education, a summative evaluation was conducted with construction engineering students. 
This research contributes to the definition of the domain-specific skills required of the future workforce for implementing sensing technologies in the construction industry, and to how such skills can be developed and enhanced within a mixed reality learning environment. Through a concise outline and the sequential design of the user interface, this study further revealed that knowledge scaffolding can improve task performance in a holographic learning environment. This study contributes to the body of knowledge by advancing immersive experiential learning discourses previously confined by technology. It opens a new avenue for both researchers and practitioners to further investigate the opportunities offered by mixed reality for future workforce development. / Doctor of Philosophy / The construction industry is becoming technically advanced, adopting various sensing technologies for improving construction project performance, reducing cost, and mitigating health and safety hazards. As a result, there is a demand in the industry for graduates who can deploy these sensing technologies on construction projects. However, for construction engineering students to acquire the skills for deploying sensing technologies, it is necessary that they are trained through hands-on interactions with these technologies. It is also imperative to take these students to construction sites for experiential learning of sensing technologies. This is difficult because most institutions experience barriers like weather constraints, difficulty in accessing jobsites, and schedule constraints. Also, while some institutions can afford these sensing technologies, others cannot, making it difficult to train students adequately.
Due to the benefits of virtual learning environments (such as mixed reality and virtual reality), this study investigates a mixed reality (holographic) environment that can allow learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. To achieve this, this research first investigated the required competencies, such as the skills, knowledge, and abilities for implementing sensing technologies on construction projects. The current state of sensing technologies in the industry and sensing technology education in construction engineering and management programs were investigated. The results from the first study in this research informed the learning contents of the learning environment. Afterwards, a learnability study was conducted with industry practitioners already adopting sensing technologies to assess the learning environment. Feedback from the learnability study was implemented to further improve the learning environment, after which a usability evaluation was conducted. To investigate the pedagogical value of the learning environment in construction education, a summative evaluation was conducted with construction engineering students. The research contributes to the definition of the domain-specific skills required of the future workforce for implementing sensing technologies in the construction industry and to how such skills can be developed and enhanced within a mixed reality learning environment. Design features such as the concise outline and sequential design of the user interface further revealed that knowledge scaffolding can improve task performance in a mixed reality environment. This research further contributes to the body of knowledge by promoting immersive hands-on learning discourses previously confined by technology. It opens a new avenue for both researchers and practitioners to further investigate the opportunities offered by mixed reality for future workforce development.
28

Towards a Unified Framework for Smart Built Environment Design: An Architectural Perspective

Dasgupta, Archi 07 May 2018
Smart built environments (SBE) include fundamentally different and enhanced capabilities compared to traditional built environments. Traditional built environments consist of basic building elements and plain physical objects. These objects offer primitive interactions, basic use cases, and direct affordances. As a result, the traditional architectural process is focused entirely on two dimensions of design, i.e., the physical environment based on context and the functional requirements based on the users. In contrast, SBEs have a third dimension: computational and communication capabilities embedded in physical objects, enabling enhanced affordances and multi-modal interaction with the surrounding environment. As a result of this added capability, there is a significant change in activity and spatial-use patterns in an SBE, so the traditional architectural design process needs to be modified to meet the unique requirements of SBE design. The aim of this thesis is to modify the traditional architectural design process by introducing SBE requirements. Secondly, this thesis explores a reference implementation of an immersive-technology-based SBE design framework. Traditional architectural design tools are not always enough to represent, visualize, or model the vast amount of data and digital components of an SBE. An SBE empowered by IoT needs a combination of the virtual and real world to assist in the design, evaluation, and interaction process. A detailed discussion explores the capabilities required to facilitate an MR-based SBE design approach. Immersive technology is particularly helpful for SBE design because SBEs offer novel interaction scenarios and complex affordances that can be tested using immersive techniques. / Master of Science / Smart built environments (SBE) are fundamentally different from our everyday built environments.
SBEs have enhanced capabilities compared to traditional built environments because computational and communication capabilities are embedded in everyday objects. A wall or a table is no longer just a simple object but rather an interactive component that can process information and communicate with people or other devices. The introduction of these smart capabilities into the physical environment changes users' everyday activity patterns, so the spatial design approach also needs to reflect these changes. As a result, the traditional architectural design process needs to be modified for designing SBEs. The aim of this thesis is to introduce a modified SBE design process based on the traditional architectural design process. Secondly, this thesis explores an immersive technology (e.g., mixed reality, virtual reality) based SBE design framework. Traditional architectural design tools mostly provide two-dimensional representations like sketches or renderings. But two-dimensional drawings are not always enough to represent, visualize, or model the vast amount of data and digital components associated with an SBE. The SBE design process needs enhanced capabilities to represent the interdependency of connected devices and interaction scenarios with people. Immersive technology can be introduced to address this problem, to test the proposed SBE in a virtual/mixed reality environment, and to test the proposed 'smartness' of the objects. This thesis explores the potential of this type of immersive-technology-based SBE design approach.
29

An Evaluative Study on the Impact of Immersion and Presence for Flight Simulators in XR

Dahlkvist, Robin January 2023
Flight simulators are a central training method for pilots, and with the advances of human-computer interaction, new cutting-edge technology introduces a new type of simulator using extended reality (XR). XR is an umbrella term for many forms of reality, where physical reality (PR) and virtual reality (VR) are the endpoints of a spectrum, and any reality in between can be seen as mixed reality (MR). The purpose of this thesis was to investigate the applicability of XR variants and how they compare with each other in terms of usability, immersion, presence, and simulator sickness for flight simulators. To answer these questions, an MR and a VR version were implemented in Unity using the Varjo XR-3 head-mounted display, based on the Framework for Immersive Virtual Environments (FIVE). To evaluate these aspects, a user study (N = 11) was conducted, focusing on quantitative and qualitative experimental research methods. Interaction with physical interfaces is a core procedure for pilots; thus, three reaction tests were conducted in which participants had to press, within a given time, a randomly chosen button lit green in a 3 x 3 Latin-square layout, to measure the efficiency of interaction for both versions. Reaction tests were conducted at different complexities: simple (no flight), moderate (easy flight), and advanced (difficult flight). Participants experienced the MR and VR versions and completed complementary questionnaires on immersion, presence, and simulator sickness while remaining in the simulation. The user study showed that usability in MR is considerably higher, and MR more immersive than VR, when interaction is incorporated. However, excluding the interaction aspects, VR was more immersive. Overall, this work demonstrates how to achieve high levels of immersion and a strong sense of presence while maintaining minuscule levels of simulator sickness in a relatively realistic experience.
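The 3 x 3 Latin-square button layout used in the reaction tests can be generated by cyclic row shifts, which guarantees that each label appears exactly once per row and per column. The labels are placeholders.

```python
# Sketch of a 3 x 3 Latin square like the button layout described above,
# built by cyclic row shifts; each label appears exactly once per row
# and per column.

def latin_square(symbols):
    n = len(symbols)
    return [[symbols[(row + col) % n] for col in range(n)] for row in range(n)]

square = latin_square(["A", "B", "C"])
for row in square:
    print(row)

# Latin property: every row and every column is a permutation of the symbols.
assert all(sorted(row) == ["A", "B", "C"] for row in square)
assert all(sorted(col) == ["A", "B", "C"] for col in zip(*square))
```

The same balanced structure is commonly used to counterbalance condition order across participants, which is presumably why it appears in the experimental design.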
30

Dynamic Mixed Reality Assembly Guidance Using Optical Recognition Methods

Guðjónsdóttir, Harpa Hlíf, Ólafsson, Gestur Andrei January 2022 (has links)
Mixed Reality (MR) is an emerging paradigm in industry. While MR equipment and software have taken great technological strides in recent years, standardized methods and workflows for developing industrial MR systems have not been widely adopted. This thesis proposes a dynamic MR system for an assembly process, exploring optical recognition methods to drive the application logic. The system is developed on the Unity platform for the HoloLens 2, using the software tools Vuforia Engine and the Mixed Reality Toolkit (MRTK). The project work concludes with an application capable of guiding users through graphics and audio. Successful methods are realized for calibrating the application logic to dynamic object positions, as well as for validating user actions. Experiments are conducted to validate the system: subjects complete a different assembly process using paper instructions as guidance before using the MR application. Qualitative results on the MR experience are obtained through a questionnaire, with the paper-instruction experience serving as a benchmark. Data from an experienced user completing the assembly process serves as a quantitative benchmark for system performance. All subjects were able to complete the assembly tasks correctly using the MR application. Results show significantly better system performance for the experienced user than for subjects unfamiliar with the MR system. Vuforia Engine recognition tools successfully tracked individual components meeting a specific criterion, and methods for validating user actions using Vuforia Engine software tools and the HoloLens's internal hand tracking achieved a high validation success rate. The thesis concludes that the training methods are effective for the specific assembly scenario, although not robust enough for general implementation.
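The guidance workflow described above, in which each user action is validated (via recognition events) before the next instruction is presented, can be sketched as a small step sequencer. This is a hypothetical illustration of the control flow only; the class and method names are assumptions, and the actual system's validation comes from Vuforia Engine tracking and HoloLens hand tracking rather than string comparison:

```python
class AssemblyGuide:
    """Minimal step sequencer: each assembly step must be validated
    before the next instruction is shown to the user."""

    def __init__(self, steps):
        self.steps = steps      # ordered list of expected step names
        self.current = 0        # index of the step awaiting validation

    def instruction(self):
        """Current instruction to display, or None when assembly is done."""
        if self.done():
            return None
        return f"Step {self.current + 1}: {self.steps[self.current]}"

    def validate(self, observed):
        """Advance only when the observed event matches the expected step."""
        if not self.done() and observed == self.steps[self.current]:
            self.current += 1
            return True
        return False

    def done(self):
        return self.current >= len(self.steps)

guide = AssemblyGuide(["place base", "attach bracket", "insert screw"])
```

Gating each instruction on a validated action is what lets such a system detect out-of-order assembly rather than passively displaying steps.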
