31

Impact of Interactive Holographic Learning Environment for bridging Technical Skill Gaps of Future Smart Construction Engineering and Management Students

Ogunseiju, Omobolanle Ruth 25 July 2022 (has links)
The growth in the adoption of sensing technologies in the construction industry has triggered the need to graduate construction engineering students equipped with the skills required to deploy these technologies. For construction engineering students to acquire the technical skills for implementing sensing technologies, it is pertinent to engage them in hands-on learning with the technologies. However, limited opportunities for hands-on learning on construction sites and, in some cases, the high upfront cost of acquiring sensing technologies are obstacles to equipping construction engineering students with the required technical skills. Inspired by the opportunities offered by mixed reality, this study presents an interactive holographic learning environment that can afford learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. First, this study explores the competencies required for deploying sensing technologies on construction projects; the current state of sensing technologies in the industry and of sensing technology education in construction engineering and management programs was investigated. The learning content of the holographic learning environment was then driven by the identified competencies. Afterwards, a learnability study was conducted with industry practitioners already adopting sensing technologies to assess the learning environment. Feedback from the learnability study was implemented to further improve the learning environment, after which a usability evaluation was conducted. To investigate the pedagogical value of the learning environment in construction education, a summative evaluation was conducted with construction engineering students. This research contributes to defining the domain-specific skills required of the future workforce for implementing sensing technologies in the construction industry and to understanding how such skills can be developed and enhanced within a mixed reality learning environment. Through the concise outline and sequential design of the user interface, the study further revealed that knowledge scaffolding can improve task performance in a holographic learning environment. This study contributes to the body of knowledge by advancing immersive experiential learning discourses previously confined by technology. It opens a new avenue for both researchers and practitioners to further investigate the opportunities offered by mixed reality for future workforce development. / Doctor of Philosophy / The construction industry is becoming more technologically advanced, adopting various sensing technologies to improve construction project performance, reduce cost, and mitigate health and safety hazards. As a result, there is a demand in the industry for graduates who can deploy these sensing technologies on construction projects. However, for construction engineering students to acquire the skills for deploying sensing technologies, they must be trained through hands-on interaction with these technologies. It is also important to take these students to construction sites for experiential learning of sensing technologies, which is difficult because most institutions face barriers such as weather, limited jobsite access, and scheduling constraints. Also, while some institutions can afford these sensing technologies, others cannot, making it difficult to train students adequately.
Given the benefits of virtual learning environments (such as mixed reality and virtual reality), this study investigates a mixed reality (holographic) environment that can allow learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. To achieve this, the research first investigated the required competencies, namely the skills, knowledge, and abilities needed to implement sensing technologies on construction projects, along with the current state of sensing technologies in the industry and of sensing technology education in construction engineering and management programs. The results from this first study informed the learning content of the learning environment. Afterwards, a learnability study was conducted with industry practitioners already adopting sensing technologies to assess the learning environment. Feedback from the learnability study was implemented to further improve the learning environment, after which a usability evaluation was conducted. To investigate the pedagogical value of the learning environment in construction education, a summative evaluation was conducted with construction engineering students. The research contributes to defining the domain-specific skills required of the future workforce for implementing sensing technologies in the construction industry and to understanding how such skills can be developed and enhanced within a mixed reality learning environment. Design features such as the concise outline and sequential layout of the user interface further revealed that knowledge scaffolding can improve task performance in a mixed reality environment. This research further contributes to the body of knowledge by promoting immersive hands-on learning discourses previously confined by technology. It opens a new avenue for both researchers and practitioners to further investigate the opportunities offered by mixed reality for future workforce development.
32

Towards a Unified Framework for Smart Built Environment Design: An Architectural Perspective

Dasgupta, Archi 07 May 2018 (has links)
Smart built environments (SBE) include fundamentally different and enhanced capabilities compared to traditional built environments. Traditional built environments consist of basic building elements and plain physical objects, which offer primitive interactions, basic use cases, and direct affordances. As a result, the traditional architectural process focuses on two dimensions of design: the physical environment, based on context, and functional requirements, based on the users. SBEs add a third dimension: computational and communication capabilities embedded in physical objects, enabling enhanced affordances and multi-modal interaction with the surrounding environment. This added capability brings a significant change in activity and spatial-use patterns in an SBE, so the traditional architectural design process needs to be modified to meet the unique requirements of SBE design. The first aim of this thesis is to modify the traditional architectural design process by introducing SBE requirements. Second, this thesis explores a reference implementation of an immersive-technology-based SBE design framework. Traditional architectural design tools are not always enough to represent, visualize, or model the vast amount of data and digital components of an SBE. An SBE empowered with IoT needs a combination of the virtual and real world to assist in the design, evaluation, and interaction process. A detailed discussion explores the capabilities required to facilitate an MR-based SBE design approach. Immersive technology is particularly helpful for SBE design because SBEs offer novel interaction scenarios and complex affordances that can be tested using immersive techniques. / Master of Science / Smart built environments (SBE) are fundamentally different from our everyday built environments. SBEs have enhanced capabilities compared to traditional built environments because computational and communication capabilities are embedded in everyday objects. A wall or a table is no longer just a simple object but rather an interactive component that can process information and communicate with people or other devices. The introduction of these smart capabilities into the physical environment changes users' everyday activity patterns, so the spatial design approach also needs to reflect these changes. As a result, the traditional architectural design process needs to be modified for designing SBEs. The first aim of this thesis is to introduce a modified SBE design process based on the traditional architectural design process. Second, this thesis explores an immersive-technology-based (e.g., mixed reality or virtual reality) SBE design framework. Traditional architectural design tools mostly provide two-dimensional representations such as sketches or renderings, but two-dimensional drawings are not always enough to represent, visualize, or model the vast amount of data and digital components associated with an SBE. The SBE design process needs enhanced capabilities to represent the interdependency of connected devices and the interaction scenarios with people. Immersive technology can be introduced to address this problem, to test the proposed SBE in a virtual/mixed reality environment, and to test the proposed 'smartness' of the objects. This thesis explores the potential of this type of immersive-technology-based SBE design approach.
33

Exploring the Benefits of the Integration of XR and BIM for Retrofitting Projects

Sermarini, John 01 January 2024 (has links) (PDF)
Rapidly changing population dynamics and increased energy needs have reduced demand for building renovation in favor of more wasteful complete demolition and reconstruction. This dissertation aims to enhance the accessibility and ease of use of challenging retrofitting methodologies to mitigate the adverse effects of urbanization, increasing resource use, and an aging building stock within the United States. Retrofitting is a process focused on upgrading a structure with a component or feature that was not part of its initial construction or manufacture, and it is often done to modernize, restore, or repurpose the structure. These renovations are difficult and costly to plan and implement, which frequently leads to their being eschewed in favor of complete reconstruction. This research proposes a solution: integrating Extended Reality (XR) technology and Building Information Modeling (BIM) data into the retrofitting workflow. Individually and together, these technologies have been applied to construction work with great success, although predominantly confined to new construction. We present this concept applied to three retrofitting subprocesses: design, implementation training, and model building. For each component, a human-subject study evaluates the system's effectiveness in improving the efficiency and accessibility of this technology in this new context. We found that, when applied to design review, the technological limitations of existing XR systems may limit their ability to set themselves apart from conventional means, and that future system designs should place greater emphasis on eye movement depending on environmental factors. In implementation training, these systems can effectively improve the identification of relevant building components while reducing physical and cognitive demands. Investigation into augmenting human-robot collaboration is still ongoing, but early results indicate great potential for improving control and ease of use when performing tasks later needed to create building models for guiding retrofitting projects. This dissertation provides a foundation for XR-BIM technology applied to retrofitting and, with it, a positive outlook and recommendations for related future work.
34

Erfahrungen zur Nutzung von Mixed und Virtual Reality im Lehralltag an der HTW Dresden

Göbel, Gunther, Sonntag, Ralph 28 March 2018 (has links) (PDF)
The use of immersive systems, i.e. Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) systems, in teaching is an obvious step. First-hand interactive experience of an activity is always preferable to purely receptive observation or verbal explanation. Nevertheless, teaching today remains largely passive, even in lab courses and exercises; independent practice, such as operating a plant or synthesizing a chemical oneself, can often be offered only rarely for reasons of time, availability, safety concerns, and cost. Until now, the adoption of the immersive technologies mentioned above has been hampered not only by the considerable effort required to create suitable simulations, but above all by the hardware expense combined with a less-than-optimal degree of immersion, which left few practical options. Giving each student sufficient individual time in a large, expensive CAVE environment to operate technical plants virtually is impractical for larger student numbers. [... from the introduction]
35

Mixed-Reality-in-the-Loop Simulation zur Schulung technischer Fachkräfte im Maschinen- und Anlagenbau

Hönig, Jana, Schnierle, Marc, Wehnert, Camilla, Littfinski, Daniel, Scheifele, Christian, Pfeifer, Denis, Münster, Carlos, Roth, Armin, Franz, Julia, Röck, Sascha, Verl, Alexander 27 January 2022 (has links)
This contribution introduces Mixed-Reality-in-the-Loop Simulation (MRiLS) for training technical specialists in machinery and plant engineering. MRiLS couples the Hardware-in-the-Loop Simulation (HiLS) models already available from engineering with the visualization and interaction methods of Mixed Reality (MR), thereby fully integrating the user, the user's behavior, and the real environment into the simulation loop. Besides the middleware required to couple the HiLS with the MR environment, the contribution also addresses the load placed on the controller by multi-user access. The feasibility of the presented concept is demonstrated on a selected example automation system, for which the structure of the MRiLS and the concept for conducting a training session with MRiLS are presented.
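As a rough illustration of the middleware role described above, the sketch below broadcasts simulation state to several connected MR clients at a fixed rate. It is a minimal, hypothetical example using only Python's standard library, not the middleware presented in the contribution; the state fields, port, and update rate are assumptions.

```python
import asyncio
import json
import time

clients = set()  # stream writers for all connected MR clients (multi-user access)

async def handle_client(reader, writer):
    """Register an MR client; state updates are pushed by the broadcast loop."""
    clients.add(writer)
    try:
        await reader.read()  # block until the client closes the connection
    finally:
        clients.discard(writer)
        writer.close()

def read_hils_state():
    """Placeholder for querying the coupled HiLS model (hypothetical state values)."""
    return {"timestamp": time.time(), "axis_position_mm": 123.4, "gripper_closed": True}

async def broadcast_loop(rate_hz=30):
    """Push the current simulation state to every connected MR client."""
    while True:
        message = (json.dumps(read_hils_state()) + "\n").encode()
        for writer in list(clients):
            writer.write(message)
        await asyncio.sleep(1 / rate_hz)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await asyncio.gather(server.serve_forever(), broadcast_loop())

asyncio.run(main())
```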
36

An Evaluative Study on the Impact of Immersion and Presence for Flight Simulators in XR

Dahlkvist, Robin January 2023 (has links)
Flight simulators are a central training method for pilots, and with advances in human-computer interaction, new cutting-edge technology introduces a new type of simulator using extended reality (XR). XR is an umbrella term for many forms of reality, where physical reality (PR) and virtual reality (VR) are the endpoints of the spectrum and any reality in between can be regarded as mixed reality (MR). The purpose of this thesis was to investigate the applicability of XR for flight simulators and how its variants compare with each other in terms of usability, immersion, presence, and simulator sickness. To answer these questions, an MR and a VR version were implemented in Unity using the Varjo XR-3 head-mounted display, based on the Framework for Immersive Virtual Environments (FIVE). To evaluate these aspects, a user study (N = 11) was conducted using quantitative and qualitative experimental research methods. Interaction with physical interfaces is a core procedure for pilots; thus, three reaction tests were conducted in which participants had to press, within a given time, a randomly chosen button lit green in a 3 x 3 Latin-square layout, to measure interaction efficiency in both versions. The reaction tests were conducted at different levels of complexity: simple (no flight), moderate (easy flight), and advanced (difficult flight). Participants experienced the MR and VR versions and completed complementary questionnaires on immersion, presence, and simulator sickness while remaining in the simulation. The user study showed that MR offers considerably higher usability and is more immersive than VR when interaction is incorporated. However, excluding the interaction aspects, VR was more immersive. Overall, this work demonstrates how to achieve high levels of immersion and a strong sense of presence while keeping simulator sickness minimal in a relatively realistic experience.
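As a side note on how reaction-time data of this kind can be summarised, the sketch below groups hypothetical trial records by display mode and complexity level and reports mean reaction times. It is only an illustrative sketch, not the analysis code used in the thesis, and all values are invented.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (mode, complexity, reaction_time_seconds).
# In the study, each trial meant pressing the green-lit button in a 3x3 grid.
trials = [
    ("MR", "simple", 0.74), ("MR", "moderate", 0.91), ("MR", "advanced", 1.12),
    ("VR", "simple", 0.81), ("VR", "moderate", 1.05), ("VR", "advanced", 1.27),
    ("MR", "simple", 0.69), ("VR", "simple", 0.86),
]

def mean_reaction_times(records):
    """Group reaction times by (mode, complexity) and return their means."""
    grouped = defaultdict(list)
    for mode, complexity, rt in records:
        grouped[(mode, complexity)].append(rt)
    return {key: mean(values) for key, values in grouped.items()}

for (mode, complexity), avg_rt in sorted(mean_reaction_times(trials).items()):
    print(f"{mode:>2} / {complexity:<8}: {avg_rt:.3f} s")
```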
37

Dynamic Mixed Reality Assembly Guidance Using Optical Recognition Methods

Guðjónsdóttir, Harpa Hlíf, Ólafsson, Gestur Andrei January 2022 (has links)
Mixed Reality (MR) is an emerging paradigm in industry. While MR equipment and software have taken great technological strides in recent years, standardized methods and workflows for developing MR systems for industry have not been widely adopted for many tasks. This thesis proposes a dynamic MR system for an assembly process, exploring optical recognition methods to drive the application logic. The system is developed using the Unity platform for the HoloLens 2, utilizing the software tools Vuforia Engine and the Mixed Reality Toolkit (MRTK). The project work concludes with an application capable of guiding users using graphics and audio. Successful methods are realized for calibrating the application logic for dynamic object positions, as well as for validating user actions. Experiments are conducted to validate the system: subjects complete a different assembly process using paper instructions as guidance before using the MR application. Qualitative results regarding the MR experience are obtained through a questionnaire answered by the subjects, with the experience of using paper instructions serving as a benchmark. Data obtained from an experienced user completing the assembly process are used as a quantitative benchmark for system performance measures. All subjects were able to complete the assembly tasks correctly using the MR application. Results show significantly better system performance for the experienced user compared to subjects unfamiliar with the MR system. Vuforia Engine recognition tools successfully tracked individual components that meet a specific criterion. Methods for validating user actions using Vuforia Engine software tools and the HoloLens's internal hand-tracking capabilities resulted in a high validation success rate. The thesis concludes that the training methods are effective for the specific assembly scenario, although not robust enough for general implementation.
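The action-validation idea described above can be thought of as checking whether a tracked component lies within tolerance of its expected pose. The sketch below illustrates that check in plain Python under assumed tolerances; it is not the thesis's Unity/Vuforia implementation, and the function names, poses, and tolerance values are hypothetical.

```python
import math

# Hypothetical tolerances; real values would come from the assembly specification.
POSITION_TOLERANCE_M = 0.02      # 2 cm
ROTATION_TOLERANCE_DEG = 10.0

def position_error(tracked, target):
    """Euclidean distance between tracked and target positions (x, y, z in metres)."""
    return math.dist(tracked, target)

def validate_step(tracked_pos, tracked_yaw_deg, target_pos, target_yaw_deg):
    """Return True if the tracked component is close enough to its target pose."""
    pos_ok = position_error(tracked_pos, target_pos) <= POSITION_TOLERANCE_M
    rot_ok = abs((tracked_yaw_deg - target_yaw_deg + 180) % 360 - 180) <= ROTATION_TOLERANCE_DEG
    return pos_ok and rot_ok

# Example: component detected about 1.4 cm away and rotated 4 degrees from its target.
print(validate_step((0.10, 0.00, 0.25), 94.0, (0.11, 0.01, 0.25), 90.0))  # True
```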
38

Erfahrungen zur Nutzung von Mixed und Virtual Reality im Lehralltag an der HTW Dresden

Göbel, Gunther, Sonntag, Ralph January 2017 (has links)
The use of immersive systems, i.e. Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) systems, in teaching is an obvious step. First-hand interactive experience of an activity is always preferable to purely receptive observation or verbal explanation. Nevertheless, teaching today remains largely passive, even in lab courses and exercises; independent practice, such as operating a plant or synthesizing a chemical oneself, can often be offered only rarely for reasons of time, availability, safety concerns, and cost. Until now, the adoption of the immersive technologies mentioned above has been hampered not only by the considerable effort required to create suitable simulations, but above all by the hardware expense combined with a less-than-optimal degree of immersion, which left few practical options. Giving each student sufficient individual time in a large, expensive CAVE environment to operate technical plants virtually is impractical for larger student numbers. [... from the introduction]
39

A HOPE FOR STROKE REHABILITATION : EXPLORING THE REHATT MIXED REALITY APPLICATION

Widengren, Mattias January 2021 (has links)
Unilateral spatial neglect (USN) is a common disorder after stroke. RehAtt Mixed Reality (MR), an application developed specifically for stroke rehabilitation, is intended to train cognitive and motor functions affected by USN by means of interactive 3D games played in mixed reality through smart glasses. The present study targets one specific cognitive function, namely spatial attention, and compares individual performance in one of the games (scenarios) with performance in a widely used test for detecting deficits in spatial attention, the Posner test. The hypothesis is that user reaction times in the RehAtt MR scenario correlate with user reaction times in the Posner test. Another test, including a questionnaire, is also conducted to validate the usability of the RehAtt MR. The sample for the usability test and questionnaire includes a total of 74 participants (47 women, 27 men, M = 39.6 years of age), of whom 29 individuals (13 women, 16 men, M = 35 years of age) carried out the experimental part of the study. The results suggest that there is a significant correlation, r(27) = .411, p = .027, between reaction times in the Posner test and the scenario in the RehAtt MR, and that the product's usability is of high quality. It is concluded that the results support the claim that the scenario explored in the RehAtt MR trains spatial attention, although further research is needed for full validation.
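For context, the reported statistic r(27) = .411, p = .027 is a Pearson correlation computed over paired reaction times (df = N - 2 with N = 29 participants). The sketch below shows how such a value could be computed with SciPy; the two reaction-time lists are hypothetical placeholders, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical mean reaction times (ms) for the same participants,
# one value per person in each task (the real study had N = 29, so df = 27).
posner_rt = [412, 388, 455, 430, 399, 441, 420, 463, 405, 437]
rehatt_rt = [520, 498, 577, 545, 510, 560, 531, 590, 515, 552]

r, p = pearsonr(posner_rt, rehatt_rt)
df = len(posner_rt) - 2  # degrees of freedom for Pearson's r
print(f"r({df}) = {r:.3f}, p = {p:.3f}")
```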
40

Mixed Reality Displays in Warehouse Management : A study revealing new possibilities for Warehouse Management and Tangar / ”Mixed Reality”-skärmar inom lagerarbete : En studie som åskådliggör nya möjligheter för lagerarbete och Tangar

Karlsson, Adam January 2019 (has links)
This work has investigated how head-mounted displays can enable more efficient work and better working conditions for warehouse workers. Head-mounted displays have grown in popularity among companies because of the growth of e-commerce, which made warehouse labour an interesting area to review. The purpose of this project has been to investigate how head-mounted displays can simplify warehouse work and to find an area where Tangar can be utilized. Tangar is an application that facilitates indoor navigation by helping users reach points of interest. Through a mixed-methodology approach that utilizes both quantitative and qualitative methods, a broad understanding of warehouse and inventory management has been established. The potential of head-mounted displays was evaluated using empirical and theoretical studies. Based on an early concept that was evaluated in collaboration with a warehouse-solution company, factors of importance in warehouse management were identified. A decision to direct the project towards order picking was taken, as it is a fundamental process within warehouse management. Three concepts were generated that harness the benefits of head-mounted displays, and the benefits of each concept were compared against parameters important for profitable warehouse management. It turned out that "Pick-by-Light", a common system in warehouse management, can be made virtual using head-mounted displays. Since the system had never previously been operated virtually, an extensive study was needed to evaluate its viability for order picking before a final concept could be proposed. An experimental environment was set up for the empirical studies, and two other common order-picking systems were compared with the virtual Pick-by-Light system. Quantitative data in the form of order-picking times and picking errors, together with qualitative data from a NASA-TLX survey, were collected from twelve users. A total of 360 samples from the quantitative study and 36 questionnaires from the qualitative study were then analysed. The results resembled those from similar studies of a conventional Pick-by-Light system, and parallels were drawn indicating that the virtual system has good potential to perform at least as well as a regular Pick-by-Light. A virtual Pick-by-Light system might be able to reduce implementation, labour, and operating costs, as physical material is replaced by a virtual product and no installation is required. In combination with Tangar, a virtual Pick-by-Light system could also potentially be more efficient and accurate. Disadvantages of the conventional Pick-by-Light system are that pick confirmations are inefficient and that workers find it difficult to get an overview of pick locations, both of which could potentially be eliminated with the proposed concept. However, a new generation of hardware and further studies are required to establish a final concept. The Magic Leap One, the head-mounted display used in the project, is new, and many problems regarding the display were discovered during the project and affected the results of the user studies. Further studies need to be done with other displays to determine the validity of the results of this work. In summary, this work gives an introduction to how mixed reality can be used in warehouse management and offers recommendations for continued work.
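As an aside on the kind of analysis involved: raw (unweighted) NASA-TLX scores are simply the mean of the six subscale ratings, and pick efficiency reduces to mean time per pick. The sketch below summarises both for three hypothetical picking methods; the labels and numbers are invented and do not reproduce the thesis's data.

```python
from statistics import mean

# Hypothetical per-user results for three picking methods; the real study collected
# data from twelve users, 360 timed picks, and 36 NASA-TLX questionnaires.
results = {
    "paper_list":            {"pick_times_s": [14.2, 13.8, 15.1], "tlx": [55, 40, 35, 60, 45, 50]},
    "pick_by_light":         {"pick_times_s": [9.8, 10.4, 9.5],   "tlx": [35, 30, 25, 40, 30, 35]},
    "virtual_pick_by_light": {"pick_times_s": [10.1, 9.9, 10.6],  "tlx": [30, 35, 20, 45, 25, 30]},
}

for system, data in results.items():
    raw_tlx = mean(data["tlx"])            # raw TLX: mean of the six subscale ratings
    avg_pick = mean(data["pick_times_s"])  # mean time per completed pick
    print(f"{system:<22} mean pick time: {avg_pick:5.1f} s   raw TLX: {raw_tlx:5.1f}")
```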
