581 |
Bediener-Assistenzsysteme - Menschliche Erfahrungen und Maschinelles Lernen: VVD-Anwenderforum 2018 am 23./24.10.2018 in Berlin. 23 November 2018 (has links)
Automation in production often tries to exclude the human, as a potential source of error, from the process. Yet humans possess unique and useful motor, sensory, and cognitive abilities. Innovative technologies now provide the basis for combining automation and human abilities in an ideal way and thus significantly increasing the efficiency of production processes.
We cordially invite you to discuss these new possibilities with us. Representatives from research and industry will present current strategies and developments. The accompanying demo session offers the opportunity to talk to experts and try out the technologies. The aim is to give you a first insight and thereby lay the foundation for your own application ideas and projects.
1. Andre Schult (Fraunhofer IVV, Dresden): Welcome
2. Peter Seeberg (Softing Industrial Automation GmbH): Keynote: Industrie 4.0 – Revolution through Machine Learning
3. Andre Schult (Fraunhofer IVV, Dresden): Self-Learning Operator Assistance Systems – An Update
4. Dr. Lukas Oehm (Fraunhofer IVV, Dresden): Idea Workshop for Future Projects
5. Dr. Romy Müller (TU Dresden): Overtrust in Assistance Systems: Causes and Countermeasures
6. Diego Arribas (machineering GmbH & Co. KG): More Speed through Digital Engineering, Virtual Reality, and Simulation
7. Sebastian Carsch (Fraunhofer IVV, Dresden): Information Exchange in the Interdisciplinary Development Process
8. Prof. Rainer Groh (TU Dresden): The Human Measure of Interaction
9. Fanny Seifert (Elco Industrie Automation GmbH): Smart Maintenance – Industrial Apps as the Basis for a Fully Integrated Assistance System
10. Markus Windisch (Fraunhofer IVV, Dresden): Cyber Knowledge Systems – Knowledge Building Blocks for Digitalized Component Cleaning
11. Dr. Marius Grathwohl (MULTIVAC Sepp Haggenmüller SE & Co. KG): IoT and Smart Services in Agile Development – Phases of the Digital Transformation at MULTIVAC
12. Andre Schult (Fraunhofer IVV, Dresden): Summary and Closing Discussion
|
582 |
Augmented Reality-spel för att motverka social isolering. Österlind, Egil; Ingelsson Fredler, Axel. January 2023 (has links)
Social isolation, which is when an individual is distanced from their desired or necessary social networks, can lead to deteriorated mental health. Individuals with intellectual disabilities, autism, or both often have a higher risk of experiencing social isolation than individuals without these disabilities. There is a lack of research on how Augmented Reality games can be applied to facilitate social interactions for adults with autism, intellectual disabilities, or both. Previous research has explored the topic more generally and has focused more frequently on children and young individuals as the target audience; however, it has shown positive results regarding the use of Augmented Reality as a support for learning essential life skills. The problem this study investigates is that adults with these types of disabilities are at a higher risk of experiencing social isolation, and that there is currently a lack of research on guidelines for designing games for this target group. Social isolation occurs when an individual experiences social loneliness, lack of contact with family, social anxiety, and depression.
By developing an Augmented Reality app prototype, the authors examine its potential to increase social interactions among adults with intellectual disabilities, autism, or both. By using the "call a friend" feature in the app, individuals get an opportunity to meet face to face. In this way, the authors hope that the end product could create more social interactions. Data collection is performed through observations and interviews with staff from a daily activity center who interact with individuals from the target group on a daily basis; the purpose is to provide knowledge about the use of Augmented Reality as an educational tool for individuals with the aforementioned disabilities. The authors then code and categorize the data into themes to analyze the results. The study reveals that it is challenging for the staff at the daily activity center to counteract social isolation for individuals with the aforementioned disabilities. The results demonstrate that there is interest in and potential for Augmented Reality as a tool to counteract social isolation. The authors perceive the study to have limitations, as the data collected was not obtained directly from the actual target group. Future research could further develop the artifact created for this study, and could also explore the possibility of expanding the "call a friend" function into a tool that could be applied to different applications.
|
583 |
Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM. Blake Austin Troutman (15305962). 18 May 2023 (has links)
<p>Simultaneous localization and mapping (SLAM) is a general device localization technique that uses realtime sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the position and orientation of the sensor. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine the exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications may need access to more than device localization data in order to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving). Given these circumstances, it is clear that AR may benefit substantially from utilizing a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This problem is known as the dynamic SLAM problem. Current attempts to address the dynamic SLAM problem often use machine learning to develop models that identify the parts of the camera image that belong to one of many classes of potentially-moving objects. The limitation with these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially-moving objects may be static in the scene, which these approaches often do not account for. 
Some other attempts to address the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM with no required prior information about object structure, appearance, or existence. This work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. This dissertation goes on to validate these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, which is built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results are collected for the LUMO-SLAM system, which address the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though it does not act as a practical solution for all use cases, has an ability to accurately register and localize unknown moving objects in such a way that makes it useful for some applications of AR without thwarting the system's ability to also perform accurate camera localization.</p>
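A common ingredient in dynamic SLAM pipelines of this kind is to flag map points whose reprojection error under the estimated camera pose stays high, treating them as candidates for moving objects. The sketch below illustrates that idea in Python with NumPy; it is a simplified illustration, not the LUMO-SLAM implementation, and the function name and pixel threshold are hypothetical.

```python
import numpy as np

def classify_points(points_3d, observations_2d, R, t, K, thresh_px=3.0):
    """Flag map points whose reprojection error under the estimated camera
    pose (R, t) exceeds a pixel threshold; in a dynamic SLAM system such
    persistent outliers become candidate moving-object points."""
    # Transform world points into the camera frame, then project (pinhole).
    cam = (R @ points_3d.T + t.reshape(3, 1)).T           # (N, 3) camera-frame
    proj = (K @ cam.T).T                                  # (N, 3) homogeneous
    px = proj[:, :2] / proj[:, 2:3]                       # (N, 2) pixel coords
    err = np.linalg.norm(px - observations_2d, axis=1)    # per-point error
    return err > thresh_px                                # True = dynamic candidate

# Toy example: identity pose, simple intrinsics; the second observation is
# displaced ~55 px from its predicted position, as a moving point would be.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
obs = np.array([[320.0, 240.0], [500.0, 240.0]])
dyn = classify_points(pts, obs, np.eye(3), np.zeros(3), K)
```

A real system would accumulate this evidence over several keyframes before registering a moving object, rather than deciding from a single frame.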
|
584 |
Entwicklung einer Schnittstelle zur Visualisierung von Brandsimulationen im virtuellen Raum. Nabrotzky, Toni. 22 December 2023 (has links)
Digitization in the construction industry is progressing, and while the keyword Building Information Modeling (BIM) is frequently mentioned in this context, disciplines such as fire safety engineering (FSE) are evolving independently. The fire protection office Brandschutz Consult Ingenieurgesellschaft mbH Leipzig (BCL) applies FSE in order to draw on engineering methods. As its corporate philosophy, BCL pursues the goal of constantly optimizing and expanding its own processes with new methods and scientific findings.
Against this background, this thesis, in cooperation with BCL, examines to what extent the results of a fire simulation, in particular the smoke, can be visualized in Virtual Reality (VR) and integrated into existing or potential use cases. To this end, it begins with a review of the fundamentals of fire protection, including FSE, and an analysis of the state of fire protection in BIM. In the next step, questions concerning the fire simulation are clarified, e.g. how the corresponding calculation works technically and which output data and formats such a simulation provides.
Graphics engines that enable VR applications are required to display the simulation results in VR. Important subjects of investigation include the applicable programming and scripting languages with which the data can be imported and visualized. For the graphics engines identified, it is then researched whether applications or processes for displaying fire simulations already exist. If so, their workflows are examined in order to evaluate their fundamental usability and to suggest improvements; where possible, some of these optimizations are carried out. Based on the existing processes in fire protection, helpful application options are derived, whose usefulness must be proven in future projects.
Contents:
1. Processes in fire protection
1.1. Fundamentals of fire protection
1.2. Applied engineering methods
1.3. Fire protection with Building Information Modeling
2. Workflow of a fire simulation
2.1. Available software
2.2. Structure of an FDS input file
2.3. Generating simulation data in FDS
2.4. Output data and formats
3. Software for VR visualization
3.1. Blender
3.2. Unity Engine
3.3. Unreal Engine
3.4. Comparison of the engines
4. Visualization of the fire simulation
4.1. Data transfer concept
4.2. Existing workflows for VR programs
4.3. Execution of the experiments
4.4. Evaluation of the experiments
5. Use cases and optimization potential
5.1. Potential applications
5.2. Optimization potential
6. Conclusion
A. Example model: Blender
B. Example model: VRSmokeVis
C. Test model
List of abbreviations
List of figures
List of tables
Bibliography
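For visualizing simulated smoke such as the FDS output discussed above, a typical approach maps the soot density in each grid cell to a render opacity via the Beer-Lambert law. The sketch below is a hypothetical illustration of that mapping, not part of the thesis workflow; the default mass extinction coefficient is an assumption (FDS documents a value on the order of 8700 m²/kg for soot from flaming combustion).

```python
import math

def voxel_opacity(soot_density, cell_size, kappa_m=8700.0):
    """Map a soot density (kg/m^3) over one grid cell (m) to opacity in [0, 1]
    using Beer-Lambert attenuation: transmission = exp(-kappa * rho * dx).
    kappa_m is an assumed mass extinction coefficient in m^2/kg."""
    transmission = math.exp(-kappa_m * soot_density * cell_size)
    return 1.0 - transmission

# A light haze versus dense smoke across a 0.2 m grid cell:
light = voxel_opacity(1e-6, 0.2)   # nearly transparent
dense = voxel_opacity(1e-3, 0.2)   # mostly opaque
```

In a game engine, per-voxel opacities of this kind would feed a volume texture or particle material rather than be used directly.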
|
585 |
Automated and adaptive geometry preparation for AR/VR applications. Dammann, Maximilian Peter; Steger, Wolfgang; Stelzer, Ralph. 25 January 2023 (has links)
Product visualization in AR/VR applications requires a largely manual process of data preparation. Previous publications focus on error-free triangulation or on the transformation of product structure data and display attributes for AR/VR applications. This paper focuses on the preparation of the required geometry data, where a significant reduction in effort can be achieved through automation. The steps of geometry preparation are identified and examined with regard to their automation potential, and possible couplings of sub-steps are discussed. Based on these explanations, a structure for the geometry preparation process is proposed. With this structured preparation process, it becomes possible to consider the available computing power of the target platform during geometry preparation. The number of objects to be rendered, the tessellation quality, and the level of detail (LOD) can be controlled by the automated choice of transformation parameters. Through this approach, tedious preparation tasks and iterative performance optimization can be avoided in the future, which also simplifies the integration of AR/VR applications into product development and use. A software tool is presented in which some steps of the automatic preparation are already implemented. After an analysis of the product structure of a CAD file, the transformation is executed for each component. Functions implemented so far allow, for example, the selection of assemblies and parts based on filter options, the transformation of geometries in batch mode, the removal of certain details, and the creation of UV maps. Flexibility, transformation quality, and time savings are described and discussed.
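The idea of choosing transformation parameters from the target platform's computing power can be illustrated with a toy LOD assignment: greedily coarsen the heaviest parts until the whole scene fits a triangle budget. This is a hypothetical sketch, not the paper's tool; the data layout and budget value are illustrative.

```python
def assign_lods(parts, budget):
    """parts: {name: [triangle count at LOD 0 (finest), LOD 1, ...]}.
    Returns {name: chosen LOD index} such that the summed triangle count
    fits the budget, if the coarsest levels allow it."""
    lod = {name: 0 for name in parts}          # start everything at finest LOD

    def total():
        return sum(parts[n][lod[n]] for n in parts)

    # Greedily coarsen the currently heaviest part until the scene fits
    # the budget or every part is already at its coarsest level.
    while total() > budget:
        coarsenable = [n for n in parts if lod[n] + 1 < len(parts[n])]
        if not coarsenable:
            break
        heaviest = max(coarsenable, key=lambda n: parts[n][lod[n]])
        lod[heaviest] += 1
    return lod

# Illustrative scene: triangle counts per part and LOD, 60k triangle budget.
scene = {"housing": [120_000, 40_000, 12_000],
         "screws":  [60_000, 8_000, 2_000],
         "label":   [500, 500, 500]}
lods = assign_lods(scene, budget=60_000)
```

A production pipeline would additionally weight parts by on-screen size and importance, but the budget-driven selection loop is the core idea.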
|
586 |
Sensor fusion between positioning system and mixed reality / Sensorfusion mellan positioneringssystem och mixed reality. Lifwergren, Anton; Jonsson, Jonatan. January 2022 (has links)
In situations where we want to use mixed reality systems over larger areas, it is necessary for these systems to maintain a correct orientation with respect to the real world. A solution for synchronizing the mixed reality and the real world over time is therefore essential for a good user experience. This thesis proposes such a solution, utilizing both a local positioning system (LPS) named WISPR, based on Ultra-Wideband technology, and an internal positioning system based on Google ARCore feature tracking. A prototype mobile application uses the positions from these two positioning systems to align the physical environment with a corresponding virtual 3D model. This enables increased environmental awareness by displaying virtual objects at accurately placed locations in the environment that are otherwise difficult or impossible to observe. Two transformation algorithms were implemented to align the physical environment with the virtual 3D model: Singular Value Decomposition and Orthonormal Matrices. The choice of algorithm showed minimal effect on both positional accuracy and computational cost. The most significant factor influencing positional accuracy was found to be the quality of the sampled position pairs from the two positioning systems. The parameters used to ensure high quality of the sampled position pairs were the LPS accuracy threshold, sampling frequency, sampling distance, and sample limit. A fine-tuning process for these parameters is presented; it resulted in a mean Euclidean distance error of less than 10 cm to a predetermined path in a sub-optimal environment. The aim of this thesis was not only to achieve high positional accuracy but also to make the application usable in environments such as mines, which are prone to worse conditions than those that could be evaluated in the available test environment.
The design of the application therefore focuses on robustness and on handling connection losses from either positioning system. The resulting implementation can detect a connection loss, determine whether the loss is severe enough by quality-checking the transformation, and accordingly either apply the necessary recovery actions or recognize when such recovery is unnecessary.
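The SVD-based transformation algorithm mentioned above is commonly realized as the Kabsch/Umeyama procedure, which recovers the least-squares rotation and translation between sampled position pairs from the two frames. A sketch in Python/NumPy, assuming exact point correspondences and no scale factor:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    computed via SVD of the cross-covariance (Kabsch/Umeyama)."""
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Sampled position pairs (e.g. ARCore frame -> LPS frame): a 90° yaw
# rotation plus a horizontal offset, which the procedure should recover.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([2.0, 1.0, 0.0])
R, t = rigid_align(src, dst)
```

With noisy position pairs, the same procedure returns the best-fit transform in the least-squares sense, which is why the quality of the sampled pairs dominates the resulting accuracy.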
|
587 |
A small step for a sensor: Detecting limited spatial movement with mobile AR / Ett litet steg för en sensor. Fallström, Johan. January 2023 (has links)
In this paper, a technical overview is provided of a developed mobile exergame, with a particular focus on its movement tracking. By utilizing the spatial movement readout from the AR algorithm, we have managed to create an easy-to-use exergame that allows users to track their horizontal movement. In contrast to a more conventional approach, our solution works indoors and can be applied to vertical motion tracking as well. The applied method led to an exergame tailor-made for its target group, but it did not include a thorough examination of alternatives to our use of AR. This means that our solution should be researched further to better understand its relevance in the field, while we have shown with a practical example how it can be utilized. / Heart-eXg
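The horizontal-movement readout described above can be sketched as accumulating planar displacement from the AR pose stream while filtering out tracking jitter. This is a hypothetical simplification of the idea, not the exergame's code; the jitter threshold is an assumed value.

```python
def horizontal_distance(positions, min_step=0.02):
    """positions: sequence of (x, y, z) device poses in metres, with y up.
    Accumulates distance travelled in the horizontal (x, z) plane only;
    steps shorter than min_step are treated as tracking jitter and ignored."""
    total = 0.0
    last = positions[0]
    for x, y, z in positions[1:]:
        step = ((x - last[0]) ** 2 + (z - last[2]) ** 2) ** 0.5  # ignore vertical
        if step >= min_step:
            total += step
            last = (x, y, z)
    return total

# Half a metre forward, then half a metre sideways while the device dips
# vertically, then a 5 mm jitter step that should be filtered out.
path = [(0.0, 1.5, 0.0), (0.5, 1.5, 0.0), (0.5, 1.4, 0.5), (0.505, 1.4, 0.5)]
dist = horizontal_distance(path)
```

Swapping which axes enter the step computation yields the vertical-tracking variant mentioned in the abstract.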
|
588 |
Integration of Head-Up Display Technology as an Assisting Tool for Paramedics / Integration av head-up display teknik som ett hjälpmedel för ambulanspersonal. Tran, Quoc Huy Martin. January 2022 (has links)
When paramedics arrive at an accident site, there are many complex variables that they must keep in mind when making decisions, and too much information presented at once makes decision-making harder. To aid paramedics, smart glasses can be used to streamline the workflow and present relevant information immediately. Attempts to implement smart glasses in prehospital care have been made before, but no extensive research has been presented on how the user interface of such an implementation could look, or on how to provide paramedics with optimal information to aid decision-making. The goal of this thesis is therefore to develop an Android application for the Google Glass Enterprise Edition 2 that visualises a patient's vital signs, allowing paramedics to focus more on treating the patient. The project was limited to visualising the following vital parameters: pulse from ECG, SpO2, EtCO2, and NiBP. To determine what would be demanded of the application, interviews were conducted with paramedics of varying seniority, ranging from 7 to 23 years of experience. The paramedics interviewed came from different regions in Sweden and the UK, which gave a generalised understanding of how paramedics work and what paramedics would want from smart glasses as an assisting tool. The Android application used a Wi-Fi Direct connection to the MobiMed Patient Unit, which transmitted the patient's vital-sign data. The data was packaged as JSON and transmitted over the User Datagram Protocol (UDP) to the Google Glass. The data was visualised in a focused layout showing only one vital sign; in addition, a dual view showing two vitals simultaneously was created. The implementation was then run through several experiments to ensure that the glasses would perform under the circumstances that could occur at an accident site. Finally, several topics for future development and use cases for the project in prehospital and hospital care were explored.
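The transport described above, vital signs packaged as JSON and sent as UDP datagrams, can be sketched in a few lines of Python. The field names are hypothetical; the actual MobiMed payload format is not specified here.

```python
import json
import socket

# Hypothetical vitals sample; real payload field names are assumptions.
vitals = {"pulse_ecg": 72, "spo2": 98, "etco2": 4.9, "nibp": "120/80"}

def send_vitals(sock, addr, sample):
    # UDP is connectionless: each sample is one self-contained datagram, so a
    # lost packet costs only a single update instead of stalling a stream,
    # which suits a display that is refreshed continuously.
    sock.sendto(json.dumps(sample).encode("utf-8"), addr)

# Loopback demonstration: a receiver standing in for the Google Glass side.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))              # pick any free local port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_vitals(sender, receiver.getsockname(), vitals)
data, _ = receiver.recvfrom(4096)
decoded = json.loads(data)                   # the displayed values
receiver.close()
sender.close()
```

On the display side, a stale-data timeout matters as much as decoding: if no datagram arrives within a short window, the HUD should mark the value as outdated rather than keep showing the last reading.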
|
589 |
Inklusion durch informationstechnische Assistenzsysteme – Gelingensbedingungen digitaler Lernszenarien mit Hilfe von Augmented Reality am Beispiel hörbeeinträchtigter oder gehörloser Menschen in der technischen Bildung. Winkler, Daniel; Lindner, Fabian; Meyer-Ross, K. Kathy. 14 October 2024 (has links)
Digital Health & Inclusion B.2 / From Section 1, Introduction:
One way to implement inclusion through digitalization is the use of information-technology assistance systems, for example technologies such as augmented reality, image-processing systems for automated process control, image-based instructions, GPS-based orientation aids, light and/or vibration as alarm signals, machines with speech interfaces, smart gloves, smartwatches, or other assistive or disability-compensating technologies (Bratan et al. 2022; Engels 2016; Weller 2019; Revermann 2010). With augmented reality in particular, virtual and real worlds can be linked effectively, collaboration across different sites can be improved (Egger and Masood 2020), and visualization, instruction, and interaction (Porter and Heppelmann 2017; Winkler et al. 2020a) can be enhanced, especially for people with inclusion needs.
|
590 |
AR-HUD Design Guidelines: A Cross-Cultural Usability Study on Cognitive Workload and Preferences in HUD Interfaces. Svensson, Jonatan; Hammar, Jesper. January 2024 (has links)
This master's thesis project was carried out in collaboration with Luleå University of Technology and the client company ZEEKR to advance knowledge about Augmented-Reality Head-Up Displays (AR-HUDs). Technological advancement in the automotive industry is moving forward rapidly, and the implementation of Head-Up Displays has been a focal point of user-safety and driving-assistance discussions for the past several years. The earliest HUDs showed only static information (speedometer, speed limits, and other status information), but lately the implementation of Augmented Reality technology in HUDs can be seen in many flagship car brands. The projection technology has improved drastically since the early versions; despite this, many still believe the system to be only a gimmick that adds no real value to the user, and some believe it has the opposite effect of its actual purpose: to aid the user in driving. We engaged in this project with ZEEKR to establish guidelines on what to keep in mind when designing the interface for such a system. Given ZEEKR's market presence in China and Europe, we also explore cultural expectations and user interactions to balance satisfaction across markets. This is summarized in the following research questions: 1. How can the design of an AR-HUD be tailored to meet the divergent cultural expectations of users in ZEEKR's primary markets, China and Europe, while maintaining a cohesive user experience? 2. How can information be optimally presented on an AR-HUD to achieve a balanced cognitive workload for the driver? As the thesis progressed, we gravitated towards a third research question, which also proved to be of interest to ZEEKR: 3. How can AR-HUD systems be assessed resource-efficiently while maximizing user feedback?
The thesis follows an iterative, four-phase process based on an industrial design engineer's workflow. Initially, the context was explored through user and stakeholder interactions, collecting qualitative and quantitative data. Multiple HUD concepts were then generated and tested. Three comprehensive user tests were conducted: a low-fidelity prototyping workshop, a medium-fidelity VR user test with digitally added HUD elements, and a high-fidelity VR user test with a Logitech G29 rig in the Unity game engine for interactive driving simulations. Our findings, combined with academic research and expertise in interface design and user experience, yield concrete design suggestions for AR-HUD systems. The findings also show that the cultural differences between the two user groups were not as big as anticipated, although further testing is required to fully determine this. The results also include a standardized test protocol built in Unity that ZEEKR can use for future testing.
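Cognitive workload of the kind addressed in research question 2 is commonly assessed with the NASA-TLX instrument; whether this study used that exact instrument is an assumption here. A minimal sketch of the unweighted ("Raw TLX") score, the mean of the six 0-100 subscale ratings:

```python
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Unweighted NASA-TLX: the mean of the six subscale ratings,
    each given on a 0-100 scale. Higher means more perceived workload."""
    scales = [mental, physical, temporal, performance, effort, frustration]
    if not all(0 <= s <= 100 for s in scales):
        raise ValueError("each subscale rating must lie in 0..100")
    return sum(scales) / len(scales)

# Illustrative ratings from one simulated drive with a HUD concept.
score = raw_tlx(mental=70, physical=20, temporal=55,
                performance=40, effort=60, frustration=25)
```

Comparing such scores between HUD concepts, and between user groups from the two markets, gives a simple quantitative complement to preference interviews.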
|