91 |
Evaluation of Hand Collision in Mixed Reality
Tegelind, Adrian January 2024 (has links)
Background. With the growing prospects of extended reality (XR), new use cases and experiences are constantly being developed. Especially with the introduction of mixed reality (MR), which allows a more seamless blend of the physical and digital space, there are great opportunities in many fields such as education and training, where dangerous procedures can be practiced safely. However, to make these experiences as effective and educational as possible, they need to be realistic. Objectives. One important aspect of creating realistic experiences is believable collision between the user's physical hand and the digital objects. This study specifically takes aim at this aspect, examining how performance and user experience (UX) are affected by the addition of collision around the user's hands in an MR environment. To help answer these questions, a set of objectives has been formulated: finding and implementing a hand collision method, designing and performing the user study, and finally finding and utilizing appropriate methods for analyzing the collected data. Methods. To better understand the UX and performance of using hand collision, a user study was created in which the participants had to complete a series of tasks, with and without collision around their hands, answering a questionnaire about their experience after each task. Once collected, the data were analyzed with the help of the SUS scoring system and statistical tests. Results. The study had 12 participants. The conditions with and without hand collision received average SUS scores of 62.5 and 69.2, respectively. The results show that the method using no collision performed better in terms of time to complete the task. However, hand collision performed better in terms of fewer grabs used.
No statistically significant difference was detected between having or not having hand collision in terms of intuitiveness and realism. However, participants were observed to intuitively use the hand collision to their advantage. Conclusions. In conclusion, the participants did not perform better with hand collision, but did indicate some level of increased intuition and realism. The negative aspects of the hand collision are believed to be attributable to the method used to implement it, and potential exists in the area for further improvements and research. / Background. With the growing potential of extended reality (XR), new use cases and experiences are constantly being developed. Especially with the introduction of mixed reality (MR), enabling a more unified experience of the physical and the digital, there are great opportunities in education and training, where dangerous situations can be practiced safely. However, to make these experiences as effective and educational as possible, more realistic experiences are needed. Objectives. One important aspect of creating realistic experiences is creating believable collisions between the user's physical hand and the digital objects. This is one of the goals this study takes aim at: trying to find how performance and user experience (UX) are affected by the addition of collision around the user's hands in an MR environment. To more easily find a path to the answers to these questions, objectives have been formulated: find and implement a hand collision method, design and perform a user study, and find and use appropriate methods for analyzing the collected data. Methods. To gain a better understanding of the UX and performance of using hand collision, a user study was created in which the participants completed a series of tasks, with and without collision around their hands. After each task, a questionnaire about their experience was answered. Once collected, the data were analyzed with the help of the SUS scoring system and statistical tests. Results. This study had 12 participants. The conditions with and without hand collision received average SUS scores of 62.5 and 69.2, respectively. The results show that the method using no collision performed better in terms of time to complete the task. However, collision achieved better results with a smaller number of grabs used. No statistically significant difference was detected between with and without hand collision with respect to intuitiveness and realism. However, participants were observed to use the collision intuitively to their advantage. Conclusions. In summary, the participants did not perform better with collision, but indicated some level of increased intuition and realism. The negative aspects of the collision are believed to stem from the method used to implement it, and potential exists in the area for further improvements and research.
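Entry 91's analysis relies on the System Usability Scale. SUS scoring follows a fixed rule that is easy to mis-implement (odd-numbered items contribute response minus 1, even-numbered items contribute 5 minus response, and the raw sum is scaled by 2.5 to a 0 to 100 range), so a minimal sketch may be useful; the example responses below are illustrative, not data from the study:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd items (1st, 3rd, ...) contribute (r - 1); even items contribute (5 - r).
    The raw sum (0-40) is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    raw = sum(r - 1 if i % 2 == 0 else 5 - r
              for i, r in enumerate(responses))
    return raw * 2.5

# Example: a moderately positive questionnaire (illustrative values)
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Note that a raw item average cannot simply be rescaled: the alternating positive/negative item wording must be folded in first, which is exactly what the per-item branch above does.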
|
92 |
Holographic Sign Language Interpreter: A User Interaction Study within Mixed Reality Classroom
Fu Chia Yang (12469872) 27 April 2023 (has links)
An application was developed to explore user interactions with holographic sign language interpreters within HoloLens MR classrooms for Deaf and Hard of Hearing (DHH) students. The proposed system aims to enhance DHH students' learning efficacy. Despite the ongoing advancement of assistive technology and the trend of adopting Mixed Reality applications in education, little existing research provides user studies or design guidelines for HoloLens development targeting the DHH community. The developed HoloLens application projects a holographic American Sign Language (ASL) avatar that signs the lecture while a speaking instructor is teaching. The usability test focused on avatar manipulation (move, rotate, and resize) and avatar framing (full-body and half-body displays) within the MR classroom. A mixed-method approach was used to analyze quantitative and qualitative data through test recordings, surveys, and interviews. The results show user preferences toward viewing holographic signing avatars in the MR space and user acceptability toward such applications.
|
93 |
Understanding Immersive Environments for Visual Data Analysis
Satkowski, Marc 06 February 2024 (has links)
Augmented Reality enables combining virtual data spaces with real-world environments through visual augmentations, transforming everyday environments into user interfaces of arbitrary type, size, and content. In the past, the development of Augmented Reality was mainly technology-driven. This made head-mounted Mixed Reality devices more common in research, industrial, or personal use cases. However, such devices are always human-centered, making it increasingly important to closely investigate and understand human factors within such applications and environments. Augmented Reality usage can reach from a simple information display to a dedicated device to present and analyze information visualizations. The growing data availability, amount, and complexity amplified the need and wish to generate insights through such visualizations. Those, in turn, can utilize human visual perception and Augmented Reality’s natural interactions, the potential to display three-dimensional data, or the stereoscopic display.
In my thesis, I aim to deepen the understanding of how Augmented Reality applications must be designed to optimally adhere to human factors and ergonomics, especially in the area of visual data analysis. To address this challenge, I ground my thesis on three research questions: (1) How can we design such applications in a human-centered way? (2) What influence does the real-world environment have within such applications? (3) How can AR applications be combined with existing systems and devices?
To answer those research questions, I explore different human properties and real-world environments that can affect the same environment's augmentations. For human factors, I investigate the competence in working with visualizations as visualization literacy, the visual perception of visualizations, and physical ergonomics like head movement. Regarding the environment, I examine two main factors: the visual background's influence on reading and working with immersive visualizations and the possibility of using alternative placement areas in Augmented Reality. Lastly, to explore future Augmented Reality systems, I designed and implemented Hybrid User Interfaces and authoring tools for immersive environments. Throughout the different projects, I used empirical, qualitative, and iterative methods in studying and designing immersive visualizations and applications. With that, I contribute to understanding how developers can apply human and environmental parameters for designing and creating future AR applications, especially for visual data analysis. / Augmented Reality makes it possible to combine the real world with virtual data spaces through visual augmentations. Everyday environments are thus transformed into user interfaces of arbitrary type, size, and content. In the past, the development of Augmented Reality was mainly technology-driven. Consequently, head-mounted Mixed Reality devices became increasingly common in research, industry, and personal use. Since these devices are always human-centered, however, it becomes increasingly important to closely investigate human factors in such applications and environments. The use of Augmented Reality can range from a simple information display to the presentation and analysis of information visualizations.
The growing availability, amount, and complexity of data amplified the need and wish to gain insights through such visualizations. These, in turn, can make use of human visual perception, of the natural interaction provided by Augmented Reality, and of the display of three-dimensional and stereoscopic data.
In my dissertation, I aim to deepen the understanding of how Augmented Reality applications must be designed to optimally account for human factors and ergonomics, especially in the area of visual data analysis. I ground my work on three research questions: (1) How can such applications be designed in a human-centered way? (2) What influence does the real-world environment have on such applications? (3) How can AR applications be combined with existing systems and devices?
To answer these research questions, I investigate various human and environmental properties that can affect the augmentations of the same environment. For human factors, I investigate the competence in working with visualizations as ``Visualization Literacy'', the visual perception of visualizations, and physical ergonomics such as head movements. Regarding the environment, I examine two main factors: the influence of the visual background on reading and working with immersive visualizations, and the possibility of using alternative placement areas in Augmented Reality. Finally, to explore future Augmented Reality systems, I designed and implemented Hybrid User Interfaces and authoring tools for immersive environments. Throughout the various projects, I used empirical, qualitative, and iterative methods in studying and designing immersive visualizations and applications. With that, I contribute to the understanding of how developers can apply human and environmental parameters for designing and creating future AR applications, especially for visual data analysis.
|
94 |
Sensor fusion between positioning system and mixed reality / Sensorfusion mellan positioneringssystem och mixed reality
Lifwergren, Anton; Jonsson, Jonatan January 2022 (has links)
In situations where we want to use mixed reality systems over larger areas, it is necessary for these systems to maintain a correct orientation with respect to the real world. A solution for synchronizing the mixed reality and the real world over time is therefore essential to provide a good user experience. This thesis proposes such a solution, utilizing both a local positioning system named WISPR using Ultra Wide Band technology and an internal positioning system based on Google ARCore utilizing feature tracking. This is done by presenting a prototype mobile application that uses the positions from these two positioning systems to align the physical environment with a corresponding virtual 3D model. This enables increased environmental awareness by displaying virtual objects in accurately placed locations in the environment that are otherwise difficult or impossible to observe. Two transformation algorithms were implemented to align the physical environment with the corresponding virtual 3D model: Singular Value Decomposition and Orthonormal Matrices. The choice of algorithm showed minimal effect on both positional accuracy and computational cost. The most significant factor influencing the positional accuracy was found to be the quality of sampled position pairs from the two positioning systems. The parameters used to ensure high quality for the sampled position pairs were the LPS accuracy threshold, sampling frequency, sampling distance, and sample limit. A fine-tuning process of these parameters is presented and resulted in a mean Euclidean distance error of less than 10 cm relative to a predetermined path in a sub-optimal environment. The aim of this thesis was not only to achieve high positional accuracy but also to make the application usable in environments such as mines, which are prone to worse conditions than could be evaluated in the available test environment.
The design of the application, therefore, focuses on robustness and being able to handle connection losses from either positioning system. The resulting implementation can detect a connection loss, determine if the loss is destructive enough through performing quality checking of the transformation, and with this can apply both essential recovery actions and identify when such a recovery is deemed unnecessary.
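The SVD-based alignment mentioned in entry 94, fitting a rigid transform between corresponding position pairs from two positioning systems, is commonly implemented with the Kabsch method. A sketch under the assumption of 2D point pairs (the thesis's exact formulation, dimensionality, and parameters may differ):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||^2
    over paired points, via the SVD-based Kabsch method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so the result is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a 90-degree rotation plus a (2, 3) shift
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
dst = src @ R_true.T + np.array([2.0, 3.0])
R, t = rigid_transform_svd(src, dst)
assert np.allclose(R, R_true) and np.allclose(t, [2.0, 3.0])
```

Once R and t are estimated from sampled position pairs, every ARCore pose can be mapped into the LPS frame with `R @ p + t`, which is what keeps the virtual 3D model aligned with the physical environment over time.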
|
95 |
Understanding Mixed Reality Immersion in Online Learning: A Socio-Spatial and Social Presence Perspective
Farrokhi, Aydin January 2024 (has links)
In online learning, little is known about the impact of mixed reality and its underlying socio-technological factors on the social space perceived by learners. The term social space refers to a group's socio-emotional dynamics that structure the social relationships among its members. Drawing upon extant literature on mixed reality, interaction and social presence, this research proposes and validates a theoretical model that elucidates the influence of immersion on users' social space experiences within digital environments across different mixed realities (i.e., augmented and virtual realities, and video conferencing). Interaction and social presence are identified as two key factors mediating this relationship. To test the proposed model, a survey involving 488 participants in higher education was conducted, and the effects were examined under three conditions: video conferencing (VC), augmented reality (AR), and virtual reality (VR). The findings show that 1) immersion plays a significant role in educational technology, particularly in facilitating social space among learners in higher education; 2) the impact of immersion on social space is entirely mediated by learners' assessment of interactions and their perception of social presence in digitally facilitated learning environments; and 3) the influence of immersion on learners' experience of a prosocial space varies across virtual reality, augmented reality, and video conferencing. VR environments offer the most pronounced sensation of social presence, while AR environments prove to be optimal for interacting in digitally facilitated learning environments. Of the three conditions, VC environments were rated lowest for interaction, sensation of social presence, and the establishment of a communal atmosphere of collaboration in digitally mediated learning environments.
These findings make valuable contributions to theory by providing insights into the influence of immersion and variability in mixed reality on learners’ perception of social space experiences. In this respect, this research expands the body of knowledge and research in both the information systems and education fields. Furthermore, this research offers valuable insights for educators to make informed decisions regarding the selection and adoption of augmented and virtual reality technologies, as well as devising digital strategies in higher education. It contributes to our understanding of effective implementation of mixed reality in this context. / Dissertation / Doctor of Philosophy (PhD)
|
96 |
Development of a stereo vision mixed reality framework
Le Roux, Christiaan Johannes Hendrik 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Augmented reality is a fairly young research field, still in an infancy stage at Stellenbosch University. Since this is one of the first augmented reality projects, one goal is to present a theoretical study of augmented reality. This study is given in the literature study, along with a review of the available development solutions. While there are various tools available with which one can create marker-based augmented reality applications, these tools are not meant for testing new techniques and algorithms in an augmented or mixed reality. The remaining goals of this project are to create a platform for the rapid design of augmented reality applications, and to expand the capabilities of this platform beyond marker-based augmented reality. In this project we present the design and implementation of a pragmatic mixed-reality framework capable of a wider variety of applications. A design is shown where marker tracking can be used alongside other computer vision techniques to design new applications. The framework utilises stereo cameras to find the position of real-world objects, and a 3D display to make the mixed reality environment as immersive as possible. Proof-of-concept test applications built with the framework are presented. Colour-based techniques are used to find a user's hand and create a virtual representation of it. This allows the user to interact with a virtual object in an augmented reality scene by 'touching' it with her hand. / AFRIKAANSE OPSOMMING: Augmented reality is a young research field at Stellenbosch University. Since this is one of the first projects focusing on augmented reality, a theoretical study of augmented reality was set as a goal. This is provided in the literature study, together with an overview of existing solutions for the development of augmented reality software. Existing solutions are focused on the development of marker-based augmented reality, but leave little room for testing new techniques applicable to the field. This leads to the remaining goals of the project: to design a platform for the development of marker-based augmented reality applications, and to extend that platform. We deliver a pragmatic development framework that makes it possible to develop a variety of new augmented reality applications. The framework is designed so that the developer can use markers together with other computer vision techniques to create software. Stereo cameras are used to find the position of real objects. The framework also makes use of a 3D display to show virtual objects. Test applications built as a proof of concept are shown and discussed. A colour-based technique is used to find a user's hand, and a virtual representation of the hand is created. The user can make virtual objects react by touching them with her hand.
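The colour-based hand detection described in entry 96 is typically implemented by thresholding in a hue-based colour space and taking the centre of mass of the resulting mask as a rough hand position. A minimal NumPy sketch; the skin-tone bounds and OpenCV-style hue range (0-179) are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def skin_mask(hsv, lo=(0, 48, 80), hi=(20, 255, 255)):
    """Binary mask of pixels whose HSV values fall inside [lo, hi].

    The bounds are illustrative skin-tone defaults (OpenCV-style H in 0-179);
    real systems tune them per camera and lighting.
    """
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

def centroid(mask):
    """Centre of mass (row, col) of a binary mask, e.g. a rough hand position."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no candidate pixels found
    return float(ys.mean()), float(xs.mean())

# Toy 4x4 HSV "image": a 2x2 skin-coloured patch in the top-left corner
img = np.zeros((4, 4, 3), int)
img[:2, :2] = [10, 150, 200]          # hue 10, saturated, bright: in range
mask = skin_mask(img)
print(mask.sum(), centroid(mask))     # -> 4 (0.5, 0.5)
```

With stereo cameras, running this per camera and triangulating the two centroids would yield a 3D hand position; the thesis's actual pipeline may add morphological cleanup and contour extraction on top of the raw mask.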
|
97 |
Mobile learning using mixed reality games and a conversational, instructional and motivational paradigm : design and implementation of technical language learning mobile games for the developing world with special attention to mixed reality games for the realization of a conversational, instructional and motivational paradigm
Fotouhi-Ghazvini, Faranak January 2011 (has links)
Mobile learning has significant potential to influence further and higher education. In this research a new definition for Mobile Educational Mixed Reality Games (MEMRG) is proposed based on a mobile learning environment. A questionnaire and a quantifying scale are utilised to assist game developers in designing a MEMRG. A 'Conversational Framework' is proposed as an appropriate psycho-pedagogical approach to teaching and learning for MEMRG. This methodology is based on the theme of a 'conversation' between different actors of the learning community, with the objective of building the architectural framework for MEMRG. Various elements responsible for instructing and motivating learners in educational games are utilised in an instructional-motivational model. User interface design for the games incorporates an efficient navigation system that uses contextual information, and allows the players to move seamlessly between real and virtual worlds. The implementation of MEMRG using the Java 2 Micro Edition (J2ME) platform is presented. The hardware and software specification for the MEMRG implementation and deployment are also discussed. MEMRG has produced improvements in the different cognitive processes of the learner, and a deeper level of learning through enculturation, externalising ideas, and socialising. Learners' enjoyment, involvement, motivation, autonomy and metacognition skills have improved. This research will assist developers and teachers in gaining insight into learning paradigms that utilise mobile game environments formed by mixing real and virtual spaces, and provide them with a vision for effectively incorporating these games into formal and informal classroom sessions.
|
98 |
Evaluating the User Experience of Microsoft HoloLens and Mobile Device Using an Augmented Reality Application
Pola, Sai Vijay January 2019 (has links)
Context: In recent years, people have come to rely heavily on computers and smartphones in their daily activities. Augmented Reality superimposes virtual, computer-generated information on top of the real world. The Volvo Construction Equipment (VCE) team plans to use Augmented Reality applications on a real construction site to track the details of the vehicles without going to the laboratory. An Augmented Reality application was developed for the Microsoft HoloLens and a mobile device, and the user experience was evaluated. This research was conducted at PDRL-BTH, in collaboration with VCE. Objectives: In this research, the key attributes to be displayed on both devices are collected, and the user experience is compared using the user satisfaction score. Furthermore, this research explores and evaluates the difference in user experience between the two devices. Methods: First, an interview with open-ended questions was carried out with the design engineers of the VCE team, and the required information was collected and documented. An experiment on the user experience was then conducted to calculate the user satisfaction score for the Microsoft HoloLens and the mobile device. After the experiment, the significance of the difference between the two devices was assessed using statistical techniques, and Cohen's d effect size was used to measure the size of the difference. Results: The difference between the user satisfaction scores of the two devices was tested using a t-test. The resulting p-value was less than 0.05, so the null hypothesis was rejected. The measured difference indicates that the Microsoft HoloLens has a better user interface than the mobile device with respect to the user satisfaction score.
Conclusions: After obtaining the results and analyzing the data, we conclude that there is a significant difference in the user experience of the Microsoft HoloLens compared to the mobile device, and that the Microsoft HoloLens provides the better user experience of the two.
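The analysis pipeline of entry 98, an independent-samples t-test followed by Cohen's d with pooled standard deviation, can be sketched in a few lines. The two score lists below are illustrative placeholders, not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation for two independent samples."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def t_statistic(a, b):
    """Independent two-sample t statistic (pooled variance).

    Uses the identity t = d / sqrt(1/na + 1/nb), which follows from both
    statistics sharing the same pooled standard deviation.
    """
    na, nb = len(a), len(b)
    return cohens_d(a, b) / sqrt(1 / na + 1 / nb)

# Illustrative satisfaction scores for two devices (hypothetical values)
hololens = [78, 82, 75, 88, 80, 85]
mobile = [70, 72, 68, 74, 71, 69]
print(round(cohens_d(hololens, mobile), 2),
      round(t_statistic(hololens, mobile), 2))  # -> 2.91 5.03
```

The t statistic would then be compared against the t distribution with na + nb - 2 degrees of freedom to obtain the p-value reported in the study.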
|
99 |
MEASURING SITUATION AWARENESS IN MIXED REALITY SIMULATIONS
Forsman, Viking January 2019 (has links)
Off-highway vehicles, such as excavators and forklifts, are heavy machines capable of harming humans or damaging property. It is therefore necessary to be able to develop interfaces for these kinds of vehicles that help the operator maintain a high level of situational awareness. How an interface affects the operator's situational awareness is consequently an important metric when evaluating it. Mixed reality simulators can be used both to develop and to evaluate such interfaces in an immersive and safe environment. In this thesis we investigated how to measure situational awareness in a mixed reality off-highway vehicle simulation scenario, without having to pause the scenario, by cross-referencing logs from the virtual environment with logs of the users' gaze position. Our method for investigating this research question was to perform a literature study and a user test. Each participant in the user test filled out a SART post-simulation questionnaire, which we then compared with our measurement system.
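The cross-referencing of simulation event logs with gaze logs described in entry 99 amounts to a timestamp join: for each logged event, check whether any gaze sample landed on the relevant object within a time window. A hedged sketch; the log formats, object labels, and window size are assumptions, not the thesis's actual implementation:

```python
def gaze_hits(events, gaze, window=1.0):
    """For each (timestamp, object_id) event, report whether any gaze sample
    (timestamp, object_id) landed on that object within +/- window seconds."""
    hits = []
    for t_ev, obj in events:
        seen = any(abs(t_g - t_ev) <= window and g_obj == obj
                   for t_g, g_obj in gaze)
        hits.append((t_ev, obj, seen))
    return hits

# Illustrative logs: a hazard appears at t=5.0; gaze reaches it at t=5.4
events = [(5.0, "hazard"), (9.0, "pedestrian")]
gaze = [(4.0, "road"), (5.4, "hazard"), (8.0, "road")]
print(gaze_hits(events, gaze))
# -> [(5.0, 'hazard', True), (9.0, 'pedestrian', False)]
```

The fraction of events "seen" in time gives a continuous awareness indicator that, unlike freeze-probe techniques such as SAGAT, does not require pausing the scenario, which is why it can be compared against the post-simulation SART scores.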
|
100 |
The construction of a Haptic application in a Virtual Environment as a post-Stroke arm Rehabilitation exercise
Dreifaldt, Ulrika; Lövquist, Erik January 2006 (has links)
This thesis describes a six-month project on stroke rehabilitation, involving design work with medical doctors, a physiotherapist and an occupational therapist, and prototyping and evaluation with both stroke patients and other users. The project involves the construction of a rehabilitation exercise system, based on virtual environments (VE) and haptics, designed for stroke patients. Our system uses a commercially available haptic device called the PHANTOM Omni, which has the possibility of being used as a rehabilitation tool to interact with virtual environments. The PHANTOM Omni is used in combination with our own software built on the H3D API platform. Our goal is to construct an application that will motivate stroke patients to start using their arm again.
We give a review of the different aspects of stroke, rehabilitation, VE and haptics and how these have previously been combined. We describe our findings from our literature studies and from informal interviews with medical personnel. From these conclusions we attempt to take the research area further by suggesting and evaluating designs of different games/genres that can be used with the PHANTOM Omni as possible haptic exercises for post-stroke arm rehabilitation. We then present two different implementations to show how haptic games can be constructed. We mainly focus on a game we built, called "The Labyrinth", using an iterative design process based on studies conducted during the project. The game illustrates many of the different aspects that have to be taken into account when designing haptic games for stroke patients. From a study with three stroke patients we have seen that "The Labyrinth" has the potential to be a stimulating, encouraging and fun complement to traditional rehabilitation.
Through the design process and the knowledge we acquired during this thesis, we have created a set of general design guidelines that we believe can help future software development of haptic games for post-stroke arm rehabilitation.
|