101

Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

Xiong, Yiyan 01 January 2014 (has links)
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be controllable by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting human model resembles the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
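For readers unfamiliar with the corner-finding baseline referenced above: ShortStraw resamples a stroke (here, a contour) to equidistant points, computes each point's "straw" as the chord length between the points a fixed window before and after it, and flags local minima below a median-based threshold as corners. The sketch below is a minimal rendering of that baseline idea only, with the commonly cited window and threshold defaults; it does not reproduce the IStraw refinements or the corner-based contour segmentation developed in the dissertation.

```python
import math

def resample(points, spacing):
    """Resample a polyline to (approximately) equidistant points."""
    out = [points[0]]
    accum = 0.0
    prev = points[0]
    for curr in points[1:]:
        d = math.hypot(curr[0] - prev[0], curr[1] - prev[1])
        while accum + d >= spacing and d > 0:
            t = (spacing - accum) / d
            q = (prev[0] + t * (curr[0] - prev[0]),
                 prev[1] + t * (curr[1] - prev[1]))
            out.append(q)
            d -= spacing - accum
            accum = 0.0
            prev = q
        accum += d
        prev = curr
    return out

def shortstraw_corners(points, window=3, ratio=0.95):
    """Corners = local minima of straw length below a median-based threshold."""
    straws = {i: math.hypot(points[i + window][0] - points[i - window][0],
                            points[i + window][1] - points[i - window][1])
              for i in range(window, len(points) - window)}
    threshold = ratio * sorted(straws.values())[len(straws) // 2]  # ~median straw
    return [i for i, s in straws.items()
            if s < threshold and all(s <= straws.get(j, float("inf"))
                                     for j in range(i - window, i + window + 1))]

# An L-shaped contour: one 90-degree bend halfway along the path.
stroke = [(x, 0.0) for x in range(0, 51, 2)] + [(50.0, y) for y in range(2, 51, 2)]
print(shortstraw_corners(resample(stroke, spacing=2.0)))  # -> [25], the bend
```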
102

Remote collaboration within a Mixed Reality rehabilitation environment : The usage of audio and video streams for mixed platform collaboration

Eriksson, Hanna January 2022 (has links)
This thesis investigates methods for remote collaboration and communication within a Mixed Reality (MR) rehabilitation environment. Based on research into remote communication methods and an interview with an occupational therapist with previous experience in MR rehabilitation, a video and audio stream communication method was chosen for implementation. The implementation consists of two applications: a patient application developed for HoloLens 2 and a therapist application for Android devices. The latter was tested with professional occupational therapists to investigate the feasibility of the method. The results of the test indicated that the general attitude toward remote rehabilitation was positive. However, the chosen method did not allow the therapist to see the patient's face and surroundings, which was a problem for a majority of the test participants. The cognitive workload for the therapist when communicating with the patient was comparable in magnitude to that of similar tasks, and the application was relatively easy to navigate.
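The thesis abstract does not specify the streaming transport, so purely as an illustration of the general pattern behind such a one-way stream between two applications, the following sketch sends length-prefixed frames over TCP; the port, payload, and frame pacing are invented stand-ins, not values from the thesis.

```python
import socket, struct, threading, time

def send_frames(host="127.0.0.1", port=9500, n_frames=5):
    """Send length-prefixed 'frames' (synthetic bytes standing in for encoded video)."""
    with socket.create_connection((host, port)) as sock:
        for i in range(n_frames):
            frame = bytes([i % 256]) * 1024       # stand-in for one encoded frame
            sock.sendall(struct.pack(">I", len(frame)) + frame)
            time.sleep(1 / 30)                    # pace at roughly 30 fps

def receive_frames(port=9500):
    """Accept one sender and read frames until the stream closes."""
    with socket.create_server(("127.0.0.1", port)) as server:
        conn, _ = server.accept()
        with conn:
            while True:
                header = conn.recv(4, socket.MSG_WAITALL)
                if len(header) < 4:
                    break                          # sender closed the stream
                size = struct.unpack(">I", header)[0]
                frame = conn.recv(size, socket.MSG_WAITALL)
                print(f"received frame of {len(frame)} bytes")

receiver = threading.Thread(target=receive_frames, daemon=True)
receiver.start()
time.sleep(0.2)                                    # let the server start listening
send_frames()
time.sleep(0.5)                                    # let the receiver drain and print
```

A production system between HoloLens 2 and Android would more plausibly sit on a real-time media stack (e.g., WebRTC) than on raw sockets; the point here is only the frame-delimited stream structure.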
103

The use of mixed reality in simulations

Byström, Jesper January 2022 (has links)
Simulators utilizing virtual reality have a problem with visibility of controls: when using a head-mounted display, a user is blind to the controls being used, meaning that the user must become accustomed to the controls before being able to use the simulator properly. Oryx Simulations has acknowledged this issue and has been experimenting with whether mixed reality could solve it. This study investigates two techniques as a solution: depth occlusion and stencil masking. The study compares depth occlusion and stencil masking to the commonly used chroma-key functionality. Chroma keying could theoretically achieve a seamless blend between virtual objects and real objects. The results presented show a promising outcome for depth occlusion specifically, which achieved the highest total score and visibility and the lowest amount of leakage among the categories tested. This report presents and reflects upon those results. It concludes by discussing opportunities for further investigation into depth occlusion.
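As an illustration of the depth-occlusion idea the study evaluates (a generic numpy sketch, not Oryx's implementation), compositing reduces to a per-pixel choice of whichever layer is nearer to the camera; the image size and depth values below are made up.

```python
import numpy as np

def composite_depth_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per pixel, keep whichever layer is nearer (depths in metres; np.inf = empty)."""
    real_wins = real_depth < virt_depth            # real object occludes virtual one
    return np.where(real_wins[..., None], real_rgb, virt_rgb)

# Toy 2x2 frame: a real control lever at 0.4 m in front of a virtual cab at 0.6 m.
real_rgb   = np.full((2, 2, 3), 200, dtype=np.uint8)   # bright real pixels
virt_rgb   = np.full((2, 2, 3),  30, dtype=np.uint8)   # dark virtual pixels
real_depth = np.array([[0.4, np.inf], [0.4, np.inf]])  # right column: no real object
virt_depth = np.full((2, 2), 0.6)
out = composite_depth_occlusion(real_rgb, real_depth, virt_rgb, virt_depth)
print(out[..., 0])   # [[200  30]
                     #  [200  30]]  -> the lever shows through the virtual scene
```

Chroma keying, by contrast, makes this per-pixel choice from color rather than depth, which is why it can blend seamlessly in theory but depends on controlled lighting and keying surfaces.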
104

Embodied Data Exploration in Immersive Environments: Application in Geophysical Data Analysis

Sardana, Disha 05 June 2023 (has links)
Immersive analytics is an emerging field of data exploration and analysis in immersive environments. It is an active research area that explores human-centric approaches to data exploration and analysis based on the spatial arrangement and visualization of data elements in immersive 3D environments. The availability of immersive extended reality systems has increased tremendously in recent years, but they are still not as widely used as conventional 2D displays. In this dissertation, we describe an immersive analysis system for spatiotemporal data, perform several user studies to measure user performance in the developed system, and lay out design guidelines for an immersive analytics environment. In our first study, we compared the performance of users on specific visual analytics tasks in an immersive environment and on a conventional 2D display. The approach was realized based on the coordinated multiple-views paradigm. We also designed an embodied interaction for the exploration of spatial time series data. The findings from the first user study showed that the developed system is more efficient in a real immersive environment than on a conventional 2D display. One of the important challenges we identified while designing an immersive analytics environment was finding the optimal placement and identification of the various visual elements. In our second study, we explored the iterative design of the placement of visual elements, and of interaction with them, based on frames of reference. Our iterative designs explored the impact of the visualization scale for three frames of reference and used the collected user feedback to compare the advantages and limitations of these three frames of reference. In our third study, we describe an experiment that quantitatively and qualitatively investigated the use of sonification, i.e., conveying information through nonspeech audio, in an immersive environment that utilized empirical datasets obtained from a multi-dimensional geophysical system. We discovered that using event-based sonification in addition to the visual channel was extremely effective in identifying patterns and relationships in large, complex datasets. Our findings also imply that the inclusion of audio in an immersive analytics system may increase users’ level of confidence when performing analytics tasks like pattern recognition. We outlined the sound design principles for an immersive analytics environment using real-world geospace science datasets and assessed the benefits and drawbacks of using sonification in an immersive analytics setting. / Doctor of Philosophy / When it comes to exploring data, visualization is the norm. We make line charts, scatter plots, bar graphs, or heat maps to look for patterns in data using traditional desktop-based approaches. However, humans are biologically optimized to observe the world in three dimensions. This research is motivated by the idea that representing data in immersive 3D environments can provide a new perspective that may lead to the discovery of previously undetected data patterns. Experiencing the data in three dimensions, engaging multiple senses like sound and sight, and leveraging human embodiment, interaction capabilities, and sense of presence may lead to a unique understanding of the data that is not feasible using traditional visual analytics.
In this research, we first compared the data analysis process in a mixed reality system, where real and virtual worlds co-exist, versus doing the same analytical tasks in a desktop-based environment. In our second study, we studied where different charts and data visualizations should be placed based on the scale of the environment, such as table-top versus room-sized. We studied the strengths and limitations of different scales based on the visual and interaction design of the developed system. In our third study, we used a real-world space science dataset to test the liabilities and advantages of using the immersive approach. We also used audio and explored what kinds of audio work for which analytical tasks and laid out design guidelines based on audio. Through this research, we studied how to do data analytics in emerging mixed reality environments and presented results and design guidelines for future developers, designers, and researchers in this field.
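As a rough illustration of event-based sonification in general (the mapping, threshold, and pitch range below are invented for the example, not taken from the dissertation), the following sketch renders one short tone per data sample that crosses a threshold, with pitch encoding the sample's magnitude, and writes the result to a WAV file:

```python
import numpy as np
import wave

RATE = 44100          # audio sample rate
STEP = 0.15           # seconds of audio per data sample

def tone(freq_hz, dur_s=STEP, amp=0.3):
    """A plain sine tone at the given frequency."""
    t = np.linspace(0, dur_s, int(RATE * dur_s), endpoint=False)
    return amp * np.sin(2 * np.pi * freq_hz * t)

def sonify_events(values, threshold, lo=220.0, hi=880.0):
    """One tone per above-threshold sample; silence elsewhere preserves timing."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    chunks = [tone(lo + (hi - lo) * (v - vmin) / span) if v >= threshold
              else np.zeros(int(RATE * STEP)) for v in values]
    return np.concatenate(chunks)

samples = [0.1, 0.2, 0.9, 0.3, 0.7, 0.1, 1.0]      # e.g. a normalized activity index
pcm = (sonify_events(samples, threshold=0.6) * 32767).astype(np.int16)
with wave.open("events.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                               # 16-bit mono
    f.setframerate(RATE)
    f.writeframes(pcm.tobytes())
```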
105

Looks Good To Me (LGTM): Authentication for Augmented Reality

Gaebel, Ethan Daniel 27 June 2016 (has links)
Augmented reality is poised to become the next dominant computing paradigm over the coming decade. With the three-dimensional graphics and interactive interfaces that augmented reality promises, it will rival the very best science fiction novels. Users will want to have shared experiences in these rich augmented reality scenarios, but they will surely also want to restrict who can see their content. It is currently unclear how users of such devices will authenticate one another. Traditional authentication protocols that rely on centralized authorities fall short when different systems with different authorities try to communicate, and extra infrastructure means extra resource expenditure. Augmented reality content sharing will usually occur in face-to-face scenarios, where it is advantageous for both performance and usability to keep communications and authentication localized. Looks Good To Me (LGTM) is an authentication protocol for augmented reality headsets that leverages the unique hardware and context provided by augmented reality headsets to solve an old problem in a more usable and more secure way. LGTM works over point-to-point wireless communications, so users can authenticate one another in any circumstance, and it is designed with usability at its core, requiring users to perform only two actions: one to initiate and one to confirm. LGTM allows users to intuitively authenticate one another, seemingly using only each other's faces. Under the hood, LGTM uses a combination of facial recognition and wireless localization to ensure secure and extremely simple authentication. / Master of Science
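The abstract names the two factors LGTM combines; the sketch below illustrates only that core acceptance logic, where a recognized face must also lie in the direction from which the wireless signal was localized. The similarity score, bearings, and thresholds are invented for the example, and the protocol's actual recognition, localization, and cryptographic exchange are not reproduced.

```python
def lgtm_accept(face_similarity, face_bearing_deg, radio_bearing_deg,
                face_threshold=0.8, bearing_tolerance_deg=15.0):
    """Accept only if the face matches AND it appears where the radio is localized."""
    face_ok = face_similarity >= face_threshold
    # Smallest signed angular difference, robust to 0/360 wraparound.
    delta = abs((face_bearing_deg - radio_bearing_deg + 180.0) % 360.0 - 180.0)
    return face_ok and delta <= bearing_tolerance_deg

# Genuine partner: the recognized face sits in the radio's direction.
print(lgtm_accept(0.93, face_bearing_deg=12.0, radio_bearing_deg=10.0))   # True
# Impersonation attempt: a matching face, but the signal comes from elsewhere.
print(lgtm_accept(0.93, face_bearing_deg=12.0, radio_bearing_deg=140.0))  # False
```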
106

Real-time performance comparison of environments created using traditional geometry rendering versus Unreal Nanite technology in virtual reality

Tianshu Li Sr. (17596065) 26 April 2024 (has links)
<p dir="ltr">This study talks about the use of Nanite in Unreal Engine 5.3 in a VR environment and evaluates its impact on scene performance and image quality. Through experimental studies, it was found that Nanite significantly reduced the number of triangles and draw calls for complex scenes. However, Nanite may have caused FPS drops and excessive GPU load, limiting its application areas. Additionally, as a prior condition for Nanite, disabling forward shading reduces performance even though it has positive impacts on graphic quality. The results show that Nanite may have potential in VR environments but requires further optimization to improve its performance. Future research should focus on new optimization methods and expand the use of Nanite in different fields while hardware technology improves.</p>
107

Immersion in Georges Seurat’s Painting “La Grande Jatte” with VR: A Study for Art Appreciation

Siddhant Bal (19182175) 20 July 2024 (has links)
<p dir="ltr">The study aims to provide insight into improving the understanding and, in tandem, appreciation of a traditional art piece, the "Sunday Afternoon on the Island of La Grande Jatte" made by Georges Seurat, using virtual reality.</p>
108

Holographic Sign Language Interpreter: A User Interaction Study within Mixed Reality Classroom

Fu Chia Yang (12469872) 27 April 2023 (has links)
An application was developed to explore user interactions with holographic sign language interpreters within HoloLens MR classrooms for Deaf and Hard of Hearing (DHH) students. The proposed system aims to enhance DHH students’ learning efficacy. Despite the ongoing advancement of assistive technology and the trend of adopting Mixed Reality applications in education, little existing research provides user studies or design guidelines for HoloLens development targeting the DHH community. The developed HoloLens application projects a holographic American Sign Language (ASL) avatar that signs the lecture while a speaking instructor is teaching. The usability test focused on avatar manipulation (move, rotate, and resize) and avatar framing (full-body and half-body displays) within the MR classroom. A mixed-methods approach was used to analyze quantitative and qualitative data from test recordings, surveys, and interviews. The results show user preferences for viewing holographic signing avatars in the MR space and user acceptance of such applications.
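Under the hood, the move/rotate/resize manipulations tested here reduce to composing standard homogeneous transforms on the avatar. The numpy sketch below illustrates that composition in the abstract; it is not the thesis's HoloLens implementation, and the placement values are invented.

```python
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def rotate_y(deg):                      # yaw: turn the avatar to face the viewer
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

def scale(f):                           # uniform resize
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = f
    return m

# Place the avatar 1.5 m in front of the student, turned around, at half size.
avatar_pose = translate(0, 0, 1.5) @ rotate_y(180) @ scale(0.5)
origin = np.array([0.0, 0.0, 0.0, 1.0])       # avatar origin in its local space
print(avatar_pose @ origin)                   # -> [0. 0. 1.5 1.]
```

Full-body versus half-body framing could similarly be expressed as a different pose plus cropping, though the thesis does not describe its implementation at this level.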
109

Understanding Immersive Environments for Visual Data Analysis

Satkowski, Marc 06 February 2024 (has links)
Augmented Reality enables combining virtual data spaces with real-world environments through visual augmentations, transforming everyday environments into user interfaces of arbitrary type, size, and content. In the past, the development of Augmented Reality was mainly technology-driven. This made head-mounted Mixed Reality devices more common in research, industrial, or personal use cases. However, such devices are always human-centered, making it increasingly important to closely investigate and understand human factors within such applications and environments. Augmented Reality usage can range from a simple information display to a dedicated device for presenting and analyzing information visualizations. The growing availability, amount, and complexity of data have amplified the need and wish to generate insights through such visualizations. Those, in turn, can utilize human visual perception and Augmented Reality's natural interactions, its potential to display three-dimensional data, and its stereoscopic display. In my thesis, I aim to deepen the understanding of how Augmented Reality applications must be designed to optimally adhere to human factors and ergonomics, especially in the area of visual data analysis. To address this challenge, I ground my thesis on three research questions: (1) How can we design such applications in a human-centered way? (2) What influence does the real-world environment have within such applications? (3) How can AR applications be combined with existing systems and devices? To answer these research questions, I explore different human properties and real-world environments that can affect the same environment's augmentations. For human factors, I investigate competence in working with visualizations (visualization literacy), the visual perception of visualizations, and physical ergonomics such as head movement. Regarding the environment, I examine two main factors: the visual background's influence on reading and working with immersive visualizations, and the possibility of using alternative placement areas in Augmented Reality. Lastly, to explore future Augmented Reality systems, I designed and implemented Hybrid User Interfaces and authoring tools for immersive environments. Throughout the different projects, I used empirical, qualitative, and iterative methods in studying and designing immersive visualizations and applications. With that, I contribute to understanding how developers can apply human and environmental parameters when designing and creating future AR applications, especially for visual data analysis.
110

Sensor fusion between positioning system and mixed reality

Lifwergren, Anton; Jonsson, Jonatan January 2022 (has links)
In situations where we want to use mixed reality systems over larger areas, it is necessary for these systems to maintain a correct orientation with respect to the real world. A solution for synchronizing the mixed reality and the real world over time is therefore essential for a good user experience. This thesis proposes such a solution, combining a local positioning system (LPS) named WISPR, which uses Ultra-Wideband technology, with an internal positioning system based on Google ARCore's feature tracking. It does so through a prototype mobile application that uses the positions from these two positioning systems to align the physical environment with a corresponding virtual 3D model. This enables increased environmental awareness by displaying virtual objects, accurately placed, at locations in the environment that are otherwise difficult or impossible to observe. Two transformation algorithms were implemented to align the physical environment with the corresponding virtual 3D model: Singular Value Decomposition and Orthonormal Matrices. The choice of algorithm showed minimal effect on both positional accuracy and computational cost. The most significant factor influencing positional accuracy was found to be the quality of the sampled position pairs from the two positioning systems. The parameters used to ensure high quality for the sampled position pairs were the LPS accuracy threshold, sampling frequency, sampling distance, and sample limit. A fine-tuning process for these parameters is presented, and it resulted in a mean Euclidean distance error of less than 10 cm to a predetermined path in a sub-optimal environment. The aim of this thesis was not only to achieve high positional accuracy but also to make the application usable in environments such as mines, which are prone to worse conditions than could be evaluated in the available test environment. The design of the application therefore focuses on robustness and on handling connection losses from either positioning system. The resulting implementation can detect a connection loss, determine whether the loss is severe enough by quality-checking the transformation, apply essential recovery actions, and identify when such recovery is unnecessary.
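Of the two alignment algorithms compared, the SVD-based one is commonly realized as the Kabsch method: center both point sets, take the SVD of their cross-covariance, and read off the least-squares rotation (with a determinant check to rule out a reflection). The numpy sketch below shows that computation on synthetic position pairs; the data and frame setup are invented, and this is not claimed to match the thesis's exact implementation.

```python
import numpy as np

def align_svd(lps_pts, ar_pts):
    """Least-squares rigid transform (R, t) mapping AR-frame points onto
    LPS-frame points, computed via SVD (the Kabsch method)."""
    ca, cb = lps_pts.mean(axis=0), ar_pts.mean(axis=0)     # centroids
    H = (ar_pts - cb).T @ (lps_pts - ca)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t

# Synthetic check: fabricate AR positions as rotated-and-shifted LPS positions.
rng = np.random.default_rng(0)
lps = rng.uniform(0.0, 10.0, size=(8, 3))                  # 8 sampled position pairs
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
ar = (lps - lps.mean(axis=0)) @ R_true.T + 5.0             # a made-up AR frame
R, t = align_svd(lps, ar)
print(np.allclose(R @ ar.T + t[:, None], lps.T))           # True: alignment recovered
```

Noisy or poorly spread position pairs degrade the cross-covariance, and with it the recovered transform, which is consistent with the abstract's finding that sample quality, not the choice between SVD and the orthonormal-matrices solution, dominated accuracy.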
