41

Convergence in mixed reality-virtuality environments : facilitating natural user behavior

Johansson, Daniel January 2012 (has links)
This thesis addresses the subject of converging real and virtual environments into a combined entity that can facilitate physiologically compliant interfaces for the purpose of training. Based on the mobility and physiological demands of dismounted soldiers, the base assumption is that greater immersion means better learning and potentially higher training transfer. As the user can interface with the system in a natural way, more focus and energy can be spent on training rather than on the controls themselves. The requirements identified for such a simulator, relating to physical and psychological user aspects, are support for unobtrusive and wireless use, a high field of view, high-performance tracking, use of authentic tools, the ability to see other trainees, unrestricted movement, and physical feedback. Using only commercially available systems would be prohibitively expensive while not providing a solution fully optimized for the simulator's target group. For this reason, most of the systems that compose the simulator are custom made, both to accommodate physiological human aspects and to bring down costs. Using chroma keying, a cylindrical simulator room, and parallax-corrected, high-field-of-view video see-through head-mounted displays, the real and virtual realities are mixed. This facilitates the use of real tools as well as the layering and manipulation of real and virtual objects. Furthermore, a novel omnidirectional floor and an accompanying interface scheme are developed to allow limitless physical walking to be used for virtual translation. A physically confined real space is thereby transformed into an infinite converged environment. The omnidirectional floor regulation algorithm can also provide physical feedback by adjusting its velocity to synchronize virtual obstacles with the surrounding simulator walls. As an alternative use of the simulator, an omnidirectional robotic platform has been developed that can match the user's movements.
This can be utilized to increase situation awareness in telepresence applications.
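The abstract does not spell out the floor regulation algorithm itself. Its core idea, though — drive the floor surface so that the walking user is pulled back toward the centre of the physically confined space — can be sketched as a simple proportional controller. Everything below (function name, gain, velocity limit) is a hypothetical illustration, not the author's implementation:

```python
def floor_velocity(user_pos, gain=1.5, v_max=2.0):
    """Illustrative re-centering rule for an omnidirectional floor.

    user_pos: the user's (x, y) offset from the floor centre, in metres.
    Returns a floor surface velocity (vx, vy) in m/s that carries the
    user back toward the centre, clamped to the floor's maximum speed.
    """
    vx = max(-v_max, min(v_max, gain * user_pos[0]))
    vy = max(-v_max, min(v_max, gain * user_pos[1]))
    return (vx, vy)
```

The thesis additionally describes biasing this velocity so that virtual obstacles coincide with the real simulator walls; that synchronization step is not modeled here.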
42

Interactive Visualization of Underground Infrastructures via Mixed Reality

Sela, Sebastian, Gustafsson, Elliot January 2019 (has links)
Visualization of underground infrastructure, such as pipes and cables, can be useful for infrastructure providers and can be utilized for both planning and maintenance. The purpose of this project is therefore to develop a system that provides interactive visualization of underground infrastructure using mixed reality. This requires positioning the user and virtual objects outdoors, as well as optimizing the system for outdoor use. To accomplish this, GPS coordinates must be known so that the system can accurately draw virtual underground infrastructure in real time in relation to the real world. To get GPS data into the system, a lightweight web server written in Python was developed to run on GPS-enabled Android devices; it responds to an HTTP request with the device's current GPS coordinates. A mixed reality application was developed in Unity and written in C# for the Microsoft HoloLens. It requests the coordinates via HTTP in order to draw virtual objects, commonly called holograms, representing the underground infrastructure. The application uses the Haversine formula to calculate distances from GPS coordinates. Data pertaining to real underground infrastructure, including GPS coordinates, have been provided by Halmstad Energi och Miljö. The result is a HoloLens application which, in combination with the Python script, draws virtual objects based on real data (type of structure, size, and corresponding coordinates) to let the user view the underground infrastructure. The user can customize the experience by choosing to display certain types of pipes or by changing the chosen navigational tool. Users can also view information about valves, such as their ID, type, and coordinates.
Although the developed application is fully functional, visualizing holograms with the HoloLens outdoors is problematic: the brightness of natural light reduces the application's visibility, and a lack of trackable points in the surroundings causes the visualization to be displayed incorrectly.
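The Haversine formula mentioned in this abstract gives the great-circle distance between two GPS coordinates. A minimal sketch in Python (the language the project uses server-side; the function name and the Earth-radius constant are our own choices, not taken from the thesis) could look like:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points,
    given in decimal degrees, using the Haversine formula."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # a is the squared half-chord length between the points
    a = math.sin(dphi / 2) ** 2 \
        + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

One degree of longitude at the equator comes out at roughly 111 km, which is the sanity check usually applied to such implementations.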
43

Iluminação baseada em séries temporais de imagens com aplicações em realidade mista / Time series image based lighting with mixed reality applications

Valente, Caio de Freitas 06 September 2016 (has links)
Lighting estimation is essential for mixed reality applications that strive to integrate virtual elements into real scenes seamlessly, without sacrificing realism. A widely used method for lighting estimation is Image-Based Lighting (IBL), which uses light probes to determine the incident light intensity within a scene. However, IBL estimates light incidence only for a given time and position. In this dissertation, we assess a lighting model based on a time series of light-probe images, sampled sparsely in time, to render scenes at arbitrary times.
New scenes containing virtual objects can then be rendered using artificial light-probe images generated from the original lighting samples. Different interpolation and approximation functions were evaluated for modeling lighting behavior. The resulting images were assessed by volunteers to determine the impact on rendering quality in mixed reality applications. In addition to the methodology, we developed a software plugin to simplify the use of temporally variable IBL, allowing realistic rendering of virtual objects at arbitrary times.
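The dissertation evaluates several interpolation and approximation functions; as a sketch of only the simplest, piecewise-linear case (function and variable names are hypothetical), an artificial light probe for an arbitrary time can be blended from the two nearest sparse samples:

```python
import numpy as np

def probe_at(t, times, probes):
    """Linearly interpolate a light-probe image for time t.

    times:  sorted 1-D array of sample timestamps.
    probes: list of HDR probe images (float arrays of equal shape),
            one per timestamp.
    Times outside the sampled range are clamped to the endpoints.
    """
    t = float(np.clip(t, times[0], times[-1]))
    i = int(np.searchsorted(times, t, side="right")) - 1
    if i >= len(times) - 1:
        return probes[-1]
    # blend weight between sample i and sample i+1
    w = (t - times[i]) / (times[i + 1] - times[i])
    return (1.0 - w) * probes[i] + w * probes[i + 1]
```

Higher-order approximations (the dissertation's actual subject of comparison) would replace the linear blend with a fitted function per pixel.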
44

Contribution to the study of projection-based systems for industrial applications in mixed reality / Contribution à l’étude des systèmes de projection pour des applications industrielles en réalité mixte

Cortes, Guillaume 24 October 2018 (has links)
Mixed reality brings several advantages to industrial applications. Among others, it can facilitate visualizing and validating projects, or assist operators during specific tasks. Projection-based systems (PBS), such as CAVEs or spatial augmented reality, provide a mixed reality environment that enables straightforward collaboration with external users. In this thesis, we aim at improving the usage of PBS for industrial applications by considering two main challenges: (1) improving the technical components of PBS and (2) improving the user experience when using PBS. As a first technical challenge, we address the improvement of the tracking component. We introduce an approach that increases the workspace of optical tracking systems through two methods: the first uses monocular tracking, while the second uses controlled cameras that follow the targets across the workspace. The resulting system provides acceptable performance for mixed reality applications while considerably increasing the workspace. Such a tracking system can make it easier to use large projection-based displays and can widen the range of available interactions. As a second technical challenge, we designed an "all-in-one" headset for mobile spatial augmented reality on tangible objects.
The headset combines a projection system and a tracking system, both embedded on the user's head. With such a system, users can move around tangible objects and manipulate them directly by hand while virtual content is projected onto them. We illustrate our system with two industrial use cases: virtual prototyping and medical visualization. Finally, we address the challenge of improving the user experience when using PBS. We introduce a method that provides virtual embodiment and increases the users' spatial perception in PBS. To do so, we add the user's virtual shadow in immersive projection-based systems. The virtual shadow is dynamically mapped to the user's movements so that they perceive it as their own. We then carried out an experiment to study the influence of the presence of the virtual shadow on user experience and behavior.
45

Edge Computing for Mixed Reality / Blandad virtuell verklighet med stöd av edge computing

Lindqvist, Johan January 2019 (has links)
Mixed reality, or augmented reality, where the real and virtual worlds are combined, has seen increased interest in recent years with the release of tools like Google ARCore and Apple ARKit. Edge computing, where distributed computing resources are located near the end device at the edge of the network, is a paradigm that enables offloading of latency-sensitive computing tasks to dedicated servers. This thesis studies how edge computing can be used to bring mixed reality capabilities to mobile end devices that lack native support for them. It presents a working prototype for delivering mixed reality, evaluates its component technologies with respect to stability, responsiveness, and resource usage, and studies the requirements on the end and edge devices. The experimental evaluation revealed that transmission time is the largest component of end-to-end latency for the developed application; reducing that delay would have a significant impact on future deployments of such systems. The thesis also presents other bottlenecks and best practices found during the prototype's development, and discusses how to proceed from here.
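The latency finding above relies on being able to attribute end-to-end delay to individual stages. One common way to do that, sketched here with hypothetical stage names and callbacks rather than the thesis's actual prototype, is to timestamp each stage of one offloaded frame separately:

```python
import time

def measure_offload(frame, encode, send_recv, decode):
    """Time each stage of offloading one frame to an edge server.

    encode:    device-side serialization of the frame.
    send_recv: network round trip (request out, result back).
    decode:    device-side deserialization of the result.
    Returns (per-stage latencies in milliseconds, decoded result).
    """
    t0 = time.perf_counter()
    payload = encode(frame)
    t1 = time.perf_counter()
    reply = send_recv(payload)
    t2 = time.perf_counter()
    result = decode(reply)
    t3 = time.perf_counter()
    stages = {
        "encode_ms": (t1 - t0) * 1000,
        "transmit_ms": (t2 - t1) * 1000,  # includes server compute time
        "decode_ms": (t3 - t2) * 1000,
        "total_ms": (t3 - t0) * 1000,
    }
    return stages, result
```

Comparing `transmit_ms` against the encode/decode stages over many frames is what supports a claim like "transmission dominates end-to-end latency".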
46

Préparation à la conduite automatisée en Réalité Mixte / Get ready for automated driving with Mixed Reality

Sportillo, Daniele 19 April 2019 (has links)
Driving automation is an ongoing process that is radically changing how people travel and spend time in their cars during journeys. Conditionally automated vehicles free human drivers from monitoring and supervising the system and the driving environment, allowing them to perform secondary activities during automated driving but requiring them to resume the driving task when necessary.
For drivers, understanding the system's capabilities and limits, recognizing its notifications, and interacting with the vehicle in the appropriate way are crucial to ensuring their own safety and that of other road users. Because of the variety of unfamiliar driving situations the driver may encounter, traditional handover and training programs may not be sufficient to ensure an effective understanding of the interaction between the human driver and the vehicle during transitions of control. There is thus a need to let drivers experience these situations before their first ride. In this context, Mixed Reality provides potentially valuable learning and skill-assessment tools that allow drivers to familiarize themselves with the automated vehicle and interact with the novel equipment involved in a risk-free environment. If until a few years ago such platforms were destined for a niche audience, the democratization and large-scale spread of immersive devices has since made their adoption more accessible in terms of cost, ease of implementation, and setup. The objective of this thesis is to investigate the role of Mixed Reality in the acquisition of the competences needed for a driver's interaction with a conditionally automated vehicle. In particular, we explored the role of immersion along the Mixed Reality continuum by investigating different combinations of visualization and manipulation spaces and the correspondence between the virtual and the real world. Because of industrial constraints, we restricted the possible candidates to light systems that are portable, cost-effective, and accessible; we then analyzed the impact of the sensorimotor incoherences these systems may cause on the execution of tasks in the virtual environment. Starting from these analyses, we designed a training program aimed at the acquisition of the skills, rules, and knowledge necessary to operate a conditionally automated vehicle.
In addition, we proposed simulated road scenarios of increasing complexity to convey what it feels like to drive at this level of automation in different driving situations. Experimental user studies were conducted to determine the impact of immersion on learning and the pertinence of the designed training program and, on a larger scale, to validate the effectiveness of the entire training platform with self-reported and objective measures. Furthermore, the transfer of skills from the training environment to the real situation was assessed with test drives using both high-end driving simulators and actual vehicles on public roads.
47

Design, implementation and evaluation for continuous interaction in image-guided surgery

Trevisan, Daniela 03 March 2006 (has links)
Recent progress in the overlay and registration of digital information on the user's workspace in a spatially meaningful way has allowed mixed reality (MR) to become a more effective operational medium. In medical surgery, surgeons are provided with information such as the incision's location, regions to be avoided, and diseased tissues, while staying in and keeping their original working environment. The main objective of this thesis is to identify theoretical and practical bases for how mixed reality interfaces might provide support and augmentation that maximize the continuity of interaction. We start by proposing a set of design principles, organized in a design space, which allows continuity-of-interaction properties to be identified at an early stage of system development. Once the abstract design possibilities have been identified and a concrete design decision has been taken, an implementation strategy can be developed. Two approaches were investigated: markerless and marker-based. The latter is used to provide surgeons with guidance on an osteotomy task in maxillo-facial surgery. The evaluation process applies usability tests with users to validate the augmented guidance in different scenarios and to study the influence of different design variables on the final user interaction. As a result, we found a model describing the contribution of each variable to the continuity of the user interaction. We suggest that this methodology can be applied mainly to applications in which smooth connections and interactions with virtual and real environments are critical for the system, e.g. surgery, driving applications, or pilot simulations.
48

Mixed reality interactive storytelling : acting with gestures and facial expressions

Martin, Olivier 04 May 2007 (has links)
This thesis aims to answer the following question: "How could gestures and facial expressions be used to control the behavior of an interactive entertaining application?". An answer to this question is presented and illustrated in the context of mixed reality interactive storytelling. The first part focuses on the description of the Artificial Intelligence (AI) mechanisms that are used to model and control the behavior of the application. We present an efficient real-time hierarchical planning engine and show how active modalities (such as intentional gestures) and passive modalities (such as facial expressions) can be integrated into the planning algorithm, in such a way that the narrative (driven by the behavior of the virtual characters inside the virtual world) can effectively evolve in accordance with user interactions. The second part is devoted to the automatic recognition of user interactions. After briefly describing the implementation of a simple but robust rule-based gesture recognition system, the emphasis is placed on facial expression recognition. A complete solution integrating state-of-the-art techniques along with original contributions is presented. It includes face detection, facial feature extraction, and analysis. The proposed approach combines statistical learning and probabilistic reasoning to deal with the uncertainty associated with modeling facial expressions.
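The abstract does not detail the rule-based gesture recognizer. As an illustration of what "simple but robust rules" over tracked hand motion can mean — the labels, thresholds, and coordinate convention below are hypothetical, not taken from the thesis — a trajectory can be classified from its spatial extents alone:

```python
def classify_gesture(path, min_span=0.3):
    """Toy rule-based classifier for a 2-D hand trajectory.

    path: list of (x, y) hand positions in normalized screen
    coordinates. A trajectory that is long and predominantly
    horizontal is a 'swipe', predominantly vertical a 'raise',
    anything else 'none'.
    """
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    dx, dy = max(xs) - min(xs), max(ys) - min(ys)
    if dx >= min_span and dx > 2 * dy:
        return "swipe"
    if dy >= min_span and dy > 2 * dx:
        return "raise"
    return "none"
```

Rules like these trade expressiveness for robustness: they ignore speed and shape, so they rarely misfire on noisy tracking, which is the property the thesis highlights before moving on to the harder facial-expression problem.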
50

Utformning av projektorsystem / Designing of a projector system

Läbom, Malin January 2010 (has links)
This D-level thesis was carried out in collaboration with the company XM reality, located in Linköping, Sweden.
The company works with the development of various systems in the field of mixed reality. The task was to design two different hand-held projector systems. The products should be designed to fit into the hospital environment. Both devices have the same function but contain different components. The products are adapted to a new technology that the company has developed: a mixed reality application that makes it possible to project CT or MRI scans directly onto the patient's body. The goal of the project was to make two working models, to be used as prototypes in the medical research study that the project is part of.   A design strategy was created for the company to reflect its image and vision. In developing the design strategy, a number of core values were established: high-tech, professional, futuristic, exclusive, quality, durable, and compact, which should hold for all of the company's products. For the hospital product segment, the core values ergonomic and dynamic were added.   The work to develop the products began with literature studies on ergonomics, followed by physical user studies. During the working process, several different methods were used, both to make decisions and to support the creative design process.   The result is two ergonomically designed hand-held devices. The devices are designed to fit into the hospital environment and are slim and easy to use. Both products follow the design strategy and communicate the company's image. The physical models that were made contain functional components that can be replaced.