41

Spatial Interaction for Immersive Mixed-Reality Visualizations

Büschel, Wolfgang 02 June 2023
Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
42

System Support for Next-Gen Mobile Applications

Jiayi Meng (16512234) 10 July 2023
Next-generation (Next-Gen) mobile applications such as Extended Reality (XR), which encompasses Virtual/Augmented/Mixed Reality (VR/AR/MR), promise to revolutionize how people interact with technology and the world, ushering in a new era of immersive experiences. However, the hardware capacity of mobile devices will not grow proportionally with the escalating resource demands of these apps because of battery constraints. To bridge the gap, edge computing has emerged as a promising approach. It is further boosted by emerging 5G cellular networks, which promise low latency and high bandwidth. However, realizing the full potential of edge computing faces several fundamental challenges.

In this thesis, we first discuss a set of fundamental design challenges in supporting Next-Gen mobile applications via edge computing. These challenges extend across the three key system components involved: mobile clients, edge servers, and cellular networks. We then present how we address several of these challenges, including (1) how to coordinate mobile clients and edge servers to achieve the stringent QoE requirements of Next-Gen apps; (2) how to optimize the energy consumption of running Next-Gen apps on mobile devices to ensure a long-lasting user experience; and (3) how to model and generate the control-plane traffic of cellular networks to enable innovation in mobile network architectural design to support Next-Gen apps not only over 4G but also over 5G and beyond.

First, we present how to optimize latency in an edge-assisted XR system via mobile-client and edge-server co-design. Specifically, we exploit key insights about frame similarity in VR to build the first multiplayer edge-assisted VR design, Coterie. We demonstrate that, compared with prior work on single-player VR, Coterie reduces the per-player network load by 10.6x-25.7x and can easily support 4 players for high-quality VR apps on a Pixel 2 over 802.11ac, running at 60 FPS and under 16 ms responsiveness, without exhausting the finite wireless bandwidth.

Second, we focus on the energy perspective of running Next-Gen apps on mobile devices. We study a major limitation of a classic and de facto app energy management technique, reactive energy-aware app adaptation, which was first proposed two decades ago. We propose, design, and validate a new solution, the first proactive energy-aware app adaptation, that effectively tackles the limitation and achieves higher app QoE while meeting a given energy drain target. Compared with traditional approaches, our proactive solution improves the QoE by 44.8% (Pixel 2) and 19.2% (Moto Z3) under a low power budget.

Finally, we delve into the third system component, cellular networks. To facilitate innovation in mobile network architecture to better support Next-Gen apps, we characterize and model the control-plane traffic of cellular networks, which has been mostly overlooked by prior work. We first prove that traditional probability distributions widely used for modeling Internet traffic (e.g., Poisson, Pareto, and Weibull) cannot model the control-plane traffic, because of its much higher burstiness and the longer tails of its cumulative distributions. We then propose a two-level traffic model based on a semi-Markov state machine. We finally validate that the traces synthesized by our model closely match the real traces, with differences within 1.7%, 4.9%, and 0.8% for phones, connected cars, and tablets, respectively. We also show that our model can easily be adjusted from LTE to 5G, enabling further research on control-plane design and optimization for 4G/5G and beyond.
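To make the two-level traffic model concrete, a minimal sketch of such a semi-Markov generator in Python follows; every state, transition probability, dwell-time distribution, and event rate in it is an invented placeholder, not a fitted parameter from the thesis.

    import random

    TRANSITIONS = {  # upper level: Markov chain over device session states
        "idle":   [("active", 0.7), ("idle", 0.3)],
        "active": [("idle", 0.6), ("active", 0.4)],
    }

    def dwell_time(state):
        # Heavy-tailed dwell in "idle" (long tails), short exponential
        # dwells in "active" (bursts). Parameters are illustrative.
        if state == "idle":
            return random.paretovariate(1.5)
        return random.expovariate(0.5)  # mean 2 s

    def event_rate(state):
        # Control-plane events per second while in each state.
        return 0.1 if state == "idle" else 5.0

    def synthesize(duration_s, seed=0):
        random.seed(seed)
        t, state, events = 0.0, "idle", []
        while t < duration_s:
            stay = dwell_time(state)
            # Emit control-plane events as a Poisson process within the dwell.
            et = t + random.expovariate(event_rate(state))
            while et < t + stay:
                events.append((round(et, 3), state))
                et += random.expovariate(event_rate(state))
            t += stay
            # Draw the next state from the upper-level chain.
            r, acc = random.random(), 0.0
            for nxt, p in TRANSITIONS[state]:
                acc += p
                if r <= acc:
                    state = nxt
                    break
        return events

    print(synthesize(30.0)[:5])

The state-dependent rates produce the burstiness and the heavy-tailed idle dwell times produce the long tails, which is exactly the combination a single Poisson, Pareto, or Weibull fit cannot reproduce.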
43

Understanding Cross Reality Interaction in a Co-Design Task

Sathvik Reddy Vudumula (19120255) 13 July 2024
This study provides insights into the right combination of devices for a co-design task, in this case designing a game level. Cross-reality systems enable users to connect and collaborate across the reality-virtuality continuum, i.e., PC/desktop, augmented reality (AR), mixed reality (MR), and virtual reality (VR). Co-design involves two or more users coming together to ideate toward a clear objective and to build with the appropriate tools for collaboration, design, testing, and refinement for a broad audience. It also considers the time and resources used throughout the process, supported by constant, open communication. The study is based on an application that allows two users to connect in a pairwise modality (Desktop-Desktop, VR-VR, or Desktop-VR) and use the provided assets to design a game level. The users were given a layout of the level and the factors on which the design was to be based. The results are discussed, and conclusions and future work are presented based on them.
44

Convergence in mixed reality-virtuality environments: facilitating natural user behavior

Johansson, Daniel January 2012
This thesis addresses the subject of converging real and virtual environments into a combined entity that can facilitate physiologically compliant interfaces for the purpose of training. Based on the mobility and physiological demands of dismounted soldiers, the base assumption is that greater immersion means better learning and potentially higher training transfer. As the user can interface with the system in a natural way, more focus and energy can be devoted to training rather than to control itself. Identified requirements on a simulator relating to physical and psychological user aspects are support for unobtrusive and wireless use, high field of view, high-performance tracking, use of authentic tools, the ability to see other trainees, unrestricted movement, and physical feedback. Using only commercially available systems would be prohibitively expensive while not providing a solution fully optimized for this simulator's target group. For this reason, most of the systems that compose the simulator are custom made, both to accommodate physiological human aspects and to bring down costs. With the use of chroma keying, a cylindrical simulator room, and parallax-corrected, high field of view, video see-through head-mounted displays, the real and virtual realities are mixed. This facilitates the use of real tools as well as layering and manipulation of real and virtual objects. Furthermore, a novel omnidirectional floor, and an interface scheme for it, are developed to allow limitless physical walking to be used for virtual translation. A physically confined real space is thereby transformed into an infinite converged environment. The omnidirectional floor regulation algorithm can also provide physical feedback by adjusting the velocity to synchronize virtual obstacles with the surrounding simulator walls. As an alternative use of the simulator, an omnidirectional robotic platform has been developed that can match the user's movements. This can be utilized to increase situation awareness in telepresence applications.
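As a rough illustration of such a regulation loop, the Python sketch below drives the floor against the user's displacement to keep them near the room center; the proportional-derivative form and the gains are assumptions for illustration, not the controller from the thesis.

    def floor_velocity(user_pos, user_vel, kp=1.5, kd=0.8):
        # Return the 2-D floor velocity that counteracts the user's drift
        # from the room center, keeping physical walking unbounded.
        vx = -kp * user_pos[0] - kd * user_vel[0]
        vy = -kp * user_pos[1] - kd * user_vel[1]
        return (vx, vy)

    # e.g., user 0.4 m ahead of center, walking forward at 1.2 m/s:
    print(floor_velocity((0.4, 0.0), (1.2, 0.0)))  # floor drives backward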
45

Interactive Visualization of Underground Infrastructures via Mixed Reality

Sela, Sebastian, Gustafsson, Elliot January 2019
Visualization of underground infrastructures, such as pipes and cables, can be useful for infrastructure providers and can be utilized for both planning and maintenance. The purpose of this project is therefore to develop a system that provides interactive visualization of underground infrastructures using mixed reality. This requires positioning the user and virtual objects outdoors, as well as optimizing the system for outdoor use. To accomplish this, GPS coordinates must be known so that the system is capable of accurately drawing virtual underground infrastructures in real time in relation to the real world. To get GPS data into the system, a lightweight web server written in Python was developed to run on GPS-enabled Android devices; it responds to a given HTTP request with the current GPS coordinates of the device. A mixed reality application was developed in Unity and written in C# for the Microsoft HoloLens. It requests the coordinates via HTTP in order to draw virtual objects, commonly called holograms, representing the underground infrastructure. The application uses the Haversine formula to calculate distances from GPS coordinates. Data, including GPS coordinates, pertaining to real underground infrastructures have been provided by Halmstad Energi och Miljö. The result is therefore a HoloLens application which, in combination with a Python script, draws virtual objects based on real data (type of structure, size, and the corresponding coordinates) to enable the user to view the underground infrastructure. The user can customize the experience by choosing to display certain types of pipes or by changing the chosen navigational tool. Users can also view the information of valves, such as their ID, type, and coordinates. Although the developed application is fully functional, the visualization of holograms with the HoloLens outdoors is problematic: the brightness of natural light reduces the application's visibility, and a lack of trackable points in the surroundings causes the visualization to be wrongly displayed.
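For reference, the Haversine computation mentioned above can be sketched as follows; the application itself is written in C# for Unity, so this standalone Python version and its example coordinates are purely illustrative.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two (lat, lon) points
        # given in decimal degrees.
        R = 6371000.0  # mean Earth radius in meters
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    # Made-up coordinates near Halmstad: two points 0.001 degrees of
    # latitude apart are roughly 111 m apart.
    print(round(haversine_m(56.674, 12.857, 56.675, 12.857), 1))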
46

Time series image based lighting with mixed reality applications

Valente, Caio de Freitas 06 September 2016
Lighting estimation is essential for mixed reality applications that strive to integrate virtual elements into real scenes in a seamless fashion without sacrificing realism. A widely used method for lighting estimation is known as Image Based Lighting (IBL), which utilizes light probes to determine the incident light intensity within a scene. However, IBL estimates light incidence only for a given time and position. In this dissertation, we assess a lighting model based on a time series of light probe images, sampled sparsely in time, to render scenes at arbitrary times. New scenes containing virtual objects can then be rendered by using artificial light probe images, which are generated from the original light samples. Different types of interpolation and approximation functions were evaluated for modeling lighting behavior. The resulting images were assessed by volunteers for their impact on rendering quality in mixed reality applications. In addition to the methodology, we also developed a software plugin to simplify the use of temporally variable IBL, allowing realistic rendering of virtual objects at arbitrary times.
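As a minimal sketch of the approach, the Python snippet below synthesizes an artificial light probe for an arbitrary instant by linearly blending the two nearest captured probes; the dissertation evaluates several interpolation and approximation functions, of which plain linear blending of HDR pixel values is only the simplest, and the array shapes here are assumptions.

    import numpy as np

    def probe_at(t, times, probes):
        # times: sorted 1-D array of capture instants (seconds).
        # probes: HDR environment maps, shape (N, height, width, 3).
        i = np.searchsorted(times, t)
        if i == 0:
            return probes[0]
        if i >= len(times):
            return probes[-1]
        w = (t - times[i - 1]) / (times[i] - times[i - 1])  # blend weight in [0, 1]
        return (1.0 - w) * probes[i - 1] + w * probes[i]

    times = np.array([0.0, 600.0, 1200.0])                     # capture times (s)
    probes = np.random.rand(3, 64, 128, 3).astype(np.float32)  # stand-in HDR maps
    env = probe_at(300.0, times, probes)                       # halfway between samples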
47

Contribution to the study of projection-based systems for industrial applications in mixed reality

Cortes, Guillaume 24 October 2018
Mixed reality brings some advantages to industrial applications. It can, among others, facilitate visualizing and validating projects or assist operators during specific tasks. Projection-based systems (PBS), such as CAVEs or spatial augmented reality, provide a mixed reality environment enabling straightforward collaboration with external users. In this thesis, we aim at improving the usage of PBS for industrial applications by considering two main challenges: (1) improving the technical components of PBS and (2) improving the user experience when using PBS. As a first technical challenge, we address the improvement of the tracking component. We introduce an approach that increases the workspace of optical tracking systems by means of two methods: the first uses monocular tracking, while the second uses controlled cameras that follow the targets across the workspace. The resulting system provides acceptable performance for mixed reality applications while considerably increasing the workspace. Such a tracking system can make it easier to use large projection-based displays and can widen the range of available interactions. As a second technical challenge, we design an "all-in-one" headset for mobile spatial augmented reality on tangible objects. The headset gathers both a projection and a tracking system, embedded on the user's head. With such a system, users are able to move around tangible objects and manipulate them directly by hand while projecting virtual content over them. We illustrate our system with two industrial use cases: virtual prototyping and medical visualization. Finally, we address the challenge of improving the user experience when using PBS. We introduce a method that provides virtual embodiment and increases the spatial perception of users of PBS. To do so, we add the user's virtual shadow in immersive projection-based systems. The virtual shadow is dynamically mapped to the user's movements so that they perceive the shadow as their own. We then carry out an experiment to study the influence of the presence of the virtual shadow on the user experience and behavior.
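One common way to render such a shadow is planar projection of tracked body points onto the floor; the thesis does not detail its rendering technique here, so the Python sketch below, including the light position and the tracked point, is purely illustrative.

    import numpy as np

    def project_to_floor(point, light):
        # Project a 3-D point onto the y = 0 floor plane along the ray
        # from a point light through the tracked point (assumes the
        # light sits above the tracked point).
        p, l = np.asarray(point, float), np.asarray(light, float)
        t = l[1] / (l[1] - p[1])  # where the ray light->point crosses y = 0
        return l + t * (p - l)

    head = [0.2, 1.7, 0.5]  # tracked head position in meters (made up)
    lamp = [0.0, 3.0, 0.0]  # virtual light above the room (made up)
    print(project_to_floor(head, lamp))  # the head's shadow point on the floor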
48

Edge Computing for Mixed Reality

Lindqvist, Johan January 2019
Mixed reality, or augmented reality, where the real and the virtual worlds are combined, has seen an increase in interest in recent years with the release of tools like Google ARCore and Apple ARKit. Edge computing, where distributed computing resources are located near the end device at the edge of the network, is a paradigm that enables offloading of computing tasks with latency requirements to dedicated servers. This thesis studies how edge computing can be used to bring mixed reality capabilities to mobile end devices that lack native support for them. It presents a working prototype for delivering mixed reality, evaluates its constituent technologies with respect to stability, responsiveness, and resource usage, and studies the requirements on the end and edge devices. The experimental evaluation revealed that transmission time is the largest component of end-to-end latency for the developed application. Reducing that delay will have a significant impact on future deployments of such systems. The thesis also presents other bottlenecks and best practices found during the prototype's development, and how to proceed from here.
49

Get ready for automated driving with Mixed Reality

Sportillo, Daniele 19 April 2019
L'automatisation de la conduite est un processus en cours qui est en train de changer radicalement la façon dont les gens voyagent et passent du temps dans leur voiture pendant leurs déplacements. Les véhicules conditionnellement automatisés libèrent les conducteurs humains de la surveillance et de la supervision du système et de l'environnement de conduite, leur permettant d'effectuer des activités secondaires pendant la conduite, mais requièrent qu’ils puissent reprendre la tâche de conduite si nécessaire. Pour les conducteurs, il est essentiel de comprendre les capacités et les limites du système, d’en reconnaître les notifications et d'interagir de manière adéquate avec le véhicule pour assurer leur propre sécurité et celle des autres usagers de la route. À cause de la diversité des situations de conduite que le conducteur peut rencontrer, les programmes traditionnels de formation peuvent ne pas être suffisants pour assurer une compréhension efficace de l'interaction entre le conducteur humain et le véhicule pendant les transitions de contrôle. Il est donc nécessaire de permettre aux conducteurs de vivre ces situations avant leur première utilisation du véhicule. Dans ce contexte, la Réalité Mixte constitue un outil d'apprentissage et d'évaluation des compétences potentiellement efficace qui permettrait aux conducteurs de se familiariser avec le véhicule automatisé et d'interagir avec le nouvel équipement dans un environnement sans risque. Si jusqu'à il y a quelques années, les plates-formes de Réalité Mixte étaient destinées à un public de niche, la démocratisation et la diffusion à grande échelle des dispositifs immersifs ont rendu leur adoption plus accessible en termes de coût, de facilité de mise en œuvre et de configuration. L'objectif de cette thèse est d'étudier le rôle de la réalité mixte dans l'acquisition de compétences pour l'interaction d'un conducteur avec un véhicule conditionnellement automatisé. En particulier, nous avons exploré le rôle de l'immersion dans le continuum de la réalité mixte en étudiant différentes combinaisons d'espaces de visualisation et de manipulation et la correspondance entre le monde virtuel et le monde réel. Du fait des contraintes industrielles, nous avons limité les candidats possibles à des systèmes légers portables, peu chers et facilement accessibles; et avons analysé l’impact des incohérences sensorimotrices que ces systèmes peuvent provoquer sur la réalisation des activités dans l’environnement virtuel. À partir de ces analyses, nous avons conçu un programme de formation visant l'acquisition des compétences, des règles et des connaissances nécessaires à l'utilisation d'un véhicule conditionnellement automatisé. Nous avons proposé des scénarios routiers simulés de plus en plus complexes pour permettre aux apprenants d’interagir avec ce type de véhicules dans différentes situations de conduite. Des études expérimentales ont été menées afin de déterminer l'impact de l'immersion sur l'apprentissage, la pertinence du programme de formation conçu et, à plus grande échelle, de valider l'efficacité de l'ensemble des plateformes de formation par des mesures subjectives et objectives. Le transfert de compétences de l'environnement de formation à la situation réelle a été évalué par des essais sur simulateurs de conduite haut de gamme et sur des véhicules réels sur la voie publique. / Driving automation is an ongoing process that is radically changing how people travel and spend time in their cars during journeys. 
Conditionally automated vehicles free human drivers from the monitoring and supervision of the system and driving environment, allowing them to perform secondary activities during automated driving, but requiring them to resume the driving task if necessary. For the drivers, understanding the system’s capabilities and limits, recognizing the system’s notifications, and interacting with the vehicle in the appropriate way is crucial to ensuring their own safety and that of other road users. Because of the variety of unfamiliar driving situations that the driver may encounter, traditional handover and training programs may not be sufficient to ensure an effective understanding of the interaction between the human driver and the vehicle during transitions of control. Thus, there is the need to let drivers experience these situations before their first ride. In this context, Mixed Reality provides potentially valuable learning and skill assessment tools which would allow drivers to familiarize themselves with the automated vehicle and interact with the novel equipment involved in a risk-free environment. If until a few years ago these platforms were destined to a niche audience, the democratization and the large-scale spread of immersive devices since then has made their adoption more accessible in terms of cost, ease of implementation, and setup. The objective of this thesis is to investigate the role of Mixed Reality in the acquisition of competences needed for a driver’s interaction with a conditionally automated vehicle. In particular, we explored the role of immersion along the Mixed Reality continuum by investigating different combinations of visualization and manipulation spaces and the correspondence between the virtual and the real world. For industrial constraints, we restricted the possible candidates to light systems that are portable, cost-effective and accessible; we thus analyzed the impact of the sensorimotor incoherences that these systems may cause on the execution of tasks in the virtual environment. Starting from these analyses, we designed a training program aimed at the acquisition of skills, rules and knowledge necessary to operate a conditionally automated vehicle. In addition, we proposed simulated road scenarios with increasing complexity to suggest what it feels like to be a driver at this level of automation in different driving situations. Experimental user studies were conducted in order to determine the impact of immersion on learning and the pertinence of the designed training program and, on a larger scale, to validate the effectiveness of the entire training platform with self-reported and objective measures. Furthermore, the transfer of skills from the training environment to the real situation was assessed with test drives using both high-end driving simulators and actual vehicles on public roads.
50

Design, implementation and evaluation for continuous interaction in image-guided surgery

Trevisan, Daniela 03 March 2006
Recent progress in overlaying and registering digital information on the user's workspace in a spatially meaningful way has allowed mixed reality (MR) to become a more effective operational medium. In medical surgery, surgeons are provided with information such as the incision's location, regions to be avoided, diseased tissues, etc., while staying in, and keeping, their original working environment. The main objective of this thesis is to identify theoretical and practical bases for how mixed reality interfaces might provide support and augmentation that maximize the continuity of interaction. We start by proposing a set of design principles organized in a design space that allows continuity-of-interaction properties to be identified at an early stage of system development. Once the abstract design possibilities have been identified and a concrete design decision has been taken, an implementation strategy can be developed. Two approaches were investigated: markerless and marker-based. The latter is used to provide surgeons with guidance on an osteotomy task in maxillo-facial surgery. The evaluation process applies usability tests with users to validate the augmented guidance in different scenarios and to study the influence of different design variables on the final user interaction. As a result, we have found a model describing how each variable contributes to the continuity of the user interaction. We suggest that this methodology applies mainly to applications in which smooth connections and interactions with virtual and real environments are critical for the system, e.g., surgery, driving applications, or pilot simulations.
