  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Erfahrungen zur Nutzung von Mixed und Virtual Reality im Lehralltag an der HTW Dresden

Göbel, Gunther, Sonntag, Ralph January 2017 (has links)
The use of immersive systems, i.e. Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) systems, in teaching is an obvious step. First-hand interactive experience of an activity is always preferable to purely receptive observation or verbal explanation. Nevertheless, teaching today is largely passive, even in lab courses and exercises; independent practice, such as operating a plant or synthesizing a chemical compound oneself, can often be offered only rarely for reasons of time, availability, safety concerns, and cost. Until now, the adoption of the aforementioned immersive technologies has been held back not only by the considerable effort required to create suitable simulations, but above all by the hardware expense combined with a less than optimal degree of immersion, which left few practical options. Giving every student sufficient individual time in a large and expensive CAVE environment to operate virtual technical equipment is impractical for larger student numbers. [... from the introduction]
32

Spatial Interaction for Immersive Mixed-Reality Visualizations

Büschel, Wolfgang 02 June 2023 (has links)
Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
33

System Support for Next-Gen Mobile Applications

Jiayi Meng (16512234) 10 July 2023 (has links)
Next-generation (Next-Gen) mobile applications, Extended Reality (XR), which encompasses Virtual/Augmented/Mixed Reality (VR/AR/MR), promise to revolutionize how people interact with technology and the world, ushering in a new era of immersive experiences. However, the hardware capacity of mobile devices will not grow proportionally with the escalating resource demands of the mobile apps due to their battery constraint. To bridge the gap, edge computing has emerged as a promising approach. It is further boosted by emerging 5G cellular networks, which promise low latency and high bandwidth. However, realizing the full potential of edge computing faces several fundamental challenges.

In this thesis, we first discuss a set of fundamental design challenges in supporting Next-Gen mobile applications via edge computing. These challenges extend across the three key system components involved: mobile clients, edge servers, and cellular networks. We then present how we address several of these challenges, including (1) how to coordinate mobile clients and edge servers to achieve stringent QoE requirements for Next-Gen apps; (2) how to optimize energy consumption of running Next-Gen apps on mobile devices to ensure long-lasting user experience; and (3) how to model and generate control-plane traffic of cellular networks to enable innovation on mobile network architectural design to support Next-Gen apps not only over 4G but also over 5G and beyond.

First, we present how to optimize the latency in edge-assisted XR systems via mobile-client and edge-server co-design. Specifically, we exploit key insights about frame similarity in VR to build the first multiplayer edge-assisted VR design, Coterie. We demonstrate that, compared with the prior work on single-player VR, Coterie reduces the per-player network load by 10.6X to 25.7X, and can easily support 4 players for high-quality VR apps on Pixel 2 over 802.11ac running at 60 FPS and under 16 ms responsiveness without exhausting the finite wireless bandwidth.

Second, we focus on the energy perspective of running Next-Gen apps on mobile devices. We study a major limitation of a classic and de facto app energy management technique, reactive energy-aware app adaptation, which was first proposed two decades ago. We propose, design, and validate a new solution, the first proactive energy-aware app adaptation, that effectively tackles the limitation and achieves higher app QoE while meeting a given energy drain target. Compared with traditional approaches, our proactive solution improves the QoE by 44.8% (Pixel 2) and 19.2% (Moto Z3) under low power budget.

Finally, we delve into the third system component, cellular networks. To facilitate innovation in mobile network architecture to better support Next-Gen apps, we characterize and model the control-plane traffic of cellular networks, which has been mostly overlooked by prior work. To model the control-plane traffic, we first prove that traditional probability distributions that have been widely used for modeling Internet traffic (e.g., Poisson, Pareto, and Weibull) cannot model the control-plane traffic due to the much higher burstiness and longer tails in the cumulative distributions of the control-plane traffic. We then propose a two-level state-machine-based traffic model based on the Semi-Markov model. We finally validate that the traces synthesized by our model achieve small differences compared with the real traces, i.e., within 1.7%, 4.9%, and 0.8% for phones, connected cars, and tablets, respectively. We also show that our model can be easily adjusted from LTE to 5G, enabling further research on control-plane design and optimization for 4G/5G and beyond.
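The two-level state-machine idea described above can be sketched as follows. This is a minimal illustration only: the states, transition structure, and sojourn-time distributions below are assumptions chosen to show the mechanism (an upper-level semi-Markov state machine with heavy-tailed dwell times, plus a lower level generating bursty signaling events within a state), not the fitted parameters from the thesis.

```python
import random

# Upper level: device connection states and (assumed) transition structure.
STATES = ["idle", "connected"]
TRANSITIONS = {"idle": {"connected": 1.0}, "connected": {"idle": 1.0}}

def sojourn(state, rng):
    # Assumed dwell-time distributions: heavy-tailed (Pareto) while
    # connected, exponential while idle.
    return rng.paretovariate(1.5) if state == "connected" else rng.expovariate(0.5)

def synthesize(duration, seed=0):
    """Generate a sorted list of (timestamp, event) control-plane events."""
    rng = random.Random(seed)
    t, state, events = 0.0, "idle", []
    while t < duration:
        dwell = sojourn(state, rng)
        if state == "connected":
            # Lower level: bursty signaling events within the dwell period.
            n = max(1, int(rng.expovariate(1.0 / 3)))
            for _ in range(n):
                events.append((t + dwell * rng.random(), "signaling"))
        t += dwell
        nxt = rng.choices(list(TRANSITIONS[state]),
                          weights=list(TRANSITIONS[state].values()))[0]
        events.append((t, f"{state}->{nxt}"))
        state = nxt
    return sorted(events)

trace = synthesize(100.0)
```

A real model would fit the per-state distributions and transition probabilities to measured traces per device class (phones, connected cars, tablets).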
34

Iluminação baseada em séries temporais de imagens com aplicações em realidade mista / Time series image based lighting with mixed reality applications

Valente, Caio de Freitas 06 September 2016 (has links)
Lighting estimation is essential for mixed reality applications that strive to integrate virtual elements into real scenes seamlessly, without sacrificing realism. A widely used method for lighting estimation is known as Image Based Lighting (IBL), which utilizes light probes to determine incident light intensity within a scene. However, IBL estimates light incidence only for a given time and position. In this dissertation, we assess a lighting model based on a time series of light probe images, obtained sparsely over time, to render scenes at arbitrary times. New scenes containing virtual objects can then be rendered by using artificial light probe images, which are generated from the original light samples. Different types of interpolation and approximation functions were evaluated for modeling lighting behavior. The resulting images were assessed by volunteers to determine the impact on rendering quality in mixed reality applications. In addition to the methodology, we also developed a software plugin to simplify the use of temporally variable IBL, allowing realistic rendering of virtual objects at arbitrary times.
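The core idea of generating an artificial light probe for an arbitrary time can be sketched with the simplest of the candidate functions, linear interpolation between the two temporally nearest captured probes. This is a hedged sketch under simplifying assumptions: probe images are represented here as flat lists of RGB floats, and the thesis compares several interpolation and approximation functions of which this is only one.

```python
def interpolate_probe(probes, t):
    """Estimate a light probe image at time t from sparse samples.

    `probes` is a time-sorted list of (timestamp, image) pairs, where each
    image is a flat list of RGB float intensities. Outside the sampled
    interval, the nearest captured probe is returned (an assumption).
    """
    if t <= probes[0][0]:
        return probes[0][1]
    if t >= probes[-1][0]:
        return probes[-1][1]
    for (t0, img0), (t1, img1) in zip(probes, probes[1:]):
        if t0 <= t <= t1:
            # Linear blend of per-pixel intensities between the two probes.
            w = (t - t0) / (t1 - t0)
            return [(1 - w) * a + w * b for a, b in zip(img0, img1)]

# Example: midway between a dark and a bright probe.
probes = [(0.0, [0.0, 0.0, 0.0]), (10.0, [1.0, 1.0, 1.0])]
mid = interpolate_probe(probes, 5.0)  # [0.5, 0.5, 0.5]
```

In practice the probes would be HDR environment maps and the blend would be applied per pixel in linear color space; smoother approximation functions can better capture gradual lighting changes such as the course of the sun.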
35

Contribution to the study of projection-based systems for industrial applications in mixed reality / Contribution à l’étude des systèmes de projection pour des applications industrielles en réalité mixte

Cortes, Guillaume 24 October 2018 (has links)
Mixed reality brings some advantages to industrial applications. It can, among others, facilitate visualizing and validating projects or assist operators during specific tasks. Projection-based systems (PBS), such as CAVEs or spatial augmented reality, provide a mixed reality environment enabling straightforward collaboration with external users. In this thesis, we aim at improving the usage of PBS for industrial applications by considering two main challenges: (1) improving the technical components of PBS and (2) improving the user experience when using PBS. As a first technical challenge, we propose to address the improvement of the tracking component. We introduce an approach that enables increasing the workspace of optical tracking systems by using two methods. As a first method, we propose to use monocular tracking. As a second method, we propose to use controlled cameras that follow the targets across the workspace. The resulting system provides acceptable performance for mixed reality applications while considerably increasing the workspace. Such a tracking system can make it easier to use large projection-based displays and can widen the range of available interactions. As a second technical challenge, we design an "all-in-one" headset for mobile spatial augmented reality on tangible objects. The headset gathers both a projection and a tracking system that are embedded on the user's head. With such a system, the users are able to move around tangible objects and to manipulate them directly by hand while projecting virtual content over them. We illustrate our system with two industrial use cases: virtual prototyping and medical visualization. Finally, we address the challenge that aims at improving the user experience when using PBS. We introduce a method that provides virtual embodiment and increases the spatial perception of the users when using PBS. To do so, we add the user's virtual shadow in immersive projection-based systems. The virtual shadow is dynamically mapped to users' movements in order to make them perceive the shadow as if it was their own. We then carry out an experiment to study the influence of the presence of the virtual shadow on the user experience and behavior.
36

Edge Computing for Mixed Reality / Blandad virtuell verklighet med stöd av edge computing

Lindqvist, Johan January 2019 (has links)
Mixed reality, or augmented reality, where the real and the virtual worlds are combined, has seen an increase in interest in recent years with the release of tools like Google ARCore and Apple ARKit. Edge computing, where distributed computing resources are located near the end device at the edge of the network, is a paradigm that enables offloading of computing tasks with latency requirements to dedicated servers. This thesis studies how edge computing can be used to bring mixed reality capabilities to mobile end devices that lack native support for it. It presents a working prototype for delivering mixed reality, evaluates the different technologies in it with respect to stability, responsiveness, and resource usage, and studies the requirements on the end and edge devices. The experimental evaluation revealed that transmission time accounts for the largest share of end-to-end latency in the developed application. Reducing that delay will have a significant impact on future deployments of such systems. The thesis also presents other bottlenecks and best practices found during the prototype's development, and how to proceed from here.
37

Préparation à la conduite automatisée en Réalité Mixte / Get ready for automated driving with Mixed Reality

Sportillo, Daniele 19 April 2019 (has links)
Driving automation is an ongoing process that is radically changing how people travel and spend time in their cars during journeys. Conditionally automated vehicles free human drivers from the monitoring and supervision of the system and driving environment, allowing them to perform secondary activities during automated driving, but requiring them to resume the driving task if necessary. For the drivers, understanding the system's capabilities and limits, recognizing the system's notifications, and interacting with the vehicle in the appropriate way is crucial to ensuring their own safety and that of other road users. Because of the variety of unfamiliar driving situations that the driver may encounter, traditional handover and training programs may not be sufficient to ensure an effective understanding of the interaction between the human driver and the vehicle during transitions of control. Thus, there is the need to let drivers experience these situations before their first ride. In this context, Mixed Reality provides potentially valuable learning and skill assessment tools which would allow drivers to familiarize themselves with the automated vehicle and interact with the novel equipment involved in a risk-free environment. If until a few years ago these platforms were destined to a niche audience, the democratization and the large-scale spread of immersive devices since then has made their adoption more accessible in terms of cost, ease of implementation, and setup. The objective of this thesis is to investigate the role of Mixed Reality in the acquisition of competences needed for a driver's interaction with a conditionally automated vehicle. In particular, we explored the role of immersion along the Mixed Reality continuum by investigating different combinations of visualization and manipulation spaces and the correspondence between the virtual and the real world. For industrial constraints, we restricted the possible candidates to light systems that are portable, cost-effective, and accessible; we thus analyzed the impact of the sensorimotor incoherences that these systems may cause on the execution of tasks in the virtual environment. Starting from these analyses, we designed a training program aimed at the acquisition of the skills, rules, and knowledge necessary to operate a conditionally automated vehicle. In addition, we proposed simulated road scenarios with increasing complexity to suggest what it feels like to be a driver at this level of automation in different driving situations. Experimental user studies were conducted in order to determine the impact of immersion on learning and the pertinence of the designed training program and, on a larger scale, to validate the effectiveness of the entire training platform with self-reported and objective measures. Furthermore, the transfer of skills from the training environment to the real situation was assessed with test drives using both high-end driving simulators and actual vehicles on public roads.
38

Mixed reality interactive storytelling : acting with gestures and facial expressions

Martin, Olivier 04 May 2007 (has links)
This thesis aims to answer the following question: "How can gestures and facial expressions be used to control the behavior of an interactive entertaining application?". An answer to this question is presented and illustrated in the context of mixed reality interactive storytelling. The first part focuses on the description of the Artificial Intelligence (AI) mechanisms that are used to model and control the behavior of the application. We present an efficient real-time hierarchical planning engine and show how active modalities (such as intentional gestures) and passive modalities (such as facial expressions) can be integrated into the planning algorithm, in such a way that the narrative (driven by the behavior of the virtual characters inside the virtual world) can effectively evolve in accordance with user interactions. The second part is devoted to the automatic recognition of user interactions. After briefly describing the implementation of a simple but robust rule-based gesture recognition system, the emphasis is placed on facial expression recognition. A complete solution integrating state-of-the-art techniques along with original contributions is presented. It includes face detection, facial feature extraction, and analysis. The proposed approach combines statistical learning and probabilistic reasoning in order to deal with the uncertainty associated with the process of modeling facial expressions.
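A "simple but robust" rule-based gesture recognizer of the kind mentioned above can be sketched as a few threshold rules over a tracked hand trajectory. This is a toy illustration only: the gesture names, the net-displacement rule, and the 0.1 threshold are assumptions, not the thesis's actual rule set.

```python
def classify_gesture(trajectory):
    """Classify a gesture from a list of (x, y) hand positions.

    Rules (assumed): compare the net displacement of the hand between the
    first and last tracked positions; small movements map to no gesture,
    otherwise the dominant axis and its sign pick the gesture label.
    """
    if len(trajectory) < 2:
        return "none"
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    if abs(dx) < 0.1 and abs(dy) < 0.1:
        return "none"  # below the (assumed) intentional-movement threshold
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "raise" if dy > 0 else "lower"

# Example: a rightward hand movement.
label = classify_gesture([(0.0, 0.0), (0.5, 0.05), (1.0, 0.1)])  # "swipe_right"
```

Such hand-written rules trade expressiveness for predictability, which is why the thesis reserves statistical learning for the harder problem of facial expression recognition.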
39

Utformning av projektorsystem / Designing of a projector system

Läbom, Malin January 2010 (has links)
This Master's-level (D-level) thesis was carried out in collaboration with the company XM reality, located in Linköping, Sweden. The company works on developing various systems in the field of mixed reality.   The task was to design two different hand-held projector systems. The products should be designed to fit into the hospital environment. Both products have the same function but contain different components. The products are adapted to a new technology that the company has developed: a mixed reality application that makes it possible to project CT or MRI scans directly onto the patient's body. The goal of the project was to make two functioning models, to be used as prototypes in the medical research study the project is a part of.   A design strategy was created for the company to reflect its image and vision. In the process of developing the design strategy, a number of core values were established: high-tech, professional, futuristic, exclusive, quality, durable, and compact, which should apply to all the company's products. For the product segment of hospital products, the core values ergonomic and dynamic were added.   The work to develop the products began with ergonomic literature studies, followed up by physical user studies. During the working process, several different methods were used, not only to make decisions but also to support the creative design process.   The result is two ergonomically designed hand-held devices. The products are designed to fit into the hospital environment. They are slim and easy to use. Both products follow the design strategy and communicate the company's image. The physical models that were made contain functional components that can be replaced.
40

Virtual reality platform modelling and design for versatile electric wheelchair simulation in an enabled environment.

Steyn, Nico. January 2014 (has links)
D. Tech. Electrical Engineering. / This thesis develops a wheelchair motion platform through which its user can be immersed in a simulated world. This simulated world must closely match the real-world spaces that a disabled person using a wheelchair as a mobility aid will encounter. The wheelchair to be accommodated in the simulation environment may have multiple possible mechanical constructions. The wheelchair used on the simulation platform needs to be driven by a combination of two wheels, as is generally found on manual and electric wheelchairs. The final objective was to design the simulation as closely as possible to the real world in order to use the VS-1 motion platform for architectural evaluations, possible training, and general research in the field of simulators used in an enabled environment.
