51

Exploring the potential use of augmented reality in medical education

Orraryd, Pontus January 2017 (has links)
Human anatomy is traditionally taught using textbooks and dissections. With the advent of computer graphics, 3D applications have seen much more use in medical education around the world. Today, technologies such as Augmented Reality and Virtual Reality are on everybody's lips, and many are curious about what can be done with them. This thesis explores how Augmented Reality can be used in medical education to teach human anatomy. Two application prototypes were developed for the Microsoft HoloLens that together test different ways to use and interact with Augmented Reality. These applications were then evaluated in a case study with six medical students, from which a number of hypotheses were formulated.
52

Plasticity for user interfaces in mixed reality / Plasticité des interfaces de réalité mixte

Lacoche, Jérémy 21 July 2016 (has links)
Cette thèse s'intéresse à la plasticité des interfaces de Réalité Mixte (RM), c'est-à-dire les applications de Réalité Virtuelle (RV), Réalité Augmentée (RA) et de Virtualité Augmentée (AV). Il y a un réel engouement aujourd’hui pour ce type d’applications notamment grâce à la démocratisation des périphériques tels les lunettes et casques immersifs, les caméras de profondeur et les capteurs de mouvement. La Réalité Mixte trouve notamment ses usages dans le divertissement, la visualisation de données, la formation et la conception en ingénierie. La plasticité d'un système interactif est sa capacité à s'adapter aux contraintes matérielles et environnementales dans le respect de son utilisabilité. La continuité de l'utilisabilité d'une interface plastique est assurée quel que soit le contexte d'usage. Nous proposons ainsi des modèles et une solution logicielle nommée 3DPlasticToolkit afin de permettre aux développeurs de créer des interfaces de Réalité Mixtes plastiques. Tout d'abord, nous proposons trois modèles pour modéliser les sources d'adaptation : un modèle pour représenter les dispositifs d'interaction et les dispositifs d'affichage, un modèle pour représenter les utilisateurs et leurs préférences et un modèle pour représenter la structure et la sémantique des données. Ces sources d'adaptation vont être prises en compte par un processus d'adaptation qui va déployer dans une application les composants applicatifs adaptés au contexte d'usage grâce à des mécanismes de notation. Le déploiement de ces composants va permettre d'adapter à la fois les techniques d'interaction de l'application et également la présentation de son contenu. Nous proposons également un processus de redistribution qui va permettre à l'utilisateur final de changer la distribution des composants de son système sur différentes dimensions : affichage, utilisateur et plateforme. 
Ce processus va ainsi permettre à l'utilisateur de changer de plateforme dynamiquement ou encore de combiner plusieurs plateformes. L'implémentation de ces modèles dans 3DPlasticToolkit permet de fournir aux développeurs une solution prête à l'usage qui peut gérer les périphériques actuels de Réalité Mixte et qui inclut un certain nombre de techniques d'interaction, d'effets visuels et de métaphores de visualisation de données. / This PhD thesis focuses on plasticity for Mixed Reality (MR) user interfaces, which include Virtual Reality (VR), Augmented Reality (AR) and Augmented Virtuality (AV) applications. Today, there is a growing interest in this kind of application thanks to the generalization of devices such as head-mounted displays, depth sensors and tracking systems. Mixed Reality applications can be used in a wide variety of domains such as entertainment, data visualization, education and training, and engineering. Plasticity refers to the capacity of an interactive system to withstand variations of both the system's physical characteristics and the environment while preserving its usability. Usability continuity of a plastic interface is ensured whatever the context of use. Therefore, we propose a set of software models, integrated in a software solution named 3DPlasticToolkit, which allow any developer to create plastic MR user interfaces. First, we propose three models for modeling adaptation sources: a model for the description of display devices and interaction devices, a model for the description of the users and their preferences, and a model for the description of data structure and semantics. These adaptation sources are taken into account by an adaptation process that deploys application components adapted to the context of use thanks to a scoring system. The deployment of these application components lets the system adapt both the interaction techniques of the application and the presentation of its content.
We also propose a redistribution process that allows the end-user to change the distribution of his/her application components across multiple dimensions: display, user and platform. Thus, it allows the end-user to switch platforms dynamically or to combine multiple platforms. The implementation of these models in 3DPlasticToolkit provides developers with a ready-to-use solution for the development of plastic MR user interfaces. Indeed, the solution already integrates different display and interaction devices, and also includes multiple interaction techniques, visual effects and data visualization metaphors.
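The scoring-based adaptation described in this abstract can be illustrated with a minimal Python sketch. This is not 3DPlasticToolkit's actual API; the component names, capability sets and scoring rule below are invented for illustration. The idea is simply that each candidate interaction component is scored against the capabilities of the current platform, and the adaptation process deploys the best-scoring one.

```python
def score(component, context):
    """Toy scoring rule: -1 if a hard requirement is missing,
    otherwise the number of required capabilities the context provides."""
    if not component["requires"] <= context["capabilities"]:
        return -1
    return len(component["requires"] & context["capabilities"])

def select_component(candidates, context):
    """Pick the best-scoring candidate, or None if nothing fits."""
    best = max(candidates, key=lambda c: score(c, context))
    return best if score(best, context) >= 0 else None

# Illustrative candidates (names invented, not from the thesis)
candidates = [
    {"name": "ray_pointer",  "requires": {"6dof_controller", "head_tracking"}},
    {"name": "gaze_pointer", "requires": {"head_tracking"}},
    {"name": "touch_select", "requires": {"touchscreen"}},
]
hmd_context = {"capabilities": {"head_tracking", "6dof_controller"}}
print(select_component(candidates, hmd_context)["name"])  # prints "ray_pointer"
```

On a headset with tracked controllers the ray-pointing technique wins; on a display with only head tracking, the gaze pointer would be deployed instead, which is the kind of context-driven substitution the abstract describes.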
53

Bridging Physical and Virtual Learning: A Mixed-Reality System for Early Science

Yannier, Nesra 01 August 2016 (has links)
Tangible interfaces and mixed-reality environments have the potential to bring together the advantages of physical and virtual environments to improve children's learning and enjoyment. However, there are too few controlled experiments investigating whether interacting with physical objects in the real world, accompanied by interactive feedback, actually improves student learning compared to flat-screen interaction. Furthermore, we do not have a sufficient empirical basis for understanding how a mixed-reality environment should be designed to maximize learning and enjoyment for children. I created EarthShake, a mixed-reality game bridging physical and virtual worlds via a Kinect depth camera and a specialized computer vision algorithm, to help children learn physics. I have conducted three controlled experiments with EarthShake that have identified features that are more and less important to student learning and enjoyment. The first experiment examined the effects of observing physical phenomena and of collaboration (pairs versus solo), while the second replicated the effect of observing physical phenomena while also testing whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. The experiments revealed that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning (five times more) and enjoyment compared to equivalent screen-only versions, while adding simple physical control or changing group size (solo or pairs) has no significant effect. Furthermore, gesture analysis provides insight into why experiencing physical phenomena may enhance learning. My thesis work further investigates which features of a mixed-reality system yield better learning and enjoyment, especially in the context of limited experimental results from other mixed-reality learning research. 
Most mixed-reality environments, including tangible interfaces (where users manipulate physical objects to create an interactive output), currently emphasize open-ended exploration and problem solving, and are claimed to be most effective when used in a discovery-learning mode with minimal guidance. I investigated how critical interactive guidance and feedback (e.g., a predict/observe/explain prompting structure with interactive feedback) are to learning and enjoyment in the context of EarthShake. In a third experiment, I compared the learning and enjoyment outcomes of children interacting with a version of EarthShake that supports guided discovery, another version that supports exploration in discovery-learning mode, and a version that combines guided discovery and exploration. The results of the experiment reveal that the guided-discovery and combined conditions, in which children engage in guided-discovery activities with the predict-observe-explain cycle and interactive feedback, yield better explanation and reasoning. Thus, having guided discovery in a mixed-reality environment helps children formulate explanatory theories. However, the results also suggest that children are better able to activate explanatory theory in action when the guided-discovery activities are combined with exploratory activities in the mixed-reality system. Adding exploration to guided-discovery activities fosters not only better learning of the balance/physics principles, but also better application of those principles in a hands-on, constructive problem-solving task. My dissertation contributes to the literature on the effects of physical observation and mixed-reality interaction on students' science learning outcomes in learning technologies. 
Specifically, I have shown that a mixed-reality system (i.e., one combining physical and virtual environments) can lead to better learning and enjoyment outcomes than screen-only alternatives, based on different measures. My work also contributes to the literature on exploration and guided-discovery learning by demonstrating that guided-discovery activities in a mixed-reality setting can improve children's learning of fundamental principles by helping them formulate explanations. It also shows that combining an engineering approach with scientific thinking practice (by combining exploration and guided-discovery activities) can lead to better engineering outcomes, such as transfer to constructive hands-on activities in the real world. Lastly, my work contributes from a design perspective: it offers a new mixed-reality educational system that bridges physical and virtual environments to improve children's learning and enjoyment collaboratively, fostering productive dialogue and scientific curiosity in museum and school settings, developed through an iterative design methodology to ensure effective learning and enjoyment outcomes in these settings.
54

Far Above Far Beyond

Krug, Dominik January 2017 (has links)
This project aims to explore what the brand Land Rover could stand for in the future. The brand's rich history of exploring unconquered terrain earned it admiration and desirability all around the world. Further extending its reach onto new worlds is now within reach. In the 2030s the first manned missions to Mars are planned. The first arrivals will have exploration vehicles that are limited in range and capability. To really explore the planet, vehicles with greater off-road capability and range will be needed. These vehicles also need to allow expedition crews to stay inside comfortably for longer periods and to offer extended life support on multi-week journeys. With this project I am exploring possible answers to the harsh conditions on Mars. Furthermore, the vehicle and its features project a vision of what a future off-road driving experience could be.
55

Iluminação baseada em séries temporais de imagens com aplicações em realidade mista / Time series image based lighting with mixed reality applications

Caio de Freitas Valente 06 September 2016 (has links)
A estimação da iluminação é essencial para aplicações de realidade mista que se propõem a integrar elementos virtuais a cenas reais de maneira harmoniosa e sem a perda do realismo. Um dos métodos mais utilizados para fazer essa estimação é conhecido como iluminação baseada em imagens (Image Based Lighting - IBL), método que utiliza light probes para capturar a intensidade da iluminação incidente em uma cena. Porém, IBL estima a iluminação incidente apenas para um determinado instante e posição. Nesta dissertação, será avaliado um modelo de iluminação que utiliza séries temporais de imagens de light probes, obtidas de maneira esparsa em relação ao tempo, para renderizar cenas em instantes arbitrários. Novas cenas contendo objetos virtuais poderão ser renderizadas utilizando imagens de light probes artificiais, geradas a partir das amostras da iluminação originais. Diferentes funções de interpolação e aproximação são avaliadas para modelar o comportamento luminoso. As imagens finais produzidas pela metodologia também são verificadas por voluntários, de modo a determinar o impacto na qualidade de renderização em aplicações de realidade mista. Além da metodologia, foi desenvolvida uma ferramenta de software em forma de plugin para facilitar o uso de IBL temporalmente variável, permitindo assim a renderização realística de objetos virtuais para instantes arbitrários / Lighting estimation is essential for mixed reality applications that strive to integrate virtual elements into real scenes in a seamless fashion without sacrificing realism. A widely used method for lighting estimation is known as Image Based Lighting (IBL), which utilizes light probes to determine incident light intensity within a scene. However, IBL estimates light incidence only for a given time and position. In this dissertation, we assess a lighting model based on a time series of light probe images, obtained sparsely over time, to render scenes at arbitrary times. 
New scenes containing virtual objects can then be rendered using artificial light probe images generated from the original light samples. Different types of interpolation and approximation functions were evaluated for modeling lighting behavior. The resulting images were assessed by volunteers to determine the impact on rendering quality for mixed reality applications. In addition to the methodology, we also developed a software plugin to simplify the use of temporally variable IBL, allowing realistic rendering of virtual objects at arbitrary times.
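The core idea of temporally variable IBL, estimating lighting at a time between two captured light probes, can be sketched in a few lines. The snippet below assumes simple linear interpolation of HDR probe pixel values, one of the simplest of the interpolation functions a study like this might evaluate; the function and variable names are invented for illustration, not taken from the dissertation.

```python
import numpy as np

def interpolate_probe(probe_a, probe_b, t_a, t_b, t):
    """Linearly interpolate two HDR light-probe images captured at
    times t_a and t_b to estimate an artificial probe at time t."""
    w = (t - t_a) / (t_b - t_a)          # 0 at t_a, 1 at t_b
    return (1.0 - w) * probe_a + w * probe_b

# Two toy 2x2 RGB "probes": dim uniform light at t=0, brighter at t=10
probe_a = np.full((2, 2, 3), 0.2)
probe_b = np.full((2, 2, 3), 1.0)
mid = interpolate_probe(probe_a, probe_b, 0.0, 10.0, 5.0)  # halfway: ~0.6
```

The artificial probe `mid` could then be fed to a standard IBL renderer exactly as a captured probe would be; higher-order approximation functions would replace the linear blend without changing the surrounding pipeline.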
56

Real-Time Object Removal in Augmented Reality

Dahl, Tyler 01 June 2018 (has links)
Diminished reality, a sub-topic of augmented reality (in which digital information is overlaid on an environment), is the perceived removal of an object from an environment. Previous approaches to diminished reality used digital replacement techniques, inpainting, and multi-view homographies. However, few used a virtual representation of the real environment, limiting their domains to planar environments. This thesis provides a framework for achieving real-time diminished reality on an augmented reality headset. Using state-of-the-art hardware, we combine a virtual representation of the real environment with inpainting to remove existing objects from complex environments. Our work is found to be competitive with previous results, with a similar qualitative outcome under the limitations of available technology. Additionally, by implementing new texturing algorithms, a more detailed representation of the real environment is achieved.
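At its simplest, the inpainting step that makes an object "disappear" fills the masked object region from its surroundings. The NumPy sketch below is a toy diffusion-based fill, only a stand-in for the real-time approach in the thesis (which combines inpainting with a virtual scene model); all names are illustrative.

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=200):
    """Toy diffusion inpainting: repeatedly replace masked pixels with
    the mean of their 4-neighbours until the hole blends with the
    background. Grayscale image as a 2D float array, boolean mask."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()        # crude initial guess for the hole
    for _ in range(iters):
        up    = np.roll(img, -1, axis=0)
        down  = np.roll(img,  1, axis=0)
        left  = np.roll(img, -1, axis=1)
        right = np.roll(img,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        img[mask] = avg[mask]            # only masked pixels are updated
    return img

# Remove a bright 2x2 "object" from a flat grey background
scene = np.full((8, 8), 0.5)
scene[3:5, 3:5] = 1.0                    # the object to diminish
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True
restored = diffusion_inpaint(scene, mask)  # hole converges to background
```

Real diminished-reality systems use far stronger priors (e.g. patch-based or exemplar-based inpainting, or, as in this thesis, textured geometry of the real scene behind the object), but the structure is the same: detect a region, then synthesize plausible background over it every frame.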
57

Hawkeye : En titt in i framtidens räddningstjänstutrustning / Hawkeye : A peek into the future of emergency services

Nagy, Bence, Birath, Björn, Bergström, Edvin, Ahlroth, Erik, Sjöqvist, Jakob, Hjort, Jonathan, Tavakoli, Payam, Gunnarsson, Philip January 2020 (has links)
This report describes the bachelor's project Hawkeye. The project was carried out by eight students in the course TDDD96 at Linköping University. The goal was to create a support tool for the emergency services of the future using a Microsoft HoloLens 2, intended to be used by forward commanders. The main functions of Hawkeye included streaming video and sensor data to the rear command and offering a mixed-reality view that displays information about tools and patients for the forward command. The project's client was the research group Ubiquitous Computing and Analytics Group at the Department of Computer Science at Linköping University. The report describes how Hawkeye was developed and the methods applied by the project group, such as Scrum. The report concludes with the individual contributions written by the group members, all related to the Hawkeye project.
58

Non Visuals : Material exploration of non-visual interaction design

de Cabo Portugal, Sebastian January 2020 (has links)
Design is all about visuals, or that is what I have found during this thesis: from the process materials to the outcome, our main entry point to any problem is how we will solve it visually so it is understandable for the general user. This is problematic in itself, because we as humans understand the world around us using all our senses continuously, even if we forget this since visuals are so overpowering. There is a huge opportunity in exploring our other senses and bringing them back to technology, as can be seen in past work such as Tangible Interactions [1] and Natural User Interfaces [2]. At this moment in time, when new technologies like VR/AR and IoT are about to enter our lives and change them forever, the topic is more important than ever. We have already seen what happens when we turn humans into mere machines with some fingers as interactive inputs and barely any senses to process all the information given to us. Now that these technologies are still young and malleable, we can direct the future to where we want it instead of being guided by the technology itself. To do this we need to reimagine the design process: not reinvent the wheel, but add experts we currently leave behind and who, I argue, are key to unlocking these technologies, experts not only on the technological side of things but on the human side too, like physiotherapists and dancers. We should also include people we never think about when we think of VR, like visually impaired users, which could make these technologies inclusive from early on instead of as an afterthought, as we usually do. And not only people: we also need new materials to understand how we use our senses and to explore them differently, like bodystorming and improv theatre, because when things aren't visual, how do you sketch them? A sketch turns into a video about movement. 
The end result provides a wide breadth of examples of the kinds of innovations that can come from using these new design materials, opening new frontiers: from a VR game with no visuals whatsoever, to an AR location-based story game, to a home-sized multimodal operating system containing several different apps controlled through physical movement. The examples open up the space instead of closing in on a single solution. This is just the tip of the iceberg, and my hope is that others will be inspired to continue this journey that has just started, guiding the future toward one that is more technological and at the same time more human than ever before. What we know is that VR does not equate to Visual Reality.
59

Taxonomía de aplicaciones y videojuegos de realidad mixta / Taxonomy of applications and video games of mixed reality

Sánchez Requejo, Luis Felipe, Ramirez Reyes, Jam Carlo 22 September 2020 (has links)
La realidad mixta, unificación de la realidad virtual y la realidad aumentada, posee muchas expectativas debido a las grandes tendencias que han surgido desde su creación y la forma en que ha fusionado casi en su totalidad a nuestro mundo real con el mundo digital, con la proyección de objetos digitales que estimulan los sentidos, logrando obtener una percepción similar a los objetos del entorno real y llevando su uso a múltiples posibilidades. En la investigación se identificó la problemática que aborda la necesidad de profundizar sobre las propiedades de la realidad mixta y los objetivos a trazar para cubrir con dicha necesidad. Durante el proyecto se logró recolectar información y crear un catálogo sobre los distintos tipos de aplicaciones, dispositivos y soluciones tecnológicas implementadas referente a la tecnología. Se investigó acerca de la parte teórica de la realidad mixta en base a las distintas definiciones del autor seminal y de expertos en la materia, además de las definiciones sobre las tecnologías con funcionalidades similares. Además, se procesó la información obtenida, identificando los rubros de negocio en donde se desempeña la realidad Mixta. Finalmente, se logró crear una taxonomía de realidad mixta y un gráfico estadístico de la participación de cada rubro de negocio en el mercado, con el fin de poder tener un panorama claro de la adopción y el valor comercial de cada rubro en donde se ejerce dicha tecnología y que pueda ser utilizado como referencia para la creación de proyectos de tecnología. / Mixed reality, the unification of virtual reality and augmented reality, carries high expectations due to the major trends that have emerged since its inception and the way in which it has almost entirely merged the real world with the digital world through the projection of digital objects that stimulate the senses, achieving a perception similar to that of objects in the real environment and extending its use to multiple possibilities. 
The research identified the problem, namely the need to delve into the properties of mixed reality, and the objectives to be set in order to meet this need. During the project, information was collected to create a catalog of the different types of applications, devices and technological solutions implemented with this technology. The theoretical side of mixed reality was researched based on the different definitions given by the seminal author and by experts in the field, in addition to the definitions of technologies with similar functionalities. Moreover, the information obtained was processed, identifying the business areas in which mixed reality operates. Finally, it was possible to create a mixed reality taxonomy and a statistical graph of the participation of each business area in the market, giving a clear overview of the adoption and commercial value of each area where the technology is used, which can serve as a reference for the creation of future technology projects. / Tesis
60

From TeachLivE™ to the Classroom: Building Preservice Special Educators’ Proficiency with Essential Teaching Skills

Dawson, Melanie Rees 01 May 2016 (has links)
Preservice special education teachers need to develop essential teaching skills to competently address student academics and behavior in the classroom. TeachLivE™ is a sophisticated virtual simulation that has recently emerged in teacher preparation programs to supplement traditional didactic instruction and field experiences. Teacher educators can engineer scenarios in TeachLivE™ to cumulatively build in complexity, allowing preservice teachers to incrementally interleave target skills in increasingly difficult situations. The purpose of this study was to investigate the effectiveness of TeachLivE™ on preservice special education teachers' delivery of error correction, specific praise, and praise around in the virtual environment and in authentic classroom settings. Four preservice special educators who were teaching on provisional licenses in upper elementary language arts classrooms participated in this multiple-baseline study across target skills. Participants attended weekly TeachLivE™ sessions as a group, where they engaged in three short teaching turns followed by structured feedback. Participants' proficiency with the target skills was analyzed on three weekly assessments. First, participants' mastery of current and previous target skills was measured during their third teaching turn of the intervention session (i.e., TeachLivE™ training assessment). Next, participants' proficiency with all skills, including those that had not yet been targeted in intervention, was measured immediately following intervention sessions (i.e., TeachLivE™ comprehensive assessment). Finally, teachers submitted a weekly video recording of a lesson in their real classroom (i.e., classroom generalization assessment). Repeated practice and feedback in TeachLivE™ promoted participants' mastery of the essential target skills. 
Specifically, all participants demonstrated proficiency with error correction, specific praise, and praise around on both the TeachLivETM training assessment and the more complex TeachLivETM comprehensive assessment, with a strong pattern of generalized performance to authentic classroom settings. Participants maintained proficiency with the majority of the target skills in both environments when assessed approximately one month after intervention was discontinued. Implications of the study are discussed, including the power of interleaved practice in TeachLivETM and how generalization and maintenance may be impacted by the degree of alignment between virtual and real teaching scenarios.