  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Exploring a chromakeyed augmented virtual environment for viability as an embedded training system for military helicopters

Lennerton, Mark J. 06 1900 (has links)
Approved for public release, distribution is unlimited / Once the military helicopter pilot deploys aboard a naval vessel he leaves behind all training platforms, short of the actual aircraft, that present enough fidelity for him to maintain the highest levels of readiness. To that end, this thesis takes a preliminary step in creating a trainer that places the pilot in an immersive and familiar environment to exercise myriad piloting tasks as faithfully and as rigorously as in actual flight. The focus of this thesis is to assess the viability of a chromakeyed augmented virtual environment (ChrAVE) trainer embedded in a helicopter for use in maintaining certain perishable skills. Specifically, this thesis addresses the task of helicopter low-level land navigation. The ChrAVE was developed to substantiate the viability of having embedded trainers in helicopters. The ChrAVE comprises commercial off-the-shelf (COTS) equipment on a transportable cart. In determining whether a system such as the ChrAVE is viable as a laboratory for continued training in a virtual environment, the opinions of actual pilots who were tasked with realistic workloads were used. Additionally, empirical data were collected and evaluated according to the subject pool's thresholds for acceptable low-level navigation performance. / Captain, United States Marine Corps
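The chroma-key compositing at the heart of the ChrAVE can be illustrated with a short sketch: pixels close to a key colour (e.g. a green screen over the cockpit windows) are replaced by imagery from a virtual scene. This is an illustrative toy, not the thesis's actual implementation; the key colour and tolerance are assumptions.

```python
import numpy as np

def chroma_key(camera, virtual, key=(0, 255, 0), tol=80):
    """Composite two frames: where the camera image is within `tol` of the
    key colour, show the virtual scene; elsewhere keep the real image."""
    cam = camera.astype(np.int16)  # avoid uint8 wrap-around when subtracting
    dist = np.linalg.norm(cam - np.array(key, dtype=np.int16), axis=-1)
    mask = dist < tol              # True where the screen colour dominates
    out = camera.copy()
    out[mask] = virtual[mask]
    return out

# Toy frames: a 2x2 image, top row pure green (keyed out), bottom row grey.
camera = np.array([[[0, 255, 0], [0, 255, 0]],
                   [[100, 100, 100], [100, 100, 100]]], dtype=np.uint8)
virtual = np.full((2, 2, 3), 200, dtype=np.uint8)
result = chroma_key(camera, virtual)
```

A real trainer would apply this per video frame, with the virtual frame rendered from the helicopter's simulated pose.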
152

Spatial Analytic Interfaces

Ens, Barrett January 2016 (has links)
We propose the concept of spatial analytic interfaces (SAIs) as a tool for performing in-situ, everyday analytic tasks. Mobile computing is now ubiquitous and provides access to information at nearly any time or place. However, current mobile interfaces do not easily enable the type of sophisticated analytic tasks that are now well-supported by desktop computers. Conversely, desktop computers, with large available screen space to view multiple data visualizations, are not always available at the ideal time and place for a particular task. Spatial user interfaces, leveraging state-of-the-art miniature and wearable technologies, can potentially provide intuitive computer interfaces to deal with the complexity needed to support everyday analytic tasks. These interfaces can be implemented with versatile form factors that provide mobility for doing such taskwork in-situ, that is, at the ideal time and place. We explore the design of spatial analytic interfaces for in-situ analytic tasks that leverage the benefits of an upcoming generation of lightweight, see-through, head-worn displays. We propose how such a platform can meet the five primary design requirements for personal visual analytics: mobility, integration, interpretation, multiple views and interactivity. We begin with a design framework for spatial analytic interfaces based on a survey of existing designs of spatial user interfaces. We then explore how to best meet these requirements through a series of design concepts, user studies and prototype implementations. Our result is a holistic exploration of the spatial analytic concept on a head-worn display platform. / October 2016
153

MusE-XR: musical experiences in extended reality to enhance learning and performance

Johnson, David 23 July 2019 (has links)
Integrating state-of-the-art sensory and display technologies with 3D computer graphics, extended reality (XR) affords capabilities to create enhanced human experiences by merging virtual elements with the real world. To better understand how Sound and Music Computing (SMC) can benefit from the capabilities of XR, this thesis presents novel research on the design of musical experiences in extended reality (MusE-XR). Integrating XR with research on computer assisted musical instrument tutoring (CAMIT) as well as New Interfaces for Musical Expression (NIME), I explore the MusE-XR design space to contribute to a better understanding of the capabilities of XR for SMC. The first area of focus in this thesis is the application of XR technologies to CAMIT, enabling extended reality enhanced musical instrument learning (XREMIL). A common approach in CAMIT is the automatic assessment of musical performance. Generally, these systems focus on the aural quality of the performance, but emerging XR-related sensory technologies afford the development of systems to assess playing technique. Employing these technologies, the first contribution of this thesis is a CAMIT system for the automatic assessment of pianist hand posture using depth data. Hand posture assessment is performed through an applied computer vision (CV) and machine learning (ML) pipeline that classifies a pianist’s hands, captured by a depth camera, into one of three posture classes. Assessment results from the system are intended to be integrated into a CAMIT interface to deliver feedback to students regarding their hand posture. One method to present the feedback is through real-time visual feedback (RTVF) displayed on a standard 2D computer display, but this method is limited by the need for the student to constantly shift focus between the instrument and the display.
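The general shape of such a depth-based posture classifier can be sketched in miniature with a nearest-centroid model over simple depth statistics. The features, class names and data below are hypothetical illustrations, not the thesis's actual pipeline.

```python
import numpy as np

def depth_features(depth_patch):
    """Toy features from a depth patch of the hand region:
    mean depth, depth spread, and a crude vertical gradient."""
    return np.array([depth_patch.mean(),
                     depth_patch.std(),
                     (depth_patch[-1] - depth_patch[0]).mean()])

def train_centroids(patches, labels):
    """One centroid per posture class, from labelled training patches."""
    feats = np.array([depth_features(p) for p in patches])
    return {c: feats[np.array(labels) == c].mean(axis=0) for c in set(labels)}

def classify(patch, centroids):
    """Assign the patch to the class with the nearest feature centroid."""
    f = depth_features(patch)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Tiny synthetic example: one training patch per hypothetical class.
flat = np.full((4, 4), 10.0)                             # level hand
sloped = np.tile(np.arange(4.0).reshape(4, 1), (1, 4))   # dropped wrist
centroids = train_centroids([flat, sloped], ["correct", "low_wrist"])
label = classify(np.full((4, 4), 10.2), centroids)
```

A production system would of course use richer features and a trained classifier; the point is only the patch-to-features-to-class flow.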
XR affords new methods to potentially address this limitation through capabilities to directly augment a musical instrument with RTVF by overlaying 3D virtual objects on the instrument. Due to limited research evaluating the effectiveness of this approach, it is unclear how the added cognitive demands of RTVF in virtual environments (VEs) affect the learning process. To fill this gap, the second major contribution of this thesis is the first known user study evaluating the effectiveness of XREMIL. Results of the study show that an XR environment with RTVF improves participant performance during training, but may lead to decreased improvement after the training. On the other hand, interviews with participants indicate that the XR environment increased their confidence, leading them to feel more engaged during training. In addition to enhancing CAMIT, the second area of focus in this thesis is the application of XR to NIME, enabling virtual environments for musical expression (VEME). Development of VEME requires a workflow that integrates XR development tools with existing sound design tools. This presents numerous technical challenges, especially to novice XR developers. To simplify this process and facilitate VEME development, the third major contribution of this thesis is an open source toolkit, called OSC-XR. OSC-XR makes VEME development more accessible by providing developers with readily available Open Sound Control (OSC) virtual controllers. I present three new VEMEs, developed with OSC-XR, to identify affordances and guidelines for VEME design. The insights gained through these studies exploring the application of XR to musical learning and performance lead to new affordances and guidelines for the design of effective and engaging MusE-XR. / Graduate
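Open Sound Control, the protocol behind OSC-XR's virtual controllers, is a simple binary message format: a padded address string, a padded type-tag string, then big-endian arguments. A minimal encoder sketch follows; the address name is made up for illustration (real projects would typically use an OSC library rather than hand-encoding).

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC 1.0 message with float32 arguments, e.g. the value
    of a virtual slider controller (the address below is hypothetical)."""
    tags = "," + "f" * len(floats)
    payload = osc_pad(address.encode()) + osc_pad(tags.encode())
    for v in floats:
        payload += struct.pack(">f", v)  # OSC floats are big-endian float32
    return payload

msg = osc_message("/sliderExample", 0.75)
```

Such a message would be sent over UDP to a sound engine (e.g. a synthesis environment listening for OSC) each time the virtual controller moves.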
154

Modèles et outils pour la conception de Learning Games en Réalité Mixte / Models and Tools for Designing Mixed Reality Learning Games

Orliac, Charlotte 20 September 2013 (has links)
Les Learning Games sont des environnements d’apprentissage, souvent informatisés, qui utilisent des ressorts ludiques pour catalyser l’attention des apprenants et ainsi faciliter leur apprentissage. Ils ont des atouts indéniables mais présentent également certaines limites, comme des situations d’apprentissage trop artificielles. Ces limites peuvent être dépassées par l’intégration d’interactions en Réalité Mixte dans les Learning Games, que nous appelons alors des Mixed Reality Learning Games (MRLG). La Réalité Mixte, qui combine environnements numériques et objets réels, ouvre de nouvelles possibilités d’interactions et d’apprentissage qui gomment les limites précédentes et qu’il faut repérer et explorer. Dans ce contexte, nous nous intéressons au processus de conception des MRLG. Dans un premier temps, nous présentons une étude sur l’utilisation de la Réalité Mixte dans les domaines de l’apprentissage et du jeu, incluant un état de l’art des MRLG. Cette étude montre que, malgré de nombreux atouts, la conception des MRLG reste difficile à maîtriser. En effet, il n’existe ni méthode ni outil adapté à la conception de ce type d’environnements. Dans un second temps, nous analysons et modélisons l’activité de conception des MRLG à travers la littérature et des expériences de conception, dont une menée dans le cadre du projet SEGAREM. Cette démarche révèle des verrous spécifiques tels que l’absence d’aide à la modélisation (ou formalisation), à la créativité et à la vérification de la cohérence des idées. Nous éclairons nos réponses à ces besoins par un recensement des outils utilisés dans les domaines liés aux MRLG : situations d’apprentissage, jeux et environnements de la Réalité Mixte. Ceci nous amène à proposer deux outils conceptuels : un modèle de description de MRLG (f-MRLG) et des aides à la créativité sous la forme de propositions puis de recommandations. 
Le modèle de description a pour objectif de formaliser l’ensemble des éléments constituant un MRLG, mais aussi d’être un moyen d’identifier les éléments à définir, de structurer et de vérifier les idées. Les listes de propositions et recommandations ont pour but d’aider le concepteur à faire des choix cohérents par rapport à la situation d’apprentissage visée, en particulier en ce qui concerne les types de jeux et les dispositifs de Réalité Mixte. Une première évaluation de ces propositions a conduit à leur amélioration. Ces propositions sont à l’origine de la conception et du développement d’un outil auteur informatisé : MIRLEGADEE (Mixed Reality Learning Game DEsign Environment). MIRLEGADEE est basé sur LEGADEE, un environnement auteur pour la conception de Learning Games. Une expérimentation auprès de 20 enseignants et concepteurs de formation a validé le bienfondé de cet outil qui guide effectivement les concepteurs dans les phases amont du processus de conception de MRLG malgré des limites pour l’accompagnement de tâches complexes. / Game-based learning is an efficient pedagogical concept that uses game principles to incite learners to engage in learning activities. Learning Games (LG) are commonly known as digital environments. They have undeniable assets but also some limits, such as the artificiality of the learning context. In the meantime, new technologies have been increasingly developed, providing new perspectives on game-based learning. In particular, Mixed Reality (MR) technologies merge the real and digital worlds. Mixed Reality Learning Games (MRLG) offer real benefits for teaching: they enable active pedagogy through the physical immersion of learners, “in situ” information while practising, and authentic contexts. In our work, we focus on the design process of MRLG. The first part of the thesis presents how Mixed Reality is used for educational and gaming purposes.
An analysis of existing MRLG shows both their assets and the complexity of their design. MRLG designers have to cope with all the difficulties of learning design, game design and mixed reality design at the same time, and with the integration of all these aspects in a coherent way. Besides, there is a lack of specific tools and methodologies. In order to understand the specific needs of MRLG designers, we analyze and model the MRLG design activity from MRLG design processes described in papers and from existing methodologies for LG and MR. We also illustrate and clarify MRLG design needs through the observation of an MRLG design activity in the SEGAREM project. We highlight needs for modeling, creativity, and verification of coherence. To meet the identified needs, the third part is dedicated to a state of the art of tools available for learning design, game design and Mixed Reality design. This study leads us to three solutions to assist MRLG design: a model, a set of tools for creativity, and an authoring tool. We first propose a model called f-MRLG to describe an MRLG fully and clearly. f-MRLG supports MRLG design by helping designers organize their ideas and identify which elements must be described; it also reinforces mutual comprehension in a team. Our second proposal is a set of tools for creativity: lists of possibilities, examples and suggestions for choices of game types and Mixed Reality systems. We conducted a first experiment on these two proposals, which led to their improvement. The two proposals then drove the design and development of an authoring tool, named MIRLEGADEE (MIxed Reality LEarning GAme DEsign Environment), to support MRLG design. This tool is an extension of LEGADEE, an existing computer-based environment for designing Learning Games. An experiment with 20 teachers and training designers validated that MIRLEGADEE successfully guides designers through the early phases of the MRLG design process, in spite of limits in supporting complex tasks.
155

From Conceptual Links to Causal Relations — Physical-Virtual Artefacts in Mixed-Reality Space

Pederson, Thomas January 2003 (has links)
This thesis presents a set of concepts and a general design approach for designing Mixed Reality environments based on the idea that the physical (real) world and the virtual (digital) world are equally important and share many properties. Focus is on the design of a technology infrastructure intended to relieve people from some of the extra efforts currently needed when performing activities that make heavy use of both worlds. An important part of the proposed infrastructure is the idea of creating Physical-Virtual Artefacts, objects manifested in the physical and the virtual world at the same time. The presented work challenges the common view of Human-Computer Interaction as a research discipline mainly dealing with the design of “user interfaces” by proposing an alternative or complementary view, a physical-virtual design perspective, abstracting away the user interface, leaving only physical and virtual objects. There are at least three motives for adopting such a design perspective: 1) people well acquainted with specific (physical and virtual) environments are typically more concerned with the manipulation of (physical and virtual) objects than the user interface through which they are accessed. 2) Such a design stance facilitates the conceptualisation of objects that bridge the gap between the physical and the virtual world. 3) Many physical and virtual objects are manifested in both worlds already today. The existing conceptual link between these physical and virtual objects has only to be complemented with causal relations in order to reduce the costs in crossing the border between the physical and the virtual world. A range of concepts are defined and discussed at length in order to frame the design space, including physical-virtual environment gap, physical-virtual activity, physical-virtual artefact, and physical-virtual environment. Two conceptual models of physical-virtual space are presented as a result of adopting the physical-virtual design perspective: for the analysis of object logistics in the context of physical-virtual activities, and for describing structural properties of physical-virtual space respectively. A prototype system offering some degree of physical-virtual infrastructure is also presented.
157

3D visualisation of breast reconstruction using Microsoft HoloLens

Norberg, Amanda, Rask, Elliot January 2018 (has links)
The purpose of the project is to create a Mixed Reality (MR) application for the 3D visualisation of the result of a breast reconstruction surgery. The application is to be used before surgery to facilitate communication between patient and surgeon about the expected result. For this purpose Microsoft HoloLens is used: a pair of Mixed Reality glasses, developed and manufactured by Microsoft, with a self-contained holographic rendering computer. For the development of the MR application on the HoloLens, MixedRealityToolkit-Unity, a Unity-based toolkit, is used. The goal of the application is that the user can scan the torso of a patient, render a hologram of the torso and attach to it a prefabricated breast that can follow the patient's specifications. To prepare a prefabricated breast, a 3D model of the breast is first created in the 3D modelling software Blender and imported into Unity. It then gets its texture from a picture taken with the HoloLens camera; the picture is cropped to better fit the model and uploaded as a 2D texture, which is then attached to the prefabricated breast. To scan objects, the Surface Observer feature of the HoloLens operating system is used. The resulting mesh from the observer is cropped using a virtual cube that is scaled, moved and rotated by the user. The cropped mesh is then smoothed using the Humphrey's Classes smoothing algorithm. To fuse the smoothed mesh with the prefabricated breast model, the Unity components Colliders and Transforms are used: on a collision, the breast's transform parent is set to the mesh's transform, making the objects' transforms depend on each other. The MR application has been developed and evaluated, and the evaluation results show that the goal has been achieved. The project demonstrates that the Microsoft HoloLens is well suited for developing medical applications such as breast reconstruction surgery visualisations. It could possibly be extended to other surgeries, for example showing on a patient's body how a scar will look after heart surgery or a cesarean section.
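The Humphrey's Classes (HC) algorithm mentioned above extends Laplacian smoothing with a correction step that pulls vertices back toward their original positions to limit shrinkage. A simplified sketch follows, on a small polyline rather than a full mesh; this is not the application's actual code, and it omits HC's neighbour-averaged correction term.

```python
import numpy as np

def laplacian_step(verts, neighbors):
    """One Laplacian pass: move each vertex to the mean of its neighbours."""
    return np.array([verts[nbrs].mean(axis=0) for nbrs in neighbors])

def hc_smooth(verts, neighbors, alpha=0.5, iterations=1):
    """Simplified HC-style smoothing: after each Laplacian pass, push
    vertices part of the way back toward the originals to limit shrinkage."""
    orig = verts.copy()
    v = verts.copy()
    for _ in range(iterations):
        smoothed = laplacian_step(v, neighbors)
        # Deviation from a blend of the original and current positions.
        correction = smoothed - (alpha * orig + (1 - alpha) * v)
        v = smoothed - alpha * correction
    return v

# Noisy zigzag polyline; endpoints use themselves plus one neighbour.
verts = np.array([[0.0], [1.0], [0.0], [1.0], [0.0]])
neighbors = [[0, 1], [0, 2], [1, 3], [2, 4], [3, 4]]
smoothed = hc_smooth(verts, neighbors, alpha=0.5, iterations=1)
```

On a scanned torso mesh the same idea applies per vertex, with neighbours taken from the mesh's edge connectivity.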
158

Parallel reality : tandem exploration of real and virtual environments

Davies, C. J. January 2016 (has links)
Alternate realities have fascinated mankind since early prehistory, and with the advent of the computer and the smartphone we have seen the rise of many different categories of alternate reality that seek to augment, diminish, mix with or ultimately replace our familiar real world in order to expand our capabilities and our understanding. This thesis presents parallel reality as a new category of alternate reality which further addresses the vacancy problem that manifests in many previous alternate reality experiences. Parallel reality describes systems comprising two environments that the user may freely switch between, one real and the other virtual, both complete unto themselves. Parallel reality is framed within the larger ecosystem of previously explored alternate realities through a thorough review of existing categorisation techniques and taxonomies, leading to the introduction of the combined Milgram/Waterworth model and an extended definition of the vacancy problem for better visualising experience in alternate reality systems. Investigation into whether an existing state-of-the-art alternate reality modality (Situated Simulations) could allow for parallel reality investigation via the Virtual Time Windows project was followed by the development of a bespoke parallel reality platform called Mirrorshades, which combined the modern virtual reality hardware of the Oculus Rift with the novel indoor positioning system of IndoorAtlas. Users were thereby granted the ability to walk through their real environment and, at any point, to switch their view to the equivalent vantage point within an immersive virtual environment. The benefits that such a system provides, by granting users the ability to mitigate the effects of the extended vacancy problem and explore parallel real and virtual environments in tandem, were experimentally shown through application to a use case within the realm of cultural heritage at a 15th-century chapel. Evaluation of these user studies led to the establishment of a number of best practice recommendations for future parallel reality endeavours.
159

Mixed Reality Assistenzsystem zur visuellen Qualitätsprüfung mit Hilfe digitaler Produktfertigungsinformationen / Mixed Reality Assistance System for Visual Quality Inspection Using Digital Product Manufacturing Information

Adwernat, Stefan, Neges, Matthias 06 January 2020 (has links)
In industrial manufacturing, product properties and parameters are subject to certain variations regardless of the manufacturing process used. Quality inspection therefore determines to what extent the specified quality requirements for the product or workpiece are met despite these manufacturing variations (Brunner et al. 2011) [...] In the case of visual inspection by humans in particular, the result depends strongly on the individual inspector. The main factors affecting detection performance are the inspector's experience, qualification and fatigue; environmental conditions such as lighting, dirt or acoustic interference; and the number and weighting of the features to be assessed (Keferstein et al. 2018). As a result, the reliability and reproducibility of the inspection results can be negatively affected. The same applies to the complete and consistent documentation of the visual inspection [...] Against this background, a Mixed Reality-based assistance system is developed to support the inspector in carrying out and documenting the visual inspection. The requirements of this approach are derived from a cooperation project in the automotive industry. The assistance system presented is therefore part of higher-level activities related to the 3D master approach and drawing-free product documentation. [...from the introduction]
160

The AR in Architecture

Vu, Dieu An January 2019 (has links)
En vanlig metod inom arkitektonisk visualisering idag är produktion av stillbilder som skapas med 3D-modelleringsprogram. Med sådan avancerad teknik blir det enkelt och effektivt att styra och manipulera vad som visas på stillbilderna, vilket ökar säljbarheten av arkitektoniska projekt. Men vad händer om vi tar det ett steg längre, med hjälp av Alternative Reality-teknik? AR eller Augmented Reality kan vara en annan användbar visualiseringsmetod, men vilka konsekvenser kommer det med, speciellt för de icke-professionella användarna? Om vi inte tänker på vilka konsekvenser det kan ha, på samma sätt som med stillbilder, blir det bara ett annat verktyg för att öka säljbarheten för arkitektoniska projekt. Denna studie kommer därför att försöka svara på frågan ”Hur implementerar vi AR inom arkitektonisk visualisering på ett sätt som är gynnsamt för de icke-professionella användarna?” De centrala begreppen som bör tas hänsyn till när man talar om arkitektonisk visualisering är autonomi, tid, medborgarnas tidiga medverkan, ocularcentrism och konceptet av verklighet. Eftersom arkitekturen måste bero på vardagens sammanhang, bör visualiseringen inte stänga av världen för att skapa ett fint ideal som bara fungerar som falsk annonsering. Att stänga ut medborgarnas röster leder också till att skapa en metaforisk mur mellan människorna inom fältet och människorna utanför, vilket leder till förlust av utbyte av insikter och perspektiv. En av rösterna som talar starkt mot den autonoma synen är Jeremy Till; hans ord från boken Architecture Depends kommer därför att spela en central roll i det teoretiska perspektivet av denna studie. För att svara på frågorna i studien kommer observationslinsen vändas till både den professionella sidan och den icke-professionella sidan angående ämnet Alternative Reality inom arkitektur. Detta görs via metoden cyber-etnografi, där Internet kommer att vara det öppna fältet att observera.
Potentialerna för AR som uttrycks av de professionella kommer att användas för att jämföras med de icke-professionellas perspektiv och oro. Resultaten av observationerna kommer att användas till ett förslag av en AR-applikation, vilket är denna studies bidrag till diskussionen av vilka sätt AR kan genomföras för de icke-professionella användarnas skull. / A common method within architectural visualization today is the production of still images made with 3D-modeling software. With such advanced technology, it is made easy and efficient to control and manipulate what is shown in those still images, increasing the salability of architectural projects. But what if we take it a step further, using alternative reality technologies? AR, or Augmented Reality, can be another useful visualization method, but what implications does it come with, especially for the non-professional users? If we do not consider the impacts it might have, it will, similarly to still images, just turn into another tool to increase the salability of architectural projects. This study will therefore seek to answer the question of “How do we implement AR within architectural visualization in a way that is beneficial for the non-professional users?” The central concepts to consider when talking about architectural visualization are autonomy, time, early involvement of citizens, ocularcentrism and the concept of reality. As architecture has to depend on the contexts of our daily lives, the visualization should not shut out the world to create a pretty ideal that only serves as false advertisement. Shutting out the voices of the citizens also serves to create a metaphorical wall between the people within the field and the people outside of it, causing a loss of exchange of insights and perspectives.
One of the voices that speak strongly against the autonomous view is Jeremy Till; his words from the book Architecture Depends will therefore play a central role in the theoretical perspective of this study. To answer the questions of this study, the observation lens will be turned to both the professional side and the non-professional side regarding the subject of alternative reality usage within architecture. This is done via the method of cyber-ethnography, in which the Internet will be the open field to observe. The potentials of AR expressed by the professionals will be compared with the perspectives and worries of the non-professionals. The results of the observations will inform a proposal for an AR application, which is this study's contribution to the discussion of the ways AR can be implemented for the sake of the non-professional users.
