111

Towards Real-time Mixed Reality Matting In Natural Scenes

Beato, Nicholas 01 January 2012 (has links)
In Mixed Reality scenarios, background replacement is a common way to immerse a user in a synthetic environment. Properly identifying the background pixels in an image or video is a difficult problem known as matting. Proper alpha mattes usually come from human guidance, special hardware setups, or color-dependent algorithms. This is a consequence of the under-constrained nature of the per-pixel alpha blending equation. In constant-color matting, research identifies and replaces a background that is a single color, known as the chroma key color. Unfortunately, these algorithms force a controlled physical environment and favor constant, uniform lighting. More generic approaches, such as natural image matting, have made progress finding alpha matte solutions in environments with naturally occurring backgrounds. However, even for the faster algorithms, the generation of trimaps, which indicate regions of known foreground and background pixels, normally requires human interaction or offline computation. This research addresses ways to automatically solve an alpha matte for an image in real time, and by extension a video, using a consumer-level GPU. It does so even in the context of noisy environments that yield less reliable constraints than those found in controlled settings. To attack these challenges, we are particularly interested in automatically generating trimaps from depth buffers for dynamic scenes so that algorithms requiring denser constraints may be used. The resulting computation is parallelizable so that it may run on a GPU, and it should work for natural images as well as chroma key backgrounds. Extra input may be required, but when it is, commodity hardware available in most Mixed Reality setups should be able to provide it. This allows us to provide real-time alpha mattes for Mixed Reality scenarios that take place in relatively controlled environments.
As a consequence, while monochromatic backdrops (such as green screens or retro-reflective material) aid the algorithm's accuracy, they are not an explicit requirement. Finally, we explore a sub-image-based approach that parallelizes an existing hierarchical method on high-resolution imagery. We show that locality can be exploited to significantly reduce the memory and compute requirements previously necessary when computing alpha mattes of high-resolution images, using a parallelizable scheme that is independent of both the matting algorithm and the image features. Combined, these research topics provide a basis for Mixed Reality scenarios using real-time natural image matting on high definition video sources.
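The two building blocks this abstract relies on — the per-pixel alpha blending equation C = αF + (1 − α)B, and a trimap derived automatically from a depth buffer — can be sketched in a few lines of NumPy. This is an illustrative sketch, not Beato's actual algorithm; the depth thresholds and the gradient-based edge band are invented for demonstration.

```python
import numpy as np

def trimap_from_depth(depth, fg_max=1.5, bg_min=2.5, edge_thresh=0.2):
    """Build a trimap from a depth buffer (metres): 1 = known foreground,
    0 = known background, 0.5 = unknown. Thresholds are illustrative."""
    trimap = np.full(depth.shape, 0.5)
    trimap[depth <= fg_max] = 1.0
    trimap[depth >= bg_min] = 0.0
    # mark a band around depth discontinuities as unknown, so the matting
    # algorithm (not the noisy depth sensor) decides those pixels
    gy, gx = np.gradient(depth)
    trimap[np.abs(gy) + np.abs(gx) > edge_thresh] = 0.5
    return trimap

def composite(alpha, fg, bg):
    """Per-pixel alpha blending: C = alpha * F + (1 - alpha) * B."""
    a = alpha[..., None]  # broadcast alpha over the color channels
    return a * fg + (1.0 - a) * bg
```

Both functions are pure array operations with no data dependencies between pixels, which is what makes the per-pixel formulation amenable to the GPU parallelization the abstract describes.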
112

Dimensions Of Identity

Kramer, Alice 01 January 2009 (has links)
Imagination and fantasy environments created by writers and artists have always drawn people into their worlds. Advances in technology have blurred the lines between reality and imagination. My interest has always been to question the validity of these worlds and their cultures and to transcend the evolving virtual dimension by fusing it with what we perceive to be reality.
113

Erick_Borders_MSET-Thesis_December-2022.pdf

Erick Samuel Borders (14272778) 20 December 2022 (has links)
Fluid power education would benefit from the adoption of an alternative to traditional hands-on instructional methods. Hands-on education is invaluable because it offers students experience interacting with and controlling fluid power systems and components, but such systems are typically space-consuming and expensive. This study sought to prove the viability of mixed reality (MR) as an alternative to traditional hands-on fluid power instruction through the creation of MR lab exercises. A summary of the design methodology demonstrates how virtual fluid power components were modeled and presented in a mixed reality environment. Data were collected from students enrolled at Purdue University who participated in traditional and mixed reality fluid power lab exercises. Student responses were expected to express a positive reception of mixed reality as a fluid power instructional tool, and the study anticipated that utilizing mixed reality in a fluid power laboratory setting would increase student comprehension of fluid power concepts. Educational variables were limited by restricting testing to students in the advanced fluid power course of Purdue University's Polytechnic Institute; students in this course provided feedback comparing traditional and mixed reality instructional methods. Labs were created to remain within the course schedule so as not to disrupt the curriculum. Data from Likert-type pre- and post-lab questionnaires were analyzed, along with student feedback collected after each MR lab. The analysis showed that MR is a viable alternative to traditional hands-on instructional methods, as students showed an increase in comprehension of both fluid power components and concepts. Students perceived MR as a beneficial instructional tool but continued to prefer physical interaction with components. A combination of instructional methods is recommended.
114

Mobile Learning using Mixed Reality Games and a Conversational, Instructional and Motivational Paradigm. Design and implementation of technical language learning mobile games for the developing world with special attention to mixed reality games for the realization of a conversational, instructional and motivational paradigm.

Fotouhi-Ghazvini, Faranak January 2011 (has links)
Mobile learning has significant potential to be very influential in further and higher education. In this research a new definition for Mobile Educational Mixed Reality Games (MEMRG) is proposed, based on a mobile learning environment. A questionnaire and a quantifying scale are utilised to assist game developers in designing a MEMRG. A "Conversational Framework" is proposed as an appropriate psycho-pedagogical approach to teaching and learning for MEMRG. This methodology is based on the theme of a "conversation" between different actors of the learning community, with the objective of building the architectural framework for MEMRG. Various elements responsible for instructing and motivating learners in educational games are utilised in an instructional-motivational model. The user interface design for the games incorporates an efficient navigation system that uses contextual information and allows players to move seamlessly between real and virtual worlds. The implementation of MEMRG using the Java 2 Micro Edition (J2ME) platform is presented, and the hardware and software specifications for the MEMRG implementation and deployment are also discussed. MEMRG has produced improvements in the different cognitive processes of the learner, and has also produced a deeper level of learning through enculturation, externalising ideas and socialising. Learners' enjoyment, involvement, motivation, autonomy and metacognition skills have improved. This research will assist developers and teachers in gaining an insight into learning paradigms which utilise mobile game environments formed by mixing real and virtual spaces, and provide them with a vision for effectively incorporating these games into formal and informal classroom sessions.
115

Shared Situation Awareness in Student Group Work When Using Immersive Technology

Bröring, Tabea January 2023 (has links)
Situation awareness (SA) describes how well a person perceives and understands their environment and the situation that they are in. When working in groups, shared SA describes how similarly the team members view and interpret the situation in a given environment. Immersive technology comprises technology that integrates virtual objects into the user's reality of a physical world. It holds great potential for application in educational contexts and collaborative settings like group projects. Immersive technology can increase engagement, make complex concepts more tangible, and increase media fluency. When immersive technology is introduced into a real-world setting, it creates a mixed reality with virtual and physical elements. In mixed reality collaborations, the complexity of elements in the environment can negatively affect the shared SA of the group members. The research problem of this thesis is that the intersection between shared SA and student group work involving immersive technology is under-researched to date. The research question is "How is shared situation awareness in student group work formed when using immersive technology?". A case study of a student group, including participatory observation of several of their work sessions, was carried out, and the obtained material was analyzed using sequential analysis. It was found that the students do not prioritize shared SA but work individually, dividing smaller subtasks among themselves and focusing on their own tasks first and foremost. Communication is used sparsely to stay updated about the other students' work status, which helps to build shared SA. Communication also plays a crucial role in building shared SA when using immersive technology. It was also observed that the students prefer to use immersive technology in a way that allows more than one person to see the same virtual environment, as is the case when two virtual reality (VR) headsets are connected to the same application.
116

Enabling robot grasping using mixed reality

San Blas Leal, César, Núñez Moreno, Julián January 2023 (has links)
The rapid advancements in Robotics and Mixed Reality (MR) have opened new avenues for intuitive human-robot interaction. In this thesis, an intuitive and accessible robot grasping application is developed using MR to enable programming through the operator's hand movements, reducing the technical complexity associated with traditional programming. The developed application leverages the strengths of MR to provide users with an immersive and intuitive environment. It includes powerful tools such as QR code recognition for quick deployment of virtual objects, and a Virtual Station that can be placed at any desired location, allowing remote and safe control of the robot. Three modes have been implemented: manual target placing with thorough editing of target properties, path recording of the user's hand trajectory, and real-time replication of the operator's hand movements by the robot. To assess the effectiveness and intuitiveness of the developed application, a series of user tests are presented. These evaluations include user feedback and task completion time compared to traditional programming methods, providing valuable insights into the application's usability, efficiency, and user satisfaction. The intuitiveness of the developed application democratizes robot programming, expanding accessibility to a wider range of users, including inexperienced operators and students.
117

REALIDADE MISTA E MEIO EXPOSITIVO NA ARTE CONTEMPORÂNEA: INSITU<>INFLUXU / MIXED REALITY AND EXHIBITION MEDIUM IN CONTEMPORARY ART: INSITU<>INFLUX

Casimiro, Giovanna Graziosi 02 December 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This research, in the field of the theory and history of contemporary art, studies how Mixed Reality is configured in the exhibition space and how an Exhibition Medium, structured by technological dynamics, is consolidated. It discusses the cycle of relations among public<>artwork<>medium, which explains how the elements of the Exhibition Medium interact and whose result is the insitu<>influx dynamic. The interactive artworks ARART and Extinction, the exhibition WeARinMoMA, the Krakow National Museum, the Museum of London and the Talking Statues project, developed in the urban area of London, help to examine the specific conditions of an interactive Exhibition Medium, whose dynamics create constant transitions of power and sensibility. Emerging from the context of Art and Technology, applications of Mixed Reality in institutional spaces are analyzed; these examples help to classify and understand how space unfolds through many realities and the constant transition between the virtual and the physical dimension.
118

Jeux pédagogiques collaboratifs situés : conception et mise en oeuvre dirigées par les modèles

Delomier, Florent 10 December 2013 (has links)
A learning game is an adaptation of the serious game concept dedicated to learning. Such tools apply gamification, the use of game elements in a non-playful context, to catalyze attention and increase the engagement and motivation of player-learners. Learning games rely on scenario and immersion, using playful mechanics in problem-solving simulations. Among earlier research, some feedback reports excessive artificiality in the learning activity, notably through a lack of contextualization of learning in the environment where the learned knowledge is used. We proposed setting up a mixed (physical and digital) environment and using collaborative techniques to refine the pedagogical approach. These orientations led us to what we call "Situated Collaborative Learning Games" (SCLG): pedagogical tools that enhance learning of content through collaborative learning (when learner interaction is useful for learning), situated learning (when the environment context is meaningful), interaction with physical objects (using mixed reality, with kinesthetic and tangible interaction in augmented reality) and game-based learning (when the learning activity improves learner motivation). The two research questions posed to us within the SEGAREM project, and which became ours, are: 1/ how can serious games be supported by the Augmented Reality (AR) and Tangible Interface (TI) approaches? 2/ how can the design and implementation of SCLGs be made more explicit and more systematic? The answers we present in this thesis are: 1/ the design and implementation of interactive desks supporting augmented real objects, associated with an existing communication protocol, offering generic support for detected interaction techniques and for taking the physical context of use into account; 2/ an approach for producing SCLGs that starts after the game-and-pedagogy scenario-writing step, which constitutes our requirements specification.
We based our approach on models to provide a means of expression that specifies the characteristics of an SCLG. These models are supported by contextual editors and produce XML description files as output; projecting the resulting descriptions onto a generic execution architecture for SCLGs yields, through specialization, an executable version. Of the six models, some are adapted from the team's earlier work, others come from the literature, and the rest are proposed directly here. The six models describe the activity (an orchestration model and a task model), the structure of the different environments, the initial state of the environment and the conditions of a final state, and the possible interactions between players and the environment. Our work on the desks, the models and the execution support was concretized in Lea(r)nIt. This SCLG aims to consolidate methodological knowledge of Lean Manufacturing through the use and optimization of a production chain simulated on desks (supporting touch interaction and tangible interaction, and assemblable) and on mobile phones (allowing player-learner mobility).
119

Etudes de méthodes et outils pour la cohérence visuelle en réalité mixte appliquée au patrimoine / Studies of methods and tools for visual coherence in mixed reality applied to cultural heritage

Durand, Emmanuel 19 November 2013 (has links)
The work presented in this thesis is framed by the mixed reality device ray-on, designed by the on-situ company. This device, dedicated to the showcasing of architectural heritage and historical buildings in particular, is installed on-site and offers the user an uchronic view of the building. As the chosen stance is photo-realism, two trails were followed: improving the real-virtual merge by reproducing the real lighting on the virtual objects, and developing an image segmentation method resilient to lighting changes. For lighting reproduction, an image-based rendering method is used together with a high-dynamic-range capture of the lighting environment, with particular attention paid to the photometric and colorimetric correctness of both steps. To evaluate the quality of the lighting reproduction chain, a test scene containing a calibrated color checker is set up and captured under multiple lighting conditions by a pair of cameras, one capturing the color checker while the other captures the lighting environment. The real image is then compared to the virtual rendering of the same scene lit by this second image. The segmentation resilient to lighting changes was developed from a class of global image segmentation algorithms that consider the image as a graph in which a minimum cut separating background from foreground is to be found. The manual intervention these algorithms require is replaced by a lower-quality pre-segmentation computed from a depth map, which is then used as a seed for the final segmentation.
120

one reality : augmenting the human experience through the combination of physical and digital worlds / Une réalité : augmenter l'expérience humaine à travers la convergence des mondes physiques et numériques

Roo, Joan sol 15 December 2017 (has links)
While computing was long reserved for expert use, it is now an integral part of our daily lives, to the point where it becomes hard to consider the physical world we live in independently of the digital one. Yet, despite this evolution, the way we interact with the digital world has changed very little and remains mostly based on screens, keyboards and mice. Even though these legacy interfaces have proven efficient for traditional tasks, they can be ill-suited to the new uses computing makes possible, and they preserve the separation between the physical and digital realms. During this PhD, we focused on dissolving the boundary between physical and digital: first by extending the reach of digital tools into the physical environment, then by designing hybrid artefacts (physical-digital emulsions, objects with both physical and digital properties), and finally by supporting transitions within a mixed physical-digital reality, letting the user choose the level of immersion according to their needs. The final objective of this work is to augment the experience of reality. This comprises not only supporting interaction with the external world, but also with our internal one. This thesis provides the reader with the contextual information and technical knowledge required to understand and build mixed reality systems. On these foundations, our contributions towards the overarching goal of merging the physical and digital realms are presented. We hope this document will inspire and facilitate future work towards a world where the physical and the digital, and humans and their environment, are not opposites but counterparts of a unified reality.
