  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Shared Situation Awareness in Student Group Work When Using Immersive Technology

Bröring, Tabea January 2023 (has links)
Situation awareness (SA) describes how well a person perceives and understands their environment and the situation they are in. When working in groups, shared SA describes how similarly the team members view and interpret the situation in a given environment. Immersive technology integrates virtual objects into the user's perception of the physical world. It holds great potential for educational contexts and collaborative settings such as group projects: it can increase engagement, make complex concepts more tangible, and improve media fluency. When immersive technology is introduced into a real-world setting, it creates a mixed reality of virtual and physical elements, and in mixed reality collaborations the complexity of the environment can negatively affect the group members' shared SA. The research problem of this thesis is that the intersection between shared SA and student group work involving immersive technology is under-researched to date. The research question is "How is shared situation awareness in student group work formed when using immersive technology?". A case study of a student group, comprising participatory observation of several of their work sessions, was carried out, and the obtained material was analyzed using sequential analysis. It was found that the students do not prioritize shared SA but work individually, dividing smaller subtasks among themselves and focusing first and foremost on their own tasks. Communication is used sparingly to stay updated on the other students' work status, which helps to build shared SA, and it plays a crucial role in building shared SA when using immersive technology. It was also observed that the students prefer to use immersive technology in a way that allows more than one person to see the same virtual environment, as is the case when two virtual reality (VR) headsets are connected to the same application.
142

Expand enabling robot grasping using mixed reality

San Blas Leal, César, Núñez Moreno, Julián January 2023 (has links)
The rapid advancements in Robotics and Mixed Reality (MR) have opened new avenues for intuitive human-robot interaction. In this thesis, an intuitive and accessible robot grasping application is developed using MR, enabling programming through the operator's hand movements and reducing the technical complexity associated with traditional programming. The application leverages the strengths of MR to provide users with an immersive and intuitive environment. It includes tools such as QR-code recognition for quick deployment of virtual objects, and a Virtual Station that can be placed at any desired location, allowing remote and safe control of the robot. Three modes have been implemented: manual target placing with thorough editing of target properties, path recording of the user's hand trajectory, and real-time replication of the operator's hand movements by the robot. To assess the effectiveness and intuitiveness of the application, a series of user tests is presented. These evaluations include user feedback and task completion times compared with traditional programming methods, providing valuable insights into the application's usability, efficiency, and user satisfaction. The intuitiveness of the application democratizes robot programming, expanding accessibility to a wider range of users, including inexperienced operators and students.
143

PROTOTYPING A LOW-COST VIRTUAL REALITY (VR) ROBOTIC SURGICAL TRAINER

Abhinav Ajith (19180198) 20 July 2024 (has links)
Robotic surgery has transformed the landscape of minimally invasive procedures, offering unmatched precision and quicker patient recovery times. Despite these advancements, training surgeons to use these sophisticated surgical systems effectively remains a daunting challenge, primarily due to high costs, limited accessibility, a steep learning curve, and inconsistent training quality. Existing training modalities are limited by the high cost of original training robots, logistical challenges, a lack of emphasis on hand movements, the necessity of expert presence, and limited scalability and effectiveness. This thesis introduces TrainVR, a low-cost training system designed to overcome these hurdles and enhance the skillset of surgical trainees. TrainVR integrates affordable Virtual Reality (VR) technology with enhanced fidelity, creating an engaging and realistic training environment. It simulates realistic surgical environments and procedures, focusing on the motor, cognitive, and spatial skills required for robotic surgery through computer vision algorithms, gamified environments, and performance analytics, and supports both asynchronous and remote expert-led training scenarios. The system features customizable training modules, enabling trainees to practice a wide array of surgical procedures in a safe, virtual setting. It also emphasizes the user's hand movements, clutch use, and ergonomics during surgical training, which feedback from surgeons identified as crucial. The development of TrainVR involved crafting detailed 3D models of surgical instruments and anatomical structures, integrating hardware and software, and designing a user-friendly interface. We conducted testing with different game environments that compare user performance and provide insights for improving learning.
The thesis concludes by experimenting with and proposing new configurations to improve fidelity and hand tracking, aiming to closely match the experience provided by present training simulators at a substantially lower cost. TrainVR's scalable design and compatibility with standard VR hardware make it accessible to a wide range of institutions, including those with limited resources. By offering a cost-effective, immersive, and adaptive training solution, TrainVR aims to enhance surgical education and ultimately improve patient care outcomes.
144

Implementation of Augmented Reality applications to recognize Automotive Vehicle using Microsoft HoloLens : Performance comparison of Vuforia 3-D recognition and QR-code recognition Microsoft HoloLens applications

Putta, Advaith January 2019 (has links)
Context. Volvo Construction Equipment plans to use the Microsoft HoloLens as a tool for on-site managers to keep track of automotive machines and obtain their corresponding work information. For that purpose, a miniature site has been built at PDRL BTH consisting of three different automotive vehicles, and we developed Augmented Reality applications for the Microsoft HoloLens to recognize these vehicles. There is a need to identify the most feasible recognition method that can be implemented on the HoloLens. Objectives. In this study, we investigate which of the Vuforia 3-D recognition method and the QR-code recognition method is best suited for the Microsoft HoloLens, and we determine the maximum distance at which an automotive vehicle can be recognized. Methods. We conducted a literature review of articles from IEEE Xplore, ACM Digital Library, Google Scholar and Scopus; seventeen articles were selected for review after reading the titles and abstracts obtained from the search. Two experiments were performed to find the better recognition method for the HoloLens and the maximum distance at which an automotive vehicle can be recognized. Results. The QR-code recognition method is the best method for recognizing automotive vehicles in the range of one to two feet, while the Vuforia 3-D recognition method is recommended for distances over two feet. Conclusions. We conclude that QR-code recognition is suitable for recognizing vehicles at close range (1-2 feet) and Vuforia 3-D object recognition for distances over two feet. The two methods differ fundamentally: one uses a 3-D scan of the vehicle for recognition, the other image recognition via unique QR-codes.
We covered the effect of distance on the recognition capability of the application; much work remains on how QR-code size affects the maximum distance at which an automotive vehicle can be recognized, and we conclude that further experimentation is needed to determine this impact.
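The open question of how QR-code size affects the maximum recognition distance can be framed with a back-of-the-envelope pinhole-camera estimate. A minimal sketch, assuming a pinhole projection model and a hypothetical minimum pixel footprint required by the detector (neither value comes from the thesis):

```python
# Hypothetical sketch: pinhole-camera estimate of the maximum distance at
# which a QR code of a given physical size remains detectable. The model
# and the minimum-pixel threshold are illustrative assumptions.

def max_recognition_distance(code_size_m: float,
                             focal_length_px: float,
                             min_code_px: float) -> float:
    """Distance at which the code's image shrinks to `min_code_px` pixels.

    Under a pinhole model the projected size in pixels is
    size_px = code_size_m * focal_length_px / distance_m, so the
    maximum distance is reached when size_px == min_code_px.
    """
    if min_code_px <= 0:
        raise ValueError("min_code_px must be positive")
    return code_size_m * focal_length_px / min_code_px

# Example: a 10 cm code, a 1000 px focal length, and a detector needing
# the code to span at least 50 px give a 2 m maximum distance.
print(max_recognition_distance(0.10, 1000.0, 50.0))  # → 2.0
```

Under this model the maximum distance scales linearly with the printed code size, suggesting that doubling the QR code roughly doubles the recognition range, subject to lighting and decoder limits.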
145

REALIDADE MISTA E MEIO EXPOSITIVO NA ARTE CONTEMPORÂNEA: INSITU<>INFLUXU / MIXED REALITY AND EXHIBITION MEDIUM IN CONTEMPORARY ART: INSITU<>INFLUX

Casimiro, Giovanna Graziosi 02 December 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This research, in the field of the theory and history of contemporary art, studies the configuration modes of Mixed Reality in the exhibition place and the consolidation of the Exhibition Medium as structured by technological dynamics. It discusses the cycle of relations among public<>artwork<>medium, which explains how the elements of the Exhibition Medium interact, resulting in the dynamic insitu<>influx. The interactive artworks ARART and Extinction, the exhibition WeARinMoMA, the Krakow National Museum, the Museum of London and the project Talking Statues (developed in the urban area of London) help to examine the specific conditions of an interactive Exhibition Medium, whose dynamics create constant transitions of power and sensibility. Emerging from the context of Art and Technology, applications of Mixed Reality are analyzed in institutional places; these examples help to classify and understand how the space unfolds through many realities and the constant transition between the virtual and the physical dimension.
146

Jeux pédagogiques collaboratifs situés : conception et mise en oeuvre dirigées par les modèles / Situated collaborative learning games: model-driven design and implementation

Delomier, Florent 10 December 2013 (has links)
A learning game is a declension of the serious game concept dedicated to learning. Such games rely on scenarios and the immersion of learners, using game mechanics in problem-oriented simulations; gamification, the use of game elements in a non-playful context, helps catalyze attention and increase the engagement and motivation of player-learners. However, feedback from earlier work reports that the activity can feel too artificial, notably because learning is not contextualized in the environment where the acquired knowledge will be used. We propose a mixed (physical and digital) environment and collaborative techniques to refine the pedagogical approach, leading to what we call "Situated Collaborative Learning Games" (SCLG): pedagogical tools that combine collaborative learning (where interaction between learners supports learning), situated learning (where the environmental context is meaningful), human-physical object interaction (through mixed reality, with kinesthetic and tangible interaction in augmented reality), and game-based learning (where the game activity improves motivation). The two research questions posed within the SEGAREM project, which became ours, are: 1/ how can serious games be supported by the Augmented Reality (AR) and Tangible Interface (TI) approaches? 2/ how can the design and implementation of SCLGs be made more explicit and systematic? The answers presented in this thesis are: 1/ the design and production of interactive desks supporting augmented real objects, associated with an existing communication protocol, offering generic support for the detected interaction techniques and taking the physical context of use into account; 2/ a production approach for SCLGs that begins after the game-pedagogical scenario step, which constitutes our specification.
Our approach is model-based, providing an expression support that specifies the characteristics of a SCLG. The models are backed by contextual editors and produce XML description files as output; projecting these descriptions onto a generic execution architecture for SCLGs yields, after specialization, an executable version. Of the six models, some are adapted from the team's earlier work, others come from the literature, and the rest are proposed here. Together they describe the activity (an orchestration model and a task model), the structure of the different environments, the initial state of the environment and the conditions of a final state, and the possible interactions between players and the environment. Our work on the desks, the models and the execution support was concretized in Lea(r)nIt, a SCLG designed to consolidate methodological knowledge of Lean Manufacturing through the use and optimization of a production chain simulated on desks (supporting touch and tangible interaction, and which can be assembled) and on mobile phones (allowing player-learner mobility).
147

Etudes de méthodes et outils pour la cohérence visuelle en réalité mixte appliquée au patrimoine / Studies of methods and tools for visual coherence in mixed reality applied to cultural heritage

Durand, Emmanuel 19 November 2013 (has links)
The work described in this thesis is set in the context of ray-on, a mixed reality device designed by the on-situ company. The device, dedicated to the promotion of architectural heritage and historical buildings in particular, is installed on site and offers the user an uchronic view of the building. As the chosen stance is photo-realism, two directions were pursued: improving the real-virtual blend by reproducing the real lighting on virtual objects, and developing an image segmentation method resilient to lighting changes.
For lighting reproduction, an image-based rendering method is used together with a high dynamic range capture of the lighting environment, with particular attention to the photometric and colorimetric correctness of both steps. To evaluate the quality of the lighting reproduction chain, a test scene containing a calibrated color checker is set up and captured under multiple illuminations by a pair of cameras, one capturing an image of the checker and the other an image of the lighting environment. The real image is then compared with a virtual rendering of the same scene lit by this second image.
The segmentation resilient to lighting changes was developed from a class of global image segmentation algorithms that consider the image as a graph in which a minimal cut separating foreground from background must be found. The manual intervention these algorithms require is replaced by a lower-quality pre-segmentation derived from a depth map, used as a seed for the final segmentation.
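The depth-seeded pre-segmentation can be sketched in a few lines: pixels clearly nearer than a foreground threshold become foreground seeds, pixels clearly farther than a background threshold become background seeds, and the band in between is left unresolved for the graph-cut stage. The thresholds and label values below are illustrative assumptions, not taken from the thesis:

```python
# Hypothetical sketch of a depth-based trimap used to seed graph-cut
# segmentation. Label values and thresholds are illustrative.

FG, BG, UNKNOWN = 1, 0, -1

def depth_trimap(depth, t_fg, t_bg):
    """Label each depth value as a FG, BG, or UNKNOWN seed for graph cut."""
    if t_fg > t_bg:
        raise ValueError("t_fg must not exceed t_bg")
    return [[FG if d < t_fg else BG if d > t_bg else UNKNOWN for d in row]
            for row in depth]

depth = [[0.5, 1.2, 3.0],
         [0.8, 2.1, 4.0]]
print(depth_trimap(depth, t_fg=1.0, t_bg=2.5))
# → [[1, -1, 0], [1, -1, 0]]
```

Only the UNKNOWN band is left for the expensive min-cut optimization, which is what makes the pre-segmentation a useful replacement for manual seeding.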
148

one reality : augmenting the human experience through the combination of physical and digital worlds / Une réalité : augmenter l'expérience humaine à travers la convergence des mondes physiques et numériques

Roo, Joan sol 15 December 2017 (has links)
In recent history, computational devices evolved from simple calculators to now pervasive artefacts with which we share most aspects of our lives, and it is hard to imagine otherwise.
Yet this change in the role of computers was not accompanied by an equivalent redefinition of the interaction paradigm: we still mostly depend on screens, keyboards and mice. Even though these legacy interfaces have proven efficient for traditional tasks, we agree with those who argue that they are not necessarily fitting for their new roles; moreover, traditional interfaces preserve the separation between the digital and physical realms, now counterparts of our reality. During this PhD, we focused on dissolving the separation between physical and digital, first by extending the reach of digital tools into the physical environment, then by creating hybrid artefacts (physical-digital emulsions), and finally by supporting the transition between different mixed realities, increasing immersion only when needed. The final objective of this work is to augment the experience of reality; this comprises support for interaction not only with the external world but also with the internal one. The thesis provides the reader with the contextual information and technical knowledge required to understand and build mixed reality systems. With this theoretical and practical grounding in place, our contributions towards the overarching goal of merging the physical and digital realms are presented. We hope this document will inspire and help others to work towards a world where the physical and the digital, and humans and their environment, are not opposites but counterparts of a unified reality.
149

Labyrinth psychotica : simulating psychotic phenomena

Kanary Nikolova, Jennifer January 2016 (has links)
This thesis forms a valuable tool of analysis, as well as an important reference guide, for anyone interested in communicating, expressing, representing, simulating and/or imagining what it is like to experience psychotic phenomena. Understanding such experience is difficult: those who have lived through it find it hard to describe, and those who have not find it hard to envision. Yet the ability to understand is crucial to the interaction with a person struggling with psychotic experiences, and help is needed to develop it. In recent years, the psychosis simulation projects Mindstorm, Paved with Fear, Virtual Hallucinations and Living With Schizophrenia have been developed as teaching and awareness tools for mental health workers, police, students and family members, so that they can better understand psychotic phenomena. These multimedia projects aim to improve understanding of what a person in psychosis is going through. This thesis takes a closer look at their designs and compares them to biographical and professional literature. In doing so, a set of considerations and design challenges that must be taken into account when simulating psychosis is developed throughout the chapters. After a series of artistic case study labyrinths, Suicide Pigeon, Intruder, and Intruder 2.0, two final 'do-it-yourself-psychosis' projects were created that take the collected aspects into account: The Labyrinth and The Wearable. Together these two projects form experiences that may be considered analogous to psychotic experiences.
My original contribution to knowledge lies, on the one hand, within the function that both The Labyrinth and The Wearable have on a person’s ability to gain a better understanding of what it feels like to be in psychosis, and on the other hand within the background information provided on the context and urgency of psychosis simulation, how the existing simulations may be improved, and how labyrinthine installation art may contribute to these improvements.
150

Augmented virtuality: transforming real human activity into virtual environments

Pouke, M. (Matti) 11 August 2015 (has links)
Abstract The topic of this work is the transformation of real-world human activity into virtual environments: the process of identifying various aspects of visible human activity with sensor networks and studying the different ways in which the identified activity can be visualized in a virtual environment. The transformation of human activities into virtual environments is a rather new research area. While there is existing research on sensing and visualizing human activity in virtual environments, it usually focuses on a specific type of human activity, such as basic actions and locomotion. Different types of sensors, however, can provide very different human activity data and lend themselves to very different use-cases. This work is among the first to study the transformation of human activities on a larger scale, comparing various types of transformations from multiple theoretical viewpoints. The work utilizes constructs built for use-cases that require the transformation of human activity for various purposes. Each construct is a mixed reality application that utilizes a different type of source data and visualizes human activity in a different way. The constructs are evaluated from practical as well as theoretical viewpoints. The results imply that different types of activity transformations have significantly different characteristics. The most distinct theoretical finding is a relationship between the level of detail of the transformed activity, the specificity of the sensors involved, and the extent of world knowledge required to transform the activity. The results also provide novel insights into using human activity transformations for various practical purposes: transformations are evaluated as control devices for virtual environments, and in the context of visualization and simulation tools in elderly home care and urban studies.
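The home-care visualization use case above can be illustrated with a deliberately low-detail transformation: coarse motion-sensor events mapped to a per-room idle/active state. The event format and the inactivity timeout are assumptions for illustration, not taken from the thesis:

```python
# Hypothetical sketch: transform timestamped motion-sensor events into a
# coarse per-room activity state suitable for a virtual-environment
# visualization. Event shape and timeout are illustrative assumptions.

INACTIVITY_TIMEOUT = 300  # seconds without motion before a room reads "idle"

def room_states(events, now):
    """events: iterable of (timestamp_s, room) motion detections."""
    last_seen = {}
    for ts, room in events:
        last_seen[room] = max(ts, last_seen.get(room, ts))
    return {room: ("active" if now - ts <= INACTIVITY_TIMEOUT else "idle")
            for room, ts in last_seen.items()}

events = [(10, "kitchen"), (400, "kitchen"), (50, "bedroom")]
print(room_states(events, now=600))
# → {'kitchen': 'active', 'bedroom': 'idle'}
```

This is the low end of the level-of-detail spectrum the thesis discusses: unspecific sensors and little world knowledge yield only a coarse state, whereas finer-grained transformations demand more specific sensors or more contextual knowledge.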
