81

Mixed-Reality for Enhanced Robot Teleoperation / Mixed-Reality zur verbesserten Fernbedienung von Robotern

Sauer, Markus January 2010 (has links) (PDF)
With the progress of robotics research, the human-machine interface is increasingly becoming the most critical component for the overall performance of systems for the remote navigation and coordination of robots. This dissertation investigates how mixed-reality technologies can be applied to user interfaces in order to increase this overall system performance. Concepts and technologies are developed and evaluated in user studies, enabling an optimized, user-centered design of mixed-reality user interfaces for remote robot operation; both the technological requirements and the human factors are considered to achieve a consistent system design. After a detailed problem analysis and the construction of a system model that includes the human as a key component, the novel 3D time-of-flight camera is analyzed and optimized for robot navigation as well as for use in mixed-reality interfaces. It is further shown how the network traffic of the video stream, the most important information element of most navigation interfaces, can be shaped at the application layer in typical multi-robot networks with dynamic topologies and load situations; this makes it possible to preserve the video stream and stabilize the frame rate in otherwise typical failure scenarios. These technologies are then employed in the developed concept of a generic 3D mixed-reality interface. The concept provides an integrated 3D presentation of the available information, so that spatial relationships between pieces of information are preserved and the number of mental transformations required of the human operator is reduced; it also supports immersive stereo display technologies, which further aid spatial understanding of the remote situation. The approaches presented and evaluated in this dissertation also exploit the fact that robust local robot autonomy can be realized today and can therefore be incorporated as assistance systems for the human operator. A sliding-autonomy assistance system is introduced that combines force feedback with an integrated augmented-reality interface, giving the remote operator an impression of the situation at the remote workspace as well as of the robot's current navigation intention, so that a true sliding autonomy with seamless transitions is achieved. The user studies conducted demonstrate a significant increase in navigation performance with this approach. Robust local autonomy also enables the short-term predictive mixed-reality interface introduced in this dissertation: by decoupling the control loop that runs through the human operator, the visibility of unavoidable system delays can be reduced significantly. In addition, both viewpoints helpful for navigation can be combined in a single 3D user interface, the exocentric view and the egocentric augmented-reality view.
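As a rough illustration of the kind of application-layer traffic shaping the abstract describes, the sketch below adapts the sending rate of video frames to an estimated available bandwidth so the frame rate stays stable in a degrading network; the frame source, sender, and bandwidth-estimation callbacks are placeholders, not part of the original work.

```python
import time

def shape_video_stream(get_frame, send, estimate_bandwidth_bps,
                       target_fps=15, min_fps=2):
    """Send frames at a rate the network can currently sustain.

    get_frame()              -> next encoded frame as bytes, or None when done
    send(frame)              -> transmit one frame
    estimate_bandwidth_bps() -> current available bandwidth estimate (bit/s)
    """
    fps = target_fps
    while True:
        frame = get_frame()
        if frame is None:
            break
        # Estimate how many frames of this size the link can carry per second
        # and clamp the send rate between min_fps and target_fps.
        bw = estimate_bandwidth_bps()
        sustainable_fps = max(min_fps, min(target_fps, bw / (len(frame) * 8)))
        # Smooth the adaptation so the frame rate stays stable rather than
        # oscillating with every new bandwidth estimate.
        fps = 0.8 * fps + 0.2 * sustainable_fps
        send(frame)
        time.sleep(1.0 / fps)
```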
82

Comparing MR/VR implementations in flight training simulation

Wang, Kexin January 2023 (has links)
The use of extended reality (XR) technologies for training is gaining popularity, and flight training is one field that has begun experimenting to find the implementation best suited to its needs. Both virtual reality (VR) and mixed reality (MR) have been used in flight simulators, but it is unclear which is the better fit. The research question is: which implementation, VR or MR, fits better regarding psychological and ergonomic fidelity for flight training simulation? A fidelity/validity framework is used to measure and compare VR and MR in a prototype flight training simulation. Participants from the Swedish Air Force Combat Simulation Center (FLSC) with experience in XR and flight training simulations took part in the study. The results showed that participants performed better in MR and reported a preference for it over VR, citing factors such as ease of adaptation, movement, and concentration. The thematic analysis identified three themes: naturalness, intuitiveness, and precision. Based on these findings, MR is deemed the better fit for flight training, offering a higher level of psychological and ergonomic fidelity than VR. This study can inform future research on XR and flight training simulations as well as the incorporation of XR technologies into the design of training simulations.
83

Assessing the affect on short-term memory in students by comparing a serial recall Augmented Reality game and a card version.

Nyman, Oskar, Dorell, Linus January 2022 (has links)
Background: AR technology is increasingly being used across domains for cognitive activities such as learning. Although there are studies that examine the qualities AR can bring to different areas of human culture, such as educational settings, few studies attempt to determine what effect AR can have on human memory, particularly short-term memory. Objectives: This research aims to assess the effect on short-term memory of an AR game compared with an analog card game. Methods: A user study was conducted in a controlled environment to gather the data. To test the hypothesis, a quantitative approach was selected in which two versions of the same game were compared in a within-participant experiment. Results: The results of the experiment indicate that participants scored lower on average in the AR version than with its non-virtual counterpart. Conclusion: Overall, our findings suggest that AR does not have a significant effect on short-term memory for digits.
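A within-participant design like the one described is typically analyzed by pairing each participant's scores in the two conditions; a minimal sketch of such a paired comparison follows (the sample scores are made up for illustration):

```python
from scipy import stats

# Serial-recall scores per participant, one pair per person (illustrative values).
ar_scores   = [5, 6, 4, 7, 5, 6, 5, 4]
card_scores = [6, 7, 5, 7, 6, 7, 6, 5]

# Paired t-test: do the same participants score differently in the
# AR condition than in the card condition?
t_stat, p_value = stats.ttest_rel(ar_scores, card_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```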
84

Exploring Auditive Story Worlds : Design Sensitivities for Multi-linear Real Time, Mixed Reality, Interactive Storytelling Systems / Utforskande av auditiva berättelsevärldar : Designinsikter för interaktivt, multilinjärt historieberättande i blandad verklighet och realtid

Blomkvist Rova, Ariel January 2020 (has links)
Like reading a book, stories that are told orally or acted out aurally invite subjective co-construction of narrative events through the imagination. Although this is an interesting characteristic, audio-based fictional storytelling is not well explored in HCI, particularly not for entertainment purposes. Eavesdropper is a prototype system for mixed-reality audio narrative: the stage is a fully furnished miniature model house, while the actors reside in a spatialized, virtual audio world controlled through positioning in the physical space. This work accounts for the development and evaluation of some contextually unconventional properties of the current iteration of the system, aimed at facilitating an exploratory mindset: sections of the narrative unfold in parallel, controlled by parameters the user is only partially aware of. Through qualitative evaluation with users, I report on how these properties affected the way a story world was experienced, explored, and interpreted. The findings, coupled with reflections on the design choices made, are then synthesized into a set of design sensitivities meant to inform and spur discussion and further inquiry into similar systems that explore audio as the primary means of conveying narrative.
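A toy sketch of the position-driven audio mixing such a system might use: each narrative thread is anchored to a point in the model house, and its gain falls off with the listener's distance. The coordinates and the fall-off law are assumptions for illustration, not taken from the thesis.

```python
import math

def thread_gains(listener_pos, thread_positions, rolloff=1.5):
    """Return a gain in [0, 1] for each narrative thread.

    listener_pos     -- (x, y, z) of the listener in the model's coordinates
    thread_positions -- {thread_name: (x, y, z)} anchor point of each thread
    rolloff          -- distance (same units) at which the gain has halved
    """
    gains = {}
    for name, pos in thread_positions.items():
        d = math.dist(listener_pos, pos)
        gains[name] = 1.0 / (1.0 + d / rolloff)   # simple inverse fall-off
    return gains

# Example: two threads playing in parallel in different rooms.
print(thread_gains((0.0, 0.0, 0.0),
                   {"kitchen": (0.3, 0.0, 0.1), "attic": (1.2, 0.8, 0.4)}))
```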
85

A 2D video player for Virtual Reality and Mixed Reality / En 2D-videospelare för Virtual Reality och Mixed Reality

Filip, Mori January 2017 (has links)
While 360-degree video has recently been an object of research, flat 2D videos in virtual environments (VEs) seemingly have not received the same amount of attention. Specifically, 2D video playback in Virtual Reality (VR) and Mixed Reality (MR) appears to be little explored with respect to features and qualities such as resolution, audio, and interaction, which ultimately contribute to presence. This paper reflects on the definitions of Virtual Reality and Mixed Reality while extending the known concepts of immersion and presence to 2D videos in VEs. Relevant attributes of presence that can be applied to 2D videos were then investigated in the literature. The main problem was to identify the components and processes of the playback software in VR and MR, taking company-requested features and delimitations into consideration, and, where possible, how to adjust those components to induce greater presence primarily within the 2D video and secondarily within the VE, although these media of visual information are related and thus influence each other. The thesis work took place at Advrty, a company developing a brand advertising platform for VR and MR. The exploration and testing of the components was done in increments: first a basic standalone 2D video player was created, and in a second increment a video player was implemented in VR and MR. Comparisons were made between the proof-of-concept video players in VR and MR and the standalone video player. The results of the study show a feasible way of making a video player for VR and MR. The discussion reflects on the use of open-source libraries in commercial software, the technical limitations of current VR and MR head-mounted displays (HMDs), relevant presence-inducing attributes, and the choice of method.
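At its core, such a player decodes frames and uploads each one to a texture on a quad that acts as the 2D screen inside the virtual environment. Below is a bare-bones sketch of the decode loop using OpenCV; the upload_to_quad callback is a stand-in for the engine-specific texture update and is not part of the original work.

```python
import time
import cv2

def play_into_ve(path, upload_to_quad):
    """Decode a 2D video and push frames to a virtual screen at native rate."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frame_interval = 1.0 / fps
    while True:
        start = time.time()
        ok, frame = cap.read()
        if not ok:                      # end of stream
            break
        # The engine-side callback would copy the pixels into the texture
        # shown on the quad that serves as the 2D screen inside VR/MR.
        upload_to_quad(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Keep playback at the video's native frame rate.
        time.sleep(max(0.0, frame_interval - (time.time() - start)))
    cap.release()
```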
86

Object Placement in AR without Occluding Artifacts in Reality / Placering av objekt i AR utan att dölja objekt i verkligheten

Sténson, Carl January 2017 (has links)
Placement of virtual objects in Augmented Reality is often done without regard for the artifacts in the physical environment. This thesis investigates how placement can be done with those artifacts taken into account; it only considers the placement of wall-mounted objects. Two prototypes were developed that use edges detected in RGB images in combination with a volumetric representation of the environment to identify physical artifacts and suggest areas for placing virtual objects. The first prototype analyzes each triangle in the volumetric model, which proved computationally intensive and localized the physical artifacts with low precision. The second prototype analyzes the detected RGB edges in world space, which detected the features with precise localization and reduced calculation time. The second prototype manages this in a controlled setting; a more challenging environment would possibly pose other issues. In conclusion, placement that takes both volumetric and image-edge information about the environment into account is possible and could enhance the experience of being in a mixed reality where physical and virtual objects coexist in the same world.
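A rough sketch of the general idea behind the second prototype, assuming simple OpenCV primitives: detect edges in the RGB image and treat low-edge-density regions of a (hypothetical) wall mask as candidate areas for placing a virtual object. The wall mask and the projection of the result into world space are assumptions and are not shown.

```python
import cv2
import numpy as np

def candidate_wall_areas(rgb_image, wall_mask, window=64, max_edge_ratio=0.02):
    """Return top-left corners of window-sized patches that lie on the wall
    and contain almost no detected edges (i.e. no physical artifacts)."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    candidates = []
    h, w = edges.shape
    for y in range(0, h - window, window):
        for x in range(0, w - window, window):
            patch_edges = edges[y:y + window, x:x + window]
            patch_wall = wall_mask[y:y + window, x:x + window]
            # Keep patches fully on the wall with very few edge pixels.
            if patch_wall.all() and \
               np.count_nonzero(patch_edges) < max_edge_ratio * window * window:
                candidates.append((x, y))
    return candidates
```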
87

Mixed Reality Tailored to the Visually-Impaired

Omary, Danah M 08 1900 (has links)
The goal of the proposed device and software architecture is to apply the functionality of mixed reality (MR) to make virtual environments more accessible to the visually-impaired. We propose a glove-based system for MR that uses finger and hand movement tracking along with tactile feedback so that the visually-impaired can interact with, and obtain a more detailed sense of, virtual objects and potentially entire virtual environments. The software architecture makes current MR frameworks more accessible by augmenting the existing software and extensive 3D model libraries with both the interface to the glove-based system and the audibly navigable user interface (UI) of a virtual environment we have developed. We implemented a circuit with flexion/extension tracking for all 5 fingers of a single hand and variable vibration intensities for the vibration motors (vibromotors) on all 5 fingertips of that hand. The virtual environment is hosted in a Windows 10 application. The virtual hand and its fingers are moved with the system's input, and virtual fingertips touching virtual objects trigger the vibromotors to vibrate for as long as the objects are being touched. A rudimentary implementation of picking up and moving virtual objects inside the virtual environment is also included. In addition to the vibromotor responses, text-to-speech (TTS) is implemented in the application for when virtual fingertips touch virtual objects and for other relevant events in the virtual environment.
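A simplified sketch of the glove-side mapping the abstract implies: normalized flex-sensor readings drive the virtual fingers, and contact between a virtual fingertip and a virtual object sets the corresponding vibromotor's intensity. The calibration values, the sphere-based contact test, and the intensity scale are illustrative assumptions, not the thesis's actual implementation.

```python
def finger_flexion(raw, raw_straight, raw_bent):
    """Map a raw flex-sensor reading to 0.0 (straight) .. 1.0 (fully bent)."""
    span = raw_bent - raw_straight
    return min(1.0, max(0.0, (raw - raw_straight) / span))

def update_vibromotors(fingertip_positions, objects, contact_radius=0.01):
    """Return a vibration intensity (0..255) per fingertip based on virtual contact.

    fingertip_positions -- {finger_name: (x, y, z)} in the virtual environment
    objects             -- list of {"center": (x, y, z), "radius": r} spheres
    """
    intensities = {}
    for finger, tip in fingertip_positions.items():
        touching = any(
            sum((a - b) ** 2 for a, b in zip(tip, obj["center"])) ** 0.5
            <= obj["radius"] + contact_radius
            for obj in objects
        )
        intensities[finger] = 200 if touching else 0
    return intensities

# Example: index fingertip resting on a small virtual sphere.
print(update_vibromotors(
    {"index": (0.10, 0.02, 0.00)},
    [{"center": (0.10, 0.02, 0.005), "radius": 0.02}],
))
```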
88

The perceived effectiveness of mixed reality experiences in a master of arts in teaching (MAT) program for science, technology, engineering, and mathematics degreed individuals

Speir, Chana 01 January 2015 (has links)
The purpose of this study was to examine the perceived effectiveness of mixed reality experiences for resident teachers who had successfully completed an undergraduate Science, Technology, Engineering, or Mathematics (STEM) degree and were enrolled in a Master of Arts in Teaching (MAT) degree program as part of RTP3 at a large research university in Orlando, Florida. The population for this study consisted of those selected for RTP3, which included being enrolled in the MAT program and becoming a middle or high school science, mathematics, or engineering teacher. The resident teachers experienced mixed reality as a method of practice on two occasions: the first to introduce a lesson to avatar middle school students, and the second to conduct a parent conference with an avatar parent. The study focused on the resident teachers' perceptions of (a) the effectiveness of mixed reality in the lesson experience and the parent conference, (b) the coach's helpfulness after the lesson introduction experience and the parent conference experience, and (c) the extent to which the resident teachers believed that their confidence was increased and that they were prepared for future classroom instruction and parent interactions through the use of mixed reality. Data were gathered with a feedback form containing Likert-type items and open-ended items, completed immediately after each experience, as well as an additional open-response document completed later, after reflection on the entire experience. The researcher analyzed the two qualitative data sources independently to determine trends and themes. The findings were that the mixed-reality laboratory experience had a positive effect on the resident teachers' perceptions of their level of preparedness. They were more confident and comfortable teaching a lesson and conducting a parent conference after practicing both experiences with the avatars. Resident teachers overwhelmingly responded that the mixed reality experiences should remain a part of the MAT pedagogy and that they gained insight and confidence through the mixed reality practice.
89

A Wearable Head-mounted Projection Display

Martins, Ricardo F. 01 January 2010 (has links)
Conventional head-mounted projection displays (HMPDs) consist of a pair of miniature projection lenses, beamsplitters, and miniature displays mounted on a helmet, together with a retro-reflective screen placed strategically in the environment. We have extended the HMPD technology by integrating the screen into a fully mobile embodiment. Initial efforts to demonstrate this technology are documented, followed by an investigation of the diffraction effects and the image degradation caused by integrating the retro-reflective screen within the HMPD. The key contribution of this research is the conception and development of a mobile HMPD (M-HMPD). We include an extensive analysis of the macro- and microscopic properties of the retro-reflective screen. Furthermore, the overall performance of the optics is evaluated both in object space, for the optical designer, and in visual space, for the prospective users of this technology. This research effort also focuses on conceiving an M-HMPD aimed at dual indoor/outdoor applications. The M-HMPD retains the known advantages of HMPDs over eyepiece-type head-mounted displays (HMDs) of equal eye relief and field of view, such as ultralightweight optics (8 g per eye), imperceptible distortion (≤ 2.5%), and a lightweight headset (≤ 2.5 lbs). In addition, the M-HMPD presents an advantage over the preexisting HMPD in that it does not require a retro-reflective screen placed strategically in the environment. The newly developed M-HMPD is able to project clear images at three different locations within near- or far-field observation depths without loss of image quality. This particular M-HMPD embodiment targets mixed reality, augmented reality, and wearable display applications.
90

Training Professional School Counseling Students To Facilitate A Classroom Guidance Lesson And Strengthen Classroom Management Skills Using A Mixed Reality Environment

Gonzalez, Tiphanie 01 January 2011 (has links)
According to the ASCA National Model, school counselors are expected to deliver classroom guidance lessons; yet there has been little emphasis on graduate coursework targeting the development and implementation of guidance curriculum lessons in PSC training. A national study by Perusse, Goodnough, and Noel (2001) examined how counselor educators were training "entry level school counseling students" in the skills needed for them to be successful as PSCs. They found that of the 189 school counseling programs surveyed, only 3% offered a guidance curriculum course and 13.2% offered a foundations-in-education course, implying that many of the programs surveyed did not have a course specific to classroom guidance and/or classroom management. A classroom guidance curriculum is a developmental, systematic method by which students receive structured lessons that address academic, career, and personal/social competencies (ASCA, 2005). Classroom guidance lessons provide a forum for school counselors to address student needs such as educational resources, postsecondary opportunities, school transitions, bullying, violence prevention, social-emotional development, and academic competence in a classroom environment (Akos & Levitt, 2002; Akos, Cockman & Strickland, 2007; Gerler & Anderson, 1986). Through classroom guidance, school counselors can interact with many of the students they would normally not see on a day-to-day basis while providing information, building awareness, and holding discussions on topics that affect these student populations every day. The present study explores the use of an innovative method for training PSCs in classroom guidance and classroom management. This method involves the use of a mixed reality simulation that allows PSC students to learn and practice classroom guidance skills in a simulated environment.
