81 |
Near-Field Depth Perception in Optical See-Through Augmented Reality
Singh, Gurjot, 17 August 2013
Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, there are only four previous studies that specifically examined distance perception in AR within reaching distances. Distance perception in augmented reality therefore remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects and that matching is a more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments, with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments. This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments. This experiment found that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments will help us understand how depth perception operates in augmented reality.
|
82 |
ManiLoco: A Locomotion Method to Aid Concurrent Object Manipulation in Virtual Reality
Dayu Wan (13104111), 15 July 2022
In Virtual Reality (VR), users often need to explore a large virtual space within a limited physical space. However, teleportation, one of the most popular and commonly used methods for this room-scale problem, relies on hand-based controllers. In applications that require consistent hand interaction, such teleport methods may conflict with the users' hand operations and make them uncomfortable, thus affecting their experience.
To alleviate these limitations, this research designs and implements ManiLoco, a new interactive, object-based VR locomotion method that is eye- and foot-based and low-cost. This research also evaluates ManiLoco and compares it with the state-of-the-art Point & Teleport and Gaze Teleport methods in a within-subject experiment with 14 participants.
The results confirm the viability of the method and its applicability to such applications. ManiLoco makes users feel much more comfortable with their hands and lets them focus more on the hand interaction in the application while maintaining efficiency and presence. Further, the users' trajectory maps indicate that ManiLoco, despite the introduction of walking, remains applicable to room-scale tracking spaces. Finally, as a locomotion method that relies only on the VR head-mounted display (HMD) and software detection, ManiLoco can easily be added to any VR application as a plugin. A speculative sketch of such an HMD-only trigger follows below.
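The abstract does not spell out ManiLoco's detection mechanics, so the following Python sketch is only one plausible reading, assuming gaze selects the target object and a stepping motion is inferred from vertical HMD oscillation; every name, threshold, and engine call here is hypothetical.

```python
# Hypothetical sketch of an HMD-only locomotion trigger: gaze picks the
# target object, and a step is inferred from vertical head oscillation.
# All thresholds, helpers, and engine calls are assumptions.
import math

STEP_AMPLITUDE = 0.03   # metres of vertical head bob treated as a step
GAZE_DOT_MIN = 0.98     # cosine threshold for "looking at" an object

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v)) or 1.0
    return tuple(x / n for x in v)

def gaze_target(head_pos, gaze_dir, objects):
    """Return the object the user is looking at most directly, if any."""
    best, best_dot = None, GAZE_DOT_MIN
    for obj in objects:
        d = dot(gaze_dir, normalize(sub(obj.position, head_pos)))
        if d > best_dot:
            best, best_dot = obj, d
    return best

def update(head_pos, gaze_dir, recent_head_heights, player, objects):
    """Teleport next to a gazed-at object when a step is detected."""
    target = gaze_target(head_pos, gaze_dir, objects)
    stepped = max(recent_head_heights) - min(recent_head_heights) > STEP_AMPLITUDE
    if target is not None and stepped:
        player.move_near(target.position)  # hypothetical engine call
```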
|
83 |
Teleoperation Interfaces in Human-Robot Teams / Benutzerschnittstellen für Teleoperation in Mensch-Roboter Teams
Driewer, Frauke, January 2008 (PDF)
This work addresses the improvement of human-robot interaction in human-robot teams for teleoperation scenarios, such as robot-assisted firefighting missions. A concept and an architecture for a system supporting the teleoperation of human-robot teams are presented. The requirements for information exchange and processing, particularly for the rescue-operation application, are worked out. Furthermore, the design of the user interfaces for human-robot teams is presented, and principles for teleoperation systems and user interfaces are developed. All studies and approaches are implemented in a prototype system and validated in various user tests. Options for integrating 3D sensor data and for display on stereo visualization systems are shown. / This work deals with teams in teleoperation scenarios, where one human team partner (supervisor) guides and controls multiple remote entities (either robotic or human) and coordinates their tasks. Such a team needs an appropriate infrastructure for sharing information and commands. The robots need a level of autonomy that matches the assigned task. The humans in the team have to be provided with autonomous support, e.g. for information integration. The design and capabilities of the human-robot interfaces strongly influence the performance of the team as well as the subjective feeling of the human team partners. Here, it is important to elaborate the information demand as well as how information is presented. Such human-robot systems need to allow the supervisor to gain an understanding of what is going on in the remote environment (situation awareness) by providing the necessary information. This includes achieving fast assessment of the robot's or remote human's state. Processing, integration, and organization of data, as well as suitable autonomous functions, support decision making and task allocation and help to decrease the workload in this multi-entity teleoperation task. Interaction between humans and robots is improved by a common world model and a responsive system and robots. The remote human profits from a simplified user interface providing exactly the information needed for the task at hand. The topic of this thesis is the investigation of such teleoperation interfaces in human-robot teams, especially for high-risk, time-critical, and dangerous tasks. The aim is to provide a suitable human-robot team structure as well as to analyze the demands on the user interfaces. On one side, the theoretical background (model, interactions, and information demand) is examined. On the other side, real implementations of the system, robots, and user interfaces are presented and evaluated as testbeds for the claimed requirements. Rescue operations, more precisely fire-fighting, were chosen as the exemplary application scenario for this work. The challenges in such scenarios are high (highly dynamic environments, high risk, time criticality, etc.), and it can be expected that the results transfer to other applications with less strict requirements. The present work contributes to the introduction of human-robot teams in task-oriented scenarios, such as working in high-risk domains, e.g. fire-fighting. It covers the theoretical background of the required system, the analysis of related human factors concepts, as well as discussions on implementation.
An emphasis is placed on user interfaces (their design, requirements, and user testing) as well as on the techniques used (three-dimensional sensor data representation, mixed reality, and user interface design guidelines). Further, the potential integration of 3D sensor data as well as visualization on stereo visualization systems is introduced.
|
84 |
Mixed-Reality for Enhanced Robot Teleoperation / Mixed-Reality zur verbesserten Fernbedienung von Robotern
Sauer, Markus, January 2010 (PDF)
In recent years, robotics research has progressed to the point where the human-machine interface is increasingly becoming the most critical component for high overall performance of systems for the navigation and coordination of robots. This dissertation investigates how mixed reality technologies can be used in user interfaces to increase this overall performance. To this end, concepts and technologies are developed that, through evaluation in user tests, enable an optimized and user-centered design of mixed reality user interfaces. Both the technical requirements and the human factors are thus considered for a consistent system design. After a detailed problem analysis and the creation of a system model that includes the human as a key component, the application of the novel 3D time-of-flight camera is first analyzed and optimized for robot navigation as well as for use in mixed reality interfaces. Furthermore, it is shown how the network traffic of the video stream, the most important information element of most user interfaces for the navigation task, can be optimized at the network application layer in typical multi-robot networks with dynamic topologies and load situations. This makes it possible to preserve the video stream and stabilize the frame rate in otherwise typical failure scenarios. These advanced technologies are then also employed in the developed concept of the generic 3D mixed reality interface. This concept enables an integrated 3D presentation of the available information, so that spatial relationships between pieces of information are maintained and the number of mental transformations required of the human operator is reduced. At the same time, this approach also supports immersive stereo display technologies, which further promote spatial understanding of the remote situation. The approaches presented and evaluated in this dissertation also exploit the fact that local robot autonomy can nowadays be realized very robustly. This is used, for example, to realize an assistance system with variable autonomy, in which the remote operator receives, via force feedback combined with an integrated augmented reality interface, an impression of the situation at the remote workspace as well as of the robot's current navigation intention. The conducted user tests demonstrate the significant increase in navigation performance achieved by the developed approach. The robust local autonomy also enables the predictive mixed reality interface approach introduced in this dissertation. The control loop over the human, decoupled by this approach, makes it possible to significantly reduce the visibility of unavoidable system delays. In addition, this approach allows the two viewpoints helpful for navigation to be combined in a 3D user interface: the exocentric viewpoint and the egocentric viewpoint as an augmented reality view. / With the progress in robotics research, the human-machine interface increasingly becomes the major limiting factor for the overall performance of systems for remote navigation and coordination of robots.
In this monograph it is elaborated how mixed reality technologies can be applied to user interfaces in order to increase the overall system performance. Concepts, technologies, and frameworks are developed and evaluated in user studies, enabling novel user-centered approaches to the design of mixed reality user interfaces for remote robot operation. Both the technological requirements and the human factors are considered to achieve a consistent system design. Novel technologies such as 3D time-of-flight cameras are investigated for application in navigation tasks and in the developed concept of a generic mixed reality user interface. In addition, it is shown how the network traffic of a video stream can be shaped at the application layer in order to reach a stable frame rate in dynamic networks. The elaborated generic mixed reality framework enables an integrated 3D graphical user interface. The realized spatial integration and visualization of available information reduces the demand for mental transformations on the human operator and supports the use of immersive stereo devices. The developed concepts also make use of the fact that robust local autonomy components can be realized and thus incorporated as assistance systems for the human operator. A sliding autonomy concept is introduced, combining force feedback and visual augmented reality feedback. The force feedback component renders the robot's current navigation intention to the human operator, such that a true sliding autonomy with seamless transitions is achieved. The user studies prove the significant increase in navigation performance achieved by applying this concept. The generic mixed reality user interface together with robust local autonomy enables a further extension of the teleoperation system to a short-term predictive mixed reality user interface. With the presented concept of operation, it is possible to significantly reduce the visibility of system delays for the human operator. In addition, both advantageous viewpoints of a 3D graphical user interface for robot teleoperation (an exocentric view and an augmented reality view) can be combined.
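As an illustration of the kind of application-layer shaping described above, the following Python sketch adapts per-frame encoding quality to a measured bandwidth budget so the frame rate stays stable; the target rate, thresholds, and the `link`/`encode` interfaces are assumptions, not the dissertation's actual implementation.

```python
# Minimal sketch of application-layer video shaping: hold the frame rate
# stable by adapting per-frame size to the currently measured bandwidth.
# Target rate, quality bounds, and interface names are illustrative assumptions.

TARGET_FPS = 10.0

class AdaptiveVideoSender:
    def __init__(self, min_quality=20, max_quality=90):
        self.quality = max_quality
        self.min_quality = min_quality
        self.max_quality = max_quality

    def frame_budget_bytes(self, bandwidth_bps):
        """Bytes one frame may use at TARGET_FPS frames per second."""
        return bandwidth_bps / 8.0 / TARGET_FPS

    def send(self, frame, link, encode):
        """Encode with the current quality, then adapt quality to the budget."""
        data = encode(frame, self.quality)        # e.g. JPEG at given quality
        budget = self.frame_budget_bytes(link.measured_bandwidth_bps())
        if len(data) > budget and self.quality > self.min_quality:
            self.quality -= 5                     # shrink future frames
        elif len(data) < 0.7 * budget and self.quality < self.max_quality:
            self.quality += 5                     # spend spare capacity
        if len(data) <= 2 * budget:               # drop hopeless frames entirely
            link.transmit(data)
```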
|
85 |
Comparing MR/VR implementations in flight training simulation
Wang, Kexin, January 2023
The use of Extended Reality (XR) technologies for training is gaining popularity, and flight training is one field that has begun experimenting to find the best implementation for its needs. Both Virtual Reality (VR) and Mixed Reality (MR) have been used in flight simulators, but it is unclear which is the better fit. The research question is: which implementation (VR or MR) fits better regarding psychological and ergonomic fidelity for flight training simulation? A fidelity/validity framework is used to measure and compare VR and MR in a prototype flight training simulation. Participants from the Swedish Air Force Combat Simulation Center (FLSC) with experience in XR and flight training simulations took part in the study. The results showed that participants performed better with and reported a preference for MR over VR, citing factors such as ease of adaptation, movement, and concentration. The thematic analysis identified three themes: naturalness, intuitiveness, and precision. Based on these findings, MR is deemed a better fit for flight training, offering a higher level of psychological and ergonomic fidelity than VR. This study can inform future research on XR and flight training simulations and guide the incorporation of XR technologies in the design of training simulations.
|
86 |
Assessing the affect on short-term memory in students by comparing a serial recall Augmented Reality game and a card version
Nyman, Oskar; Dorell, Linus, January 2022
Background: AR technology has seen increasing use across domains involving cognitive activities, e.g. learning. Although there are studies that examine the qualities AR can bring to different territories of human culture, such as educational settings, few studies attempt to determine what effect AR can have on human memory, particularly short-term memory. Objectives: This research aims to assess the effect on short-term memory of an AR game compared with an analog game. Methods: The method proposed for this thesis work is a user study in a controlled environment. In order to test the hypothesis, a quantitative approach was selected, as two versions of the same game were compared, and a within-participant experiment was designed. Results: The results from the experiment indicate that the AR version yields a lower score on average compared to its non-virtual counterpart. Conclusion: Overall, our findings suggest that AR does not have a significant effect on short-term memory for digits.
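The abstract does not name the statistical test used; for a within-participant design like this one, a paired-samples t-test is a standard choice. The sketch below, with invented scores purely for illustration, shows how such a comparison could be run.

```python
# Illustrative analysis for a within-participant AR vs. card comparison:
# a paired-samples t-test on recall scores. The data here are invented.
from scipy import stats

ar_scores   = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]   # digits recalled, AR version
card_scores = [6, 7, 5, 7, 6, 7, 6, 5, 7, 6]   # digits recalled, card version

t, p = stats.ttest_rel(ar_scores, card_scores)
print(f"t = {t:.2f}, p = {p:.3f}")  # p >= 0.05 would match a non-significant result
```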
|
87 |
Exploring Auditive Story Worlds: Design Sensitivities for Multi-linear Real Time, Mixed Reality, Interactive Storytelling Systems / Utforskande av auditiva berättelsevärldar: Designinsikter för interaktivt, multilinjärt historieberättande i blandad verklighet och realtid
Blomkvist Rova, Ariel, January 2020
Like reading a book, stories told orally or acted out with the voice invite subjective co-construction of narrative events through the imagination. While this is an interesting characteristic, audio-based fictive storytelling is not well explored in HCI. Eavesdropper is a prototype system for mixed reality audio narrative, the stage being a miniature house and the actors residing in a spatialized, virtual audio world. This work accounts for the development and evaluation of some contextually unconventional properties of one current iteration of the system, aimed at facilitating an exploratory mindset: sections of the narrative unfold in parallel, controlled by parameters the user is only partially aware of. Through qualitative evaluation with users, I report on how these properties affected the way a story world was experienced, explored, and interpreted. The findings, coupled with reflections on the design choices made, are then synthesized into a set of design sensitivities meant to inform and spur discussion and further inquiry into similar systems exploring audio as the primary means of conveying narrative. / A story told orally, or drama acted out with the voice (e.g. radio theater), lets the listener subjectively construct events as images in their own imagination. This is an interesting property of audio-based storytelling, but one not particularly well explored within human-computer interaction, especially not with regard to fictive stories for entertainment purposes. Eavesdropper is an experimental mixed reality system in which the dramatic storytelling resides in the virtual sphere but is controlled through positioning in the physical room. In the current iteration, the physical site is a fully furnished small-scale model house that serves as a stage for the actors' voices. In this thesis I describe the development and evaluation work on the current iteration, with particular focus on some narratively and design-wise unconventional properties: the user is encouraged to subjectively explore the story world, and the drama's different threads run in parallel in real time without apparent logic. Through qualitative evaluation methods, I investigate how these properties affect how users experience, explore, and interpret the story world. The results, together with reflection on the design process, finally form the basis for a set of soft guidelines intended to orient future discussions about, or experiments with, similar systems where audio media is the primary carrier of dramatic storytelling.
|
88 |
A 2D video player for Virtual Reality and Mixed Reality / En 2D-videospelare för Virtual Reality och Mixed Reality
Filip, Mori, January 2017
While 360-degree video has recently been an object of research, 2D flat-frame videos in virtual environments (VEs) have seemingly not received the same amount of attention. Specifically, 2D video playback in Virtual Reality (VR) and Mixed Reality (MR) appears to lack exploration in both features and qualities of resolution, audio, and interaction, which ultimately contribute to presence. This paper reflects on the definitions of Virtual Reality and Mixed Reality, while extending the known concepts of immersion and presence to 2D videos in VEs. Relevant attributes of presence that can be applied to 2D videos were then investigated in the literature. The main problem was to determine the components and processes of the playback software in VR and MR, with company-requested features and delimitations in consideration, and possibly how to adjust those components to induce greater presence primarily within the 2D video and secondarily within the VE, although these media of visual information are related and thus influence each other. The thesis work took place at Advrty, a company developing a brand advertising platform for VR and MR. The exploration and testing of the components was done in increments: first by creating a basic standalone 2D video player, then, in a second increment, by implementing the video player in VR and MR. Comparisons were made between the proof-of-concept video players in VR and MR and the standalone video player. The results of the study show a feasible way of making a video player for VR and MR. In the discussion of the work, reflections were made on the use of open-source libraries in commercial software, the technical limitations of current VR and MR head-mounted displays (HMDs), relevant presence-inducing attributes, and the choice of method. / While 360-degree video has recently been the subject of studies, traditional rectangular 2D videos in virtual environments do not seem to have received the same attention. More specifically, 2D video playback in Virtual Reality (VR) and Mixed Reality (MR) appears to lack exploration of properties such as resolution, audio, and interaction, which ultimately contribute to presence in the video and the virtual environment. This paper reflects on the definitions of VR and MR, while extending the known concepts of immersion and presence for 2D video in virtual environments. Relevant attributes of presence applicable to 2D video were then investigated with the help of the literature. The main problem was to determine the components and processes of the software that is to play video in VR and MR, with company requests and delimitations in mind, and possibly how these components can be adjusted to increase the presence primarily in the 2D video and secondarily in the virtual environment, even though these media are related and can influence each other. The thesis work took place at Advrty, a company developing an advertising platform for VR and MR. The development of the components was done through incremental development in which a simple 2D video player was created, and then through a second incremental phase in which the video player was implemented in VR and MR. Comparisons were made with the proof-of-concept video player in VR and MR as well as the simple video player. In the discussion of the work, reflections were made on the use of open-source libraries in a commercial application, the technical limitations of current VR and MR head-mounted displays, relevant presence-inducing attributes, and the choice of method for the development of the video player.
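As a rough illustration of the playback core such a system needs (the thesis's actual stack is not specified here), the following Python sketch decodes frames with OpenCV and hands them to a hypothetical engine callback that textures a quad in the VE.

```python
# Minimal sketch of a 2D video player loop for a VE: decode a frame,
# convert it, and hand it to the engine as a texture on a quad.
# upload_texture() stands in for whatever the VR/MR engine actually offers.
import time
import cv2  # OpenCV for decoding; the thesis's actual stack may differ

def play(path, upload_texture):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frame_time = 1.0 / fps
    while True:
        start = time.monotonic()
        ok, frame = cap.read()
        if not ok:
            break                                  # end of stream
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        upload_texture(rgb)                        # engine-specific, assumed
        # Sleep off the remainder of the frame interval to hold the frame rate.
        time.sleep(max(0.0, frame_time - (time.monotonic() - start)))
    cap.release()
```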
|
89 |
Object Placement in AR without Occluding Artifacts in Reality / Placering av objekt i AR utan att dölja objekt i verkligheten
Sténson, Carl, January 2017
Placement of virtual objects in Augmented Reality is often done without regard for the artifacts in the physical environment. This thesis investigates how placement can be done with those artifacts taken into account. It only considers the placement of wall-mounted objects. Two prototypes were developed that use edges detected in RGB images, in combination with volumetric properties, to identify the artifacts and suggest areas for placing virtual objects. The first prototype analyzes each triangle in the model, which is computationally intensive and localizes the physical artifacts with low precision. The second prototype analyzes the detected RGB edges in world space, which proved to detect the features with precise localization and reduced calculation time. The second prototype manages this in a controlled setting; a more challenging environment would possibly pose other issues. In conclusion, placement in relation to volumetric and edge information from images of the environment is possible and could enhance the experience of being in a mixed reality, where physical and virtual objects coexist in the same world. / Placement of virtual objects in Augmented Reality is often done without taking objects in the physical environment into account. This study investigates how placement can be done with regard to the physical environment and its objects. It treats only the placement of objects on vertical surfaces. For the investigation, two prototypes were developed that use edge detection in photos together with a volumetric representation of the physical environment. In this environment, the prototypes suggest where objects can be placed. The first prototype analyzes every triangle in the volumetric representation of the room, which proved demanding and gave low precision in localizing objects in the environment. The second prototype analyzes the edges detected in the photos and projects them to their positions in the environment, which proved to find objects in the room with good precision and faster than the first prototype. The second prototype succeeds at this in a controlled environment; in a more complex and challenging environment, problems may arise. Placement of objects in Augmented Reality with regard to both a volumetric and a textured representation of an environment can be achieved. Placement can then occur in a more natural way, thereby strengthening the experience that virtual and real objects exist in the same world.
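As an illustration of the second prototype's idea, the following Python sketch marks low-edge-density windows of a wall image as candidate placement areas; the thresholds and window sizes are assumptions, and the thesis's world-space projection step is omitted.

```python
# Illustrative sketch: detect edges in an RGB image of a wall and suggest
# placement rectangles where edge density is low (i.e., where no physical
# artifact appears to be). Thresholds and window sizes are assumptions.
import numpy as np
import cv2

def placement_candidates(image_bgr, win=96, stride=48, max_edge_ratio=0.01):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)               # physical artifacts show as edges
    h, w = edges.shape
    candidates = []
    for y in range(0, h - win, stride):
        for x in range(0, w - win, stride):
            window = edges[y:y + win, x:x + win]
            if np.count_nonzero(window) / window.size < max_edge_ratio:
                candidates.append((x, y, win, win))  # edge-free: safe to occlude
    return candidates
```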
|
90 |
Mixed Reality Tailored to the Visually-Impaired
Omary, Danah M, 08 1900
The goal of the proposed device and software architecture is to apply the functionality of mixed reality (MR) in order to make a virtual environment that is more accessible to the visually impaired. We propose a glove-based system for MR that uses finger and hand movement tracking along with tactile feedback so that the visually impaired can interact with and obtain a more detailed sense of virtual objects and potentially even virtual environments. The software architecture makes current MR frameworks more accessible by augmenting the existing software and extensive 3D model libraries with both the interfacing of the glove-based system and the audibly navigable user interface (UI) of a virtual environment we have developed. We implemented a circuit with flexion/extension tracking for all five fingers of a single hand and variable vibration intensities for the vibration motors (vibromotors) on all five fingertips. The virtual environment can be hosted in a Windows 10 application. The virtual hand and its fingers can be moved with the system's input, and virtual fingertips touching virtual objects trigger the vibromotors to vibrate for as long as the objects are being touched. A rudimentary implementation of picking up and moving virtual objects inside the virtual environment is also included. In addition to the vibromotor responses, text-to-speech (TTS) is implemented in the application for when virtual fingertips touch virtual objects and for other relevant events in the virtual environment.
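A minimal sketch of the touch-feedback loop described above, assuming hypothetical `hand`, `motors`, and `tts` interfaces; the mapping from contact depth to vibration intensity is an illustrative choice, not the thesis's documented one.

```python
# Minimal sketch of the touch-feedback loop: when a virtual fingertip
# intersects a virtual object, drive that fingertip's vibromotor and
# announce the object via TTS. Hardware and engine calls are assumed names.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def feedback_step(hand, objects, motors, tts):
    """One frame of feedback: map contacts to vibration intensity and speech."""
    for finger in FINGERS:
        tip = hand.fingertip_position(finger)      # from flexion tracking
        touched = next((o for o in objects if o.contains(tip)), None)
        if touched is not None:
            # Deeper penetration -> stronger vibration (variable intensity).
            depth = touched.penetration_depth(tip)
            motors.set_intensity(finger, min(1.0, depth / 0.01))
            if not touched.announced:
                tts.speak(f"touching {touched.name}")
                touched.announced = True
        else:
            motors.set_intensity(finger, 0.0)
```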
|