191 |
Impact of context switching and focal distance switching on human performance in an augmented reality system / Arefin, Mohammed Safayet 01 May 2020 (has links)
Most current augmented reality (AR) displays present content at a fixed focal demand, while real-world stimuli can occur at a variety of focal distances. To integrate information, users need to continuously switch eye focus between virtual and real-world information. Previously, Gabbard, Mehra, and Swan (2018) examined these issues using a text-based visual search task on a monocular AR display. This thesis replicated and extended the previous experiment by including a new experimental variable, stereopsis (stereo, mono), and fully crossing the variables of context switching and focal distance switching using an AR haploscope. The results from the monocular condition indicate successful replication, consistent with the hypothesis that the findings are a general property of AR. The outcome of the stereo condition confirms the same adverse effects of context switching and focal distance switching. Further, participants had better performance and less eye fatigue in the stereo condition than in the monocular condition.
|
192 |
Egocentric Depth Perception in Optical See-Through Augmented Reality / Jones, James Adam 11 August 2007 (has links)
Augmented Reality (AR) is a method of mixing computer-generated graphics with real-world environments. In AR, observers retain the ability to see their physical surroundings while additional (augmented) information is depicted as simulated graphical objects matched to the real-world view. In the following experiments, optical see-through head-mounted displays (HMDs) were used to present observers with both Augmented and Virtual Reality environments. Observers were presented with varied real, virtual, and combined stimuli with and without the addition of motion parallax. The apparent locations of the stimuli were then measured using quantitative methods of egocentric depth judgment. The data collected from these experiments were then used to determine how observers perceived egocentric depth with respect to both real-world and virtual objects.
|
193 |
Augmented Reality Visualization of Building Information Model / Lai, Yuchen 11 August 2017 (has links)
No description available.
|
194 |
Layered Space: Toward an Architecture of Superimposition / Sambuco, Adam J. 24 September 2018 (has links)
No description available.
|
195 |
Challenges of Using Augmented Reality to Teach Magnetic Field Concepts and Representations / Kumar, Aakash January 2022 (has links)
Many efforts to reform science education standards and structures have placed an emphasis on directing learners to communicate about concepts using external representations (ERs). Techniques to develop competencies with ERs often ask learners to develop understanding outside of a physical context while concurrently making connections back to that context, a very challenging task that often results in incomplete learning.
This dissertation, presented in part as a journal article, describes a study that compared the effectiveness of a computer simulation to that of an augmented reality (AR) simulation for developing magnetic field conceptual and representational knowledge. The AR technology provides a feature called a dynamic overlay that can present ERs in a real-world context. The study was conducted with six classes of ninth-grade physics students and evaluated learning, proficiency of exploration, and intrinsic motivation to engage with the activity and technology.
Results from this study show that, contrary to expectations, students who used AR performed similarly to students who used the computer simulation on the conceptual and representational knowledge assessment. However, students who engaged with AR demonstrated worse exploration on average and had lower levels of intrinsic motivation. These outcomes provide evidence of the difficulty of using AR to teach the ERs of challenging concepts and of the complexities of integrating novel technologies into a standard classroom environment.
|
196 |
Glanceable AR: Towards a Pervasive and Always-On Augmented Reality Future / Lu, Feiyu 06 July 2023 (has links)
Augmented reality head-worn displays (AR HWDs) have the potential to assist personal computing and the acquisition of everyday information. With advancements in hardware and tracking, these devices are becoming increasingly lightweight and powerful. They could eventually have the same form factor as normal pairs of eyeglasses, be worn all day, and overlay information pervasively on top of the real world anywhere and anytime to continuously assist people's tasks. However, unlike traditional mobile devices, AR HWDs are worn on the head and always visible. If designed without care, the displayed virtual information could be distracting, overwhelming, and take away the user's attention from important real-world tasks. In this dissertation, we research methods for appropriate information displays and interactions with future all-day AR HWDs by seeking answers to four questions: (1) how to mitigate distractions of AR content to the users; (2) how to prevent AR content from occluding the real-world environment; (3) how to support scalable on-the-go access to AR content; and (4) how everyday users perceive using AR systems for daily information acquisition tasks. Our work builds upon a theory we developed called Glanceable AR, in which digital information is displayed outside the central field of view of the AR display to minimize distractions, but can be accessed through a quick glance. Through five projects covering seven studies, this work provides theoretical and empirical knowledge to prepare us for a pervasive yet unobtrusive everyday AR future, in which the overlaid AR information is easily accessible, non-invasive, responsive, and supportive. / Doctor of Philosophy / Augmented reality (AR) refers to a technology in which digital information is overlaid on the real-world environment. This provides great potential for everyday use, because users can view and interact with digital apps anywhere and anytime, even when physical screens are unavailable.
However, depending on how the digital information is displayed, it could quickly occupy the user’s view, block the real-world environment, and distract or overwhelm users. In this dissertation work, we research ways to deliver and interact with virtual information displayed in AR head-worn displays (HWDs). Our solution centers around the Glanceable AR concept, in which digital information is displayed in the periphery of users’ views to remain unobtrusive, but can be accessed through a glance when needed. Through empirical evaluations, we researched the feasibility of such solutions, and distilled lessons learned for future deployment of AR systems in people’s everyday lives.
|
197 |
Feed Me: an in-situ Augmented Reality Annotation Tool for Computer Vision / Ilo, Cedrick K. 02 July 2019 (has links)
The power of today's technology has enabled the combination of Computer Vision (CV) and Augmented Reality (AR), allowing users to interface with digital artifacts across indoor and outdoor activities. For example, AR systems can feed images of the local environment to a trained neural network for object detection. However, these algorithms can sometimes misclassify an object. In these cases, users want to correct the model's misclassification by adding labels to unrecognized objects or re-classifying recognized objects. Depending on the number of corrections, in-situ annotation may be a tedious activity for the user. This research focuses on how in-situ AR annotation can aid CV classification and on which combinations of voice and gesture techniques are efficient and usable for this task. / Master of Science / The power of today's technology has enabled new inventions such as computer vision and Augmented Reality to work together seamlessly. Computer scientists rave about computer vision because it can enable a computer to see the world as humans do. With the rising popularity of Niantic's Pokemon Go, Augmented Reality has become a research area that researchers around the globe are working to make more stable and as useful as its next of kin, virtual reality. For example, Augmented Reality can support users in gaining a better understanding of their environment by overlaying digital content into their field of view. Combining Computer Vision with Augmented Reality could aid the user further by detecting, registering, and tracking objects in the environment. However, a Computer Vision algorithm can sometimes falsely detect an object in a scene. In such cases, we wish to use Augmented Reality as a medium to update the Computer Vision object detection algorithm in situ, meaning in place.
With this idea, a user will be able to annotate all the objects within the camera's view that were not detected by the object detection model and update any inaccurate classifications of the objects. This research will primarily focus on visual feedback for in-situ annotation and the user experience of the Feed Me voice and gesture interface.
|
198 |
Designing Cultural Heritage Experiences for Head-Worn Augmented Reality / Gutkowski, Nicolas Joshua 27 May 2021 (has links)
History education is important, as it provides context for current events. Cultural heritage sites, such as historic buildings, ruins, or archaeological digs, can provide a glimpse into the past. The use of different technologies, including augmented and virtual reality, to teach history has expanded. Augmented reality (AR) in particular can be used to enhance real artifacts and places to allow for deeper understanding. However, the experiences born out of these efforts primarily aim to enhance museum visits and are presented as handheld experiences on smartphones or tablets. The use of head-worn augmented reality for on-site history education remains a gap. There is a need to examine how on-site historical experiences should be designed for AR headsets. This work aims to explore best practices for creating such experiences through a case study on the Solitude AR Tour. Additionally, comparisons between designing for head-worn AR and handheld AR are presented. / Master of Science / There is a need for the general public to be informed about historical events that have shaped the present day. Informal education through museums or guided tours around historical sites provides an engaging way for people to become more knowledgeable about the details of a time period or a place's past. Augmented reality, which enhances the real world with virtual content visible through some sort of display such as a smartphone, has been applied to history education in these settings. The educational apps created focus on adding onto museum exhibits rather than historical locations such as buildings or other structures. Additionally, they have focused on using smartphones or tablets as the medium for virtual content rather than headsets, which involve wearing a display rather than holding one. This work aims to address the lack of headset-based, on-site history experiences by posing questions about which methods work best for designing such an app.
Comparisons to handheld design are also made to provide information on how the approach differs.
|
199 |
Effects of Augmented Reality Head-up Display Graphics’ Perceptual Form on Driver Spatial Knowledge Acquisition / De Oliveira Faria, Nayara 16 December 2019 (has links)
In this study, we investigated whether modifying augmented reality head-up display (AR HUD) graphics’ perceptual form influences spatial learning of the environment. We employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender, using a fixed-base, medium-fidelity driving simulator at the COGENT lab at Virginia Tech. Two different navigation cue systems were compared: world-relative and screen-relative. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a traditional screen-fixed arrow located directly ahead of the participant on the right or left side of the HUD. We captured empirical data regarding changes in driving behaviors, glance behaviors, spatial knowledge acquisition (measured in terms of landmark and route knowledge), reported workload, and usability of the interface.
Results showed that both screen-relative and world-relative AR head-up display interfaces had a similar impact on the levels of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. Even though our initial assumption that the conformal AR HUD interface would draw drivers’ attention to a specific part of the display was correct, this type of interface did not help increase spatial knowledge acquisition. This finding contrasts with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective than screen-relative graphics. We suggest that simple, screen-fixed designs may indeed be effective in certain contexts.
Finally, eye-tracking analyses showed fundamental differences in the way participants visually interacted with the different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers. Analysis of visual attention allocation showed that the world-relative condition was typically associated with fewer glances in total, but glances of longer duration. / M.S. / As humans, we develop mental representations of our surroundings as we move through and learn about our environment. When navigating by car, developing robust mental representations (spatial knowledge) of the environment is crucial in situations where technology fails or we need to find locations not included in a navigation system’s database. Over-reliance on traditional in-vehicle navigation devices has been shown to negatively impact our ability to navigate based on our own internal knowledge. Recently, the automotive industry has been developing new in-vehicle devices that have the potential to promote more active navigation and potentially enhance spatial knowledge acquisition. Vehicles with augmented reality (AR) graphics delivered via head-up displays (HUDs) present navigation information directly within drivers’ forward field of view, allowing drivers to gather the information they need without looking away from the road. While this AR navigation technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. In this work, we present a user study that examines how screen-relative and world-relative AR HUD interface designs affect drivers’ spatial knowledge acquisition.
Results showed that both screen-relative and world-relative AR head-up display interfaces had a similar impact on the levels of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. However, eye-tracking analyses showed fundamental differences in the way participants visually interacted with the different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers.
|
200 |
Precueing Manual Tasks in Augmented and Virtual Reality / Liu, Jen-Shuo January 2024 (has links)
Work on Virtual Reality (VR) and Augmented Reality (AR) task interaction and visualization paradigms has typically focused on providing information about the current task step (a cue) immediately before or during its performance. For sequential tasks that involve multiple steps, providing information about the next step (a precue) might also benefit the user. Some research has shown the advantages of simultaneously providing a cue and a precue in path-following tasks. We explore the use of precues in VR and AR for both path-following and object-manipulation tasks involving rotation. We address the effectiveness of different numbers and kinds of precues for different tasks. To achieve this, we conducted a series of user studies:
First, we investigate whether it would be possible to improve efficiency by precueing information about multiple upcoming steps before completing the current step in a planar path-following task. To accomplish this, we developed a VR user study comparing task completion time and subjective metrics for different levels and styles of precueing. Our task-guidance visualizations vary the precueing level (number of steps precued in advance) and style (whether the path to a target is communicated through a line to the target, and whether the place of a target is communicated through graphics at the target). Participants in our study performed best when given two to three precues for visualizations using lines to show the path to targets. However, performance degraded when four precues were used. On the other hand, participants performed best with only one precue for visualizations without lines, showing only the places of targets, and performance degraded when a second precue was given. In addition, participants performed better using visualizations with lines than ones without lines.
Second, we extend the idea of precueing information about multiple steps to a more complex task, whose subtasks involve moving to and picking up a physical object, moving that object to a designated place in the same plane while rotating it to a specific angle in the plane, and depositing it. We conducted two user studies to examine how people accomplish this task while wearing an AR headset, guided by different visualizations that cue and precue movement and rotation. Participants performed best when given movement information for two successive subtasks (one movement precue) and rotation information for a single subtask (no rotation precue). In addition, participants performed best when the visualization of how much to rotate was split across the manipulated object and its destination.
Third, we investigate whether and how much precued rotation information might improve user performance in AR. We consider two unimanual tasks: one requires a participant to make sequential rotations of a single physical object in a plane, and the other requires the participant to move their hand between multiple such objects to rotate them in the plane in sequence. We conducted a user study to explore these two tasks using circular arrows to communicate rotation. In the single-object task, we examined the impact of number of precues and visualization style on participant performance. Results show that precues could improve performance and that arrows with highlighted heads and tails, with each rotation destination aligned with the next origin, yielded the shortest completion time on average. In the multiple-object task, we explored whether rotation precues can be helpful in conjunction with movement precues. Here, using a rotation cue without rotation precues in conjunction with a movement cue and movement precues performed the best, implying that rotation precues were not helpful when movement was also required.
Fourth, we address sequential tasks involving 3DoF rotations and 3DoF translations in headset AR. In each step, a participant picks up a physical object, rotates it in 3D while translating it in 3D, and deposits it in a target 6DoF pose. We designed and compared two types of visualizations for cueing and precueing steps in such a task: Action-based visualizations show the actions needed to carry out a step and goal-based visualizations show the desired end state of a step. We conducted a user study to evaluate these visualizations and their efficacy for precueing. Participants performed better with goal-based visualizations than with action-based visualizations, and most effectively with goal-based visualizations aligned with the Euler axis. However, only a few of our participants benefited from precues, possibly because of the cognitive load of 3D rotations.
In summary, we showed that using precueing can improve the speed at which participants perform different types of tasks. In our VR path-following task, participants were able to benefit from two to three precues using lines to show the path to targets. In our object-manipulation task with 2DoF movement and 1DoF rotation, participants performed best when given movement information for two successive subtasks and rotation information for a single subtask. Further, in our later study focusing on rotation, we found that participants were able to use rotation precues in our single-object task, while in the multiple-object task, rotation precues were not beneficial to participants. Finally, in a study on a sequential 6DoF task, participants performed better with goal-based visualizations than with action-based visualizations.
|