191

Egocentric Depth Perception in Optical See-Through Augmented Reality

Jones, James Adam 11 August 2007 (has links)
Augmented Reality (AR) is a method of mixing computer-generated graphics with real-world environments. In AR, observers retain the ability to see their physical surroundings while additional (augmented) information is depicted as simulated graphical objects matched to the real-world view. In the following experiments, optical see-through head-mounted displays (HMDs) were used to present observers with both Augmented and Virtual Reality environments. Observers were presented with varied real, virtual, and combined stimuli with and without the addition of motion parallax. The apparent locations of the stimuli were then measured using quantitative methods of egocentric depth judgment. The data collected from these experiments were then used to determine how observers perceived egocentric depth with respect to both real-world and virtual objects.
192

Augmented Reality Visualization of Building Information Model

Lai, Yuchen 11 August 2017 (has links)
No description available.
193

Layered Space: Toward an Architecture of Superimposition

Sambuco, Adam J. 24 September 2018 (has links)
No description available.
194

Challenges of Using Augmented Reality to Teach Magnetic Field Concepts and Representations

Kumar, Aakash January 2022 (has links)
Many efforts to reform science education standards and structure have placed an emphasis on directing learners to communicate about concepts using external representations (ERs). Techniques to develop competencies with ERs often ask learners to develop understanding outside of a physical context while concurrently making connections back to the context, a very challenging task that often results in incomplete learning. This dissertation work is presented in part as a journal article and presents a study that compared the effectiveness of a computer simulation to an augmented reality (AR) simulation for developing magnetic field conceptual and representational knowledge. The AR technology provides a feature called a dynamic overlay that can present ERs in a real-world context. The study was conducted with six classes of ninth-grade physics students and evaluated learning, proficiency of exploration, and intrinsic motivation to engage with the activity and technology. Results from this study show that, contrary to expectations, students who used AR performed similarly to students who used the computer simulation on the conceptual and representational knowledge assessment. However, students who engaged with AR demonstrated worse exploration on average and had lower levels of intrinsic motivation. These outcomes provide evidence of the difficulty of using AR for teaching the ERs of challenging concepts and the complexities of implementing novel technologies in a standard classroom environment.
195

Feed Me: an in-situ Augmented Reality Annotation Tool for Computer Vision

Ilo, Cedrick K. 02 July 2019 (has links)
The power of today's technology has enabled the combination of Computer Vision (CV) and Augmented Reality (AR) to allow users to interface with digital artifacts during indoor and outdoor activities. For example, AR systems can feed images of the local environment to a trained neural network for object detection. However, sometimes these algorithms can misclassify an object. In these cases, users want to correct the model's misclassification by adding labels to unrecognized objects or re-classifying recognized objects. Depending on the number of corrections, in-situ annotation may be a tedious activity for the user. This research will focus on how in-situ AR annotation can aid CV classification and what combination of voice and gesture techniques is efficient and usable for this task. / Master of Science / The power of today's technology has allowed new inventions such as computer vision and Augmented Reality to work together seamlessly. The reason why computer scientists rave so much about computer vision is that it can enable a computer to see the world as humans do. With the rising popularity of Niantic's Pokemon Go, Augmented Reality has become a new research area that researchers around the globe have taken part in to make it more stable and as useful as its next of kin, virtual reality. For example, Augmented Reality can support users in gaining a better understanding of their environment by overlaying digital content into their field of view. Combining Computer Vision with Augmented Reality could aid the user further by detecting, registering, and tracking objects in the environment. However, sometimes a Computer Vision algorithm can falsely detect an object in a scene. In such cases, we wish to use Augmented Reality as a medium to update the Computer Vision object detection algorithm in-situ, meaning in place. With this idea, a user will be able to annotate all the objects within the camera's view that were not detected by the object detection model and update any inaccurate classifications of those objects. This research will primarily focus on visual feedback for in-situ annotation and the user experience of the Feed Me voice and gesture interface.
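To make the correction loop concrete, here is a minimal sketch. It is not taken from the Feed Me system itself: the names (Detection, AnnotationStore, relabel, add_missing) are hypothetical and simply illustrate how user corrections gathered by an in-situ AR annotation tool could be collected for later retraining of an object detector.

```python
# Illustrative sketch only; class and method names are hypothetical, not from the thesis.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    label: str                      # label proposed by the object detector
    box: Tuple[int, int, int, int]  # (x, y, width, height) in image pixels
    confidence: float

@dataclass
class AnnotationStore:
    corrections: List[Detection] = field(default_factory=list)

    def relabel(self, det: Detection, new_label: str) -> None:
        """User re-classifies a recognized but mislabeled object."""
        self.corrections.append(Detection(new_label, det.box, 1.0))

    def add_missing(self, label: str, box: Tuple[int, int, int, int]) -> None:
        """User labels an object the model failed to detect."""
        self.corrections.append(Detection(label, box, 1.0))

# Example: the detector calls a mug a "bowl"; the user corrects it in place
# and also labels an object the model missed entirely.
store = AnnotationStore()
wrong = Detection("bowl", (120, 80, 60, 60), 0.72)
store.relabel(wrong, "mug")
store.add_missing("keyboard", (300, 200, 180, 70))
print([(d.label, d.box) for d in store.corrections])
```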
196

Designing Cultural Heritage Experiences for Head-Worn Augmented Reality

Gutkowski, Nicolas Joshua 27 May 2021 (has links)
History education is important, as it provides context for current events. Cultural heritage sites, such as historic buildings, ruins, or archaeological digs, can provide a glimpse into the past. The use of different technologies, including augmented and virtual reality, to teach history has expanded. Augmented reality (AR) in particular can be used to enhance real artifacts and places to allow for deeper understanding. However, the experiences born out of these efforts primarily aim to enhance museum visits and are presented as handheld experiences on smartphones or tablets. The use of head-worn augmented reality for on-site history education remains a gap. There is a need to examine how on-site historical experiences should be designed for AR headsets. This work aims to explore best practices for creating such experiences through a case study on the Solitude AR Tour. Additionally, comparisons between designing for head-worn AR and handheld AR are presented. / Master of Science / There is a need for the general public to be informed about historical events which have shaped the present day. Informal education through museums or guided tours around historical sites provides an engaging method for people to become more knowledgeable about the details of a time period or a place's past. The use of augmented reality, which is the enhancement of the real world through virtual content visible through some sort of display such as a smartphone, has been applied to history education in these settings. The educational apps created focus on adding onto museum exhibits, rather than historical locations such as buildings or other structures. Additionally, they have focused on using smartphones or tablets as the medium for virtual content, rather than headsets, which involve wearing a display rather than holding one. This work aims to address the lack of headset-based, on-site history experiences by posing questions about what methods work best for designing such an app. Comparisons to handheld design are also made to provide information on how the approach differs.
197

Effects of Augmented Reality Head-up Display Graphics’ Perceptual Form on Driver Spatial Knowledge Acquisition

De Oliveira Faria, Nayara 16 December 2019 (has links)
In this study, we investigated whether modifying augmented reality head-up display (AR HUD) graphics' perceptual form influences spatial learning of the environment. We employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator at the COGENT lab at Virginia Tech. Two different navigation cue systems were compared: world-relative and screen-relative. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. We captured empirical data regarding changes in driving behaviors, glance behaviors, spatial knowledge acquisition (measured in terms of landmark and route knowledge), reported workload, and usability of the interface. Results showed that both screen-relative and world-relative AR head-up display interfaces had a similar impact on the levels of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. Even though our initial assumption that the conformal AR HUD interface would draw drivers' attention to a specific part of the display was correct, this type of interface did not help increase spatial knowledge acquisition. This finding contrasts with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective than screen-relative graphics. We suggest that simple, screen-fixed designs may indeed be effective in certain contexts. Finally, eye-tracking analyses showed fundamental differences in the way participants visually interacted with different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers. In terms of visual attention allocation, the world-relative condition was typically associated with fewer glances in total, but glances of longer duration. / M.S. / As humans, we develop mental representations of our surroundings as we move through and learn about our environment. When navigating by car, developing robust mental representations (spatial knowledge) of the environment is crucial in situations where technology fails, or where we need to find locations not included in a navigation system's database. Over-reliance on traditional in-vehicle navigation devices has been shown to negatively impact our ability to navigate based on our own internal knowledge. Recently, the automotive industry has been developing new in-vehicle devices that have the potential to promote more active navigation and potentially enhance spatial knowledge acquisition. Vehicles with augmented reality (AR) graphics delivered via head-up displays (HUDs) present navigation information directly within drivers' forward field of view, allowing drivers to gather the information they need without looking away from the road. While this AR navigation technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. In this work, we present a user study that examines how screen-relative and world-relative AR HUD interface designs affect drivers' spatial knowledge acquisition.
Results showed that both screen-relative and world-relative AR head-up display interfaces had a similar impact on the levels of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. However, eye-tracking analyses showed fundamental differences in the way participants visually interacted with different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers.
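As an illustration of the two cue-anchoring strategies this study compares, here is a simplified sketch; it assumes a flat 2D world, a simple angular projection, and hypothetical function names, and is not the simulator code used in the thesis.

```python
# Minimal sketch (not from the thesis): how a cue's on-screen position could be
# computed each frame under world-relative versus screen-relative anchoring.
import math

def world_relative_cue(cue_world_xy, car_xy, car_heading_rad,
                       fov_rad=math.radians(60), screen_w=1.0):
    """Project a world-fixed cue (e.g., a post at an intersection) to a screen x.
    Returns None when the cue is outside the field of view."""
    dx = cue_world_xy[0] - car_xy[0]
    dy = cue_world_xy[1] - car_xy[1]
    bearing = math.atan2(dy, dx) - car_heading_rad       # cue angle relative to heading
    if abs(bearing) > fov_rad / 2:
        return None                                      # cue not currently visible
    return 0.5 * screen_w * (1 + bearing / (fov_rad / 2))  # normalized screen x in [0, 1]

def screen_relative_cue(turn_direction):
    """Screen-fixed arrow: a constant position regardless of where the car is."""
    return 0.25 if turn_direction == "left" else 0.75

# The world-relative cue drifts across the display as the car approaches the turn;
# the screen-relative arrow stays put.
print(world_relative_cue((50.0, 10.0), (0.0, 0.0), 0.0))
print(screen_relative_cue("right"))
```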
198

Glanceable AR: Towards a Pervasive and Always-On Augmented Reality Future

Lu, Feiyu 06 July 2023 (has links)
Augmented reality head-worn displays (AR HWDs) have the potential to assist personal computing and the acquisition of everyday information. With advancements in hardware and tracking, these devices are becoming increasingly lightweight and powerful. They could eventually have the same form factor as normal pairs of eyeglasses, be worn all day, and overlay information pervasively on top of the real world anywhere and anytime to continuously assist people's tasks. However, unlike traditional mobile devices, AR HWDs are worn on the head and always visible. If designed without care, the displayed virtual information could be distracting, overwhelming, and take the user's attention away from important real-world tasks. In this dissertation, we research methods for appropriate information displays and interactions with future all-day AR HWDs by seeking answers to four questions: (1) how to mitigate distractions of AR content to the users; (2) how to prevent AR content from occluding the real-world environment; (3) how to support scalable on-the-go access to AR content; and (4) how everyday users perceive using AR systems for daily information acquisition tasks. Our work builds upon a theory we developed called Glanceable AR, in which digital information is displayed outside the central field of view of the AR display to minimize distractions, but can be accessed through a quick glance. Through five projects covering seven studies, this work provides theoretical and empirical knowledge to prepare us for a pervasive yet unobtrusive everyday AR future, in which the overlaid AR information is easily accessible, non-invasive, responsive, and supportive. / Doctor of Philosophy / Augmented reality (AR) refers to a technology in which digital information is overlaid on the real-world environment. This provides great potential for everyday use, because users can view and interact with digital apps anywhere and anytime, even when physical screens are unavailable. However, depending on how the digital information is displayed, it could quickly occupy the user's view, block the real-world environment, and distract or overwhelm users. In this dissertation work, we research ways to deliver and interact with virtual information displayed on AR head-worn displays (HWDs). Our solution centers around the Glanceable AR concept, in which digital information is displayed in the periphery of the user's view to remain unobtrusive, but can be accessed through a glance when needed. Through empirical evaluations, we researched the feasibility of such solutions and distilled lessons learned for future deployment of AR systems in people's everyday lives.
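The Glanceable AR idea described above can be sketched in a few lines; the widget class, angular threshold, and gaze handling below are illustrative assumptions rather than the dissertation's actual implementation.

```python
# Toy sketch of the Glanceable AR concept: content lives at a fixed angular offset
# in the periphery and only expands when the user glances toward it.
from dataclasses import dataclass

@dataclass
class GlanceableWidget:
    yaw_offset_deg: float            # where the widget sits relative to straight ahead
    expand_within_deg: float = 10.0  # how close the gaze must be for the widget to expand
    expanded: bool = False

    def update(self, gaze_yaw_deg: float) -> None:
        # Expand when the user glances toward the widget, minimize otherwise.
        self.expanded = abs(gaze_yaw_deg - self.yaw_offset_deg) <= self.expand_within_deg

notifications = GlanceableWidget(yaw_offset_deg=35.0)  # parked in the right periphery
for gaze in (0.0, 20.0, 33.0):                         # user gradually glances right
    notifications.update(gaze)
    print(f"gaze={gaze:>5.1f} deg -> expanded={notifications.expanded}")
```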
199

An Empirical Study of the Effects of Context-Switch, Object Distance, and Focus Depth on Human Performance in Augmented Reality

Gupta, Divya 21 June 2004 (has links)
Augmented reality provides its user with additional information not available through the natural real-world environment. This additional information displayed to the user potentially poses a risk of perceptual and cognitive load and vision-based difficulties. The presence of real-world objects together with virtual augmenting information requires the user to repeatedly switch eye focus between the two in order to extract information from both environments. Switching eye focus may result in additional time on user tasks and lower task accuracy. Thus, one of the goals of this research was to understand the impact of switching eye focus between real-world and virtual information on user task performance. Secondly, focus depth, which is an important parameter and a depth cue, may affect the user's view of the augmented world. If focus depth is not adjusted properly, it may result in vision-based difficulties and reduce speed, accuracy, and comfort while using an augmented reality display. Thus, the second goal of this thesis was to study the effect of focus depth on task performance in augmented reality systems. In augmented reality environments, real-world and virtual information are found at different distances from the user. To focus at different depths, the user's eyes need to accommodate and converge, which may strain the eyes and degrade performance on tasks. However, no research in augmented reality has explored this issue. Hence, the third goal of this thesis was to determine if the distance of virtual information from the user impacts task performance. To accomplish these goals, a 3x3x3 within-subjects design was used. The experimental task for the study required the user to repeatedly switch eye focus between virtual text and real-world text. A monocular see-through head-mounted display was used for this research. Results of this study revealed that switching between real-world and virtual information in augmented reality is extremely difficult when information is displayed at optical infinity. Virtual information displayed at optical infinity may be unsuitable for tasks of the nature used in this research. There was no impact of focus depth on user task performance, and hence it is preliminarily recommended that manufacturers of head-mounted displays may only need to make fixed-focus-depth displays; this clearly merits additional intensive research. Further, user task performance was better when focus depth, virtual information, and real-world information were all at the same distance from the user than when they were mismatched. Based on this result, we recommend presenting virtual information at the same distance as the real-world information of interest. / Master of Science
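As a rough illustration of why switching focus between near virtual text and distant real-world text is demanding (this calculation is ours, not the thesis author's), the vergence angle the eyes must adopt falls off sharply with viewing distance, assuming a typical interpupillary distance (IPD) of about 6.3 cm:

```latex
% Vergence demand as a function of viewing distance d (illustrative values).
\theta(d) = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right),
\qquad
\theta(0.5\,\mathrm{m}) \approx 7.2^{\circ}, \quad
\theta(2\,\mathrm{m}) \approx 1.8^{\circ}, \quad
\theta(\infty) = 0^{\circ}.
```

A target at arm's length therefore demands several degrees of convergence while a target at optical infinity demands essentially none, which is consistent with the difficulty the study reports when virtual information is rendered at optical infinity.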
200

Cloud-based augmented reality as a disruptive technology for Higher Education

Mohamad, A.M., Kamaruddin, S., Hamin, Z., Wan Rosli, Wan R., Omar, M.F., Mohd Saufi, N.N. 25 September 2023 (has links)
No / Augmented reality (AR) within the context of higher education is an approach to engage students in experiential learning by utilising AR technology. This paper discusses the process undertaken by a teacher in higher education in designing and implementing a cloud-based AR lesson for students. The methodology was a case study at one institution of higher learning in Malaysia. The AR teaching process involves six stages, beginning with the selection of the course, followed by the selection of the topic, the design of the AR teaching plan, and the implementation of the AR lesson. Upon completion of the AR lesson, the teacher and students provide reflections on their experiences. The process concludes with the teacher's improvement of the AR teaching plan. The study found that cloud-based AR has indeed disrupted higher education by providing richer learning experiences for students, as well as enhanced teaching practices for teachers. It is hoped that this paper provides insights into AR teaching and learning practices for teachers in general, and within the context of higher education in particular. It is also intended that the six-step process outlined in this paper serves as a reference that can be replicated by teachers who are interested in designing and implementing AR lessons for their own courses.
