71

Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

Moser, Kenneth R 12 August 2016 (has links)
This dissertation examines the developments and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the widest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and canonical descriptions of AR and OST display development is provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the evaluation metrics and practices prevailing in them. The original contributions begin with a user study evaluation comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach and provides insight into the usability limitations of current techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware. Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead at future extensions and paths that continued calibration research should explore.
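The abstract does not spell out the underlying mathematics, but manual OST-HMD calibration in the SPAAM family reduces to estimating a 3x4 projection from screen/world alignment pairs collected by the user. A minimal sketch of that estimation step, assuming a numpy environment (the function names and the DLT formulation below are illustrative, not taken from the dissertation):

```python
# Hedged sketch: classic manual OST-HMD calibration (SPAAM-style) gathers pairs of
# tracked 3D world points and the 2D screen positions the user aligned them with,
# then solves a homogeneous least-squares problem (DLT) for a 3x4 projection matrix.
import numpy as np

def estimate_projection(world_pts, screen_pts):
    """world_pts: (N,3) tracked points; screen_pts: (N,2) aligned reticle positions; N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    G = vt[-1].reshape(3, 4)      # null-space vector = least-squares projection estimate
    return G / G[2, 3]            # normalize scale for readability

def project(G, world_pt):
    p = G @ np.append(world_pt, 1.0)
    return p[:2] / p[2]           # pixel location after perspective divide
```

Registration error can then be visualized by re-projecting tracked points through the estimated matrix and comparing them against newly collected alignments.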
72

Evaluating Collaborative Cues for Remote Affinity Diagramming Tasks in Augmented Reality

Llorens, Nathaniel Roman 03 September 2021 (has links)
This thesis documents the design and implementation of an augmented reality (AR) application that could be extended to support group brainstorming tasks remotely. Additionally, it chronicles our investigation into the helpfulness of traditional collaborative cues in this novel application of augmented reality. We implemented IdeaSpace, an interactive application that emulates an affinity diagramming environment on an AR headset. In our application, users can organize and manipulate virtual sticky notes around a central virtual board. We performed a user study, with each session requiring users to perform an affinity diagramming clustering task with and without common collaborative cues. Our results indicate that the presence or absence of cues has little effect on this task, or that other factors, such as learning effects, played a larger role than the cue condition. Our results also show that our application's usability could be improved. We conclude this document with a discussion of our results and the design implications that may arise from them. / Master of Science / Our project was aimed at creating an app for modern augmented reality headsets that could help people perform group brainstorming sessions remotely from each other. We were also interested in finding out the benefits or downsides of some of the design decisions that recent research in remote augmented reality recommends, such as lines showing where a user is focusing and visualizations of a user's head and hands. In our app, which we dubbed IdeaSpace, users were faced with a virtual corkboard and a number of virtual sticky notes, similar to what they might expect in a traditional brainstorming session. We ran three-person study sessions comparing design techniques recommended by the literature against the absence of such techniques and did not find that they helped much in our task. We also found that our application was not as usable as we had hoped and could be improved in future iterations. We conclude our paper by discussing what our results might mean and what can be learned for the future.
73

Integrating Traditional Tools to Enable Rapid Ideation in an Augmented Reality Virtual Environment

Phan, Tam Xuan 10 June 2021 (has links)
This paper presents the design, implementation, and evaluation of an augmented reality virtual environment to support collaborative brainstorming sessions. We specifically support brainstorming in the form of ideation on sticky notes, a common method of organizing a large number of ideas in space on a board. Our environment allows users to integrate the physical pen and paper used in a brainstorming session with the support of augmented reality headsets, so that we can also support further interaction modes and remote collaboration. We use an AR HMD to capture images containing notes, detect and crop them with a remote server, then spawn the detected notes into the virtual environment to enable viewing and manipulation. We evaluate our input method for generating notes in a user study. In doing so, we attempt to determine whether traditional input tools like pen and paper can be seamlessly integrated into augmented reality, and whether these tools improve efficiency and comprehensibility over previous augmented reality input methods. / Master of Science / Collaborative brainstorming sessions often involve rapid ideation and recording those ideas on physical sticky notes with others. We built a virtual environment, IdeaSpace, to support collaborative brainstorming on augmented reality head-mounted devices. To support rapid ideation and the creation of notes that express those ideas, we developed an input method for creating virtual note objects for augmented reality collaborative brainstorming sessions. We allow users to use traditional tools like pens and sticky notes to write out their notes, then scan them in using device cameras by uttering a voice command. We evaluated this input method to determine the advantages and disadvantages it brings to rapid ideation in augmented reality, and how it affects comprehensibility compared to existing gesture-based input methods in augmented reality. We found that our pen and paper input method outperformed our own baseline gesture input method in efficiency, comfort, usability, and comprehensibility when creating virtual notes. While we cannot conclude that our experiment proved that pen and paper is outright better than all gesture-based input methods, we can safely say pen and paper can be a valuable input method for creating notes in augmented reality brainstorming.
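The thesis describes the capture pipeline only at a high level. As one hedged illustration of the detection and cropping step (not the authors' implementation; OpenCV 4.x assumed, and the function name and threshold parameters are illustrative), note-sized quadrilateral regions in a captured frame can be segmented and cropped:

```python
# Hedged sketch: find note-sized quadrilateral regions in an HMD camera frame and
# return cropped sub-images that could then be spawned as virtual notes.
import cv2

def crop_notes(frame_bgr, min_area=2000):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue                                        # ignore small blobs
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:                                # roughly rectangular -> likely a note
            x, y, w, h = cv2.boundingRect(approx)
            crops.append(frame_bgr[y:y + h, x:x + w])
    return crops
```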
74

Students' Perceptions of Learning Environment and Achievement with Augmented Reality Technology

Alenezi, Abdulilah Farhan H 05 1900 (has links)
The purpose of the study was to examine the impact of using AR in the Computer Architecture unit for male 11th-grade students at a school in the eastern area of Arar City in Saudi Arabia by monitoring its impact on student achievement and on students' perceptions of the learning environment. Two research questions are explored: What is the effect of using AR on student achievement, and what are students' perceptions of the learning environment when they use AR? Two instruments were used to collect the data in this study: an achievement test taken from the official teacher book issued by the Ministry of Education in Saudi Arabia and the Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) modified questionnaire "actual form." Statistical analyses employed to answer the first research question included an independent-samples t-test and descriptive statistics. To investigate the second research question, descriptive statistics and a paired t-test were used. The results for the first research question indicate a statistically significant difference (p < 0.05) between the two groups' mean values: the students who used AR achieved a higher level of learning than the students who learned in the traditional way. The study found that using AR helped the students increase their achievement in several ways, one of which was the ability to feel in contact with objects and events that were physically out of their reach. In addition, AR offered a safe environment for learning and training away from potential and real dangers. The results for the second research question show statistically significant increases in seven out of eight TROFLEI scales. This suggests that there was a positive feeling among the students regarding the teacher's interaction and interest in providing equal opportunities for the students to answer questions.
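For reference, the two analyses named above map directly onto standard SciPy routines. The sketch below uses placeholder scores, not the study's data, and the exact pairing for the second test follows the study's design:

```python
# Hedged illustration of the reported analyses: an independent-samples t-test for the
# achievement comparison and a paired t-test for the TROFLEI scale comparison.
from scipy import stats

ar_group = [88, 92, 85, 90, 79, 94, 86, 91]              # hypothetical achievement scores (AR class)
traditional_group = [74, 80, 69, 77, 72, 81, 70, 75]     # hypothetical scores (traditional class)
t_ind, p_ind = stats.ttest_ind(ar_group, traditional_group)

scale_first = [3.1, 3.4, 2.9, 3.8, 3.2, 3.5, 3.0, 3.6]   # hypothetical paired TROFLEI scale means
scale_second = [3.6, 3.9, 3.3, 4.1, 3.5, 3.8, 3.4, 3.9]  # hypothetical paired TROFLEI scale means
t_rel, p_rel = stats.ttest_rel(scale_first, scale_second)

print(f"independent-samples t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"paired t = {t_rel:.2f}, p = {p_rel:.4f}")
```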
75

Intuitive Roboterprogrammierung durch Augmented Reality

Matour, Mohammad-Ehsan, Winkler, Alexander 12 February 2024 (has links)
This article presents an innovative approach to intuitive, force-controlled path planning for collaborative robots using Augmented Reality (AR) technology. A user-friendly interface gives the operator full control over the robot arm. A mixed-reality head-mounted display (HMD) overlays virtual content, enabling seamless interaction with the robot system. The interface provides comprehensive robot status data, including joint positions, velocities, and forces acting on the flange. The operator can issue motion commands in joint space and in Cartesian space, plan paths intuitively, and execute force-controlled motions by placing control points around an object. Visual feedback in the form of sliders allows the forces acting on the object to be adjusted. These sliders enable dynamic, intuitive force regulation in Cartesian space and minimize the need for extensive programming. A virtual robot model in the workspace additionally provides a motion preview. The human-robot interface and the virtual content are created with Unity3D, while data transfer is handled by the Robot Operating System (ROS). This approach offers an intuitive and safe method for controlling collaborative robots. The proposed approach has the potential to simplify robot programming, increase its efficiency, and improve safety in a variety of applications involving collaborative robots.
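As a hedged sketch of the ROS side of such a setup (the topic name, message type, and node name are assumptions for illustration, not the authors' interface), a minimal rospy node could forward joint-space commands produced by the AR UI:

```python
# Hedged sketch: publish a joint-space motion command, e.g. one produced by the AR interface.
import rospy
from sensor_msgs.msg import JointState

def publish_joint_command(joint_names, positions_rad):
    rospy.init_node("ar_joint_commander", anonymous=True)
    pub = rospy.Publisher("/ar_interface/joint_command", JointState, queue_size=1)
    rospy.sleep(0.5)                       # allow the publisher to register with the ROS master
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = joint_names
    msg.position = positions_rad           # one target angle (rad) per joint
    pub.publish(msg)

if __name__ == "__main__":
    publish_joint_command(["joint_1", "joint_2", "joint_3"], [0.0, -1.2, 0.8])
```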
76

An Initial Prototype for Curved Light in Augmented Reality

Zhong, Ning 23 April 2015 (has links)
No description available.
77

Application of Augmented Reality to Dimensional and Geometric Inspection

Chung, Kyung Ho 03 April 2002 (has links)
Ensuring inspection performance is not a trivial design problem, because inspection is a complex and difficult task that tends to be error-prone, whether performed by humans or by automated machines. For economic or technological reasons, human inspectors are responsible for inspection functions in many cases. Humans, however, are rarely perfect. A system of manual inspection was found to be approximately 80-90% effective, thus allowing non-conforming parts to be processed (Harris & Chaney, 1969; Drury, 1975). As the attributes of interest or the variety of products increase, the complexity of an inspection task increases, and the inspection system becomes less effective because of the sensory and cognitive limitations of human inspectors. Any means that can support or aid human inspectors is needed to compensate for this inspection difficulty. Augmented reality offers a new approach to designing an inspection system as a means of augmenting the cognitive capability of inspectors. To realize the potential benefits of AR, however, the design of AR-aided inspection requires a thorough understanding of the inspection process as well as of AR technology. The cognitive demands of inspection and the capabilities of AR to aid inspectors need to be evaluated to decide when and how to use AR for dimensional inspection. The objectives of this study are to improve the performance of a dimensional inspection task by using AR and to develop guidelines for designing an AR-aided inspection system. The performance of four inspection methods (i.e., manual, 2D-aided, 3D-aided, and AR-aided inspection) was compared in terms of inspection time and measurement accuracy. The results suggest that AR might be an effective tool for reducing inspection time; however, measurement accuracy was essentially the same across all inspection methods. The questionnaire results showed that the AR-aided and 3D-aided inspection conditions were preferred over the manual and 2D-aided conditions. Based on these results, four design guidelines were formulated for using AR technology in dimensional inspection. / Ph. D.
78

Enhancing Security and Privacy in Head-Mounted Augmented Reality Systems Using Eye Gaze

Corbett, Matthew 22 April 2024 (has links)
Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. Specifically, head-mounted AR devices can accurately sense and understand their environment through an increasingly powerful array of sensors such as cameras, depth sensors, eye gaze trackers, microphones, and inertial sensors. The ability of these devices to collect this information presents both challenges and opportunities to improve existing security and privacy techniques in this domain. In particular, eye gaze tracking is a ready-made capability for analyzing user intent, emotions, and vulnerability, and for serving as an input mechanism. However, modern AR devices lack systems to address their unique security and privacy issues. Local pairing mechanisms usable while immersed in AR environments, bystander privacy protections, and defenses against the increased vulnerability to shoulder surfing while wearing AR devices all lack viable solutions. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring information security and protecting the privacy of those near the device. My research has presented three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices. / Doctor of Philosophy / Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. The ability of these devices to collect information presents challenges and opportunities to improve existing security and privacy techniques in this domain. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring security and protecting the privacy of those near the device. My research has presented three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices.
79

Communicating expertise in system operation and fault diagnosis to non-experts

Staderman, William P. 01 May 2003 (has links)
The use of systems that span many knowledge domains is becoming more common as technology advances, requiring expert performance in a domain from users who are usually not experts in that domain. This study examined a means of communicating expertise (in system operation and fault diagnosis) to non-experts and furthered the understanding of expert mental models. It has been suggested that conceptions of abstract models of system functions distinguish expert performance from non-expert performance (Hanisch, Kramer, and Hulin, 1991). This study examined the effects on performance of augmenting a simple control panel device with a model of the functions of the device, interacting with the model, and augmenting the device with graphically superimposed procedural indicators (directions). The five augmented display conditions studied were: Device Only, Device + Model, Device + Procedural Indicators, Interactive Model, and Interactive Model + Procedural Indicators. The device and displays were presented on a PC workstation. Performance measures (speed and accuracy) and subjective measures (questionnaires, NASA TLX, and structured interviews) were collected. It was expected that participants who interacted with the device + procedural indicators would exhibit the shortest performance time and fewest errors; however, those who interacted with the simplest display (device only) were fastest and made the fewest errors. The results of this study are discussed in terms of building a mental model and identifying situations that require a mental model. / Ph. D.
80

HD4AR: High-Precision Mobile Augmented Reality Using Image-Based Localization

Miranda, Paul Nicholas 05 June 2012 (has links)
Construction projects require large amounts of cyber-information, such as 3D models, in order to achieve success. Unfortunately, this information is typically difficult for construction field personnel to access and use on-site, due to the highly mobile nature of the job and hazardous work environments. Field personnel rely on carrying around large stacks of construction drawings, diagrams, and specifications, or traveling to a trailer to look up information electronically, reducing potential project efficiency. This thesis details my work on Hybrid 4-Dimensional Augmented Reality, known as HD4AR, a mobile augmented reality system for construction projects that provides high-precision visualization of semantically-rich 3D cyber-information over real-world imagery. The thesis examines the challenges related to augmenting reality on a construction site, describes how HD4AR overcomes these challenges, and empirically evaluates the capabilities of HD4AR. / Master of Science
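The abstract describes the localization only at a high level. A generic version of image-based localization (not the HD4AR implementation; the function name and the ORB/PnP choices below are assumptions) matches photo features against a previously reconstructed 3D point cloud and recovers the camera pose for overlaying cyber-information:

```python
# Hedged sketch: match query-image features to descriptors of a reconstructed 3D model,
# then estimate the camera pose with PnP + RANSAC.
import cv2
import numpy as np

def localize(query_bgr, model_descriptors, model_points3d, camera_matrix):
    """model_descriptors: (M,32) ORB descriptors of reconstructed points;
    model_points3d: (M,3) their 3D positions; camera_matrix: 3x3 intrinsics."""
    gray = cv2.cvtColor(query_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, model_descriptors)
    img_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([model_points3d[m.trainIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, camera_matrix, None)
    return (rvec, tvec) if ok else None    # rotation/translation mapping model coordinates into the camera frame
```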
