71

Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

Moser, Kenneth R 12 August 2016
This dissertation examines the developments and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is equally matched by a need for intuitive, standardized calibration procedures that are not only easily completed by novice users but also readily applicable across the largest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and a canonical account of AR and OST display developments is provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the evaluation metrics and practices they employ. The original contributions begin with a user study evaluation comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach and provides insight into the current usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware. 
Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user’s perspective, as well as a robust intuitive presentation style for binocular manual calibration. The final study provides further investigation into the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summarization of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead into future extensions and paths that continued calibration research should explore.
72

Evaluating Collaborative Cues for Remote Affinity Diagramming Tasks in Augmented Reality

Llorens, Nathaniel Roman 03 September 2021
This thesis documents the design and implementation of an augmented reality (AR) application that could be extended to support group brainstorming tasks remotely. Additionally, it chronicles our investigation into the helpfulness of traditional collaborative cues in this novel application of augmented reality. We implemented IdeaSpace, an interactive application that emulates an affinity diagramming environment on an AR headset. In our application, users can organize and manipulate virtual sticky notes around a central virtual board. We performed a user study, with each session requiring users to perform an affinity diagramming clustering task with and without common collaborative cues. Our results indicate that the presence or absence of cues has little effect on this task, or that other factors, such as learning effects, played a larger role than the cue condition. Our results also show that our application's usability could be improved. We conclude this document with a discussion of our results and the design implications that may arise from them. / Master of Science / Our project was aimed at creating an app for modern augmented reality headsets that could help people perform group brainstorming sessions remotely. We were also interested in finding out the benefits or downsides of some of the design decisions that recent research in remote augmented reality recommends, such as lines showing where a user is focusing and visualizations of a user's head and hands. In our app, which we dubbed IdeaSpace, users were faced with a virtual corkboard and a number of virtual sticky notes, similar to what they might expect in a traditional brainstorming session. We ran three-person study sessions comparing the design techniques recommended by the literature to an absence of such techniques and did not find that they helped much in our task. We also found that our application was not as usable as we had hoped and could be improved in future iterations. 
We conclude our paper by discussing what our results might mean and what can be learned for the future.
73

Integrating Traditional Tools to Enable Rapid Ideation in an Augmented Reality Virtual Environment

Phan, Tam Xuan 10 June 2021
This paper presents the design, implementation, and evaluation of an augmented reality virtual environment to support collaborative brainstorming sessions. We specifically support brainstorming in the form of ideation on sticky notes, a common method of organizing a large number of ideas spatially on a board. Our environment allows users to integrate the physical pen and paper used in a brainstorming session with augmented reality headsets, so that further interaction modes and remote collaboration can be supported as well. We use an AR HMD to capture images containing notes, detect and crop them with a remote server, then spawn the detected notes into the virtual environment to enable virtual viewing and manipulation. We evaluate our input method for generating notes in a user study. In doing so, we attempt to determine whether traditional input tools like pen and paper can be seamlessly integrated into augmented reality, and to see if these tools improve efficiency and comprehensibility over previous augmented reality input methods. / Master of Science / Collaborative brainstorming sessions often involve rapid ideation and writing those ideas on physical sticky notes with others. We built a virtual environment, IdeaSpace, to support collaborative brainstorming with augmented reality head-mounted devices. To support the activities of rapid ideation and creating notes to express those ideas, we developed an input method for creating virtual note objects for augmented reality collaborative brainstorming sessions. We allow users to use traditional tools like pens and sticky notes to write out their notes, then scan them in using device cameras by uttering a voice command. We evaluated this input method to determine the advantages and disadvantages it brings to rapid ideation in augmented reality, and how it affects comprehensibility compared to existing gesture-based input methods in augmented reality. 
We found that our pen and paper input method outperformed our own baseline gesture input method in efficiency, comfort, usability, and comprehensibility when creating virtual notes. While we cannot conclude that our experiment proved that pen and paper is outright better than all gesture-based input methods, we can safely say pen and paper can be a valuable input method in augmented reality brainstorming for creating notes.
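The capture-detect-crop pipeline described above can be illustrated with a toy sketch. The thesis does not specify the server-side detection algorithm, so the following is purely a hypothetical stand-in: given a binary mask (e.g. the result of color-thresholding a camera frame for sticky-note hues), it finds the bounding box of each connected foreground region, which a cropper could then cut out and spawn as a virtual note. The function name and the list-of-lists mask representation are assumptions for illustration only.

```python
from collections import deque

def note_bounding_boxes(mask):
    """Return (min_x, min_y, max_x, max_y) for each connected foreground
    region in a 2D 0/1 mask. A hypothetical, simplified stand-in for the
    note-detection step described in the abstract."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first search over 4-connected foreground pixels,
                # tracking the extent of the region as we go.
                q = deque([(r, c)])
                seen[r][c] = True
                min_r = max_r = r
                min_c = max_c = c
                while q:
                    y, x = q.popleft()
                    min_r, max_r = min(min_r, y), max(max_r, y)
                    min_c, max_c = min(min_c, x), max(max_c, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((min_c, min_r, max_c, max_r))
    return boxes
```

In a real system, each box would be used to crop the note from the captured frame before it is spawned as a virtual object; production pipelines would typically use a vision library's connected-components or contour routines instead of a hand-rolled search.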
74

SLAM-based Dense Surface Reconstruction in Monocular Minimally Invasive Surgery and its Application to Augmented Reality

Chen, L., Tang, W., John, N.W., Wan, Tao Ruan, Zhang, J.J. 08 February 2018
While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes significant challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomical structures at the target surgical locations. However, previous research attempts at using AR technology in monocular MIS surgical scenes have mainly focused on information overlay without addressing correct spatial calibrations, which can lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from monocular MIS videos alone for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping.
Methods: A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point cloud data. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration.
Results: We demonstrate the clinical relevance of our proposed system through two examples: a) measurement of the surface; b) depth cues in monocular endoscopy. The performance and accuracy evaluation of the proposed framework consists of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) between the surface vertices of the reconstructed mesh and those of the ground-truth 3D models. An error of 1.24 mm for the camera trajectories has been obtained, and the RMSD for surface reconstruction is 2.54 mm, which compare favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the potential promise of our geometry-aware AR technology for use in MIS surgical scenes.
Conclusions: The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscope camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.
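As a small illustration of the surface-accuracy metric reported above, the Root Mean Square Distance between corresponding mesh vertices can be sketched as follows. This is a simplified stand-in, assuming the reconstructed mesh and the ground-truth model are already aligned and in one-to-one vertex correspondence (the paper's actual evaluation pipeline is not specified beyond the metric itself; in practice a registration or nearest-neighbour step would establish the correspondence).

```python
import math

def rmsd(recon_vertices, ground_truth_vertices):
    """Root Mean Square Distance between corresponding 3D vertices.

    Each argument is a sequence of (x, y, z) tuples of equal length,
    assumed to be pre-aligned and in vertex correspondence.
    """
    if len(recon_vertices) != len(ground_truth_vertices):
        raise ValueError("vertex lists must be the same length")
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(recon_vertices, ground_truth_vertices):
        # Accumulate squared Euclidean distance per vertex pair.
        total += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(total / len(recon_vertices))
```

With this definition, a reported RMSD of 2.54 mm means the reconstructed surface vertices deviate from the ground-truth model by about 2.54 mm on average in the root-mean-square sense.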
75

Students' Perceptions of Learning Environment and Achievement with Augmented Reality Technology

Alenezi, Abdulilah Farhan H 05 1900
The purpose of the study was to examine the impact of using AR in the Computer Architecture unit for male 11th grade students in a school in the eastern area of Arar City in Saudi Arabia through monitoring its impact on student achievement and students' perceptions of the learning environment. Two research questions are explored: What is the effect of using AR on student achievement, and what are students' perceptions of the learning environment when they use AR? Two instruments were used to collect the data in this study: an achievement test taken from the official teacher book issued by the Ministry of Education in Saudi Arabia and the Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) modified questionnaire "actual form." Statistical analyses employed to answer the first research question included an independent-samples t-test and descriptive statistics. To investigate the second research question, descriptive statistics and a paired t-test were used. The results for the first question indicate a statistically significant difference (p < 0.05) between the two groups' mean values: the students who used AR achieved a higher level of learning compared to the students who learned in the traditional way. The study found that using AR helped the students to increase their achievement in several ways, one of which was being able to feel in contact with objects and events that were physically out of their reach. In addition, AR offered a safe environment for learning and training away from potential and real dangers. The results for the second research question show statistically significant increases in seven out of eight TROFLEI scales. This suggests that there was a positive feeling among the students regarding the teacher's interaction and his interest in providing equal opportunities to the students to answer the questions.
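The independent-samples t-test reported for the first research question can be sketched in a few lines. This is a generic illustration of the pooled-variance statistic, not the study's actual analysis code, and the function name is an assumption:

```python
import math

def independent_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance.

    A minimal sketch of the test used to compare the AR group's
    achievement scores against the traditional-instruction group.
    """
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Unbiased sample variances.
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    # Pooled variance, weighting each group's variance by its degrees of freedom.
    pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / n_a + 1 / n_b))
```

The resulting statistic is then compared against a t distribution with n_a + n_b - 2 degrees of freedom to obtain the p-value; in practice a statistics library routine would handle that step.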
76

Intuitive Roboterprogrammierung durch Augmented Reality

Matour, Mohammad-Ehsan, Winkler, Alexander 12 February 2024
This article presents an innovative approach to intuitive, force-controlled path planning for collaborative robots using Augmented Reality (AR) technology. A user-friendly interface gives the operator full control of the robot arm. Using a mixed-reality head-mounted display (HMD), virtual content is overlaid on the scene, enabling seamless interaction with the robot system. The interface provides comprehensive data on the robot's status, including joint positions, velocities, and the forces acting on the flange. The operator can issue motion commands in joint and Cartesian space, plan paths intuitively, and execute force-controlled motions by placing control points around an object. Visual feedback in the form of sliders allows the forces acting on the object to be adjusted; these sliders enable dynamic, intuitive force regulation in Cartesian space and minimize the need for extensive programming. A virtual robot model in the workspace additionally provides a motion preview. The human-robot interface and the virtual content are built with Unity3D, while data transfer is handled by the Robot Operating System (ROS). This approach offers an intuitive and safe method for controlling collaborative robots. The proposed approach has the potential to simplify robot programming, increase its efficiency, and improve safety in various applications involving collaborative robots.
77

An Initial Prototype for Curved Light in Augmented Reality

Zhong, Ning 23 April 2015
No description available.
78

Immersive Space to Think: Immersive Analytics for Sensemaking with Non-Quantitative Datasets

Lisle, Lorance Richard 09 February 2023
Analysts often work with large, complex non-quantitative datasets in order to better understand the concepts, themes, and other forms of insight contained within them. As defined by Pirolli and Card, this act of sensemaking is cognitively difficult, and is performed iteratively and repetitively through various stages of understanding. Immersive analytics has purported to assist with this process by putting users in virtual environments that allow them to sift through and explore data in three-dimensional interactive settings. Most previous research, however, has focused on quantitative data, where users interact with mostly numerical representations of data. We designed Immersive Space to Think, an immersive analytics approach to assist users in performing the act of sensemaking with non-quantitative datasets, affording analysts the ability to manipulate data artifacts, annotate them, search through them, and present their findings. We performed several studies to understand and refine our approach and how it affects users' sensemaking strategies. An exploratory virtual reality study found that users place documents in 2.5-dimensional structures, where we saw semicircular, environmental, and planar layouts. The environmental layout, in particular, used features of the environment as scaffolding for users' sensemaking process. In a study comparing levels of mixed reality as defined by Milgram and Kishino's Reality-Virtuality Continuum, we found that an augmented virtuality solution best fits users' preferences while still supporting external tools. Lastly, we explored how users deal with varying amounts of space and three-dimensional user interaction techniques in a comparative study of small virtual monitors, large virtual monitors, and a seated implementation of Immersive Space to Think. 
Our participants found that Immersive Space to Think best supported the task of sensemaking, with evidence that users leveraged spatial memory and utilized depth to denote additional meaning in the immersive condition. Overall, Immersive Space to Think affords an effective three-dimensional sensemaking space using 3D user interaction techniques that leverage embodied cognition and spatial memory, which aids the user's understanding. / Doctor of Philosophy / Humans are constantly trying to make sense of the world around them. Whether they are a detective trying to understand what happened at a crime scene or a shopper trying to find the best office chair, people consume vast quantities of data to assist them with their choices. This process can be difficult, and people often return to various pieces of data repeatedly to remember why they made the choice they decided upon. With the advent of cheap virtual reality products, researchers have pursued the technology as a way for people to better understand large sets of data. However, most mixed reality applications looking into this problem focus on numerical data, whereas much of the data people process is multimedia or text-based in nature. We designed and developed a mixed reality approach for analyzing this type of data called Immersive Space to Think. Our approach allows users to look at and move various documents around in a virtual environment, take notes on or highlight those documents, search those documents, and create reports that summarize what they have learned. We also performed several studies to investigate and evolve our design. First, we ran a study in virtual reality to understand how users interact with documents using Immersive Space to Think. We found users arranging documents around themselves in a semicircular or flat-plane pattern, or using various cues in the virtual environment as a way to organize the document set. 
Furthermore, we performed a study to understand user preferences regarding augmented and virtual reality. We found that a mix of the two, also known as augmented virtuality, would best support user preferences and abilities. Lastly, we ran two comparative studies to understand how three-dimensional space and interaction affect user strategies. We ran a small user study looking at how a single student uses a desktop computer with a single display, as well as Immersive Space to Think, to write essays. We found that they wrote essays with a better understanding of the source data with Immersive Space to Think than with the desktop setup. We then conducted a larger study comparing a small virtual monitor simulating a traditional desktop screen, a large virtual monitor eight times the size of a traditional desktop monitor, and Immersive Space to Think. We found that participants engaged with documents more in Immersive Space to Think, and used the space to denote the importance of documents. Overall, Immersive Space to Think provides a compelling environment that assists users in understanding sets of documents.
79

Application of Augmented Reality to Dimensional and Geometric Inspection

Chung, Kyung Ho 03 April 2002
Ensuring inspection performance is not a trivial design problem, because inspection is a complex and difficult task that tends to be error-prone, whether performed by humans or by automated machines. For economic or technological reasons, human inspectors are responsible for inspection functions in many cases. Humans, however, are rarely perfect. A system of manual inspection was found to be approximately 80-90% effective, thus allowing non-conforming parts to be processed (Harris & Chaney, 1969; Drury, 1975). As the attributes of interest or the variety of products increase, the complexity of an inspection task increases, and the inspection system becomes less effective because of the sensory and cognitive limitations of human inspectors. Any means of supporting or aiding human inspectors is therefore needed to compensate for inspection difficulty. Augmented reality offers a new approach to designing an inspection system as a means of augmenting the cognitive capability of inspectors. To realize the potential benefits of AR, however, the design of AR-aided inspection requires a thorough understanding of the inspection process as well as of AR technology. The cognitive demands of inspection and the capabilities of AR to aid inspectors need to be evaluated to decide when and how to use AR for dimensional inspection. The objectives of this study are to improve the performance of a dimensional inspection task by using AR and to develop guidelines for designing an AR-aided inspection system. The performance of four inspection methods (i.e., manual, 2D-aided, 3D-aided, and AR-aided inspection) was compared in terms of inspection time and measurement accuracy. The results suggest that AR might be an effective tool for reducing inspection time. However, measurement accuracy was essentially the same across all inspection methods. The questionnaire results showed that the AR-aided and 3D-aided inspection conditions were preferred over manual and 2D-aided inspection. 
Based on the results, four design guidelines were formed in using AR technology for a dimensional inspection. / Ph. D.
80

Enhancing Security and Privacy in Head-Mounted Augmented Reality Systems Using Eye Gaze

Corbett, Matthew 22 April 2024
Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. Specifically, head-mounted AR devices can accurately sense and understand their environment through an increasingly powerful array of sensors such as cameras, depth sensors, eye gaze trackers, microphones, and inertial sensors. The ability of these devices to collect this information presents both challenges and opportunities to improve existing security and privacy techniques in this domain. In particular, eye gaze tracking is a ready-made capability for analyzing user intent, emotions, and vulnerability, and for serving as an input mechanism. However, modern AR devices lack systems to address their unique security and privacy issues. Problems such as the absence of local pairing mechanisms usable while immersed in AR environments, the lack of bystander privacy protections, and the increased vulnerability to shoulder surfing while wearing AR devices all lack viable solutions. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring information security and protecting the privacy of those near the device. My research has presented three new systems, BystandAR, ShouldAR, and GazePair, that each leverage user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices. / Doctor of Philosophy / Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. The ability of these devices to collect information presents challenges and opportunities to improve existing security and privacy techniques in this domain. 
In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring security and protecting the privacy of those near the device. My research has presented three new systems, BystandAR, ShouldAR, and GazePair that each leverage user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices.
