31

Real-Time Recognition of Planar Targets on Mobile Devices. A Framework for Fast and Robust Homography Estimation

Bazargani, Hamid January 2014 (has links)
The present thesis is concerned with the problem of robust pose estimation for planar targets in the context of real-time mobile vision. This research brings together individual developments previously made in isolation by earlier researchers, and several adaptations to the existing algorithms yield a unified framework for robust pose estimation. The framework is specifically designed to meet the growing demand for fast and robust estimation on power-constrained platforms. For robust recognition of targets at very low computational cost, we employ feature-based methods built on local binary descriptors, which allow fast feature matching at run-time. The matching set is then fed to a robust parameter estimation algorithm to obtain a reliable homography. On the basis of our experimental results, we conclude that reliable homography estimates can be obtained using a device-friendly implementation of the Gaussian elimination algorithm. We also show in this thesis that this simplified approach can significantly improve the homography estimation step in a hypothesize-and-verify scheme. Attention is focused not only on developing fast algorithms for the recognition framework but also on their optimized implementation, from which any other recognition framework would similarly benefit.
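As a rough illustration of the hypothesize-and-verify scheme this abstract describes, the NumPy sketch below fits a homography from four sampled correspondences by solving the standard 8x8 linear system with an LU-based solver (Gaussian elimination under the hood) rather than a full SVD. It is a minimal stand-in for the thesis's device-optimized implementation; function names, iteration counts, and thresholds are illustrative, not taken from the thesis.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Fit H (with h33 fixed to 1) from exactly 4 correspondences by
    solving the standard 8x8 system A h = b.  np.linalg.solve uses an
    LU factorization, i.e. Gaussian elimination, which is far cheaper
    than a full SVD on power-constrained hardware."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i]     = [x, y, 1, 0, 0, 0, -u * x, -u * y]
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -v * x, -v * y]
        b[2 * i], b[2 * i + 1] = u, v
    h = np.linalg.solve(A, b)
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """Hypothesize-and-verify loop: sample 4 matches, fit H, keep the
    hypothesis with the most inliers."""
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous points
    best_H, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(n, size=4, replace=False)
        try:
            H = homography_from_4pts(src[idx], dst[idx])
        except np.linalg.LinAlgError:                # degenerate sample
            continue
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]            # perspective divide
        count = int(np.sum(np.linalg.norm(proj - dst, axis=1) < thresh))
        if count > best_count:
            best_H, best_count = H, count
    return best_H, best_count
```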
32

Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

Moser, Kenneth R 12 August 2016 (has links)
This dissertation examines the development and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the largest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and a canonical description of AR and OST display developments are provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the evaluation metrics and practices prevailing within them. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach, and it provides insight into the current usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware. Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study provides further investigation into the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead at future extensions and paths that continued calibration research should explore.
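Manual OST HMD calibration methods of the kind evaluated here typically reduce to a Direct Linear Transform: the user aligns on-screen reticles with world points, and the collected 2D-3D pairs are stacked into a homogeneous system whose null space gives a 3x4 projection matrix. The abstract does not name the specific method, so the sketch below follows the widely used SPAAM-style formulation as an assumption; it covers only the solve step, with all correspondences assumed to be gathered beforehand.

```python
import numpy as np

def dlt_projection(world_pts, screen_pts):
    """Estimate a 3x4 projection matrix P from >= 6 screen/world
    alignments.  Each alignment contributes two rows to the
    homogeneous system A p = 0; the solution is the right singular
    vector of A associated with the smallest singular value."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # last right singular vector, reshaped
```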
33

Integrating Traditional Tools to Enable Rapid Ideation in an Augmented Reality Virtual Environment

Phan, Tam Xuan 10 June 2021 (has links)
This paper presents the design, implementation, and evaluation of an augmented reality virtual environment to support collaborative brainstorming sessions. We specifically support brainstorming in the form of ideation on sticky notes, a common method for organizing a large number of ideas in space on a board. Our environment allows users to integrate the physical pen and paper used in a brainstorming session with the support of augmented reality headsets, so that we can also support further interaction modes and remote collaboration. We use an AR HMD to capture images containing notes, detect and crop them on a remote server, then spawn the detected notes in the virtual environment to enable virtual viewing and manipulation. We evaluate our input method for generating notes in a user study. In doing so, we attempt to determine whether traditional input tools like pen and paper can be seamlessly integrated into augmented reality, and whether these tools improve efficiency and comprehensibility over previous augmented reality input methods. / Master of Science / Collaborative brainstorming sessions often involve rapid ideation and outputting those ideas on physical sticky notes with others. We built a virtual environment, IdeaSpace, to support collaborative brainstorming with augmented reality head-mounted devices. To support the activities of rapid ideation and creating notes to express those ideas, we developed an input method for creating virtual note objects for augmented reality collaborative brainstorming sessions. We allow users to use traditional tools like pens and sticky notes to write out their notes, then scan them in using the device cameras by uttering a voice command. We evaluated this input method to determine the advantages and disadvantages it brings to rapid ideation in augmented reality, and how it affects comprehensibility compared to existing gesture-based input methods in augmented reality. We found that our pen-and-paper input method outperformed our own baseline gesture input method in efficiency, comfort, usability, and comprehensibility when creating virtual notes. While we cannot conclude that our experiment proved pen and paper outright better than all gesture-based input methods, we can safely say pen and paper can be a valuable input method in augmented reality brainstorming for creating notes.
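The abstract does not specify the detector the remote server runs, so the following OpenCV sketch uses a plain HSV colour threshold plus contour extraction as a stand-in for the capture-detect-crop step; the colour range and area threshold are illustrative and would need tuning per deployment.

```python
import cv2

def crop_sticky_notes(frame_bgr, min_area=2000):
    """Detect brightly coloured sticky notes in an HMD camera frame
    and return one cropped image per note.  A simple colour threshold
    stands in for whatever detector the server actually runs; the hue
    band below targets classic yellow notes."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (40, 255, 255))   # yellow-ish
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        if cv2.contourArea(c) < min_area:    # reject small speckle
            continue
        x, y, w, h = cv2.boundingRect(c)
        crops.append(frame_bgr[y:y + h, x:x + w])
    return crops
```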
34

SLAM-based Dense Surface Reconstruction in Monocular Minimally Invasive Surgery and its Application to Augmented Reality

Chen, L., Tang, W., John, N.W., Wan, Tao Ruan, Zhang, J.J. 08 February 2018 (has links)
While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous attempts to use AR technology in monocular MIS surgical scenes have mainly focused on information overlay without addressing correct spatial calibration, which can lead to incorrect localization of annotations and labels, and to inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping.
Methods: A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point cloud data. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration.
Results: We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluations of the proposed framework consist of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) between surface vertices of the reconstructed mesh and those of the ground-truth 3D models. An error of 1.24 mm for the camera trajectories was obtained, and the RMSD for surface reconstruction is 2.54 mm, both of which compare favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of AR-based annotation and measurement, and the creation of depth cues. These results show the promise of our geometry-aware AR technology for use in MIS surgical scenes.
Conclusions: The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscopic camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.
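A rough sketch of the reconstruction-and-evaluation pipeline described above, using Open3D rather than the paper's own implementation. Open3D provides Poisson surface reconstruction but no direct Moving Least Squares equivalent (the paper uses MLS, which is available in PCL), so that smoothing step is noted but omitted here; file paths and parameters are placeholders.

```python
import numpy as np
import open3d as o3d

# Sparse SLAM map points exported as a point cloud (path is illustrative).
pcd = o3d.io.read_point_cloud("slam_map_points.ply")

# Poisson reconstruction requires oriented normals.  The paper first
# smooths the sparse cloud with MLS; that step is omitted here because
# Open3D has no direct MLS equivalent.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# Dense surface from the unorganized sparse points.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# RMSD of reconstructed vertices against a ground-truth model, mirroring
# the evaluation described in the Results section.
gt = o3d.io.read_point_cloud("ground_truth.ply")
recon = o3d.geometry.PointCloud(mesh.vertices)
d = np.asarray(recon.compute_point_cloud_distance(gt))
print(f"surface RMSD: {np.sqrt(np.mean(d ** 2)):.2f} (model units)")
```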
35

Intuitive Roboterprogrammierung durch Augmented Reality

Matour, Mohammad-Ehsan, Winkler, Alexander 12 February 2024 (has links)
This article presents an innovative approach to intuitive, force-controlled path planning for collaborative robots using Augmented Reality (AR) technology. A user-friendly interface gives the operator full control over the robot arm. Using a mixed-reality head-mounted display (HMD), virtual content is overlaid on the scene, enabling seamless interaction with the robot system. The interface provides comprehensive data on the robot's status, including joint positions, velocities, and the forces acting on the flange. The operator can issue motion commands in joint and Cartesian space, intuitively plan paths, and execute force-controlled motions by defining control points around an object. Visual feedback in the form of sliders allows the forces acting on the object to be adjusted. These sliders enable dynamic and intuitive force regulation in Cartesian space and minimize the need for extensive programming. A virtual robot model in the workspace additionally provides a motion preview. The human-robot interface and the virtual content are created with Unity3D, while data transfer is handled by the Robot Operating System (ROS). This approach offers an intuitive and safe method for controlling collaborative robots. The proposed approach has the potential to simplify robot programming, increase its efficiency, and improve safety in various applications involving collaborative robots.
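A minimal rospy sketch of the robot-side status stream such an interface needs: joint states and flange wrench published out, Cartesian force setpoints from the HMD sliders subscribed in. Topic names, the six-joint layout, and the bridge to Unity3D (typically something like rosbridge) are assumptions for illustration, not details from the article.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState
from geometry_msgs.msg import Wrench, WrenchStamped

def on_force_setpoint(msg):
    # Cartesian force target adjusted through the HMD's slider widgets.
    rospy.loginfo("force setpoint: %.1f N along z", msg.force.z)

def main():
    rospy.init_node("ar_robot_status_bridge")
    joint_pub = rospy.Publisher("/ar_hmd/joint_states", JointState, queue_size=10)
    wrench_pub = rospy.Publisher("/ar_hmd/flange_wrench", WrenchStamped, queue_size=10)
    rospy.Subscriber("/ar_hmd/force_setpoint", Wrench, on_force_setpoint)

    rate = rospy.Rate(50)                      # 50 Hz status stream for the HMD
    while not rospy.is_shutdown():
        js = JointState()
        js.header.stamp = rospy.Time.now()
        js.name = ["joint_%d" % i for i in range(1, 7)]
        js.position = [0.0] * 6                # stub; a real node reads the driver
        joint_pub.publish(js)

        ws = WrenchStamped()
        ws.header.stamp = rospy.Time.now()
        ws.header.frame_id = "flange"          # forces acting on the flange
        wrench_pub.publish(ws)                 # zero-wrench stub
        rate.sleep()

if __name__ == "__main__":
    main()
```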
36

AN INITIAL PROTOTYPE FOR CURVED LIGHT IN AUGMENTED REALITY

Zhong, Ning 23 April 2015 (has links)
No description available.
37

Immersive Space to Think: Immersive Analytics for Sensemaking with Non-Quantitative Datasets

Lisle, Lorance Richard 09 February 2023 (has links)
Analysts often work with large, complex, non-quantitative datasets in order to better understand the concepts, themes, and other forms of insight contained within them. As defined by Pirolli and Card, this act of sensemaking is cognitively difficult, and is performed iteratively and repetitively through various stages of understanding. Immersive analytics has purported to assist with this process by putting users in virtual environments that allow them to sift through and explore data in three-dimensional interactive settings. Most previous research, however, has focused on quantitative data, where users interact with mostly numerical representations of data. We designed Immersive Space to Think, an immersive analytics approach that assists users in sensemaking with non-quantitative datasets, affording analysts the ability to manipulate data artifacts, annotate them, search through them, and present their findings. We performed several studies to understand and refine our approach and how it affects users' sensemaking strategies. An exploratory virtual reality study found that users place documents in 2.5-dimensional structures, where we saw semicircular, environmental, and planar layouts. The environmental layout, in particular, used features of the environment as scaffolding for users' sensemaking process. In a study comparing levels of mixed reality as defined by Milgram and Kishino's Reality-Virtuality Continuum, we found that an augmented virtuality solution best fits users' preferences while still supporting external tools. Lastly, we explored how users deal with varying amounts of space and three-dimensional user interaction techniques in a comparative study of small virtual monitors, large virtual monitors, and a seated implementation of Immersive Space to Think. Our participants found that IST best supported the task of sensemaking, with evidence that users leveraged spatial memory and utilized depth to denote additional meaning in the immersive condition. Overall, Immersive Space to Think affords an effective three-dimensional sensemaking space using 3D user interaction techniques that can leverage embodied cognition and spatial memory, which aid the user's understanding. / Doctor of Philosophy / Humans are constantly trying to make sense of the world around them. Whether they are a detective trying to understand what happened at a crime scene or a shopper trying to find the best office chair, people consume vast quantities of data to assist them with their choices. This process can be difficult, and people often return to various pieces of data repeatedly to remember why they made the choice they decided upon. With the advent of cheap virtual reality products, researchers have pursued the technology as a way for people to better understand large sets of data. However, most mixed reality applications looking into this problem focus on numerical data, whereas much of the data people process is multimedia or text-based in nature. We designed and developed a mixed reality approach for analyzing this type of data called Immersive Space to Think. Our approach allows users to look at and move various documents around in a virtual environment, take notes on or highlight those documents, search those documents, and create reports that summarize what they have learned. We also performed several studies to investigate and evolve our design.
First, we ran a study in virtual reality to understand how users interact with documents using Immersive Space to Think. We found users arranging documents around themselves in semicircular or flat planar patterns, or using various cues in the virtual environment as a way to organize the document set. Furthermore, we performed a study to understand user preferences regarding augmented and virtual reality. We found that a mix of the two, also known as augmented virtuality, would best support user preferences and abilities. Lastly, we ran two comparative studies to understand how three-dimensional space and interaction affect user strategies. We ran a small user study looking at how a single student uses a desktop computer with a single display, as well as Immersive Space to Think, to write essays. We found that they wrote essays with a better understanding of the source data with Immersive Space to Think than with the desktop setup. We then conducted a larger study comparing a small virtual monitor simulating a traditional desktop screen, a large virtual monitor simulating a monitor eight times the size of traditional desktop monitors, and Immersive Space to Think. We found that participants engaged with documents more in Immersive Space to Think and used the space to denote the importance of documents. Overall, Immersive Space to Think provides a compelling environment that assists users in understanding sets of documents.
38

Enhancing Security and Privacy in Head-Mounted Augmented Reality Systems Using Eye Gaze

Corbett, Matthew 22 April 2024 (has links)
Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. Specifically, head-mounted AR devices can accurately sense and understand their environment through an increasingly powerful array of sensors such as cameras, depth sensors, eye gaze trackers, microphones, and inertial sensors. The ability of these devices to collect this information presents both challenges and opportunities for improving existing security and privacy techniques in this domain. In particular, eye gaze tracking is a ready-made capability for analyzing user intent, emotions, and vulnerability, and for serving as an input mechanism. However, modern AR devices lack systems to address their unique security and privacy issues: local pairing mechanisms usable while immersed in AR environments, bystander privacy protections, and defenses against the increased vulnerability to shoulder surfing while wearing AR devices all lack viable solutions. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring information security and protecting the privacy of those near the device. My research presents three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices. / Doctor of Philosophy / Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. The ability of these devices to collect information presents challenges and opportunities for improving existing security and privacy techniques in this domain. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring security and protecting the privacy of those near the device. My research presents three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices.
39

Communicating expertise in system operation and fault diagnosis to non-experts

Staderman, William P. 01 May 2003 (has links)
The use of systems that span many knowledge domains is becoming more common as technology advances, requiring expert performance in a domain from users who are usually not experts in that domain. This study examined a means of communicating expertise (in system operation and fault diagnosis) to non-experts and furthering the understanding of expert mental models. It has been suggested that conceptions of abstract models of system functions distinguish expert performance from non-expert performance (Hanisch, Kramer, and Hulin, 1991). This study examined the effects on performance of augmenting a simple control panel device with a model of the functions of the device, interacting with that model, and augmenting the device with graphically superimposed procedural indicators (directions). The five display conditions studied were: Device Only, Device + Model, Device + Procedural Indicators, Interactive Model, and Interactive Model + Procedural Indicators. The device and displays were presented on a PC workstation. Performance measures (speed and accuracy) and subjective measures (questionnaires, NASA TLX, and structured interviews) were collected. It was expected that participants who interacted with the device + procedural indicators condition would exhibit the shortest performance times and fewest errors; however, those who interacted with the simplest display (device only) were fastest and exhibited the fewest errors. Results of this study are discussed in terms of building a mental model and identifying situations that require a mental model. / Ph. D.
40

HD4AR: High-Precision Mobile Augmented Reality Using Image-Based Localization

Miranda, Paul Nicholas 05 June 2012 (has links)
Construction projects require large amounts of cyber-information, such as 3D models, to succeed. Unfortunately, this information is typically difficult for construction field personnel to access and use on-site, due to the highly mobile nature of the job and hazardous work environments. Field personnel rely on carrying around large stacks of construction drawings, diagrams, and specifications, or on traveling to a trailer to look up information electronically, reducing potential project efficiency. This thesis details my work on Hybrid 4-Dimensional Augmented Reality, known as HD4AR, a mobile augmented reality system for construction projects that provides high-precision visualization of semantically rich 3D cyber-information over real-world imagery. The thesis examines the challenges of augmenting reality on a construction site, describes how HD4AR overcomes these challenges, and empirically evaluates the capabilities of HD4AR. / Master of Science
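The abstract does not detail HD4AR's localization pipeline, so the sketch below shows a generic image-based localization flow with OpenCV as a stand-in: match binary features against a registered keyframe, recover the camera pose with RANSAC PnP, and project 3D model points into the site photo. All inputs (the keyframe's 3D feature coordinates, its descriptors, and the camera intrinsics) are assumed to come from an offline reconstruction step.

```python
import cv2
import numpy as np

def localize_and_overlay(photo, kf_kp3d, kf_desc, K, model_pts):
    """Localize a site photo against a registered keyframe, then
    project 3D model points (cyber-information) into the photo.

    kf_kp3d: Nx3 world coordinates of the keyframe's features;
    kf_desc: their ORB descriptors; K: 3x3 camera intrinsics;
    model_pts: Mx3 points of the 3D model to overlay."""
    gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp, desc = orb.detectAndCompute(gray, None)

    # Cross-checked Hamming matching between keyframe and photo features.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(kf_desc, desc)

    obj = np.float32([kf_kp3d[m.queryIdx] for m in matches])
    img = np.float32([kp[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    if not ok:
        return None                      # localization failed

    # Overlay the model points using the recovered camera pose.
    pts2d, _ = cv2.projectPoints(model_pts, rvec, tvec, K, None)
    for p in pts2d.reshape(-1, 2).astype(int):
        cv2.circle(photo, tuple(p), 3, (0, 255, 0), -1)
    return photo
```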
