About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Virtual primitives for the representation of features and objects in a remote telepresence environment

Wheeler, Alison January 2000 (has links)
This thesis presents the development of a set of novel graphical tools known as 'virtual primitives' that allow the user of a stereoscopic telepresence system to actively and intuitively model features in a remote environment. The virtual primitives provide visual feedback during the model creation process in the form of a dynamic wireframe of the primitive overlaid on and registered with the real object. The operator can immediately see the effect of his decisions and, if necessary, make minor corrections to improve the fit of the primitive during its generation. Virtual primitives are a generic augmented reality (AR) tool, and their applications extend beyond the modelling of a workspace for telerobot operation to other remote tasks such as visual inspection, surveying and collaborative design. An AR system has been developed and integrated with the existing Surrey Telepresence System. The graphical overlays are generated using virtual reality software and combined with the video images. To achieve a one-to-one correspondence between the real and virtual worlds, the AR system is calibrated using a simple pinhole camera model and a standard calibration algorithm. An average RMS registration error between the video and graphical images of less than one framegrabber pixel is achieved. An assessment of a virtual pointer confirms that this level of accuracy is acceptable for use with the virtual primitives. The concept of virtual primitives has been evaluated in an experiment to model three test objects. The results show that using a virtual primitive was superior in both accuracy and task completion time to using a pointer alone. Finally, a case study on the remote inspection of sewers demonstrates the advantages of virtual primitives in a real application. 
It confirms that the use of virtual primitives significantly reduces the subjective nature of the task, offers an increase in precision by an order of magnitude over conventional inspection methods, and provides additional useful data on the characteristics of the sewer features not previously available.
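The registration step described in this abstract rests on the standard pinhole projection model. As an illustration only (this is not the thesis's calibration code, and the intrinsic values below are invented for the example), a minimal sketch of projecting a 3D world point into pixel coordinates so a graphical overlay can be registered with video:

```python
import numpy as np

def project_point(K, R, t, X):
    """Project a 3D world point X to pixel coordinates with a pinhole
    model: x ~ K [R | t] X (homogeneous, then perspective divide)."""
    Xc = R @ X + t              # world -> camera coordinates
    x = K @ Xc                  # camera -> image plane
    return x[:2] / x[2]         # perspective divide -> pixels

# Illustrative intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                   # camera aligned with world axes
t = np.zeros(3)                 # camera at the world origin
pixel = project_point(K, R, t, np.array([0.1, 0.0, 2.0]))
```

Calibration in this setting means estimating `K`, `R` and `t` from known reference points so that projected graphics land on the corresponding video pixels.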
32

Evaluating the Effectiveness of Augmented Reality and Wearable Computing for a Manufacturing Assembly Task

Baird, Kevin Michael 14 July 1999 (has links)
The focus of this research was to examine how effectively augmented reality (AR) displays, generated with a wearable computer, could be used for aiding an operator performing a manufacturing assembly task. The research concentrated on comparing two technologies for generating augmented reality displays (opaque vs. see-through) with two current types of assembly instructions (a traditional assembly instruction manual vs. computer-aided instruction). The study was used to evaluate the effectiveness of the wearable-based augmented reality compared to traditional instruction methods, and was also used to compare two types of AR displays in the context of an assembly task. For the experiment, 15 subjects were asked to assemble a computer motherboard using the four types of instruction: paper manual, computer-aided instruction, an opaque AR display, and a see-through AR display. The study was run as a within-subjects design, where subjects were randomly assigned the order of instruction media. For the AR conditions, the augmented environments were generated with a wearable computer and viewed through two types of monocular, head-mounted displays (HMD). The first type of HMD was a monocular opaque HMD, and the second was a monocular see-through HMD. Prior to the experiment, all subjects performed a brief training session teaching them how to insert the various components of the motherboard into their respective slots. The time of assembly and assembly errors were measured for each type of media, and a questionnaire was administered to each subject at the end of each condition, and at the end of the experiment, to determine the usability of the four instructional media. The results of the experiment indicated that both augmented reality conditions were more effective instructional aids for the assembly task than either the paper instruction manual or the computer-aided instruction. 
The see-through HMD resulted in the fastest assembly times, followed by the opaque HMD, the computer-aided instruction, and the paper instructions, respectively. In addition, subjects made fewer errors using the AR conditions compared to the other two types of instructional media. However, while the two AR conditions were more effective instructional media when time was the response measure, there were still some important usability issues associated with the AR technology that were not present in the non-AR conditions. Many of the subjects indicated that both types of HMDs were uncomfortable, and over half expressed concerns about poor image contrast with the see-through HMDs. Finally, this thesis discusses the results of this study as well as implications for the design and use of AR and wearable computers for manufacturing assembly tasks. / Master of Science
33

Challenges of a Pose Computation Augmented Reality Game Application

Wang, Chiu Ni 12 September 2011 (has links)
No description available.
34

Assisting Spatial Referencing for Collaborative Augmented Reality

Li, Yuan 27 May 2022 (has links)
Spatial referencing denotes the act of referring to a location or an object in space. Since it is often essential in different collaborative activities, good support for spatial referencing can lead to exceptional collaborative experience and performance. Augmented Reality (AR) aims to enhance daily activities and tasks in the real world, including various collaborations and social interactions. Accurate and rapid spatial referencing in collaborative AR often requires detailed 3D information about the environment, which can be difficult for the system to acquire given the constraints of current technology. This dissertation seeks to address the issues related to spatial referencing in collaborative AR through 3D user interface design and different experiments. Specifically, we start by investigating the impact of poor spatial referencing on close-range, co-located AR collaborations. Next, we propose and evaluate different pointing ray techniques for referring to objects at a distance without knowledge of the physical environment. We further introduce marking techniques aiming to accurately acquire the position of an arbitrary point in 3D space that can be used for spatial referencing. Last, we provide a systematic assessment of a collaborative AR application that supports efficient spatial referencing in remote learning, to demonstrate its benefit. Overall, the dissertation provides empirical evidence of spatial referencing challenges and benefits to collaborative AR, and solutions to support adequate spatial referencing when model information about the environment is missing. / Doctor of Philosophy / People often exchange spatial information about objects when they work together. Example phrases include: "put that there", or "pick the third object from the left". On the other hand, Augmented Reality (AR) is the technology that displays 3D information in the real world to enhance or augment reality. 
Scientists and technology practitioners think that AR can help people collaborate in a better way. The AR system needs to have a good understanding of the physical environment to support exchanging spatial information in the first place. However, limited by current technology, acquiring spatial information from the real world is not always possible or reliable. In this dissertation, we first illustrate the severity of insufficient environmental knowledge when collaborators sit next to each other in AR. Then we present pointing ray techniques to help AR collaborators refer to distant objects without knowing where those objects are. We further explore different marking techniques that can help the AR system calculate the position of a point in space without scanning the area. Last, we provide an AR application that supports efficient spatial information communication in remote discussion around physical objects.
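One generic way to resolve a distant object reference when the ray cannot be intersected with environment geometry is to pick the candidate object whose direction deviates least in angle from the pointing ray. The sketch below is an illustration of that general idea, not the dissertation's techniques; it assumes the application already knows the candidate object positions:

```python
import numpy as np

def select_by_ray(origin, direction, objects):
    """Return the index of the object closest in angle to a pointing ray.
    Angular comparison avoids needing a surface for the ray to hit."""
    d = direction / np.linalg.norm(direction)
    best_idx, best_cos = None, -1.0
    for i, p in enumerate(objects):
        v = p - origin
        c = np.dot(v, d) / np.linalg.norm(v)   # cosine of angle to the ray
        if c > best_cos:
            best_idx, best_cos = i, c
    return best_idx

# Two candidate objects; the ray points straight down the +z axis.
objs = [np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0])]
picked = select_by_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), objs)
```

A collaborator's view could then highlight object `picked` as the referent of the pointing gesture.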
35

Phenomenal Things

Schoenborn, Eric Cade 19 January 2022 (has links)
Phenomenal Things is a comical look into the daily lives of Internet of Things (IoT) artifacts and their experiences as social beings in cyberspace. This Augmented Reality (AR) experience presents a storyworld set in the digital realm, where the digital personas of IoT artifacts are engaged in activities normally invisible to humans, such as information extraction, learning, talking to each other and communicating with other "things" online. By wearing a head-worn display (HWD), users will encounter anthropomorphized IoT artifacts going about their daily lives and come to understand these characters as digital beings with social lives. Placed inside of cyberspace, participants will find themselves within a circle of anthropomorphized IoT devices in dialogue with one another, as they welcome a new light bulb to their network. As participants move about the AR actors, proximity to each character will cause the participant to "friend" that character. "Friending" in this case means to get close to and influence the version of the story being told by changing the social network of the character. With this work I intend to create a mesmerizing yet subtly interactive experience using proxemics to create an interactive narrative where participants can form emotional bonds with the AR actors in this immersive theater experiment. / Master of Fine Arts / What is everyday life like for the billions of interconnected sensors and devices that make up the network known as the Internet of Things (IoT)? Many people struggle to accurately describe what the IoT is, so it is likely most of us are unaware of what these "smart" devices are specifically doing while continuously completing their digital chores. Beyond collecting information and serving their own unique functions, these devices now autonomously connect to social networks and interact with one another in ways meant to replicate human social networking. 
Phenomenal Things is a comical look at the social lives of these devices, from inside the Internet of Things. Told with the aid of an Augmented Reality Head Worn Display, the story stars anthropomorphized devices of a smart home network and is centered around the idea of these devices welcoming a new smart bulb to their network. The AR actors engage in dialogue to explain the network to the new bulb, what they are all doing there and how to communicate with other beings online. Participants can directly impact the version of the story being told by "friending" the various devices and thus influencing their point of view as so often happens with the social network experiences of humans.
36

Extended Situation Awareness Theory for Mobile Augmented Reality Interfaces to Support Navigation

Mi, Na 24 April 2014 (has links)
Despite the increasingly sophisticated capabilities of mobile AR guidance applications in providing new ways of interacting with the surrounding environment, empirical research remains needed in four principal areas: 1) identifying user needs and use cases, 2) developing an appropriate theoretical framework, 3) understanding users' interactions with the surrounding environment, and 4) avoiding information overload. To address these needs, a mixed-methods approach, involving two studies, was used to extend current Situation Awareness (SA) theory and evaluate the application of the extended theory. These were achieved in the context of a reality-augmented environment for the task of exploring an unfamiliar urban setting. The first study examined SA in terms of the processes that an individual employs and the essential requirements needed to develop SA for the case of urban exploratory navigation using mobile augmented reality (MAR). From this study, SA-supported design implications for an MAR guidance application were developed and used to evaluate the application of an extended SA theoretical cognitive model. The second study validated the earlier findings, and involved two specific applications of the translated SA-supported interface design and an evaluation of five conceptual design concepts. Results of the AR interface application suggested a significant effect of SA-supported interface design on users' SA, dependent on the number of Points of Interest (POIs) included in the interface. Results of the embedded map interface application showed a significant effect of SA-supported interface design on a user's SA. The SA-supported interface designs helped participants complete task queries faster and led to higher perceived interface usability. This work demonstrates that, by adopting a systematic approach, transformed requirements can be obtained and used to design and develop SA-supported strategies. 
In doing so, subsequent implementation of SA-supported strategies could enhance a user's SA in the context of exploratory navigation in an urban environment using MAR. Indeed, a validation process was initiated for the extracted user requirements by conducting evaluations of these SA-supported strategies. Finally, a set of preliminary design recommendations is proposed, with the goal of their eventual incorporation into the design and development of more effective mobile AR guidance applications. / Ph. D.
37

Pointing Techniques in AR : Design and Comparative Evaluation of Two Pointing Techniques in Augmented Reality

Bengani, Arham January 2021 (has links)
At present, human-computer interaction (HCI) is no longer limited to traditional input hardware like the mouse and keyboard. In the last few years, Augmented and Virtual Reality (AR, VR) have dramatically changed the way we interact with a computer. Currently, one of the many design challenges of these systems is the integration of the physical and digital aspects in an accessible and usable way. The design success of AR systems depends on the fluid and harmonious fusion of the material and digital worlds. Pointing, a fundamental gesture in communication, can enable easy and intuitive interaction within an AR application. This study explores two pointing techniques that can be used in an AR application. I developed two prototypes, both based on the concept of laser pointing: a virtual laser is cast from a point of origin into the application to perform pointing. The point of origin is the camera in the first prototype, called the camera laser, and a fiducial marker in the second, called the pen laser. The camera laser showed promising results in terms of ease of use and reliability, but the pen laser felt more natural to the user. In this study, I present the prototypes, followed by the user study and the results.
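Both prototypes reduce to casting a virtual ray from an origin (the camera, or the pen's fiducial) and finding where it lands. A generic ray-plane intersection sketch (not code from the thesis; the geometry below is purely illustrative):

```python
import numpy as np

def laser_hit(origin, direction, plane_point, plane_normal):
    """Intersect a pointing ray with a plane; return the hit point,
    or None if the ray is parallel to or points away from the plane."""
    d = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None                 # ray parallel to the plane
    s = np.dot(plane_normal, plane_point - origin) / denom
    return None if s < 0 else origin + s * d

# Camera-laser case: ray from the camera at the origin, angled down
# toward a horizontal "table" plane one unit below the camera.
hit = laser_hit(np.zeros(3), np.array([0.0, -1.0, 1.0]),
                np.array([0.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```

For the pen-laser variant, `origin` and `direction` would instead come from the tracked pose of the fiducial marker.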
38

An embedded augmented reality system

Groufsky, Michael Edward January 2011 (has links)
This report describes an embedded system designed to support the development of embedded augmented reality applications. It includes an integrated camera and built-in graphics acceleration hardware. An example augmented reality application serves as a demonstration of how these features are accessed, as well as providing an indication of the performance of the device. The embedded augmented reality development platform consists of the Gumstix Overo computer-on-module paired with the custom-built Overocam camera board. This device offers an ARM Cortex-A8 CPU running at 600 MHz and 256 MB of RAM, along with the ability to capture VGA video at 30 frames per second. The device runs an operating system based on version 2.6.33 of the Linux kernel. The main feature of the device is the OMAP3530 multimedia applications processor from Texas Instruments. In addition to the ARM CPU, it provides an on-board 2D/3D graphics accelerator and a digital signal processor. It also includes a built-in camera peripheral interface, reducing the complexity of the camera board design. A working example of an augmented reality application is included as a demonstration of the device's capabilities. The application was designed to represent a basic augmented reality task: tracking a single marker and rendering a simple virtual object. It runs at around 8 frames per second when a marker is visible and 13 frames per second otherwise. The result of the project is a self-contained computing platform for vision-based augmented reality. It may either be used as-is or customised with additional hardware peripherals, depending on the requirements of the developer.
39

Developing a Client/Server Architecture for a Mobile AR Urban Design Application

Partridge, Michael Jonathan January 2013 (has links)
This thesis describes research into developing a client/server architecture for a mobile Augmented Reality (AR) application. Following the earthquakes that rocked Christchurch, the city has changed forever. CityViewAR is an existing mobile AR application designed to show how the city looked before the earthquakes. In CityViewAR, 3D virtual building models are overlaid onto video captured by a smartphone camera. However, the current version of CityViewAR only allows users to browse information stored on the mobile device. In this research the author extends the CityViewAR application to a client/server model so that anyone can upload models and annotations to a server and have this information viewable on any smartphone running the application. In this thesis we describe related work on AR browser architectures, the system we developed, a user evaluation of the prototype system, and directions for future work.
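The client/server model described here can be pictured as a small REST-style service: clients POST model records to a server and GET the current list for display. The sketch below is a hypothetical stand-in, not the thesis's implementation; the endpoint, record fields, and in-memory store are all assumptions made for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MODELS = []  # in-memory store; a real deployment would use a database

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):                      # upload a model/annotation record
        length = int(self.headers["Content-Length"])
        MODELS.append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.end_headers()

    def do_GET(self):                       # list records for the AR browser
        body = json.dumps(MODELS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):           # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ModelHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/models"

# Client side: upload one building record, then fetch the listing back.
record = {"name": "demo_building", "lat": -43.53, "lon": 172.63}
req = urllib.request.Request(url, json.dumps(record).encode(),
                             {"Content-Type": "application/json"})
urllib.request.urlopen(req)
listing = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
```

A mobile client would issue the same GET on startup and render each returned record as a geolocated 3D overlay.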
40

Montanita: A Modern Augmented Reality System

Adler, Sean 01 January 2013 (has links)
Augmented Reality (AR) applications require novel rendering technologies, sensors, and interaction techniques. This thesis describes the nascent field of AR and outlines the design of a new college campus annotation application, which serves as a concrete example of how to flesh out a full AR system. Code for the application is included in the Appendix. As such, this thesis demonstrates that AR is an advanced paradigm that nonetheless constitutes a feasible challenge for independent programmers.
