About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Statistical Analysis and Evaluation of the 6DOF-utilization of a Handheld Augmented Reality Museum Application / Statistisk analys och evaluering av 6DOF-användningen av en handhållen förstärkt verklighetsapplikation för museum

Mataruga, Danilo January 2019 (has links)
This study explored the relatively new field of public mobile handheld AR and how the touchscreen-based input of smartphones affects the way users aged 10–12 interact with the six degrees of freedom (6DOF) that AR provides. Two experiments were performed, one in a public museum setting and one in a private school setting. A statistical analysis was performed comparing non-restricted and restricted touchscreen-based input. Quantitative and qualitative data were gathered through semi-structured interviews and non-participant observations. The results show no statistically significant relationship between the physical distance the smartphone was moved and the restriction of touchscreen-based input. The qualitative data suggest that a different software application may yield different results.
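As a rough, hypothetical illustration of the comparison the abstract describes, the sketch below computes a per-participant movement distance from logged device positions and runs a two-sample test between the restricted and non-restricted conditions. The thesis does not publish its data format or its choice of test, so the placeholder values and the use of Welch's t-test are assumptions made only for illustration.

```python
# Hypothetical sketch of the kind of analysis described above; the thesis does not
# publish its data format or statistical test, so everything named here is assumed.
import numpy as np
from scipy import stats

def path_length(positions):
    """Total distance travelled by the device, given an (N, 3) sequence of positions in metres."""
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

# Placeholder position log for one participant (not real study data).
positions = [(0.00, 1.50, 0.00), (0.10, 1.50, 0.05), (0.25, 1.45, 0.10)]
print(path_length(positions))            # ~0.28 m of movement

# Placeholder per-participant distances for the two conditions (not real study data).
unrestricted = np.array([3.2, 2.8, 4.1, 3.5, 2.9, 3.7])
restricted = np.array([3.0, 3.1, 3.9, 3.4, 2.7, 3.6])

t, p = stats.ttest_ind(unrestricted, restricted, equal_var=False)  # Welch's t-test
print(f"t = {t:.3f}, p = {p:.3f}")  # a large p would mirror the 'no significant difference' finding
```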
492

A case study on the effect of narrative in augmented reality experiences in museums / En fältstudie om effekten av berättelse i Augmented Reality-upplevelser på museum

Chen, Ni January 2018 (has links)
With the increasing popularity of Augmented Reality (AR) as a new medium in museums, this thesis presents an empirical study examining visitors' sense of presence in two types of AR experiences in museums: "informative" and "narrative". We developed two prototypes, both of which focused on delivering complementary information about a famous head statue at Medelhavsmuseet (the Mediterranean Museum of Stockholm). The "narrative" prototype brought the statue to life with a reconstructed appearance and a voice, allowing it to tell visitors its stories from a first-person perspective. The "informative" prototype, on the other hand, presented objective facts point by point from a third-person perspective. A series of user studies with 12 participants was conducted at Medelhavsmuseet, where the participants reported their sense of presence under the two respective conditions through a post-experiment questionnaire together with a semi-structured interview. The results suggest that participants barely experienced presence in the "informative" condition, while they experienced presence to a "moderately sufficient" degree in the "narrative" condition. The thesis reports the impact of narrative on specific aspects of presence. Overall, narrative increased the participants' sense of presence, and the heightened presence had positive impacts on their attitudes towards the experience. However, the study also identifies negative effects (e.g., on preference and learning effectiveness) that might be caused by a high sense of presence, and it discusses the observed relationships between presence and other factors (e.g., age). Finally, directions for future studies are pointed out with respect to improvements and extensions of the current work.
493

Object Placement in AR without Occluding Artifacts in Reality / Placering av objekt i AR utan att dölja objekt i verkligheten

Sténson, Carl January 2017 (has links)
Placement of virtual objects in Augmented Reality is often done without regard for the artifacts in the physical environment. This thesis investigates how placement can be done with those artifacts taken into account, considering only the placement of wall-mounted objects. Two prototypes were developed that use edges detected in RGB images, combined with volumetric properties, to identify physical artifacts and suggest areas for placing virtual objects. The first prototype analyzes each triangle in the model, which is computationally intensive and localizes the physical artifacts with low precision. The second prototype analyzes the detected RGB edges in world space, which proved to detect the features with precise localization and reduced calculation time. The second prototype manages this in a controlled setting; a more challenging environment would possibly pose other issues. In conclusion, placement in relation to volumetric and edge information from images of the environment is possible and could enhance the experience of being in a mixed reality, where physical and virtual objects coexist in the same world.
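The following is a simplified, image-space sketch of the idea behind the second prototype: detect edges that likely belong to physical wall artifacts and suggest placement only in edge-free regions. The actual prototype projects detected edges into world space using volumetric data; that projection, the OpenCV-based pipeline, the file name, and the window/margin parameters here are assumptions, not the thesis implementation.

```python
# Simplified, image-space approximation (assumed pipeline): detect edges that likely
# belong to physical artifacts, grow them by a safety margin, and only allow placement
# where a window around the candidate centre is edge-free.
import cv2
import numpy as np

def placement_mask(image_bgr, window=120, margin=15):
    """Boolean mask of pixel locations where a window x window object could be centred
    without overlapping detected edges."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((margin, margin), np.uint8))   # keep a margin from artifacts
    occupancy = cv2.boxFilter((edges > 0).astype(np.float32), -1, (window, window))
    return occupancy == 0                                            # True where the window holds no edges

frame = cv2.imread("wall.jpg")   # hypothetical photo of the target wall
if frame is not None:
    ys, xs = np.nonzero(placement_mask(frame))
    if len(xs):
        print("candidate placement centre (pixel):", xs[0], ys[0])
```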
494

Anymaker AR - Augmented reality as a mean to improve 3D sketching in digital space / Anymaker AR - Förstärkt verklighet som medium för att förbättra tredimensionellt skissande i den digitala världen

Häggvik, Adrian January 2017 (has links)
Digital three-dimensional sketching and modeling is a field in computer science that is constantly evolving through new interaction paradigms. Many solutions move away from traditional modern modeling software and aim to create a more natural and intuitive user experience. This report compares an existing touchscreen solution against a novel implementation using augmented reality that is designed to replicate the way we draw in real life. A comparative task-based user study was performed, and objective data were gathered together with a questionnaire and survey. The results indicate that subjects worked faster and preferred certain models when using the existing technology, while the new implementation achieved better or equal results in terms of spatial cognitive abilities, the frequency at which users needed to redo their work, and their willingness to reuse the software. Augmented reality showed good results when creating the simpler geometric shape of a pyramid but comparatively worse results for objects with less uniform shapes. With further improvement, augmented reality can be seen as a good means to improve the way we sketch and model in three dimensions.
495

An explorative study in the user experience of augmented reality enhanced manuals / En explorativ studie i användarupplevelsen av manualer förbättrade med förstärkt verklighet

Mattsson, Gustav, Hogler, Marcus January 2018 (has links)
Augmented reality has been shown to increase the effectiveness of assembly tasks in several studies related to industrial applications throughout the technology's existence. With the availability of smartphones and the recent release of mobile applications utilizing augmented reality, the concept of augmented-reality-assisted assembly can be applied to domestic use as well. This study examines the user experience of such an application to better understand its future potential. The application was made specifically for this study, with 3D animations showing each step of assembling a drawer for the Ikea Hemnes 8-drawer dresser. The results consist of qualitative measurements of user experience and quantitative data on task success, in the form of a time measurement and a count of the mistakes each participant made. Additional feedback was gathered in a post-test interview. An experimental group of eight people used the application together with the printed manual, and their results were compared to an equally sized control group using only the printed manual. The participants were generally positive about the looks, functionality, and usability of the application, which was reflected in their experience, but no significant improvement in task success could be obtained. There were, however, no observed negative consequences of introducing augmented reality to the domestic assembly task, and the future potential based on user experience is deemed satisfactory.
496

Mixed Reality Tailored to the Visually-Impaired

Omary, Danah M 08 1900 (has links)
The goal of the proposed device and software architecture is to apply the functionality of mixed reality (MR) to make a virtual environment that is more accessible to the visually impaired. We propose a glove-based system for MR that uses finger and hand movement tracking along with tactile feedback so that the visually impaired can interact with and obtain a more detailed sense of virtual objects and potentially even virtual environments. The software architecture makes current MR frameworks more accessible by augmenting the existing software and extensive 3D model libraries with both the interface to the glove-based system and the audibly navigable user interface (UI) of a virtual environment we have developed. We implemented a circuit with flexion/extension tracking for all five fingers of a single hand and variable vibration intensities for the vibration motors (vibromotors) on all five fingertips. The virtual environment is hosted in a Windows 10 application. The virtual hand and its fingers can be moved with the system's input, and virtual fingertips touching virtual objects trigger the vibromotors to vibrate for as long as the objects are touched. Picking up and moving virtual objects inside the virtual environment is also implemented in rudimentary form. In addition to the vibromotor responses, text-to-speech (TTS) is implemented in the application for when virtual fingertips touch virtual objects and for other relevant events in the virtual environment.
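A minimal sketch of the two mappings such a glove system depends on is given below: raw flex-sensor readings to a normalized finger bend, and virtual contact depth to vibromotor intensity. The ADC range, penetration scale, and linear mappings are hypothetical values for illustration; the thesis does not publish its calibration.

```python
# Illustrative sketch of the two mappings the glove system relies on. The ADC range,
# penetration scale, and linear mappings are hypothetical calibration values.

def adc_to_bend(adc_value, adc_straight=520, adc_fist=780):
    """Map a raw flex-sensor ADC reading to a normalized finger bend in [0, 1]."""
    span = adc_fist - adc_straight
    return max(0.0, min(1.0, (adc_value - adc_straight) / span))

def contact_to_pwm(penetration_mm, max_penetration_mm=10.0, pwm_max=255):
    """Map how deep a virtual fingertip sits inside an object to a vibromotor PWM duty value.
    Deeper contact gives stronger vibration; zero when not touching."""
    if penetration_mm <= 0.0:
        return 0
    ratio = min(penetration_mm / max_penetration_mm, 1.0)
    return int(round(ratio * pwm_max))

# Example: index finger half bent, fingertip 3 mm inside a virtual cube.
print(adc_to_bend(650))       # 0.5
print(contact_to_pwm(3.0))    # 76 (about 30% duty)
```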
497

Dynamic Shared State Maintenance In Distributed Virtual Environments

Hamza-Lup, Felix George 01 January 2004 (has links)
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual components. One of the challenges in such environments is maintaining a consistent view of the dynamic shared state in the presence of inevitable network latency and jitter. A consistent view of the shared scene will significantly increase the sense of presence among participants and facilitate their interactive collaboration. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. A review of the literature illustrates that the techniques for consistency maintenance in distributed Virtual Reality (VR) environments can be roughly grouped into three categories: centralized information management, prediction through dead reckoning algorithms, and frequent state regeneration. Additional resource management methods can be applied across these techniques to improve shared state consistency. Some of these techniques are related to the system infrastructure; others are related to the human nature of the participants (e.g., human perceptual limitations, area-of-interest management, and visual and temporal perception). An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring human participant interaction into the loop through a wide range of electronic motion sensors and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory, including 3D visualization applications using custom-built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, the results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems, and in light of current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system's real-time behavior and scalability.
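For context, the sketch below shows the generic dead-reckoning technique named in the literature review: each site extrapolates a remote entity's state from its last received update rather than waiting for the next packet. This is the textbook baseline, not the adaptive synchronization algorithm the dissertation introduces.

```python
# Generic dead-reckoning baseline (textbook technique, not the dissertation's
# adaptive synchronization algorithm): extrapolate a remote entity's position
# from its last received update, assuming constant velocity over the latency.
from dataclasses import dataclass

@dataclass
class StateUpdate:
    position: tuple    # (x, y, z) at send time
    velocity: tuple    # (vx, vy, vz) at send time
    timestamp: float   # sender clock, seconds

def dead_reckon(update: StateUpdate, now: float) -> tuple:
    """Predict the entity's current position from its last known state."""
    dt = now - update.timestamp
    return tuple(p + v * dt for p, v in zip(update.position, update.velocity))

# Example: an update that is 120 ms old by the time it is rendered locally.
u = StateUpdate(position=(1.0, 0.0, 2.0), velocity=(0.5, 0.0, -0.2), timestamp=10.00)
print(dead_reckon(u, now=10.12))   # (1.06, 0.0, 1.976)
```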
498

Modeling, Simulation, And Visualization Of 3d Lung Dynamics

Santhanam, Anand 01 January 2006 (has links)
Medical simulation has facilitated the understanding of complex biological phenomena through its inherent explanatory power. It is a critical component for planning clinical interventions and analyzing their effect on a human subject. The success of medical simulation is evidenced by the fact that over one third of all medical schools in the United States augment their teaching curricula using patient simulators. Medical simulators present combat medics and emergency providers with video-based descriptions of patient symptoms along with step-by-step instructions on clinical procedures that alleviate the patient's condition. Recent advances in clinical imaging technology have enabled effective medical visualization by coupling medical simulations with patient-specific anatomical models and their physically and physiologically realistic organ deformation. 3D physically based deformable lung models obtained from a human subject are tools for regional lung structure and function analysis. Static imaging techniques such as Magnetic Resonance Imaging (MRI), chest x-rays, and Computed Tomography (CT) are conventionally used to estimate the extent of pulmonary disease and to establish available courses for clinical intervention. The predictive accuracy and evaluative strength of these static imaging techniques may be augmented by improved computer technologies and graphical rendering techniques that can transform static images into dynamic representations of subject-specific organ deformations. By creating physically based 3D simulation and visualization, 3D deformable models obtained from subject-specific lung images will better represent lung structure and function. Variations in overall lung deformation may indicate tissue pathologies, so 3D visualization of functioning lungs may also provide a visual complement to current diagnostic methods. The feasibility of medical visualization using static 3D lungs as an effective tool for endotracheal intubation was previously shown using Augmented Reality (AR) based techniques in one of several research efforts at the Optical Diagnostics and Applications Laboratory (ODALAB). That effort also shed light on the potential of coupling such medical visualization with dynamic 3D lungs. The purpose of this dissertation is to develop 3D deformable lung models, built from subject-specific high-resolution CT data, that can be visualized in the AR-based environment. A review of the literature illustrates that techniques for modeling real-time 3D lung dynamics can be roughly grouped into two categories: geometrically based and physically based. Additional classifications include considering a 3D lung model as either a volumetric or a surface model, modeling the lungs as either a single compartment or multiple compartments, modeling either the air-blood interaction or the air-blood-tissue interaction, and considering either normal or pathophysical lung behavior. Validating the simulated lung dynamics is a complex problem and has previously been approached by tracking a set of landmarks on the CT images. An area that needs to be explored is the relationship between the choice of deformation method for the 3D lung dynamics and its visualization framework. Constraints on the choice of deformation method and the 3D model resolution arise from the visualization framework; the constraints of interest here are the real-time requirement and the level of interaction required with the 3D lung models.
The work presented here discusses a framework that facilitates a physics-based and physiology-based deformation of a single-compartment surface lung model while maintaining the frame-rate requirements of the visualization system. The framework is part of several research efforts at ODALab toward an AR-based medical visualization framework. It consists of three components: (i) modeling the pressure-volume (PV) relation, (ii) modeling the lung deformation using a Green's function based deformation operator, and (iii) optimizing the deformation using state-of-the-art Graphics Processing Units (GPUs). The validation of the results obtained in the first two modeling steps is also discussed for normal human subjects. Disease states such as pneumothorax and lung tumors are modeled using the proposed deformation method. Additionally, a method to synchronize the instantiations of the deformation across a network is also discussed.
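As a loose illustration of what a Green's function based deformation operator looks like in code, the sketch below displaces each surface vertex by a kernel-weighted superposition of responses to forces applied at source points. The simple 1/r kernel, uniform stiffness, and example forces are assumptions chosen for clarity; the dissertation's operator, its coupling to the pressure-volume model, and its GPU implementation are considerably more involved.

```python
# Loose illustration of a Green's-function-style deformation operator: every surface
# vertex is displaced by a kernel-weighted superposition of responses to forces applied
# at source points. The 1/r kernel, uniform stiffness, and example forces are assumptions
# for clarity, not the dissertation's operator or its PV-driven force model.
import numpy as np

def deform(vertices, sources, forces, stiffness=1.0, eps=1e-3):
    """vertices: (N, 3) mesh points; sources: (M, 3) force application points;
    forces: (M, 3) force vectors. Returns the displaced (N, 3) vertices."""
    disp = np.zeros_like(vertices)
    for s, f in zip(sources, forces):
        r = np.linalg.norm(vertices - s, axis=1) + eps   # distance from each vertex to the source
        g = 1.0 / (4.0 * np.pi * stiffness * r)          # simple radial Green's kernel
        disp += g[:, None] * f                           # superpose this source's contribution
    return vertices + disp

# Example: push three surface points outward from a single source below them.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(deform(verts,
             sources=np.array([[0.5, 0.5, -1.0]]),
             forces=np.array([[0.0, 0.0, 0.02]])))
```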
499

Using Augmented Reality For Studying Left Turn Maneuver At Un-signalized Intersection And Horizontal Visibility Blockage

Moussa, Ghada 01 January 2006 (has links)
Augmented reality (AR) is a promising paradigm that can provide users with real-time, high-quality visualization of a wide variety of information. In AR, virtual objects are added to the real-world view in real time. AR technology can offer a very realistic environment for driving enhancement as well as for testing driving performance under different scenarios. This can be achieved by adding virtual objects (people, vehicles, hazards, and other objects) to the normal view while driving in a safe, controlled environment. In this dissertation, the feasibility of adapting AR technology to traffic engineering was investigated. Two AR systems, the AR Vehicle (ARV) system and the Offline AR Simulator (OARSim), were built. The systems' outcomes, as well as on-the-road driving under AR, were evaluated. In evaluating the systems' outcomes, the systems successfully duplicated real scenes and generated new scenes without any visual inconsistency. In evaluating on-the-road driving under AR, drivers' distance judgment, speed judgment, and level of comfort while driving were assessed. In addition, our systems were used to conduct two traffic engineering studies: left-turn maneuvers at an un-signalized intersection, and horizontal visibility blockage when following a light-truck vehicle. The results from this work support the validity of our AR systems as a surrogate for field testing in transportation research.
500

Real-time Monocular Vision-based Tracking For Interactive Augmented Reality

Spencer, Lisa 01 January 2006 (has links)
The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element in many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimum set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but an essential component of an augmented reality system; tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera and used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms. Since the lighting in most environments where video monitoring is done is close to white (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outside), we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces which have been shown to be invariant to various lighting conditions, including view angle, light angle, light intensity, or light color, using models of the physical properties of reflection. Our experiments show how well these derived quantities actually remain constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that are more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid-colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard: easy because of the single color and a shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid-colored objects that uses color and edge information and is fast enough for real-time operation.
We also demonstrate a fast deinterlacing method to avoid "tearing" of fast-moving edges when recorded by an interlaced camera, and optimization techniques that usually achieved a speedup of about 10 over an implementation that already used optimized image-processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations. Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time. The results are displayed as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection. We present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data and leveraging the temporal coherence in video data to mitigate false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment, along with the two augmented reality applications, provides improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance and video analysis.
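One standard example of the kind of derived, illumination-invariant color space discussed in the color experiment is normalized chromaticity, where dividing each channel by the channel sum cancels a uniform scaling of intensity. The sketch below is a textbook illustration, not necessarily one of the exact spaces the dissertation evaluated.

```python
# Normalized chromaticity: a textbook illumination-intensity-invariant color space.
# Dividing each channel by the channel sum cancels a uniform scaling of brightness.
import numpy as np

def chromaticity(rgb):
    """Convert (..., 3) RGB values to (r, g) chromaticity coordinates."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                    # avoid division by zero for black pixels
    norm = rgb / s
    return norm[..., 0], norm[..., 1]

# Dimming a surface to 40% of its brightness leaves its chromaticity unchanged.
bright = np.array([200.0, 100.0, 50.0])
print(chromaticity(bright))        # (~0.571, ~0.286)
print(chromaticity(0.4 * bright))  # identical values, illustrating intensity invariance
```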
