
Exploring eye-tracking and augmented reality interaction for Industry 4.0 : Study of eye-tracking and augmented reality for manipulation, training and teleassistance

Garcia Sacristan, Eduardo January 2019
In this project, we explore eye-tracking enabled interaction in augmented reality for training, teleassistance and controlling Internet of Things devices in the forthcoming manufacturing industry. We performed a design exploration with industrial partners that culminated in the design and implementation of a series of prototypes using gaze for interaction. To explore the possible benefits, we compared their efficiency, effectiveness and user experience against counterparts not using gaze. Overall, we found that participants using the eye-tracking implementation scored better on a subjective user experience questionnaire regarding comfort, stress and perceived completion time. In the training prototypes, participants performed faster and committed fewer errors, while in the teleassistance and Internet of Things prototypes they performed similarly to mouse or touch. We hence argue that augmented reality and eye-tracking can improve the overall experience of users in the manufacturing industry, or at least perform as well as well-established user input devices, with the benefit of freeing the user's hands.
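Gaze-based selection in prototypes like these is commonly implemented with a dwell-time trigger: a target activates once the gaze has rested on it long enough. The thesis does not publish its implementation, so the sketch below is a minimal illustrative version; the sample format, radius, and 500 ms threshold are assumptions.

```python
# Hypothetical sketch of dwell-based gaze selection, a common hands-free
# interaction technique. Radius and dwell threshold are illustrative.

def dwell_select(samples, target, radius=40.0, dwell_ms=500.0):
    """Return the timestamp at which the target is selected, or None.

    samples: iterable of (x, y, t_ms) gaze points, ordered by time.
    target:  (x, y) centre of the selectable object.
    """
    tx, ty = target
    dwell_start = None
    for x, y, t in samples:
        inside = (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2
        if inside:
            if dwell_start is None:
                dwell_start = t          # gaze just entered the target
            elif t - dwell_start >= dwell_ms:
                return t                 # held long enough: select
        else:
            dwell_start = None           # gaze left the target: reset timer
    return None

# Toy trace: gaze drifts onto the target at t=100 and stays there.
trace = [(300, 300, 0), (105, 98, 100), (102, 101, 300), (99, 100, 700)]
print(dwell_select(trace, target=(100, 100)))  # -> 700
```

A real system would additionally smooth the gaze signal and give visual feedback during the dwell, but the reset-on-exit logic above is the core of the technique.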

Augmented reality fonts with enhanced out-of-focus text legibility

Arefin, Mohammed Safayet 09 December 2022
In augmented reality, information is often distributed between real and virtual contexts, and often appears at different distances from the viewer. This raises the issues of (1) context switching, when attention is switched between real and virtual contexts, (2) focal distance switching, when the eye accommodates to see information in sharp focus at a new distance, and (3) transient focal blur, when information is seen out of focus, during the time interval of focal distance switching. This dissertation research has quantified the impact of context switching, focal distance switching, and transient focal blur on human performance and eye fatigue in both monocular and binocular viewing conditions. Further, this research has developed a novel font that when seen out-of-focus looks sharper than standard fonts. This SharpView font promises to mitigate the effect of transient focal blur. Developing this font has required (1) mathematically modeling out-of-focus blur with Zernike polynomials, which model focal deficiencies of human vision, (2) developing a focus correction algorithm based on total variation optimization, which corrects out-of-focus blur, and (3) developing a novel algorithm for measuring font sharpness. Finally, this research has validated these fonts through simulation and optical camera-based measurement. This validation has shown that, when seen out of focus, SharpView fonts are as much as 40 to 50% sharper than standard fonts. This promises to improve font legibility in many applications of augmented reality.
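The dissertation models defocus with Zernike polynomials and sharpens fonts via total-variation optimization; the 1-D sketch below is a deliberately simplified stand-in that only illustrates the underlying intuition. The box-filter blur and squared-gradient sharpness metric are assumptions for the sketch, not the SharpView pipeline.

```python
# Simplified 1-D illustration of why out-of-focus text loses legibility:
# blur spreads edge transitions, lowering gradient energy. The real work
# uses Zernike-based blur models and a dedicated font-sharpness measure.

def box_blur(signal, width):
    """Convolve a 1-D intensity profile with a normalized box kernel."""
    n = len(signal)
    half = width // 2
    out = []
    for i in range(n):
        window = signal[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def sharpness(signal):
    """Squared-gradient energy: big jumps between neighbours = sharp edges."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))

# A crisp glyph stroke: background 0, ink 1.
stroke = [0, 0, 0, 1, 1, 1, 0, 0, 0]
blurred = box_blur(stroke, width=3)

print(sharpness(stroke) > sharpness(blurred))  # -> True
```

A "sharper when defocused" font, in these terms, is one whose profile retains more gradient energy after the blur operator is applied.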

  • The Current State of Augmented Reality Adoption : A look at emerging technology adoption / Det aktuella läget för användning av förstärkt verklighet

Berggren, Oliver January 2023
Augmented reality (AR) has seen a rise in public and corporate interest in recent years, and with Apple's announcement of the Vision Pro headset, it is an exciting time to study the technology. The question on many people's minds is whether AR is the computing platform of tomorrow or a fad. However, this uncertainty is generally the case for emerging technologies, and challenges exist with adoption for it to go from interest to value creator. This research paper explores technology adoption and the current state of Augmented Reality (AR) adoption in a mobile context by utilizing common innovation frameworks: Diffusion of Innovations, the Hype Cycle, and Disruptive Innovations. There seems to be a disparity between the practical and theoretical understanding of technology adoption. Some concepts are misused or unused in practice but well understood in theory, and the other way around. This ambiguity and disparity could lead to suboptimal theory development and practical application. The purpose of this paper is to explore emerging technology adoption and to nuance the academic and practical knowledge of technology adoption by studying AR. It is important to evolve understanding in this area to decrease resource waste and to increase societal and technological progress. This paper analyzed Google's ARCore data, search topic data, and research publication data, and interviewed AR companies to answer the research questions. The results were, first, that the level of mobile AR adoption is at 40 percent of the total AR mobile market. Secondly, the current interest in AR is 50 to 60 percent of the peak interest in 2016, while research is increasing exponentially. Finally, the study found that the technology is not disruptive to a high degree at the moment.

Augmented Reality Method for Supporting Time Studies in Manual Assembly Processes

Domenech, Sofía January 2022
In the 21st century, with the so-called fourth Industrial Revolution, productivity at work is becoming increasingly important. Companies are looking to invest in the technological pillars of Industry 4.0 in order to advance and improve their results. To this end, they are striving for greater process efficiency. Labour productivity is an essential measure for any company, as it is linked to economic growth and development. Higher worker productivity means better-used resources, more efficiently performed tasks and greater competitiveness, as well as an increase in strengths and a reduction in weaknesses. One of the most common types of work measurement is the time study, through which it is determined how much time a skilled worker spends under set conditions to complete a task. This study is carried out by a qualified professional who observes the employee using a time measurement device. At the same time, the quality of the work can be evaluated. The Swedish company Xylem, known for its water technology, wanted to automate the time study in one of its assembly processes, in particular the assembly of a water pump head. The purpose was to reduce the resources required, to have a system to track the employee's hands, and to identify the start and end points of the task. With all of this in mind, the possibility of using Augmented Reality was considered. To that end, a program was created that includes hand tracking with coloured spheres to facilitate the assembly process. It also includes QR code scanning to help locate the work area correctly. In addition, it has cube-shaped sensors that help measure time and provide process instructions to assist workers. When the job is finished, the program automatically displays a time log and indicates the speed of the work performed. All of this helps to improve productivity and safety at work, making a significant contribution to business sustainability.
/ Exchange student, Universidad de Málaga, Spain
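The time-study arithmetic that such a system automates follows a standard industrial-engineering formula: average the observed cycle times, level them by a performance rating, and add an allowance. A minimal sketch with invented figures (not Xylem's data):

```python
# Hedged sketch of classic time-study scoring: observed cycle times are
# levelled by a performance rating and inflated by an allowance factor to
# yield a standard time. All numbers below are illustrative.

def standard_time(observed_times, rating=1.0, allowance=0.15):
    """observed_times: cycle times in seconds from repeated observations.
    rating: observer's pace judgment (1.0 = normal pace).
    allowance: fraction added for rest, delays, and personal needs.
    """
    observed = sum(observed_times) / len(observed_times)   # representative cycle
    normal = observed * rating                             # levelled to normal pace
    return normal * (1.0 + allowance)                      # add allowances

cycles = [62.0, 58.0, 60.0]          # three timed assembly cycles (s)
print(standard_time(cycles, rating=1.0, allowance=0.25))  # -> 75.0
```

The AR system described above effectively replaces the stopwatch observations (the `observed_times` input) with automatically detected start and end points.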

System Support for Next-Gen Mobile Applications

Jiayi Meng (16512234) 10 July 2023
Next-generation (Next-Gen) mobile applications, namely Extended Reality (XR), which encompasses Virtual/Augmented/Mixed Reality (VR/AR/MR), promise to revolutionize how people interact with technology and the world, ushering in a new era of immersive experiences. However, the hardware capacity of mobile devices will not grow proportionally with the escalating resource demands of mobile apps due to their battery constraint. To bridge the gap, edge computing has emerged as a promising approach. It is further boosted by emerging 5G cellular networks, which promise low latency and high bandwidth. However, realizing the full potential of edge computing faces several fundamental challenges.

In this thesis, we first discuss a set of fundamental design challenges in supporting Next-Gen mobile applications via edge computing. These challenges extend across the three key system components involved: mobile clients, edge servers, and cellular networks. We then present how we address several of these challenges, including (1) how to coordinate mobile clients and edge servers to achieve the stringent QoE requirements of Next-Gen apps; (2) how to optimize the energy consumption of running Next-Gen apps on mobile devices to ensure a long-lasting user experience; and (3) how to model and generate the control-plane traffic of cellular networks to enable innovation in mobile network architecture to support Next-Gen apps not only over 4G but also over 5G and beyond.

First, we present how to optimize latency in an edge-assisted XR system via mobile-client and edge-server co-design. Specifically, we exploit key insights about frame similarity in VR to build the first multiplayer edge-assisted VR design, Coterie. We demonstrate that, compared with prior work on single-player VR, Coterie reduces the per-player network load by 10.6X-25.7X and can easily support 4 players for high-quality VR apps on Pixel 2 over 802.11ac, running at 60 FPS and under 16 ms responsiveness, without exhausting the finite wireless bandwidth.

Second, we focus on the energy perspective of running Next-Gen apps on mobile devices. We study a major limitation of a classic and de facto app energy management technique, reactive energy-aware app adaptation, which was first proposed two decades ago. We propose, design, and validate a new solution, the first proactive energy-aware app adaptation, that effectively tackles the limitation and achieves higher app QoE while meeting a given energy drain target. Compared with traditional approaches, our proactive solution improves the QoE by 44.8% (Pixel 2) and 19.2% (Moto Z3) under a low power budget.

Finally, we delve into the third system component, cellular networks. To facilitate innovation in mobile network architecture to better support Next-Gen apps, we characterize and model the control-plane traffic of cellular networks, which has been mostly overlooked by prior work. We first prove that traditional probability distributions widely used for modeling Internet traffic (e.g., Poisson, Pareto, and Weibull) cannot model the control-plane traffic due to its much higher burstiness and the longer tails in its cumulative distributions. We then propose a two-level state-machine-based traffic model built on the semi-Markov model. We finally validate that traces synthesized by our model differ little from the real traces, i.e., within 1.7%, 4.9%, and 0.8% for phones, connected cars, and tablets, respectively. We also show that our model can be easily adjusted from LTE to 5G, enabling further research on control-plane design and optimization for 4G/5G and beyond.
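As a rough illustration of the two-level idea, the outer level of such a model picks the next device state while the inner level draws a heavy-tailed dwell time for it. The sketch below uses invented states, transition probabilities, and Pareto shapes, not the fitted parameters from the thesis:

```python
import random

# Illustrative two-level semi-Markov traffic generator: level 1 is a state
# machine over RRC-like device states; level 2 draws a heavy-tailed (Pareto)
# dwell time per state, then emits a control-plane event. All parameters
# here are invented for the sketch.

TRANSITIONS = {                       # level 1: which state comes next
    "IDLE":      [("CONNECTED", 0.9), ("IDLE", 0.1)],
    "CONNECTED": [("IDLE", 0.7), ("CONNECTED", 0.3)],
}
PARETO_SHAPE = {"IDLE": 1.5, "CONNECTED": 2.5}   # level 2: dwell-time law

def synthesize(n_events, seed=0):
    rng = random.Random(seed)
    state, t, trace = "IDLE", 0.0, []
    for _ in range(n_events):
        t += rng.paretovariate(PARETO_SHAPE[state])   # heavy-tailed dwell
        trace.append((round(t, 3), state))            # event on state exit
        r, acc = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:             # sample next state
            acc += p
            if r < acc:
                state = nxt
                break
    return trace

trace = synthesize(5)
print(len(trace))  # -> 5
```

The heavy-tailed dwell distribution is the point: it reproduces the burstiness and long tails that, per the thesis, Poisson-style models cannot capture.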

An Exploration of the Virtual Digital Twin Capture for Spatial Tasks and its Applications

Vedapalle Sri Sai Swarup Reddy (12468435) 27 April 2023
Our generation is currently at the juncture of the fourth industrial revolution, Industry 4.0. Emergent technologies such as Augmented Reality (AR), the Internet of Things (IoT), Artificial Intelligence (AI), cloud computing, big data, and more are at the center of this. Amidst all these, the concept of digital twinning is a promising technology for realizing Industry 4.0. Simply put, a Digital Twin is a virtual representation of a real task, action, or object. This thesis explores the parameters and details required to generate a Digital Twin. Using these insights, we propose two applications that utilize digital twinning: EditAR and AnnotateXR. EditAR is an AR workflow for authoring kinesthetic instructions for spatial tasks. AnnotateXR is an Extended Reality (XR) workflow for automating data annotation to support multiple Computer Vision (CV) applications. We evaluate these systems through user studies and report the results on the usability and viability of these workflows. In an evaluation study, EditAR received an average System Usability Scale (SUS) score of 82.0. Over the course of a user study using AnnotateXR, users were able to generate a total of 112,737 semantically segmented images and 144 videos annotated for action segmentation in 66.55 minutes. AnnotateXR received an average SUS score of 91.0.
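The usability figures above are System Usability Scale (SUS) scores. For reference, the standard SUS scoring rule is easy to sketch; the response sheet below is a made-up example, not study data:

```python
# Standard SUS scoring: ten 1-5 Likert items; odd-numbered (positive) items
# contribute (response - 1), even-numbered (negative) items contribute
# (5 - response); the sum is scaled by 2.5 onto a 0-100 range.

def sus_score(responses):
    """responses: ten 1-5 Likert answers, item 1 first."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

sheet = [5, 1, 5, 2, 4, 1, 5, 1, 5, 2]   # one hypothetical participant
print(sus_score(sheet))  # -> 92.5
```

Scores like 82.0 and 91.0 sit well above the commonly cited average of 68, which is what makes the reported results notable.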

Explore the Design and Authoring of AI-Driven Context-Aware Augmented Reality Experiences

Xun Qian (15339328) 24 April 2023
With advances in hardware and mobile computing power, Augmented Reality (AR) has shown promise in various areas of our everyday life and work. By superimposing virtual assets onto the real world, the boundary between the digital and physical spaces has been significantly blurred, which bridges a large amount of digital augmentation and intelligence with the surroundings of physical reality. Meanwhile, thanks to ongoing developments in Artificial Intelligence (AI) perception algorithms such as object detection, scene reconstruction, and human tracking, the dynamic behaviors of digital AR content have been extensively associated with the physical contexts of both humans and environments. This context-awareness enabled by the emerging techniques enriches the potential interaction modalities of AR experiences and improves the intuitiveness and effectiveness of the digital augmentation delivered to consumers. Therefore, researchers are increasingly motivated to include more contextual information in the AR domain to create novel AR experiences that augment their activities in the physical world.

On a broader level, our work in this thesis focuses on novel designs and modalities that combine contextual information with AR content behaviors in context-aware AR experiences. In particular, we design the AR experiences by inspecting different types of contexts from the real world, namely 1) human actions, 2) physical entities, and 3) interactions between humans and physical environments. To this end, we explore 1) software and hardware modules, and conceptual models, that perceive and interpret the contexts required by the AR experiences, and 2) supportive authoring tools and interfaces that enable users and designers to define the associations of the AR contents and the interaction modalities leveraging the contextual information.

In this thesis, we mainly study the following workflows: 1) designing adaptive AR tutoring systems for human-machine interaction, 2) customizing human-involved context-aware AR applications, 3) authoring shareable semantic-aware AR experiences, and 4) enabling hand-object-interaction dataset collection for scalable context-aware AR application deployment. We further develop the enabling techniques and algorithms, including 1) an adaptation model that adaptively varies the AR tutoring elements based on the learner's real-time interactions with the physical machines, 2) a customized video-see-through AR headset for pervasive human-activity detection, 3) a semantic adaptation model that adjusts the spatial relationships of the AR contents according to a semantic understanding of different physical entities and environments, and 4) an AR-based interface that empowers novice users to collect high-quality datasets for training user- and site-specific networks in hand-object-interaction-aware AR applications.

Takeaways from this research series include 1) the use of modern AI modules effectively expands both the spatial and contextual scalability of AR experiences, and 2) the design of the authoring systems and interfaces lowers the barrier for end-users and domain experts to leverage AI outputs in the creation of AR experiences tailored for target users. We conclude that involving AI techniques in both the creation and implementation stages of AR applications is crucial to building an intelligent, adaptive, and scalable ecosystem of context-aware AR applications.

An exploratory research of ARCore's feature detection

Eklind, Anna, Stark, Love January 2018
Augmented reality has been on the rise for some time and has begun making its way onto the mobile market for both iOS and Android. In 2017, Apple released ARKit for iOS, a software development kit for developing augmented reality applications. To counter this, Google released its own variant, ARCore, on 1 March 2018. ARCore is likewise a software development kit for developing augmented reality applications, but targets the Android, Unity and Unreal platforms. Since ARCore was released only recently, it is still unknown what particular limitations it may have. The purpose of this paper is to give companies and developers an indication of ARCore's potential limitations. The goal of this work is to map how well ARCore performs under different circumstances and, in particular, how its feature detection works and behaves. A quantitative study was conducted using the case study method. Various tests were performed with a modified test application supplied by Google. The tests examined ARCore's feature detection, the process that analyzes the environment presented to the application and enables the user to place a virtual object on the physical environment. The tests covered how ARCore works under different light levels, on different types of surfaces, at different angles, and with the device stationary versus moving. From this testing, some conclusions could be drawn about light levels, surfaces, and the differences between a moving and a stationary device. More research and testing following these principles are needed to draw further conclusions about the system and its limitations; how this should be done is presented and discussed.
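ARCore's feature detector is proprietary, but the sensitivity to light levels that the tests probe can be illustrated with a toy gradient-threshold detector: dimmer scenes produce weaker image gradients, so fewer points clear a fixed threshold. Everything below (image, contrast values, threshold) is invented for illustration:

```python
# Toy illustration (not ARCore's implementation) of why feature detection
# degrades in low light: candidate points are pixels whose local gradient
# magnitude clears a fixed threshold, and dim scenes have weaker gradients.

def detect_features(image, threshold=0.5):
    """Count pixels whose squared gradient magnitude exceeds the threshold."""
    h, w = len(image), len(image[0])
    count = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]
            gy = image[y + 1][x] - image[y][x]
            if gx * gx + gy * gy > threshold:
                count += 1
    return count

def checkerboard(size, contrast):
    """Synthetic scene: alternating bright/dark squares scaled by contrast."""
    return [[contrast * ((x // 2 + y // 2) % 2) for x in range(size)]
            for y in range(size)]

well_lit = checkerboard(8, contrast=1.0)   # strong edges
dim      = checkerboard(8, contrast=0.3)   # same scene, low light
print(detect_features(well_lit) > detect_features(dim))  # -> True
```

This mirrors the paper's finding in spirit: the same surface yields fewer trackable points as illumination, and hence contrast, drops.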

Exploring Augmented Reality for enhancing ADAS and Remote Driving through 5G : Study of applying augmented reality to improve safety in ADAS and remote driving use cases

Meijer, Max Jan January 2020
This thesis consists of two projects focusing on how 5G can be used to make vehicles safer. The first project focuses on conceptualizing near-future use cases of how Advanced Driver Assistance Systems (ADAS) can be enhanced through 5G technology. Four concepts were developed in collaboration with various industry partners. These concepts were successfully demonstrated in a proof-of-concept at the 5G Automotive Association (5GAA) “The 5G Path of Vehicle-to-Everything Communication: From Local to Global” conference in Turin, Italy. This proof-of-concept was the world’s first demonstration of such a system. The second project focuses on a more futuristic use case, namely remote operation of semi-autonomous vehicles (sAVs). As part of this work, we explored whether augmented reality (AR) can be used to warn remote operators of dangerous events, and whether such augmentations can compensate during critical events. These events are defined as occurrences in which the network conditions are suboptimal and the information provided to the operator is limited. To evaluate this, a simulator environment was developed that uses eye-tracking technology to study the impact of such scenarios through user studies. The simulator establishes an extendable platform for future work. Through experiments, we found that AR can be beneficial in spotting danger; however, it also directly affects the operator’s visual scanning patterns when viewing the scene.
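Simulators that study visual scanning behavior with eye-tracking typically first segment the raw gaze stream into fixations and saccades. A common choice is the dispersion-threshold algorithm (I-DT), sketched below; the thesis does not state which algorithm it uses, and the thresholds here are illustrative:

```python
# Sketch of the dispersion-threshold identification (I-DT) algorithm for
# separating fixations from saccades in a gaze stream. Dispersion limit and
# minimum window length are illustrative assumptions.

def idt_fixations(samples, max_dispersion=30.0, min_samples=3):
    """samples: list of (x, y) gaze points at a fixed sample rate.
    Returns (start, end) index pairs of detected fixations (end exclusive)."""
    def dispersion(pts):
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1                       # grow the window while compact
            fixations.append((i, j))
            i = j                            # resume after the fixation
        else:
            i += 1                           # saccade: slide one sample on
    return fixations

gaze = [(100, 100), (102, 99), (101, 101), (400, 300), (401, 299)]
print(idt_fixations(gaze))  # -> [(0, 3)]
```

Metrics like those discussed above (where and how long operators look, and how AR cues shift those patterns) are computed over the fixation segments this kind of algorithm produces.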

Conformal Tracking For Virtual Environments

Davis, Larry Dennis, Jr. 01 January 2004
A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments supporting the full range from VR to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment. This is referred to as the tracking problem, which points to the need for accurate and precise tracking in virtual environments. Marker-based tracking is a technique that employs fiduciary marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. The realization that tracking performance is linked to probe performance necessitates investigation into the design of tracking probes for proponents of marker-based tracking. The challenges involved with probe design include prediction of the accuracy and precision of a tracking probe, the creation of arbitrarily-shaped tracking probes, and the assessment of the newly created probes. To address these issues, we present a pioneering framework for designing conformal tracking probes. Conformal in this work means adapting to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, the elements used for completing the steps of the framework are discussed. We start with the application of optimization methods for determining the probe geometry. Two overall methods for mapping markers on tracking probes are presented, the Intermediary Algorithm and the Viewpoints Algorithm.
Next, we examine the method used for pose estimation and present a mathematical model of error propagation used for predicting probe performance in pose estimation. The model uses first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we discuss testing the performance of each probe design. Then, post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe. The predicted accuracy of the probe was 0.06 deg and the measured accuracy was 0.083 ± 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 ± 0.03 deg. The larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 ± 0.02 deg. The third tracking probe was a semi-spherical head tracking probe. The predicted accuracy in orientation and position was 0.54 ± 0.24 deg and 0.24 ± 0.1 mm, respectively. The experimental accuracy in orientation and position was 0.60 ± 0.03 deg and 0.225 ± 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe, created using the conformal design process. The predicted accuracy of this probe was 0.032 ± 0.02 deg in orientation and 0.14 ± 0.08 mm in position. The measured accuracy of the probe was 0.028 ± 0.01 deg in orientation and 0.11 ± 0.01 mm in position. These results constitute an order-of-magnitude improvement in orientation over current marker-based tracking probes, indicating the benefits of a conformal tracking approach. This result also translates to a predicted positional overlay error, for a virtual object presented at 1 m, of less than 0.5 mm, which surpasses reported overlay performance in virtual environments.
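The dissertation propagates Gaussian marker noise through pose estimation analytically; the same probe-size effect can be shown numerically. The 2-D Monte Carlo sketch below (marker layouts and noise level invented) recovers the least-squares rotation in closed form and shows the larger probe's smaller orientation error:

```python
import math
import random
import statistics

# Monte Carlo stand-in for the first-order error propagation described above:
# perturb marker positions with Gaussian noise, re-estimate the probe's
# orientation, and measure the spread of the recovered angle. In 2-D, the
# least-squares (Procrustes) rotation between two centred marker sets has a
# closed form, which keeps the sketch dependency-free. Layouts and sigma
# are invented, not the dissertation's probes.

def estimate_angle(ref, obs):
    """Closed-form 2-D Procrustes rotation (radians); assumes centred sets."""
    num = sum(x * yo - y * xo for (x, y), (xo, yo) in zip(ref, obs))
    den = sum(x * xo + y * yo for (x, y), (xo, yo) in zip(ref, obs))
    return math.atan2(num, den)

def orientation_error_deg(markers, sigma=0.1, trials=2000, seed=0):
    """Std. dev. of the recovered angle when markers get N(0, sigma) noise."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        noisy = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
                 for x, y in markers]
        errs.append(math.degrees(estimate_angle(markers, noisy)))
    return statistics.stdev(errs)

small = [(1, 0), (0, 1), (-1, 0), (0, -1)]          # compact probe
large = [(10, 0), (0, 10), (-10, 0), (0, -10)]      # same shape, 10x bigger
print(orientation_error_deg(large) < orientation_error_deg(small))  # -> True
```

This reproduces the probe-size discussion qualitatively: for the same marker noise, angular error shrinks roughly in proportion to the probe's radius, which is why the larger concentric probe above both predicted and measured better than the smaller one.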
