521 |
AR Physics: Transforming physics diagrammatic representations on paper into interactive simulations. Zhou, Yao 01 January 2014 (has links)
A problem representation is a cognitive structure created by the solver in correspondence to the problem. Sketching representative diagrams in the domain of physics encourages a problem-solving strategy that starts from 'envisionment', by which one internally simulates the physical events and predicts outcomes. Research studies also show that sketching representative diagrams improves learners' performance in solving physics problems. The pedagogic benefits of sketching representations on paper keep this traditional learning strategy pivotal and worth preserving and integrating into the current digital learning landscape. In this paper, I describe AR Physics, an Augmented Reality based application intended to facilitate the learning of physics concepts about objects' linear motion. It affords the verified physics learning strategy of sketching representative diagrams on paper, and explores the capability of Augmented Reality to enhance visual conceptions. The application converts diagrams drawn on paper into virtual representations displayed on a tablet screen. Learners can thus create physics simulations based on the diagrams and test their "envisionment" of them. Users' interaction with AR Physics consists of three steps: 1) sketching a diagram on paper; 2) capturing the sketch with a tablet camera to generate a virtual duplicate of the diagram on the tablet screen; and 3) placing a physics object and configuring relevant parameters through the application interface to construct a physics simulation. A user study of the efficiency and usability of AR Physics was performed with 12 college students. The students interacted with the application and completed three tasks relevant to the learning material. They were then given eight questions to examine their post-learning outcomes. The same questions were also given prior to the use of the application for comparison with the post-test results.
The System Usability Scale (SUS) was adopted to assess the application's usability, and interviews were conducted to collect subjects' opinions about Augmented Reality in general. The results of the study demonstrate that the application can effectively facilitate subjects' understanding of the target physics concepts. The SUS score indicated overall satisfaction with the application's usability. Finally, subjects expressed that they gained a clearer idea of Augmented Reality through use of the application.
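SUS scoring follows a standard procedure: ten 1-to-5 Likert items, with odd-numbered (positively worded) items scored as the response minus 1 and even-numbered (negatively worded) items as 5 minus the response, the sum then scaled by 2.5 to a 0-100 range. A minimal sketch of that procedure (the function name is ours):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response). The sum is scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# A neutral questionnaire (all 3s) yields the midpoint score of 50.
print(sus_score([3] * 10))  # → 50.0
```

A score around 68 is conventionally treated as average usability, which gives a reference point for interpreting an overall-satisfaction result such as the one reported here.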
|
522 |
[en] ANNOTATION SYSTEM BASED ON 3D VISUALIZATION WITH 360 DEGREES IMAGES OF INDUSTRIAL INSTALLATIONS / [pt] SISTEMA DE ANOTAÇÃO BASEADO EM VISUALIZAÇÃO 3D COM IMAGENS 360 GRAUS DE INSTALAÇÕES INDUSTRIAIS. Fonseca, Anderson Silva 12 January 2023 (has links)
[en] With the arrival of Industry 4.0, companies have adopted digital twins
to improve their production processes and the working conditions of their employees. Digital twins are generally associated with three-dimensional models
and allow planning, data extraction, simulation, and training based on current conditions. Unfortunately, incorrect or outdated digital twins can lead
to errors and information mismatch, which takes away all the advantages of
the virtualization and computerization process, ruining any comparisons with
reality. In contrast, information-rich digital twins allow simulations and data
extraction to be more faithful to reality. Currently, technologies capable of
enriching the information of digital twins are scarce, as it is a procedure that
takes time due to the need for expert analysis, costs, equipment, and specific
tools. Resources such as 360-degree photographs, videos, and 3D models can be used
to perform an evaluation and update the digital twins. However, temporal
differences, environmental conditions, and human errors between the images
and the model can generate confusion during the transfer and connection of
information. This work presents a tool that explores the advantages of combining 360-degree photographs with 3D models to generate as-built digital twins. Each
image can be adjusted to a location within the model's coordinate system,
allowing changes to axes and field of view. During navigation, it is possible to
navigate the model and the user-created positions of interest freely. In addition to visualization, the tool proposes a more effective interaction to annotate
between models and 360-degree photographs to verify consistency or add new information to the digital twin. These interactions are essential for inspection
and maintenance, such as evaluating parts, analyzing current conditions, or
creating comparisons between planned and actual conditions.
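Adjusting a 360-degree photograph to a location and orientation within the model's coordinate system amounts to mapping each pixel of the equirectangular image to a viewing direction in that system. A simplified sketch, assuming an equirectangular projection with yaw and pitch adjustments only (roll and the thesis's actual conventions are omitted):

```python
import math

def pixel_to_world_direction(u, v, width, height, yaw=0.0, pitch=0.0):
    """Map a pixel (u, v) of an equirectangular 360-degree photograph to a
    unit direction vector in the model's coordinate system.

    yaw and pitch (radians) are the adjustable orientation of the image
    within the model, corresponding to the axis adjustments described above.
    """
    # Pixel to spherical angles: longitude spans [-pi, pi], latitude [-pi/2, pi/2].
    lon = (u / width - 0.5) * 2.0 * math.pi + yaw
    lat = (0.5 - v / height) * math.pi + pitch
    # Spherical to Cartesian, with the y axis pointing up.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The image centre with no rotation looks straight down the +z axis.
print(pixel_to_world_direction(2048, 1024, 4096, 2048))  # → (0.0, 0.0, 1.0)
```

With a position for the photograph in model coordinates, such direction vectors are what allow annotations made on the image to be connected back to geometry in the 3D model.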
|
523 |
Subject analysis of depth perception in augmented reality through Vuforia and HoloLens tracking. Muvva, Veera Venkata Ram Murali Krishna Rao 09 August 2019 (has links)
One of the main goals of augmented reality is placing virtual content in the real world at a precise location. To achieve this goal, the Head Mounted Display (HMD) should be able to place virtual content at a precise location, and users should be able to perceive it at that exact location. However, achieving this is very challenging. Since the birth of augmented reality, researchers have been trying to design AR glasses that can do this, and by taking advantage of SLAM algorithms they have recently come closer to the first phase of this goal. Microsoft designed and manufactured a pair of smart glasses called the HoloLens, well known for its advanced SLAM algorithm that places content as close as possible to a precise location. However, there is no significant research on the perceived location of virtual content placed through the HoloLens. Therefore, this thesis presents a method for measuring the perceived location of virtual objects, along with an experiment in which these measurements are made with the HoloLens. Through this experiment, interesting information about the HoloLens was found, such as its capability to regain tracking immediately after occlusion, a rightward error about the horizontal plane, a bias toward floating virtual content above the surface, and objects that appear too close to the observer. Although the HoloLens is an advanced AR display, it still suffers from these problems.
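The reported biases (rightward error, content floating above the surface, objects appearing too close) can be summarized as mean signed per-axis differences between intended and perceived positions. A sketch with toy data; the axis conventions and numbers below are illustrative assumptions, not the thesis's measurements:

```python
def mean_signed_error(intended, perceived):
    """Mean signed per-axis error between intended and perceived 3D positions.

    Under the conventions assumed here: positive x = rightward error,
    positive y = floating above the surface, negative z = perceived closer
    to the observer than intended.
    """
    n = len(intended)
    return tuple(
        sum(p[a] - i[a] for i, p in zip(intended, perceived)) / n
        for a in range(3)
    )

# Toy data reproducing the reported trends: rightward, upward, and too close.
intended  = [(0.0, 0.0, 2.0), (1.0, 0.0, 3.0)]
perceived = [(0.05, 0.02, 1.9), (1.07, 0.04, 2.8)]
print(tuple(round(e, 2) for e in mean_signed_error(intended, perceived)))
# → (0.06, 0.03, -0.15)
```

A nonzero mean on an axis across many trials is what distinguishes a systematic bias, like those found here, from random perceptual noise.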
|
524 |
Augmented Reality for Dismounted Soldier's Situation Awareness : Designing and Evaluating Intuitive Egocentric Depth Perception with Natural Depth Perception CuesFaltin, Ronja January 2022 (has links)
In this thesis, three kinds of depth perception symbols are designed and evaluated with an implemented Augmented Reality prototype. The purpose of the three symbol types is to intuitively visualize depth for objects positioned too far away to see without technical assistance. The area soldiers need to be aware of grows over time, since weapons are being developed to operate at ever greater distances. The symbols, together with Augmented Reality, could improve the situational awareness of dismounted soldiers during navigation and thereby allow them to be aware of a larger area. This thesis investigates whether the natural depth perception cues Relative size, Aerial effect, and Drop-line effect improve the depth perception of virtual symbols displayed on a handheld 2D screen with Augmented Reality. The three depth perception cues were integrated into three symbol designs, which were then put into an Augmented Reality prototype used during an explorative user study with eight participants. Both qualitative and quantitative data were collected with a pre-survey, interviews, and a post-test questionnaire. The study's results indicate that the three depth perception cues intuitively visualize depth when integrated into the three symbol designs. The most intuitive symbol design combined all three depth perception cues. / The thesis work was carried out at the Department of Science and Technology (ITN) at the Faculty of Science and Engineering, Linköping University.
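The abstract does not give the exact parameterizations of the cues; as an illustration, the Relative size and Aerial effect cues could be modeled as simple functions of distance. The formulas and names below are our assumptions, not the thesis's designs:

```python
def relative_size(base_size_px, reference_dist_m, dist_m):
    """Relative-size cue: apparent symbol size shrinks inversely with distance."""
    return base_size_px * reference_dist_m / dist_m

def aerial_effect(dist_m, max_dist_m):
    """Aerial-effect cue: opacity fades linearly toward a maximum distance,
    mimicking atmospheric haze."""
    return max(0.0, 1.0 - dist_m / max_dist_m)

# A symbol at twice the reference distance is drawn at half the size,
# and at half the maximum distance it is drawn at half opacity.
print(relative_size(64, 100, 200))  # → 32.0
print(aerial_effect(1000, 2000))    # → 0.5
```

The Drop-line effect, by contrast, is geometric rather than a per-symbol scalar: a vertical line from the symbol down to the terrain anchors it at its ground position.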
|
525 |
Exploration Of Codeless In-situ Extended Reality Authoring Environment For Asynchronous Immersive Spatial InstructionsSubramanian Chidambaram (14191622) 29 November 2022 (has links)
Immersive reality technology, such as augmented and virtual reality, has recently become quite prevalent due to innovation in hardware and software, leading to cheaper devices such as head-mounted displays. There is significant evidence of an improved rate of skill acquisition with immersive reality training. However, the knowledge required to develop content for such immersive media is still relatively high, and subject experts often work together with programmers to create such content.
Our research goal in this thesis can be broadly classified into four distinct but mutually dependent categories. First, we explored the problem of immersive content creation with ProcessAR, an AR-based system to develop 2D/3D content that captures subject matter experts' (SMEs) environment-object interactions in situ. The design space for ProcessAR was identified from formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR locates and identifies different tools/objects through computer vision within the workspace when the author looks at them. We explored additional features, such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR and a baseline AR authoring environment showed that, according to our qualitative questionnaire, users preferred ProcessAR.
Second, we explored a unified authoring and editing environment, EditAR, that can create content for multiple media, such as AR, VR, and video instructions, from a single demonstration. EditAR captures the user's interaction within an environment and creates a digital twin, enabling users without programming backgrounds to develop content. We conducted formative interviews with subject and media experts to design the system, and the prototype was developed and reviewed by experts. We also performed a user study comparing traditional video creation with 2D video creation from 3D recordings via a 3D editor, which uses freehand interaction for in-headset editing. Users took five times less time to record instructions and preferred EditAR, giving it significantly higher usability scores.
Third, we explored AnnotateXR, an extended reality (XR) workflow to collect various kinds of high-fidelity data and auto-annotate them in a single demonstration. AnnotateXR allows users to align virtual models over physical objects tracked with 6DoF sensors. It combines a hand-tracking-capable XR HMD with 6DoF information and collision detection to enable algorithmic segmentation of different actions in videos through its digital twin. The virtual-physical mapping provides a tight bounding volume from which to generate semantic segmentation masks for the captured image data. Alongside object and action segmentation, we also support other dimensions of annotation required by modern computer vision, such as human-object interactions, object-object interactions, and rich 3D recordings, all from a single demonstration. Our user study shows AnnotateXR produced over 112,000 annotated data points in 67 minutes while maintaining the same quality as manual annotations.
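The mask-generation step described above depends on projecting the aligned virtual model's bounding volume into the captured image. A simplified pinhole-camera sketch of that projection; the intrinsic values are placeholders, and AnnotateXR's actual pipeline is not reproduced here:

```python
def project_points(points, fx, fy, cx, cy):
    """Project 3D camera-space points to pixels with a pinhole model
    (fx, fy: focal lengths in pixels; cx, cy: principal point)."""
    return [(fx * x / z + cx, fy * y / z + cy) for x, y, z in points]

def bounding_box_2d(points_2d):
    """Tight 2D bounding box (xmin, ymin, xmax, ymax) around projected
    corners, the kind of region a segmentation mask would be drawn from."""
    xs = [p[0] for p in points_2d]
    ys = [p[1] for p in points_2d]
    return (min(xs), min(ys), max(xs), max(ys))

# Eight corners of a tracked unit cube one to two metres in front of the camera.
corners = [(x, y, z) for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (1.0, 2.0)]
box = bounding_box_2d(project_points(corners, fx=500, fy=500, cx=320, cy=240))
print(box)  # → (70.0, -10.0, 570.0, 490.0)
```

Because the virtual model's pose is known from the 6DoF tracking rather than inferred from pixels, this projection can be repeated for every frame, which is what makes the high annotation throughput reported above possible.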
Lastly, we conducted two elicitation studies to empirically derive design guidance for cross-modal devices capable of supporting an immersive VR interface that allows direct hand interaction simultaneously with keyboard and mouse input. Recent advances in hand tracking let users experience interactions closer to those in the physical world. However, these benefits of natural interaction come at the cost of the precision and accuracy offered by legacy input media such as the keyboard and mouse. The results and guidelines from the two studies were used to develop a prototype, the Immersive Keyboard, which was evaluated against a traditional keyboard-and-mouse-only interface.
In this thesis, we have explored a novel extended reality authoring environment that enables users without programming experience to author asynchronous immersive content in situ, especially for spatial instructions.
|
526 |
Real-Time Catheter Tracking and Adaptive Imaging for Interventional Cardiovascular MRIElgort, Daniel Robert 23 March 2005 (has links)
No description available.
|
527 |
A First Experiment in Misplaced Trust in Augmented RealityWang, Jue 09 December 2010 (has links)
No description available.
|
528 |
Visual Tracking with an Application to Augmented Reality. Xiao, Changlin January 2017 (has links)
No description available.
|
529 |
Model Preparation and User Interface Aspects for Microsoft Hololens Medical Tutorial ApplicationsMcNutt, Andrew J. 01 September 2017 (has links)
No description available.
|
530 |
Commentary: The Ethics of Realism in Virtual and Augmented RealityLorenz, Mario 15 January 2024 (has links)
In their opinion article, "The Ethics of Realism in Virtual and Augmented Reality," Slater et al.
(2020) raised awareness of the manifold ethical issues arising from XR developing into a ubiquitous,
daily-used technology. The article of Slater et al. is accurate in every aspect. However, one
further aspect is missing, which will likely play a very important role once XR is no longer confined
to laboratories and professional applications but is a daily-used technology: the ubiquitousness of
drugs and their influence on perception and cognition in relation to XR.
|