101 |
Testing Challenges of Mobile Augmented Reality Systems
Lehman, Sarah, 0000-0002-9466-0688 January 2022 (has links)
Augmented reality systems are ones which insert virtual content into a user’s view of the real world, in response to environmental conditions and the user’s behavior within that environment. This virtual content can take the form of visual elements such as 2D labels or 3D models, auditory cues, or even haptics; content is generated and updated based on user behavior and environmental conditions, such as the user’s location, movement patterns, and the results of computer vision or machine learning operations. AR systems are used to solve problems in a range of domains, from tourism and retail, education and healthcare, to industry and entertainment. For example, apps from Lowe’s [82] and Houzz [81] support retail transactions by scanning a user’s environment and placing product models into the space, thus allowing the user to preview what the product might look like in her home. AR systems have also proven helpful in such areas as aiding industrial assembly tasks [155, 175], helping users overcome phobias [35], and reviving interest in cultural heritage sites [163].
Mobile AR systems are ones which run on portable handheld or wearable devices, such that the user is free to move around their environment without restriction. Examples of such devices include smartphones, tablets, and head-mounted displays. This freedom of movement and usage, in combination with the application's reliance on computer vision and machine learning logic to provide core functionality, makes mobile AR applications very difficult to test. In addition, as the demand for and prevalence of machine learning logic increase, the availability and power of commercially available third-party vision libraries introduce new and easy ways for developers to violate usability and end-user privacy.
The goal of this dissertation, therefore, is to understand and mitigate the challenges involved in testing mobile AR systems, given the capabilities of today’s commercially available vision and machine learning libraries. We consider three related challenge areas: application behavior during unconstrained usage conditions, general usability, and end-user privacy. To address these challenge areas, we present three research efforts. The first presents a framework for collecting application performance and usability data in the wild. The second explores how commercial vision libraries can be exploited to conduct machine learning operations without user knowledge. The third presents a framework for leveraging the environment itself to enforce privacy and access control policies for mobile AR applications. / Computer and Information Science
|
102 |
Using Augmented Reality technology to improve health and safety for workers in Human Robot Collaboration environment: A literature review
Chemmanthitta Gopinath, Dinesh January 2022 (has links)
Human Robot Collaboration (HRC) allows humans to operate more efficiently by reducing human effort. Robots can do the majority of difficult and repetitive activities with or without human input. There is a risk of accidents and crashes when people and robots operate together closely, so safety is extremely important in this area. There are various techniques to increase worker safety, and one of them is to use Augmented Reality (AR). AR implementation in industry is still in its early stages. The goal of this study is to see how employees' safety may be enhanced when AR is used in an HRC setting. A literature review is carried out, as well as a case study in which managers and engineers from Swedish firms are questioned about their experiences with AR-assisted safety. This is a qualitative exploratory study with the goal of gathering extensive insight into the field, since the aim is to explore approaches for AR to improve safety. Inductive qualitative analysis was used to examine the data. Visualisation, awareness, ergonomics, and communication are the most critical areas where AR may improve safety, according to the studies. When doing a task, augmented reality aids the user in visualising instructions and information, allowing them to complete the task more quickly and without mistakes. When working near robots, AR enhances awareness, helps predict mishaps, and increases worker trust in a collaborative atmosphere. When AR is used to engage with collaborative robots, it causes fewer physical and psychological challenges than traditional approaches. AR allows operators to communicate with robots without having to touch them, as well as make adjustments. As a result, accidents are avoided and safety is ensured. There is, however, a gap between the theoretical findings of the literature and the data gathered from the interviews.
Even though AR and HRC are not new topics, and many studies are being conducted on them, there are key aspects that influence their adoption in industry. Due to considerations such as education, experience, suitability, system complexity, time, and technology, managers in various firms make limited use of HRC and AR for assuring safety. This study also presents possible future solutions to these challenges.
|
103 |
Designing Simulation-Based Active Learning Activities Using Augmented Reality and Sets of Offline Games
Hernandez, Olivia Kay January 2020 (has links)
No description available.
|
104 |
Training Wayfinding: Natural Movement In Mixed Reality
Savage, Ruthann 01 January 2006 (has links)
The Army needs a distributed training environment that can be accessed whenever and wherever required for training and mission rehearsal. This paper describes an exploratory experiment designed to investigate the effectiveness of a prototype of such a system in training a navigation task. A wearable computer, acoustic tracking system, and see-through head mounted display (HMD) were used to wirelessly track users' head position and orientation while presenting a graphic representation of their virtual surroundings, through which the user walked using natural movement. As previous studies have shown that virtual environments can be used to train navigation, the ability to add natural movement to a type of virtual environment may enhance that training, based on the proprioceptive feedback gained by walking through the environment. Sixty participants were randomly assigned to one of three conditions: route drawing on a printed floor plan, rehearsal in the actual facility, and rehearsal in a mixed reality (MR) environment. Participants, divided equally between male and female in each group, studied verbal directions of the route, then performed three rehearsals of the route, with those in the map condition drawing it onto three separate printed floor plans, those in the practice condition walking through the actual facility, and participants in the MR condition walking through a three dimensional virtual environment, with landmarks, waypoints, and virtual footprints. A scaling factor was used, with each step in the MR environment equal to three steps in the real environment, with the MR environment also broken into "tiles", like pages in an atlas, through which participants progressed, entering each tile in succession until they completed the entire route.
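The 3:1 locomotion scaling and atlas-style tiling described above can be sketched as a few lines of arithmetic. This is a hypothetical illustration, not code from the study; the step length, tile size, and function names are assumptions chosen for the example.

```python
import math

STEP_LENGTH_M = 0.75   # assumed average step length in meters (illustrative)
MR_SCALE = 3           # one step in the MR environment = three real-world steps

def real_distance_covered(mr_steps: int) -> float:
    """Real-world route distance represented by a count of MR steps."""
    return mr_steps * MR_SCALE * STEP_LENGTH_M

def tiles_needed(route_length_m: float, tile_length_m: float) -> int:
    """Number of atlas-style 'tiles' needed to cover the whole route."""
    return math.ceil(route_length_m / tile_length_m)

# An 80-step walk in the MR lab stands in for a 180 m real-world route:
assert real_distance_covered(80) == 180.0
# and that route spans 9 tiles if each tile covers 20 m:
assert tiles_needed(180.0, 20.0) == 9
```

The scaling lets a long route be rehearsed inside a small tracked space, at the cost of the mismatch between physical and represented distance noted in the results.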
Transfer of training testing, which consisted of a timed traversal of the route through the actual facility, showed a significant difference in route knowledge based on the total time to complete the route and the number of errors committed while doing so, with "walkers" performing better than participants in the paper map or MR condition, although the effect was weak. Survey knowledge showed little difference among the three rehearsal conditions. Three standardized tests of spatial abilities did not correlate with route traversal time, or errors, or with 3 of the 4 orientation localization tasks. Within the MR rehearsal condition there was a clear performance improvement over the three rehearsal trials as measured by the time required to complete the route in the MR environment, which was accepted as an indication that learning occurred. As measured using the Simulator Sickness Questionnaire, there were no incidents of simulator sickness in the MR environment. Rehearsal in the actual facility was the most effective training condition; however, it is often not an acceptable form of rehearsal given an inaccessible or hostile environment. Performance of participants in the other two conditions was indistinguishable, pointing toward continued experimentation that should include the combined effect of paper map rehearsal with mixed reality, especially as it is likely to be the more realistic case for mission rehearsal, since there is no indication that maps should be eliminated. To walk through the environment beforehand can enhance the Soldiers' understanding of their surroundings, as was evident through the comments from participants as they moved from MR to the actual space: "This looks like I was just here", and "There's that pole I kept having trouble with". Such comments lead one to believe that this is a tool to continue to explore and apply.
While additional research on the scaling and tiling factors is likely warranted, to determine if the effect can be applied to other environments or tasks, it should be pointed out that this is not a new task for most adults who have interacted with maps, where a scaling factor of 1 to 15,000 is common in orienteering maps, and 1 to 25,000 in military maps. Rehearsal time spent in the MR condition varied widely, some of which could be blamed on an issue referred to as "avatar excursions", a system anomaly that should be addressed in future research. The proprioceptive feedback in MR was expected to positively impact performance scores. It is very likely that proprioceptive feedback is what led to the lack of simulator sickness among these participants. The design of the HMD may have aided in the minimal reported symptoms, as it allowed participants some peripheral vision that provided orientation cues as to their body position and movement. Future research might include a direct comparison between this MR and a virtual environment system through which users move by manipulating an input device such as a mouse or joystick, while physically remaining stationary. The exploration and confirmation of the training capabilities of MR is an important step in the development and application of the system to the U.S. Army training mission. This experiment was designed to examine one potential training area in a small controlled environment, which can be used as the foundation for experimentation with more complex tasks such as wayfinding through an urban environment, and/or in direct comparison to more established virtual environments to determine strengths, as well as areas for improvement, to make MR an effective addition to the Army training mission.
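The map-scale comparison above can be made concrete with a small worked example: at 1:25,000, each centimeter on the map represents 25,000 cm on the ground. The function name is an assumption for illustration.

```python
def ground_distance_m(map_cm: float, scale: int) -> float:
    """Ground distance in meters for a measured map distance at a given scale."""
    return map_cm * scale / 100.0  # map cm -> ground cm -> meters

# 4 cm on a 1:25,000 military map is 1 km on the ground:
assert ground_distance_m(4, 25_000) == 1000.0
# the same 4 cm on a 1:15,000 orienteering map is only 600 m:
assert ground_distance_m(4, 15_000) == 600.0
```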
|
105 |
Orienting Of Visual-spatial Attention With Augmented Reality: Effects Of Spatial And Non-spatial Multi-modal Cues
Jerome, Christian 01 January 2006 (has links)
Advances in simulation technology have brought about many improvements to the way we train tasks, as well as how we perform tasks in the operational field. Augmented reality (AR) is an example of how to enhance the user's experience in the real world with computer generated information and graphics. Visual search tasks are known to be capacity demanding and therefore may be improved by training in an AR environment. During the experimental task, participants searched for enemies (while cued by visual, auditory, tactile, combinations of two, or all three modality cues) and tried to shoot them while avoiding shooting the civilians (fratricide) for two 2-minute low-workload scenarios and two 2-minute high-workload scenarios. The results showed significant benefits of attentional cuing on visual search task performance, as revealed by benefits in reaction time and accuracy from the presence of the haptic cues and auditory cues when displayed alone and the combination of the visual and haptic cues together. Fratricide occurrence was shown to be amplified by the presence of the audio cues. The two levels of workload produced differences within individuals' task performance for accuracy and reaction time. Accuracy and reaction time were significantly better with the medium cues than all the others and the control condition during low workload, and marginally better during high workload. Cue specificity resulted in a non-linear function in terms of performance in the low workload condition. These results support Posner's (1978) theory that, in general, cueing can benefit locating targets in the environment by aligning the attentional system with the visual input pathways. The cue modality does not have to match the target modality. This research is relevant to potential applications of AR technology.
Furthermore, the results identify and describe perceptual and/or cognitive issues with the use of displaying computer generated augmented objects and information overlaid upon the real world. The results also serve as a basis for providing a variety of training and design recommendations to direct attention during military operations. Such recommendations include cueing the Soldier to the location of hazards, and mitigating the effects of stress and workload.
|
106 |
Effectiveness of Augmented Reality Communication Through Poster Design
Muffet, Nicholas J. 06 December 2022 (has links)
No description available.
|
107 |
The Role of Sound at Light Festivals: An Interview Study with Audiovisual Artists Active at Swedish Light Festivals
Thilander, Isak January 2023 (has links)
The thesis examines the role of sound at light festivals from the perspective of creators and composers, based on in-depth interviews. Its theoretical framework focuses on theories of interactive sound. The ambition is to contribute new knowledge and a deeper understanding that can in turn serve as a basis for the further development of light festivals. With regard to interactivity, the technology used in augmented reality and the metaverse has emerged as especially interesting. The thesis has two overarching research questions: In what ways can sound affect the personal experience of an audiovisual artwork? And: What does sound contribute to the overall experience of a light festival? To collect data, in-depth interviews were conducted with creators in the field of light festivals. The results show that sound helps to heighten the experience and that the placement of sound had a large effect at light festivals. Creators and composers often lack the competence required to develop light festivals, what the thesis terms multi-competence; this concerns both how sound can be created for a creator's work and the influence of the surroundings. The results also show that interest in using both sound and light is gradually increasing among visual creators, audiovisual creators, and composers, and that some have recently begun creating sound because they believe it can contribute to a greater experience for the viewer.
|
108 |
Comparative Analysis of the Performance of ARCore and WebXR APIs for AR Applications
Shaik, Abu Bakr Rahman, Asodi, Venkata Sai Yakkshit Reddy January 2023 (has links)
Background: Augmented Reality has become a popular technology in recent years. Two of the most prominent AR APIs are ARCore, developed by Google, and WebXR, an open standard for AR and Virtual Reality (VR) experiences on the web. A comparative analysis of the performance of these APIs in terms of CPU load, network latency, and frame rate is needed to determine which API is more suitable for cloud-based object visualisation AR applications that are integrated with Firebase. Firebase is a cloud-based backend-as-a-service platform for app development. Objectives: This study aims to provide a comparative analysis of the performance of the ARCore API and WebXR API for an object visualisation application integrated with Firebase Cloud Storage. The objective is to analyse and compare the performance of the APIs in terms of latency, frame rate, and CPU load to provide insights into their strengths and weaknesses and identify the key factors that may influence the choice of API for object visualisation. Methods: To achieve the objectives, two object visualisation AR applications were developed using the ARCore API and the WebXR API with Firebase cloud storage. Frame rate, CPU load, and latency were used as performance metrics, and performance data was collected from the applications. The collected data was analysed and visualised to provide insights into the strengths and weaknesses of each API. Results: The results of the study provided a comparative analysis of the performance of the ARCore API and WebXR API for object visualisation applications. The performance metrics of the AR applications, including frame rate, CPU load, and latency, were analysed and visualised. The WebXR API was found to perform better in terms of CPU load and frame rate, while the ARCore API was found to perform better in terms of latency.
Conclusion: The study concluded that the WebXR API showed advantages in lower CPU load and higher frame rates compared to the ARCore API, which in turn achieved lower network latency. These results suggest that the WebXR API is more suitable for efficient and responsive object visualisation in augmented reality applications.
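A comparison like the one summarized above reduces to aggregating per-sample measurements for each API and picking a winner per metric, remembering that frame rate is better high while CPU load and latency are better low. The sketch below uses invented sample values, not the thesis's actual measurements; all names and numbers are illustrative assumptions.

```python
from statistics import mean

# metric -> (WebXR samples, ARCore samples); values are illustrative only
samples = {
    "fps":        ([58, 60, 59, 57], [48, 50, 47, 49]),
    "cpu_pct":    ([31, 34, 33, 32], [41, 44, 42, 43]),
    "latency_ms": ([82, 85, 84, 83], [61, 63, 60, 62]),
}

# Direction of "better" differs per metric.
higher_is_better = {"fps": True, "cpu_pct": False, "latency_ms": False}

def winner(metric: str) -> str:
    """Return which API's mean is better on the given metric."""
    webxr, arcore = (mean(s) for s in samples[metric])
    better = webxr > arcore if higher_is_better[metric] else webxr < arcore
    return "WebXR" if better else "ARCore"

for metric in samples:
    print(f"{metric}: {winner(metric)}")
```

With these made-up numbers the per-metric winners follow the same pattern the study reports: WebXR on frame rate and CPU load, ARCore on latency.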
|
109 |
Egocentric Depth Perception in Optical See-Through Augmented Reality
Jones, James Adam 11 August 2007 (has links)
Augmented Reality (AR) is a method of mixing computer-generated graphics with real-world environments. In AR, observers retain the ability to see their physical surroundings while additional (augmented) information is depicted as simulated graphical objects matched to the real-world view. In the following experiments, optical see-through head-mounted displays (HMDs) were used to present observers with both Augmented and Virtual Reality environments. Observers were presented with varied real, virtual, and combined stimuli with and without the addition of motion parallax. The apparent locations of the stimuli were then measured using quantitative methods of egocentric depth judgment. The data collected from these experiments were then used to determine how observers perceived egocentric depth with respect to both real-world and virtual objects.
|
110 |
Augmented Reality Visualization of Building Information Model
Lai, Yuchen 11 August 2017 (has links)
No description available.
|