About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Comparison of Augmented Reality Rearview and Radar Head-Up Displays for Increasing Spatial Awareness During Exoskeleton Operation

Hollister, Mark Andrew 19 March 2024 (has links)
Full-body powered exoskeletons for industrial workers have the potential to reduce the incidence of work-related musculoskeletal disorders while increasing strength beyond human capabilities. However, operating current full-body powered exoskeletons imposes different loading, motion, and balance requirements on users compared to unaided task performance, potentially adding mental workload that may reduce situation awareness (SA) and increase the risk of collision with pedestrians, negating the health and safety benefits of exoskeletons. Exoskeletons could be equipped with visual aids to improve SA, such as rearview cameras or radar displays; however, research on the design and evaluation of such displays for exoskeleton users is absent from the literature. This empirical study compared several augmented reality (AR) head-up displays (HUDs) in providing SA to minimize pedestrian collisions while completing common warehouse tasks. Specifically, the study included an experimental factor of display abstraction with four levels, from low to high abstraction: rearview camera, overhead radar, ring radar, and no visual aid (as control). The second factor was elevation angle, which was analyzed for the overhead and ring radar displays at 15°, 45°, and 90°. A 1x4 repeated measures ANOVA on all four display abstraction levels at 90° revealed that every display condition performed better than the no visual aid condition; a Bonferroni post-hoc test further revealed that the overhead and ring radars (medium and high abstraction, respectively) received higher usability ratings than the rearview camera (low abstraction). A 2x3 repeated measures ANOVA on the two radar displays at all three display angles found that the overhead radar yielded better transport times and situation awareness ratings than the ring radar. Further, the two-way ANOVA found that the 45° angle yielded the best transport-task collision performance.
Thus, AR displays show promise in augmenting SA to minimize the risk of collision and injury in warehouse settings. / Master of Science / Exoskeletons can increase the strength capabilities of industrial workers while reducing the likelihood of injury from heavy lifting and materials handling. However, full-body powered exoskeletons are currently very unwieldy, requiring users to focus their attention on controlling the exoskeleton, which may cause a loss of awareness of their surroundings. This may increase the likelihood of collisions with pedestrians, presenting a significant safety concern that could negate the benefits of exoskeletons. Rearview cameras and radar displays of nearby pedestrians could improve situation awareness for the exoskeleton user; however, these methods are not well-tested in settings where exoskeletons would be used. This study compared a rearview camera, a conventional radar, and a ring-shaped radar at display angles of 15°, 45°, and 90° using an augmented reality headset and a simulated warehouse task to determine the combination of display type and angle that would maximize situation awareness and minimize collisions with pedestrians. The study revealed that all displays performed better than no display support, and evidence from this study and the literature suggests that a conventional overhead radar at 45° performed best.
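The analysis pipeline this abstract names (a one-way repeated-measures ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched as follows. This is a minimal illustration only, not the thesis's actual code: the participant count, ratings, and condition labels below are invented placeholders, not the study's data.

```python
import itertools
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one usability rating per participant
# per display condition (toy numbers, NOT the study's data).
conditions = ["none", "rearview", "overhead", "ring"]
ratings = {
    "none":     [2, 3, 2, 3, 2, 3, 2, 3],
    "rearview": [4, 4, 3, 5, 4, 3, 4, 4],
    "overhead": [6, 5, 6, 6, 5, 6, 6, 5],
    "ring":     [6, 6, 5, 6, 6, 5, 6, 6],
}
rows = [{"subject": s, "display": c, "rating": ratings[c][s]}
        for c in conditions for s in range(8)]
df = pd.DataFrame(rows)

# 1x4 repeated-measures ANOVA on the within-subject display factor.
print(AnovaRM(df, depvar="rating", subject="subject",
              within=["display"]).fit())

# Bonferroni correction: divide alpha by the number of pairwise
# comparisons (6), then run paired t-tests for each condition pair.
alpha = 0.05 / 6
for a, b in itertools.combinations(conditions, 2):
    t, p = stats.ttest_rel(ratings[a], ratings[b])
    print(f"{a} vs {b}: p={p:.4f} {'sig' if p < alpha else 'n.s.'}")
```

The same structure extends to the 2x3 case by adding a second entry (e.g., elevation angle) to the `within` list.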
2

Field Of View Effects On Reflexive Motor Response In Flight Simulation

Covelli, Javier 01 January 2008 (has links)
Virtual Reality (VR) and Augmented Reality (AR) Head Mounted Display (HMD) or Head Worn Display (HWD) technology represents low-cost, wide Field of Regard (FOR), deployable systems when compared to traditional simulation facilities. However, given current technological limitations, HWD flight simulator implementations provide a limited effective Field of View (eFOV) far narrower than the normal human 200° horizontal and 135° vertical FOV. Developing an HWD with such a wide FOV is expensive but can increase the aviator's visual stimulus, perception, sense of presence, and overall training effectiveness. This research and experimentation test this proposition by manipulating the eFOV of experienced pilots in a flight simulator while measuring their reflexive motor response and task performance. Reflexive motor responses are categorized as information, importance, and effort behaviors. Performance metrics include runway alignment error (RAE) and vertical track error (VTE). Results indicated a significant and systematic change in visual scan pattern, head movement, and flight control performance as the eFOV was sequentially decreased. As FOV decreased, the average visual scan pattern shifted to focus less on the out-the-window (OTW) view and more on the instruments inside the cockpit. The head range of movement increased significantly below an 80° horizontal × 54° vertical eFOV, and runway alignment and vertical track performance decreased significantly below a 120° horizontal × 81° vertical eFOV.
3

Interaction-Triggered Estimation of AR Object Placement on Indeterminate Meshes

Luksas, John Peter 30 October 2024 (has links)
Current Augmented Reality devices rely heavily on real-time environment mapping to provide convincing world-relative experiences through user interaction with virtual content integrated into the real world. This mapping is obtained and updated through many different algorithms, but it often contains holes and other mesh artifacts when generated in less-than-ideal scenarios, such as outdoors or during fast movement. In this work, we present Interaction-Triggered Estimation of AR Object Placement on Indeterminate Meshes, a quick, interaction-triggered method to estimate the normal and position of missing mesh pieces in real time with low computational overhead. We achieve this by extending the user's hand with a group of additional raycast sample points, aggregating the results according to different algorithms, and then using the resulting values to place an object. This thesis first covers problems with current mapping techniques, thoroughly explains the rationale and algorithms behind our method, and then evaluates the method with a user study. / Master of Science / Augmented Reality (AR) technologies have the potential to change all our lives for the better through tight, seamless integration into our daily lives. Crucial to this seamless integration is the ability for users to manipulate virtual AR objects and interact effortlessly with real-world features around them. To facilitate this interaction, AR devices often create 3D maps of the real world so the device can recognize and respect the geometry of its surroundings. Unfortunately, many AR devices still have trouble creating and maintaining these maps in challenging environments, such as outdoors or when moving quickly. The resulting 3D maps then have holes and inaccuracies, making user interaction with the environment unreliable and breaking the seamless integration.
While many solutions look toward more advanced algorithms that require more specialized sensors or next-gen AR devices to improve this mapping issue, we see an opportunity to enhance any existing 3D maps using a novel interaction aggregation approach that can theoretically work with any mapping technology. In this work, we present the Interaction-Triggered Estimation of AR Object Placement on Indeterminate Meshes, a work-in-progress application providing a quick, interaction-triggered method to estimate the normal and position of missing mesh in real-time with low computational overhead.
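The aggregation idea described above, casting extra sample rays around the user's hand and combining their hits to estimate a position and normal where the mesh has a hole, might be sketched roughly as follows. The function name and the simple mean-based aggregation are assumptions for illustration; the thesis itself evaluates several aggregation algorithms.

```python
import numpy as np

def estimate_placement(hits):
    """Aggregate nearby raycast hits to estimate a position and surface
    normal for a target point where the mesh itself is missing.

    hits: list of (position, normal) pairs from the sample rays that
    did strike valid mesh around the hole.
    Returns (estimated_position, estimated_normal) as numpy arrays,
    or None if no sample ray hit anything.
    """
    if not hits:
        return None
    positions = np.array([p for p, _ in hits], dtype=float)
    normals = np.array([n for _, n in hits], dtype=float)
    # Simple aggregation: average the hit positions, and average the
    # hit normals then re-normalize back to unit length.
    est_pos = positions.mean(axis=0)
    n = normals.mean(axis=0)
    est_normal = n / np.linalg.norm(n)
    return est_pos, est_normal
```

For example, two sample-ray hits on a flat floor at (0, 0, 0) and (2, 0, 0), both with an upward normal, would place the object at their midpoint with that same upward normal.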
4

Measuring the Effect of Task-Irrelevant Visuals in Augmented Reality

Allison C Hopkins (6632282) 14 May 2019 (has links)
Augmented reality (AR) allows people to view digital information overlaid onto real-world objects. While the technology is still new, it is currently used in settings such as the military and industrial assembly operations, in the form of ocular devices worn on the head over the eyes. Head-mounted displays (HMDs) let people see AR information in their field of view no matter where their head is positioned. Studies have shown that HMDs displaying information directly related to the immediate task can decrease cognitive workload and increase the speed and accuracy of task performance. However, task-irrelevant information has been shown to decrease the performance and accuracy of the primary task and also hinder the efficiency of processing the irrelevant information. This has been investigated in industry settings but less so in an everyday consumer context. This study proposes comparing two types of visual information (text and shapes) in AR displayed on an HMD to answer the following questions: 1) when content is of importance, which visual notification (text or shapes) is processed faster while degrading the performance of the primary task the least? and 2) when presence is of importance, which visual notification (text or shapes) is processed faster while degrading the performance of the primary task the least?
5

Rapid Design and Prototyping Methods for Mobile Head-Worn Mixed Reality (MR) Interface and Interaction Systems

Redfearn, Brady Edwin 09 February 2018 (has links)
As Mixed Reality (MR) technologies become more prevalent, it is important for researchers to design and prototype the kinds of user interfaces and user interactions that are most effective for end-user consumers. Creating these standards now will aid technology development and adoption in MR overall. In the current climate of this domain, however, interface elements and user interaction styles are unique to each hardware and software vendor and are generally proprietary in nature. This results in confusion for consumers. To explore the MR interface and interaction space, this research employed a series of standard user-centered design (UCD) methods to rapidly prototype 3D head-worn display (HWD) systems in the first responder domain. These methods were applied across a series of 13 experiments, resulting in an in-depth analysis of the most effective methods and suggested paths forward for future researchers in 3D MR HWD systems. Lessons learned from each individual method and across all of the experiments are shared. Several characteristics are defined and described as they relate to each experiment, including interface, interaction, and cost. / Ph. D. / Trends in technology development have shown that the inclusion of virtualized objects and worlds will become more popular in both professional workflows and personal entertainment. As these synthetic objects become easier to build and deploy in consumer devices, it will become increasingly important for a set of standard information elements (e.g., the “save” operation disk icon in desktop software) and user interaction motifs (e.g., “pinch and zoom” on touch-screen interfaces) to be deployed in these types of futuristic technologies. This research effort explores a series of rapid design and prototyping methods that inform how a selection of common interface elements in the first responder domain should be communicated to the user. It also explores how users in this domain prefer to interact with futuristic technology systems. The results are analyzed across a series of characteristics, and suggestions are made on the most effective methods and experiments for future researchers in this domain.
6

Instructing workers through a head-worn Augmented Reality display and through a stationary screen on manual industrial assembly tasks: A comparison study

Kenklies, Kai Malte January 2020 (has links)
This study analyzed whether instructions presented on a head-worn Augmented Reality display (AR-HWD) are better for manual industrial assembly tasks than instructions on a stationary screen. A prototype was built consisting of virtual instruction screens for two example assembly tasks. In a comparison study, participants performed the tasks with instructions through an AR-HWD and, alternatively, through a stationary screen. Questionnaires, interviews, and observation notes were used to evaluate task performance and user experience. The study revealed that users were excited and enjoyed trying the technology. Perceived usefulness at the current state varied, but users saw huge potential in AR-HWDs for the future. Task accuracy with instructions on the AR-HWD was as good as with instructions on the screen. AR-HWDs were found to be a better approach than a stationary screen, but technological limitations need to be overcome and workers need to train with the new technology to make its application efficient.
