21

Supporting Remote Manipulation: An Ecological Approach

Atherton, John A. 10 August 2009 (has links) (PDF)
User interfaces for remote robotic manipulation widely lack sufficient support for situation awareness and, consequently, can induce high mental workload. With poor situation awareness, operators may fail to notice task-relevant features in the environment, often leading the robot to collide with the environment. With high workload, operators may not perform well over long periods of time and may feel stressed. We present an ecological visualization that improves operator situation awareness. Our user study shows that operators using the ecological interface collided with the environment on average half as many times as with a typical interface, even with a poorly calibrated 3D sensor; however, users performed more quickly with the typical interface. The primary benefit of the user study is identifying several changes to the design of the user interface; preliminary results indicate that these changes improve the usability of the manipulator.
22

Enhancing Situational Awareness Through Haptics Interaction In Virtual Environment Training Systems

Hale, Kelly 01 January 2006 (has links)
Virtual environment (VE) technology offers a viable training option for developing knowledge, skills and attitudes (KSA) within domains that have limited live training opportunities due to personnel safety and cost (e.g., live fire exercises). However, to ensure these VE training systems provide effective training and transfer, designers of such systems must ensure that training goals and objectives are clearly defined and VEs are designed to support development of KSAs required. Perhaps the greatest benefit of VE training is its ability to provide a multimodal training experience, where trainees can see, hear and feel their surrounding environment, thus engaging them in training scenarios to further their expertise. This work focused on enhancing situation awareness (SA) within a training VE through appropriate use of multimodal cues. The Multimodal Optimization of Situation Awareness (MOSA) model was developed to identify theoretical benefits of various environmental and individual multimodal cues on SA components. Specific focus was on benefits associated with adding cues that activated the haptic system (i.e., kinesthetic/cutaneous sensory systems) or vestibular system in a VE. An empirical study was completed to evaluate the effectiveness of adding two independent spatialized tactile cues to a Military Operations on Urbanized Terrain (MOUT) VE training system, and how head tracking (i.e., addition of rotational vestibular cues) impacted spatial awareness and performance when tactile cues were added during training. Results showed tactile cues enhanced spatial awareness and performance during both repeated training and within a transfer environment, yet there were costs associated with including two cues together during training, as each cue focused attention on a different aspect of the global task. In addition, the results suggest that spatial awareness benefits from a single point indicator (i.e., spatialized tactile cues) may be impacted by interaction mode, as performance benefits were seen when tactile cues were paired with head tracking. Future research should further examine theoretical benefits outlined in the MOSA model, and further validate that benefits can be realized through appropriate activation of multimodal cues for targeted training objectives during training, near transfer and far transfer (i.e., real world performance).
23

Comparison of Augmented Reality Rearview and Radar Head-Up Displays for Increasing Spatial Awareness During Exoskeleton Operation

Hollister, Mark Andrew 19 March 2024 (has links)
Full-body powered exoskeletons for industrial workers have the potential to reduce the incidence of work-related musculoskeletal disorders while increasing strength beyond human capabilities. However, operating current full-body powered exoskeletons imposes different loading, motion, and balance requirements on users compared to unaided task performance, potentially adding mental workload that may reduce situation awareness (SA) and increase the risk of collision with pedestrians, negating the health and safety benefits of exoskeletons. Exoskeletons could be equipped with visual aids to improve SA, such as rearview cameras or radar displays. However, research on the design and evaluation of such displays for exoskeleton users is absent from the literature. This empirical study compared several augmented reality (AR) head-up displays (HUDs) in providing SA to minimize pedestrian collisions while completing common warehouse tasks. Specifically, the study included an experimental factor of display abstraction with four levels, from low to high abstraction: rearview camera, overhead radar, ring radar, and no visual aid (as control). The second factor was elevation angle, analyzed for the overhead and ring radar displays at 15°, 45°, and 90°. A 1x4 repeated measures ANOVA on all four display abstraction levels at 90° revealed that every display condition performed better than the no-visual-aid condition; a Bonferroni post-hoc test further revealed that the overhead and ring radars (medium and high abstraction, respectively) received higher usability ratings than the rearview camera (low abstraction). A 2x3 repeated measures ANOVA on the two radar displays at all three display angles found that the overhead radar yielded better transport time and situation awareness ratings than the ring radar. Further, the two-way ANOVA found that the 45° angle yielded the best transport collision times. Thus, AR displays show promise for augmenting SA to minimize the risk of collision and injury in warehouse settings. / Master of Science / Exoskeletons can increase the strength capabilities of industrial workers while reducing the likelihood of injury from heavy lifting and materials handling. However, full-body powered exoskeletons are currently very unwieldy, requiring users to focus their attention on controlling the exoskeleton, which may cause a loss of awareness of their surroundings. This may increase the likelihood of collisions with pedestrians, presenting a significant safety concern that could negate the benefits of exoskeletons. Rearview cameras and radar displays of nearby pedestrians could improve situation awareness for the exoskeleton user; however, these methods are not well tested in settings where exoskeletons would be used. This study compared a rearview camera, a conventional radar, and a ring-shaped radar at display angles of 15°, 45°, and 90° using an augmented reality headset and a simulated warehouse task to determine the combination of display type and angle that would maximize situation awareness and minimize collisions with pedestrians. The study revealed that all displays performed better than no display support, and the evidence from this study and the literature suggests that a conventional overhead radar at 45° performed best.
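For readers unfamiliar with the analysis design named in this abstract, the following is a minimal sketch (not taken from the thesis) of how a one-factor repeated-measures ANOVA with Bonferroni-corrected pairwise follow-ups might be run; the data, sample size, condition labels, and effect magnitudes are all hypothetical placeholders.

```python
# Hedged sketch: 1x4 repeated-measures ANOVA on hypothetical usability ratings,
# followed by Bonferroni-corrected paired comparisons between display conditions.
import itertools
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
displays = ["none", "rearview", "overhead_radar", "ring_radar"]  # assumed labels
n = 24  # hypothetical number of participants

# Long-format data: one usability rating per participant per display condition.
means = {"none": 50, "rearview": 60, "overhead_radar": 70, "ring_radar": 68}
rows = [
    {"participant": p, "display": d, "usability": rng.normal(loc=means[d], scale=8)}
    for p in range(n) for d in displays
]
df = pd.DataFrame(rows)

# Omnibus repeated-measures ANOVA (within-subject factor: display).
res = AnovaRM(df, depvar="usability", subject="participant", within=["display"]).fit()
print(res)

# Bonferroni-corrected paired t-tests between display conditions.
pairs = list(itertools.combinations(displays, 2))
for a, b in pairs:
    t, p = stats.ttest_rel(df.loc[df.display == a, "usability"].values,
                           df.loc[df.display == b, "usability"].values)
    print(f"{a} vs {b}: t={t:.2f}, p_bonf={min(p * len(pairs), 1.0):.4f}")
```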
24

Takeover Required! Augmented Reality Head-Up Displays' Ability to Increase Driver Situation Awareness During Takeover Scenarios in Driving Automation Systems

Greatbatch, Richard 27 July 2023 (has links)
The number of automated features in surface vehicles is increasing as new vehicles are released each year. Some of these features allow drivers to temporarily take their attention off the road and engage in other tasks. However, there are times when it is important for drivers to immediately take control of the vehicle, if required. To safely take control, drivers must understand what is required of them and have situation awareness (SA) to understand important changes or factors within the environment around them. We can present drivers with needed takeover information using a head-up display (HUD), keeping the driver's eyes on the road. However, drivers operating conditionally automated vehicles on various roadways, such as highways and urban arterial roads, require different information to be conveyed to them as they drive due to inherent differences in roadway and obstacle features within the driving scene, such as the presence of vulnerable road users on urban arterial roads. This work aimed to (1) investigate impacts of novel HUDs on driver situation awareness during takeover on a highway, (2) identify system design criteria to fulfill drivers' needs during takeover on an urban arterial road, and (3) examine the effects of HUDs on driver situation awareness during takeover on an urban arterial road. We investigated these goals by collecting empirical data on takeover performance metrics, self-reported situation awareness, participant preferences, and experts' opinions. From our studies we conclude that HUDs can increase aspects of takeover performance on highways, with participants demonstrating lower response times and higher time-to-collision metrics. We did not find significant impacts of HUDs on driver situation awareness on highways. Results from our semistructured interviews indicated that experts felt systems should communicate the need for driver attention to relevant information, communicate obstacle information, and provide information using a variety of driver senses. HUDs can also increase driver situation awareness during takeover on an urban arterial road and support improved takeover performance. This work allowed us to identify potential use cases and design criteria for new designs of novel HUDs to deliver important information during takeover. / Doctor of Philosophy / More features that take some of the tasks of vehicle operation off drivers are being released with every new model year of vehicle. Currently, these features still require drivers to maintain attention to the road and, in some cases, immediately take control of the vehicle, called takeover. However, research has not identified how best to communicate the need for takeover on all types of roads. Research has utilized a head-up display (HUD) to present vehicle information, communicate navigation, and highlight objects in the world around drivers while keeping their eyes on the road. Keeping the driver's eyes on the road allows drivers to maintain situation awareness (SA), whereby they perceive, understand, and react to changes in the driving scene. Currently, we can convey information to drivers using traditional head-down displays (HDDs) in the instrument cluster, and some vehicles are equipped with HUDs that can deliver information within the driver's field of view. This work aimed to first understand how takeover requests delivered via HUD affect takeover performance and drivers' situation awareness on highways compared to HDDs.
Next, we investigated experts' opinions on driver needs from the automated system during takeover on urban arterial roads to develop design criteria for new types of takeover requests. Finally, we used the design criteria to develop, test, and compare drivers' takeover performance and situation awareness with new takeover requests delivered by HDDs and HUDs. HUDs may be useful in presenting information to drivers during takeover. Results support that on highways, HUDs are beneficial for eliciting safer driver responses, where drivers responded quicker and kept a greater distance from an object in the road in front of them. From design criteria identified by experts, we designed alerts that directed drivers' attention to bicyclists, pedestrians, and vehicles crossing the path of their vehicle. After testing the alerts, results indicated that drivers had higher situation awareness and better takeover performance on urban arterial roads. Though HUDs show promise in increasing drivers' takeover performance and situation awareness, careful consideration must be given to the design of future HUDs so that they deliver appropriate and relevant information to drivers.
25

The Effects of System Transparency and Reliability on Drivers' Perception and Performance Towards Intelligent Agents in Level 3 Automated Vehicles

Zang, Jing 05 July 2023 (has links)
In the context of automated vehicles, transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to drivers' perception, situation awareness (SA), and driving performance. However, the effects of agent transparency on driver performance when the agent is unreliable have not been fully examined yet. The experiments in this thesis focused on different aspects of IVIA transparency, such as interaction modes and information levels, and explored their impact on drivers under different levels of system reliability. Experiment 1 used a 2 x 2 mixed factorial design, with transparency (Push: proactive vs. Pull: on-demand) as a within-subjects variable and reliability (high vs. low) as a between-subjects variable. In a driving simulator, twenty-seven young drivers drove with two types of in-vehicle agents during Level 3 automated driving. Results suggested that participants generally preferred the Push-type agent, as it conveyed a sense of intelligence and competence. The high-reliability agent was associated with higher situation awareness and less workload, compared to the low-reliability agent. Although Experiment 1 explored the effects of transparency by changing the interaction mode and the accuracy of the information, a theoretical framework was not well outlined regarding how much information should be conveyed and how unreliable information influenced drivers. Thus, Experiment 2 further studied transparency in terms of information level, and the impact of reliability on its effect. A 3 x 2 mixed factorial design was used, with transparency (T1, T2, T3) as a between-subjects variable and reliability (high vs. low) as a within-subjects variable. Fifty-three participants were recruited. Results suggested that transparency influenced drivers' takeover time, lane keeping, and jerk. The high-reliability agent was associated with a higher perception of system accuracy and response speed, and a longer takeover time, than the low-reliability agent. Participants in the T2 transparency condition showed higher cognitive trust, lower workload, and higher situation awareness only when system reliability was high. The results of this study may have significant implications for the ongoing creation and advancement of intelligent agent design in automated vehicles. / Master of Science / This thesis explores the effects of the transparency and reliability of in-vehicle intelligent agents (IVIAs) on drivers' performance and perception in the context of automated vehicles. Transparency is defined as the amount of information shared with the operator about the function of the system and the way in which it is shared. Reliability refers to the accuracy of the agent's statements. The experiments focused on different aspects of IVIA transparency, such as interaction modes (proactive vs. on-demand) and information composition (small vs. medium vs. large), and how they impact drivers under different levels of system reliability. In the experiment, participants were required to drive in a driving simulator and follow voice commands from the IVIAs. A theoretical model called the Situation Awareness-based Agent Transparency model was adopted to build the agent's interactive scripts. In Experiment 1, 27 young drivers drove with two types of in-vehicle agents during Level 3 automated driving. Results suggested that participants generally preferred the agent that provided information proactively, as it conveyed a sense of intelligence and competence.
Also, when the system's reliability was high, participants were found to have higher situation awareness of the environment and to spend less effort on the driving tasks, compared to when the system's reliability was low. Our results also showed that these two factors can jointly influence participants' driving performance when they need to take over control from the automated system. Experiment 2 further studied transparency in terms of the information composition of the agent's voice prompts and the impact of reliability on its effect. A total of 53 participants were recruited, and the results suggested that transparency influenced drivers' takeover time, lane keeping, and jerk. The high-reliability agent was associated with a higher perception of system accuracy and response speed and a longer time to take over when requested than the low-reliability agent. Participants in the medium transparency condition showed higher cognitive trust toward the system, perceived lower workload when driving, and had higher situation awareness, but only when system reliability was high. Overall, this research highlights the importance of transparency in IVIAs for improving drivers' performance, perception, and situation awareness. The results may have significant implications for the design and advancement of intelligent agents in automated vehicles.
26

Investigating Pilot Performance Using Mixed-Modality Simulated Data Link

Lancaster, Jeff A. 19 April 2004 (has links)
Empirical studies of general aviation (GA) pilot performance are lacking, especially with respect to envisioned future requirements. Two research studies were conducted to evaluate human performance using new technologies. In the first study, ten participants completed the Modified Rhyme Test (MRT) in an effort to compare the intelligibility of two text-to-speech (TTS) engines (DECtalk and AT&T's Natural Voices) as presented in 85 dB(A) aircraft cockpit engine noise. Results indicated significant differences in intelligibility (p ≤ 0.05) between the two speech synthesizers across the tested speech-to-noise ratios (S/N) (i.e., −5 dB, −8 dB, and −11 dB S/N), with the AT&T engine yielding superior intelligibility at all S/N ratios. The AT&T product was therefore selected as the TTS engine for the second study. In the second study, 16 visual flight rules (VFR) rated pilots were evaluated for their data link performance using a flight simulator (ELITE i-GATE) equipped with a mixed-modality simulated data link within one of two flight conditions. Data link modalities included textual, synthesized speech, digitized speech, and a synthesized speech/textual combination. Flight conditions included VFR (unlimited ceiling and visibility) or marginal VFR (MVFR) conditions (clouds 2800 feet above ground level [AGL], three miles visibility). Evaluation focused on the time required to access, understand, and execute data link commands. Additional data were gathered to evaluate workload, situation awareness, and subjective preference. Results indicated significant differences in pilot performance, mental workload, and situation awareness across the data link modalities and between flight conditions. The textual data link resulted in decreased performance, while the other three data link conditions did not differ in performance. Workload evaluation indicated increased workload in the textual data link condition. Situation awareness (SA) measures indicated differences in perceived SA between flight conditions, while objective SA measures differed across data link conditions. Actual or potential applications of this research include guidance in the development of flight performance objectives for future GA systems. Other applications include guidance in the integration of automated voice technologies in the cockpit and/or in similar systems that present elevated levels of background noise during normal communications and auditory display operations. / Ph. D.
27

An Assessment of the Attention Demand Associated with the Processing of Information for In-Vehicle Information Systems (IVIS)

Gallagher, John Paul 04 May 2001 (has links)
Technological interventions are being considered to alleviate congestion and to improve the quality of driving on our nation's highways. These new technology interventions will be capable of increasing the amount of information provided to the driver; therefore, steps must be taken to ensure they do not require a high attention demand, since limited attention resources can be diverted from the primary task of driving to a secondary in-vehicle task. The attention demand required to extract information has been studied relatively extensively. However, the processing required to make complex decisions is not well understood and provides cause for concern. This study investigated the attention demand required to perform several types of tasks, such as selecting a route, selecting the cheapest route, and selecting the fastest route. The three objectives of this study were: 1) to investigate driver performance during IVIS tasks that required additional processing of information after the extraction of information from a visual display; 2) to develop a method for evaluating driver performance with regard to safety, accomplished by performing an extensive review of the literature and developing two composite measures; and 3) to provide descriptive data on the proportion of drivers who exceeded a threshold of driver performance for each of the different IVIS tasks. An instrumented vehicle, equipped with cameras and sensors, was used to investigate on-road driver behavior on a four-lane divided road with good visibility. A confederate vehicle was driven in front of the instrumented vehicle to create a vehicle-following situation. Thirty-six drivers participated in this study. Age, presentation format, information density, and type of task were the independent variables. Results from this study indicate that a high proportion of drivers will have substantially degraded performance when performing IVIS tasks such as selecting a route or a hotel from several possibilities. Findings also indicate that tasks involving computations, such as selecting the quickest or cheapest route, require a high attention demand and consequently should not be performed by a driver when the vehicle is in motion. In addition, text-based messages in paragraph format should not be presented to the driver while the vehicle is in motion. The graphic icon format should be utilized for route planning tasks. / Ph. D.
28

Development and Human Factors Evaluation of a Portable Auditory Localization Acclimation Training System

Thompson, Brandon Scott 19 June 2020 (has links)
Auditory situation awareness (ASA) is essential for safety and survivability in military operations where many of the hazards are not immediately visible. Unfortunately, the Hearing Protection Devices (HPDs) required to operate in these environments can impede auditory localization performance. Promisingly, recent studies have exhibited the plasticity of the human auditory system by demonstrating that training can improve auditory localization ability while wearing HPDs, including military Tactical Communications and Protective Systems (TCAPS). As a result, the U.S. military identified the need for a portable system capable of imparting auditory localization acquisition skills at similar levels to those demonstrated in laboratory environments. The purpose of this investigation was to develop and validate a Portable Auditory Localization Acclimation Training (PALAT) system equipped with an improved training protocol against a proven laboratory-grade system referred to as the DRILCOM system, and subsequently to evaluate the transfer-of-training benefit in a field environment. In Phase I, a systems decision process was used to develop a prototype PALAT system consisting of an expandable frame housing 32 loudspeakers operated by a user-controlled tablet computer, capable of reproducing acoustically accurate localization cues similar to the DRILCOM system. Phase II used a within-subjects human factors experiment to validate whether the PALAT system could impart similar auditory localization training benefits as the DRILCOM system. Results showed no significant difference between the two localization training systems at each stage of training or in training rates for the open ear and with two TCAPS devices. The PALAT system also demonstrated the ability to detect differences in localization accuracy between listening conditions in the same manner as the DRILCOM system. Participant ratings indicated no perceived difference in localization training benefit but significantly preferred the PALAT system user interface, which was specifically designed to improve usability features to meet the requirements of a user-operable system. The Phase III investigation evaluated the transfer of the training benefit imparted by the PALAT system, which used a broadband stimulus, to a field environment using a gunshot stimulus. Training under the open ear and in-the-ear TCAPS conditions resulted in significant differences between the trained and untrained groups from the in-office pretest to the in-field posttest. / Doctor of Philosophy / Auditory situation awareness (ASA) is essential for safety and survivability in military operations where many of the hazards are not immediately visible. Unfortunately, the Hearing Protection Devices (HPDs) required to operate in these environments can impede sound localization performance. Promisingly, recent studies have exhibited the ability of the human auditory system to learn by demonstrating that training can improve sound localization ability while wearing HPDs. As a result, the U.S. military identified the need for a portable system capable of improving sound localization performance at similar levels to those demonstrated in laboratory environments. The purpose of this investigation was to develop and validate a Portable Auditory Localization Acclimation Training (PALAT) system equipped with an improved training protocol against a proven laboratory-grade system referred to as the DRILCOM system, and subsequently to evaluate the transfer-of-training benefit in a field environment.
In Phase I, a systems decision process was used to develop a prototype PALAT system consisting of an expandable frame housing 32 loudspeakers operated by a user-controlled tablet computer, capable of reproducing sounds similar to those of the DRILCOM system. Phase II used a within-subjects human factors experiment to validate whether the PALAT system could impart similar sound localization training benefits as the DRILCOM system. Results showed no significant difference between the two localization training systems at each stage of training or in training rates for the open ear and with two HPDs. The PALAT system also demonstrated the ability to detect differences in localization accuracy between listening conditions in the same manner as the DRILCOM system. Participant ratings indicated no perceived difference in localization training benefit but significantly preferred the PALAT system user interface, which was specifically designed to improve usability features to meet the requirements of a user-operable system. The Phase III investigation evaluated the transfer of the training benefit imparted by the PALAT system, which used a broadband stimulus, to a field environment using a gunshot stimulus. Training under the open ear and in-the-ear TCAPS conditions resulted in significant differences between the trained and untrained groups from the in-office pretest to the in-field posttest.
29

Shared Situation Awareness in Student Group Work When Using Immersive Technology

Bröring, Tabea January 2023 (has links)
Situation awareness (SA) describes how well a person perceives and understands their environment and the situation that they are in. When working in groups, shared SA describes how similarly the team members view and interpret the situation in a given environment. Immersive technology comprises technology that integrates virtual objects into the user's reality of a physical world. It holds great potential for application in educational contexts and collaborative settings like group projects. Immersive technology can increase engagement, make complex concepts more tangible, and increase media fluency. When immersive technology is introduced into a real-world setting, it creates a mixed reality with virtual and physical elements. In mixed reality collaborations, the complexity of elements in the environment can negatively affect the shared SA of the group members. The research problem of this thesis is that the intersection between shared SA and student group work that involves immersive technology is under-researched to date. The research question is "How is shared situation awareness in student group work formed when using immersive technology?". A case study of a student group, involving participatory observation of several of their work sessions, was carried out, and the obtained material was analyzed using sequential analysis. It was found that the students do not prioritize shared SA but work individually, dividing smaller subtasks among themselves and focusing on their own tasks first and foremost. Communication is used sparingly to stay updated about the other students' work status, which helps to build shared SA. Communication also plays a crucial role in building shared SA when using immersive technology. It was also observed that the students prefer to use immersive technology in a way that allows more than one person to see the same virtual environment, as is the case when two virtual reality (VR) headsets are connected to the same application.
30

Autonomous terminal area operations for unmanned aerial systems

McAree, Owen January 2013 (has links)
After many years of successful operation in military domains, Unmanned Aerial Systems (UASs) are generating significant interest amongst civilian operators in sectors such as law enforcement, search and rescue, aerial photography and mapping. To maximise the benefits brought by UASs to sectors such as these, a high level of autonomy is desirable to reduce the need for highly skilled operators. Highly autonomous UASs require a high level of situation awareness in order to make appropriate decisions. This is of particular importance to civilian UASs, where transparency and equivalence of operation to current manned aircraft is a requirement, particularly in the terminal area immediately surrounding an airfield. This thesis presents an artificial situation awareness system for an autonomous UAS capable of comprehending both the current continuous and discrete states of traffic vehicles. This estimate forms the basis of the projection element of situation awareness, predicting the future states of traffic. Projection is subject to a large degree of uncertainty in both continuous state variables and in the execution of intent information by the pilot. Both of these sources of uncertainty are captured to fully quantify the future positions of traffic. Based upon the projection of future traffic positions, a self-separation system is designed which allows a UAS to quantify its separation from traffic vehicles up to some future time and manoeuvre appropriately to minimise the potential for conflict. A high fidelity simulation environment has been developed to test the performance of the artificial situation awareness and self-separation system. The system has demonstrated good performance in all situations, with an equivalent level of safety to that of a human pilot.
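To make the projection-with-uncertainty idea in this abstract concrete, here is a minimal sketch (not the thesis's actual algorithm): a traffic vehicle's position is propagated under an assumed constant-velocity model while its positional uncertainty grows over time, and future separation from ownship is checked against a chosen minimum. All parameter values, function names, and the uncertainty-growth model are illustrative assumptions.

```python
# Hedged sketch: project a traffic vehicle's future position under a constant-velocity
# assumption, grow the positional uncertainty over time, and flag future times at which
# the probable separation from ownship falls below a chosen minimum.
import numpy as np

def project_traffic(pos, vel, cov0, q, horizon_s, dt=1.0):
    """Return (times, means, covs) for a constant-velocity projection.

    pos, vel : 2-vectors (m, m/s); cov0 : 2x2 initial position covariance;
    q : process-noise intensity controlling how quickly uncertainty grows.
    """
    times = np.arange(dt, horizon_s + dt, dt)
    means = np.array([pos + vel * t for t in times])
    covs = np.array([cov0 + q * (t ** 2) * np.eye(2) for t in times])  # simple growth model
    return times, means, covs

def separation_ok(own_path, traffic_means, traffic_covs, min_sep_m=500.0, n_sigma=2.0):
    """For each future time, check that ownship is outside the traffic's n-sigma region plus min_sep."""
    ok = []
    for own, mu, cov in zip(own_path, traffic_means, traffic_covs):
        radius = n_sigma * np.sqrt(np.max(np.linalg.eigvalsh(cov)))  # conservative uncertainty bound
        ok.append(np.linalg.norm(own - mu) > min_sep_m + radius)
    return np.array(ok)

# Hypothetical example: traffic converging on ownship's straight-line path.
times, mu, cov = project_traffic(pos=np.array([2000.0, 1500.0]),
                                 vel=np.array([-30.0, -20.0]),
                                 cov0=25.0 * np.eye(2), q=0.5, horizon_s=60.0)
own_path = np.array([[40.0 * t, 0.0] for t in times])  # ownship flying east at 40 m/s
conflict_times = times[~separation_ok(own_path, mu, cov)]
print("Predicted loss of separation at t =", conflict_times, "seconds ahead")
```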
