11.
Modeling the Relationship between Perceptual and Stimulus Space in Category Learning
Killingsworth, Clay (01 May 2021)
Learning to categorize visual stimuli is a fundamental cognitive skill underlying both everyday functioning and professional competencies in domains such as radiology and airport security screening. Categories may be very simple or highly complex, with accurate categorization dependent on multiple interacting features. General recognition theory (GRT) models uniquely allow examination of feature-dimension interactions, but basic questions remain about the applicability of such models and the 2x2 categorization tasks (four-alternative forced choice) employed in studies that use them. Findings in several studies that factorially combine two levels of two stimulus dimensions indicate a common pattern of perceptual advantage for the category that is high on both dimensions, despite examining stimuli as diverse as simulated human faces, baggage x-rays, and mammograms. Because of the ambiguous ground truth in these applied studies, their conclusions are limited by the inability to rule out the influence of task artifacts on their results. The present work fills this gap in the literature and seeks to disambiguate such findings by examining the contributions of task artifacts such as response mapping and by assessing the sensitivity of the modeling paradigm using simple stimuli. Participants learned categories of simple two-dimensional stimuli produced by various manipulations of a basic category construction, and GRT-wIND models were fit to their responses. Results indicate that the model is sensitive to manipulations of the perceptual space and category structures. Further, the previously observed pattern advantaging one of the four categories is observed here despite the absence of such a relationship between the feature dimensions in the objective category constructions. The effect is largely mitigated, however, by altering the response locations such that they are no longer orthogonally mapped to their corresponding categories. These findings provide further evidence for the utility and sensitivity of the GRT-wIND model and suggest updates to best practices in applying the four-alternative forced choice task.
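The 2x2 factorial category structure behind the four-alternative forced choice task can be sketched as follows; the dimension names here are illustrative placeholders, not the dissertation's actual stimulus dimensions:

```python
from itertools import product

# Two binary stimulus dimensions, factorially combined; names are hypothetical.
dimensions = {"dim_1": ("low", "high"), "dim_2": ("low", "high")}

# Each of the four factorial cells is its own response category (4AFC).
categories = {f"C{i + 1}": cell for i, cell in enumerate(product(*dimensions.values()))}
# categories["C4"] == ("high", "high"): the cell for which prior studies
# report a perceptual advantage.
```

Whether the four response keys are laid out to mirror this 2x2 structure (an orthogonal response mapping) or not is the kind of task-artifact manipulation the work examines.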
12.
Getting the Upper Hand: Natural Gesture Interfaces Improve Instructional Efficiency on a Conceptual Computer Lesson
Bailey, Shannon (01 January 2017)
As gesture-based interactions with computer interfaces become more technologically feasible for educational and training systems, it is important to consider which interactions are best for the learner. Computer interactions should neither interfere with learning nor increase the mental effort of completing the lesson. The purpose of the current set of studies was to determine whether natural gesture-based interactions, or instruction in those gestures, help the learner in a computer lesson by increasing learning and reducing mental effort. First, two studies were conducted to determine which gestures participants considered natural. Those gestures were then implemented in an experiment comparing type of gesture and type of gesture instruction on learning conceptual information from a computer lesson. The goal of these studies was to determine the instructional efficiency (that is, the extent of learning taking into account the amount of mental effort) of implementing gesture-based interactions in a conceptual computer lesson. To test whether the type of gesture interaction affects conceptual learning, the gesture-based interactions were either naturally or arbitrarily mapped to learning material on the fundamentals of optics. The optics lesson presented conceptual information about reflection and refraction, and participants used the gesture-based interactions during the lesson to manipulate on-screen lenses and mirrors in a beam of light. The beam of light refracted or reflected at the angle corresponding to the type of lens or mirror. The natural gesture-based interactions mimicked the physical movements used to manipulate the lenses and mirrors in the optics lesson, while the arbitrary gestures did not match the movement of the lens or mirror being manipulated.
The natural gestures implemented in the computer lesson were determined in Study 1, in which participants performed gestures they considered natural for a set of actions; in Study 2, these gestures were rated as most closely resembling the physical interactions they represent. The arbitrary gestures were those rated by participants in Study 2 as most arbitrary for each computer action. To test whether the effect of novel gesture-based interactions depends on how they are taught, the main experiment varied how the gestures were instructed, using either video- or text-based tutorials. Results of the experiment indicate that natural gesture-based interactions were better for learning than arbitrary gestures, and that instruction of the gestures largely did not affect learning or the amount of mental effort experienced during the task. To further investigate the factors affecting instructional efficiency, individual differences among learners were taken into account. Results indicated that the instructional efficiency of the gestures and their instruction depended on an individual's spatial ability, such that arbitrary gesture interactions taught with a text-based tutorial were particularly inefficient for those with lower spatial ability. These findings are explained in the context of Embodied Cognition and Cognitive Load Theory, and guidelines are provided for the instructional design of computer lessons using natural user interfaces. These two theoretical frameworks were used to explain why gesture-based interactions and their instruction affected instructional efficiency in the lesson.
Natural gesture-based interactions (those that mimic the physical interaction by corresponding to the learning material) were more instructionally efficient than arbitrary gestures, likely because natural gestures help schema development of conceptual information through physical enactment of the learning material. Furthermore, natural gestures resulted in lower cognitive load than arbitrary gestures, because arbitrary gestures that do not match the learning material may increase working memory processing unrelated to the learning material. It was hypothesized that video-based tutorials would be a better way to instruct gesture-based interactions than text-based tutorials, because videos may help the learner visualize the interactions and create a more easily recalled sensorimotor representation of the gestures. This hypothesis was not supported: there was no strong evidence that video-based tutorials were more instructionally efficient than text-based instructions. The results of the current set of studies can be applied to educational and training systems that incorporate a gesture-based interface. The finding that more natural gestures are better for learning efficiency, cognitive load, and a variety of usability factors should encourage instructional designers and researchers to keep the user in mind when developing gesture-based interactions.
13.
Comparing Human and Machine Learning Classification of Human Factors in Incident Reports From Aviation
Boesser, Claas Tido (01 January 2020)
Incident reporting systems are an integral part of any organization seeking to increase the safety of its operation by gathering data on past events, which can then be used to identify ways of mitigating similar events in the future. To analyze trends and common issues with regard to the human element in the system, reports are often classified according to a human factors taxonomy. Machine learning algorithms have become popular tools for automated classification of text; however, the performance of such algorithms varies and depends on several factors. In supervised machine learning tasks such as text classification, the algorithm is trained with features and labels, where the features are a function of the incident reports themselves and the labels are supplied by a human annotator, whether the reporter or a third party. Aside from the intricacies of building and tuning machine learning models, subjective classification according to a human factors taxonomy can generate considerable noise and bias. I examined the interdependencies between the features of incident reports, the subjective labeling process, the constraints that the taxonomy itself imposes, and basic characteristics of human factors taxonomies that can influence human, as well as automated, classification. To evaluate these challenges, I trained a machine learning classifier on 17,253 incident reports from the NASA Aviation Safety Reporting System (ASRS) using multi-label classification, and collected labels from six human annotators for a subset of 400 incident reports each, resulting in a total of 2,400 individual annotations. Results show that, in general, reliability of annotation for the set of incident reports selected in this study was comparatively low. It was also evident that some human factors labels elicited more agreement than others, sometimes related to the presence of keywords in the reports that map directly to the label.
Performance of machine learning annotation followed the patterns of human agreement on labels. The high variability of the content and quality of narratives was identified as a major source of difficulty in annotation. Suggestions on how to improve the data collection and labeling process are provided.
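Annotator reliability of the kind discussed above is typically quantified with a chance-corrected agreement statistic. A minimal sketch of Cohen's kappa for two annotators, using made-up labels rather than the ASRS annotations:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] / n * freq_b[c] / n for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical human factors labels for four incident reports
ann1 = ["fatigue", "fatigue", "communication", "communication"]
ann2 = ["fatigue", "communication", "communication", "communication"]
kappa = cohens_kappa(ann1, ann2)  # 0.5 for this toy example
```

A multi-label study like this one would apply such a statistic per label (or use a multi-annotator generalization such as Krippendorff's alpha), but the chance-correction logic is the same.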
14.
Does One Bad Phish Spoil the Whole Email Load?: Exploring Phishing Susceptibility Task Factors and Potential Interventions
Sarno, Dawn (01 January 2020)
Phishing emails have become a prevalent cybersecurity threat for the modern email user. Research attempting to understand how users are susceptible to phishing attacks has been limited and has not fully explored how task factors influence accurate detection. Existing training interventions also fall short: users still fall victim to as many as 90% of phishing emails following training. The present studies examined how task factors (e.g., email load, phishing prevalence) and a new form of intervention, rather than training, influence email performance. In four experiments, participants classified emails as either legitimate or not legitimate and reported on a variety of other categorizations (e.g., threat level). The first two experiments examined how email load and phishing prevalence influence phishing detection. The third experiment examined the interaction of these two factors to determine whether they have compounding effects. The last experiment investigated how performance can be improved with a novel cheat sheet intervention. All four experiments utilized individual difference variables to examine how cognitive, behavioral, and personality factors influence detection under various task conditions and how they impact the utilization of training interventions. The results across the first three experiments indicated that both high email load and low phishing prevalence decrease email classification accuracy and sensitivity. However, performance was poor across all conditions, with phishing detection near chance and sensitivity values indicating that the task was very challenging. Additionally, participants demonstrated poor metacognition, with overconfidence, low self-reported difficulty, and low perceived threat for the emails. The results of Experiment 4 indicated that phishing detection could be improved by 20% with the embedded cheat sheet intervention.
Overall, the present studies suggest that email load and phishing prevalence can decrease fraud detection, but that embedded phishing tips can improve performance.
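The sensitivity values referred to above come from signal detection theory. A minimal sketch of computing d' (sensitivity) and c (response bias) from hit and false-alarm rates, with illustrative numbers rather than the study's data:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and criterion (c) from hit/false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical rates: 80% of phishing emails flagged, 20% of legitimate ones misflagged
d, c = dprime_and_criterion(0.80, 0.20)  # d is about 1.68; c is 0.0 (no response bias)
```

In practice, hit and false-alarm rates of 0 or 1 are usually adjusted slightly (e.g., a log-linear correction) before taking the inverse normal, since z is undefined at the extremes.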
15.
Subjective Measures of Implicit Categorization Learning
Zlatkin, Audrey (01 January 2019)
The neuropsychological theory known as COVIS (COmpetition between Verbal and Implicit Systems) postulates that distinct brain systems compete during category learning. The explicit system involves conscious hypothesis testing about verbalizable rules, while the implicit system relies on procedural learning of rules that are difficult to verbalize. Behaviorally, COVIS has been supported by empirical dissociations between explicit and implicit learning tasks. The current studies were designed to gain a deeper understanding of implicit category learning through a subjective measure of awareness, meta-d', which until now has not been validated within a COVIS framework. Meta-d' is a measure of metacognitive accuracy, the ability to assess the accuracy of one's own performance. Three experiments evaluated the use of meta-d' as a valid predictor of task performance within a two-structure perceptual categorization task. Experiment 1 used meta-d' to parse out dissociations between awareness and performance through the phenomena of blindsight and blind insight. Experiments 2 and 3 utilized a motor response mapping disruption to observe predicted decrements to the implicit learning system. Experiment 3 also utilized functional near-infrared spectroscopy (fNIRS) to measure hemodynamic changes in the prefrontal cortex as a function of category structure. Across the three experiments, meta-d' in conjunction with decision bound model fits was used to make accurate predictions about differences in performance across implicit and explicit categorization tasks. These collective results indicate that metacognitive accuracy, particularly for the implicit structure, was highly sensitive to whether a person was using the correct rule strategy throughout the task.
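Meta-d' itself requires a model fit (Maniscalco and Lau's procedure), but the underlying type-2 signal detection logic can be sketched simply: treat high confidence on correct trials as "hits" and high confidence on error trials as "false alarms". A crude type-2 d' along those lines, with made-up trial data (an approximation for illustration, not the meta-d' fit used in the dissertation):

```python
from statistics import NormalDist

def type2_dprime(correct, high_confidence):
    """Crude type-2 sensitivity: how well confidence discriminates correct from error trials."""
    z = NormalDist().inv_cdf
    hits = sum(c and h for c, h in zip(correct, high_confidence))
    fas = sum((not c) and h for c, h in zip(correct, high_confidence))
    n_correct = sum(correct)
    n_error = len(correct) - n_correct
    return z(hits / n_correct) - z(fas / n_error)

# Hypothetical trials: accuracy of each categorization response, and whether
# the participant reported high confidence on that trial
correct = [True, True, True, True, False, False, False, False]
high_confidence = [True, True, True, False, False, False, False, True]
t2 = type2_dprime(correct, high_confidence)  # about 1.35: confidence tracks accuracy
```

The full meta-d' measure instead asks what type-1 d' would produce the observed confidence data under ideal metacognition, which corrects for response bias in a way this sketch does not.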
16.
The Effects of Presence and Cognitive Load on Episodic Memory in Virtual Environments
Barclay, Paul (01 January 2019)
Episodic memory refers to an individual's memory for events they have experienced in the past, along with the associated contextual details. To more closely reflect the way episodic memory functions in the real world, researchers and clinicians test episodic memory using virtual environments. However, these virtual environments introduce new interfaces and task demands that are not present in traditional methodologies. This dissertation investigates these environments through the lenses of Presence and Cognitive Load theories in order to unravel the ways that basic technological and task differences may affect memory performance. Participants completed a virtual task under High and Low Immersion conditions intended to manipulate Presence, and under Single-Task, Ecological Dual-Task, and Non-Ecological Dual-Task conditions intended to manipulate cognitive load. Afterward, they completed a battery of memory tasks assessing the spatial, object, and feature binding aspects of episodic memory. Analysis through a 2x3 ANOVA showed that spatial memory performance is greatly improved by the manipulation of Presence, whereas object memory performance is improved by germane cognitive load. Exploratory analyses also revealed significant gender differences in spatial memory performance, indicating that improving Presence may offset the male advantage traditionally seen on spatial tasks. These results have practical implications for clinical memory assessment as well as training paradigms, and may serve to highlight the differences between the way memory is studied in the laboratory and the way it is employed in day-to-day life. Future studies based on this research should focus on linking these differences in memory performance to visuospatial and verbal strategies of memorization, and on determining whether the effects observed in this study replicate with other manipulations of presence and cognitive load.
17.
Threatening Instructions During a Hurricane Influence Risk Perceptions: The Case of Fear Appeals and Changing Hurricane Projections
Whitmer, Daphne (01 May 2019)
The goal of this research was to examine the effectiveness of persuasive language in the protective action recommendation of an emergency warning, which instructs people how to prepare and stay safe. Study 1 was a pilot study, which suggested that participants were able to make distinctions between hurricane categories. In Study 2, the presence of fear language and second-person personal pronouns (i.e., "you") in a recommendation was manipulated. Overall, fear language was more influential on risk perceptions than a pronoun. To understand how context influences risk perceptions, participants in Study 3 made decisions after each piece of information was received. The severity of the hurricane increased, decreased, or stayed the same before decision point 2, and a recommendation containing fear or neutral language was presented before decision point 3. Those who read the fear message were more likely to be in the danger control process than those in the neutral language condition, which suggested that the fear message emphasized threat but did not diminish participants' perception of efficacy. Behavioral compliance with the warning was high in all conditions. In terms of the change in perceived threat from decision point 2 to 3, participants in the decrease condition who read the fear appeal had the largest increase in perceived threat. In contrast, a hurricane increasing in intensity may be fear-provoking enough that a fear appeal does not further enhance risk perceptions. When examining individual differences, women high in Need for Cognition had the largest increase in perceived message persuasiveness in the decrease and increase conditions. Phrasing guidelines for emergency management are discussed, along with the theoretical contributions of using social psychological methodology to examine emergency warnings.
While individual differences are important predictors of warning interpretation, future research needs to address emergency management's current limitations regarding individualized warnings.
18.
If a Virtual Tree Falls in a Simulated Forest, is the Sound Restorative? An Examination of the Role of Level of Immersion in the Restorative Capacity of Virtual Nature Environments
Michaelis, Jessica (01 January 2019)
Stress and cognitive fatigue have become a pervasive problem, especially in Western society. Both can have deleterious effects not only on performance but also on one's physical and mental health. This dissertation presents a study investigating the effects of virtual nature on stress reduction and cognitive restoration. Specifically, the study assessed the effects of Immersion (Non-immersive, Semi-immersive, Fully-immersive) and Exploration (Passive vs. Active) on stress reduction and cognitive restoration. Additionally, restoration from the most effective virtual nature environment was compared to that of taking an active coloring break. Eighty-three university students with normal color vision, depth perception, and good visual acuity participated in this study. The overall findings suggest that virtual nature is able to reduce stress and anxiety, and generally the more immersive and interactive the environment, the better. Moreover, though both those in the passive VR nature condition and those in the coloring condition reported a reduction in stress, only those in the passive VR nature condition exhibited the physiological changes indicative of stress reduction.
19.
Human Factor Effects of Simulating Battlefield Malodors in Field Training
Pike, William (01 January 2018)
To explore how to better utilize simulated odors in live training, a study of 180 United States Military Academy at West Point cadets was undertaken to determine whether pre-exposure to a simulated malodor might ameliorate odor-related performance decrements and improve performance of a complex task. Exposure to malodors has long been shown to increase stress and escape behavior and to reduce performance of complex tasks, in addition to degrading other human factors areas. However, desensitization to a particular odor, through a process known as olfactory adaptation, could ameliorate these performance issues. In this study, cadets were assigned to one of three conditions: Adaptation (odor/odor, denoting the presence or absence of the simulated malodor in each of two phases), No Adaptation (no odor/odor), or Control (no odor/no odor). Participants wore a device to track electrodermal activity, a predictor of stress. Participants spent 12 minutes in a tent taking a quiz involving a common military task. After two minutes, a scent delivery system was turned on, delivering either the simulated malodor (burnt human flesh) or no odor. Participants exited the tent after the full 12 minutes and rated the air quality of the tent. They repeated the exercise in a second tent with a similar quiz. Metrics of interest included perceived intensity and detection time, common metrics for gauging olfactory adaptation, as well as electrodermal activity, escape behavior, and task performance. Results indicate that participants in the Adaptation condition were partially desensitized to the malodor. Performance metrics did not show statistically significant effects for stress, escape behavior, or performance improvement in the Adaptation condition, although there was a strong negative correlation between performance and perceived mental demand. Performance improvement and stress results trended in the expected directions.
This study differed from previous olfactory adaptation studies by linking adaptation to performance during a relevant complex task, and it provides valuable lessons for future olfactory studies. From a more applied viewpoint, it also provides insight for future research into the incorporation of malodors in live training.
20.
The Effects of Social Conformity in Human-Robot Interaction
Volante, William (01 January 2020)
While previous work has investigated aspects of the robot, the human, and the environment as influential factors in the human-robot relationship, little work has examined the role of social conformity in this relationship. As social conformity has been shown to affect human-human choice, relationships, and trust, there are a priori reasons to believe that it plays an influential role in human-robot interaction (HRI) scenarios as well. Early research into the influence of social conformity in HRI did not find the effect to be present with robots; however, more recent work has adapted the methodological paradigm and found more consistent evidence of conformity in HRI settings. Here, two studies investigated the impact of response methods (e.g., the ability to change one's response), task types (i.e., arithmetic or line discriminations), and task importance on conformity with robots. Previous HRI research has demonstrated conformity effects with the change response technique but not with the ordered response technique. The first experiment aimed to isolate which methodological change produces the conformity effect, by examining both the task and the response methods used. The second experiment aimed to add to the literature on the impact of task importance on conformity in HRI. While task importance has been shown to increase conformity in human-human research settings, it had yet to be specifically examined in HRI conformity research. Results from Experiment 1 show that a conformity effect is present with humans but not robots in the change response condition under certain task settings. Results from Experiment 2 indicate a pattern of conformity with robots but not humans during high importance tasks. These findings are discussed in relation to the body of literature on conformity and HRI settings as a whole.