371

詞彙歧義解困的次要語義偏向效應再視:中文多義詞的眼動研究證據 / Revisiting the subordinate bias effect of lexical ambiguity resolution: evidence from eye movements in reading Chinese

盧怡璇, Lu, I Hsuan Unknown Date
過去二十多年來,心理語言學研究關注詞彙歧義解困 (lexical ambiguity resolution)歷程發生時,語義脈絡與多義詞的語義頻率之間的交互作用。許多研究發現,當語境支持非均勢同形異義詞 (unbalanced homograph) 的次要語義時,同形異義詞的凝視時間長於與其有相同字形頻率的單義詞 (unambiguous control),此為次要語義偏向效應 (subordinate bias effect)。根據再排序觸接模型 (reordered-access model),次要語義偏向效應來自於主要語義與次要語義的競爭;相對地,選擇觸接模型 (selective access model)則認為只有與語境相關的語義被激發,因此,次要語義偏向效應是因為提取到一個使用頻率較低的語義。本論文進行兩個眼動實驗。實驗一檢視中文多義詞的次要語義偏向效應以區辨兩種詞彙歧義解困模型分別提出的解釋。本實驗的材料使用了低頻同形異義詞、低頻單義詞、以及高頻單義詞。結果顯示,當使用的單義詞與多義詞字形頻率相同時,在目標詞及後目標詞上(目標詞後一個詞)皆發生了次要語義偏向效應。實驗二利用口語理解─視覺典範中透過受試者理解語音訊息時同步記錄眼動的作業方式來探究次要語義偏向效應是否來自於主要語義的激發。當口語句子中的目標詞被唸出後,會計算出隨著時間增加眼睛落在四個雙字詞的凝視比例。結果發現次要語義因為語境的選擇在聽到目標詞後大約500毫秒時就可被激發,主要語義則在一聽完多義詞後被激發。因此,多義詞的兩個語義在聽到目標詞後大約900至1300毫秒時(相當於在後目標詞時)發生競爭。整體而言,本研究顯示即使語境支持多義詞的次要語義,主要語義依然會被激發。因此,次要語義偏向效應是由兩個語義競爭後所造成的結果,符合再排序觸接模型的解釋。 / Research in psycholinguistics over the last two decades has focused on the interaction between linguistic context and meaning dominance during lexical ambiguity resolution. Many studies have demonstrated the subordinate bias effect (SBE): when the preceding context biases toward the subordinate (i.e., less frequent) meaning of an unbalanced homograph, gaze durations on the homograph are longer than on an unambiguous control matched in word-form frequency. According to the reordered access model, the SBE is due to competition between the dominant and subordinate meanings. In contrast, the selective access model assumes that only the context-relevant meaning is activated, so the SBE results from access to a low-frequency meaning. Two eye-tracking experiments, one on sentence reading and one on sentence listening, were conducted. Experiment 1 examined the SBE for Chinese homographs to differentiate the two accounts. We used low-frequency homographs along with matched low- and high-frequency unambiguous words. The results showed the SBE in fixation durations on the target region and the post-target region (i.e., the next two words after the target) when unambiguous controls were matched to the word-form frequency of the ambiguous words. Experiment 2 used the visual world paradigm to explore the temporal dynamics of dominant-meaning activation responsible for the SBE in an instructional eye-tracking-during-listening task. Fixation probabilities on four disyllabic printed words were analyzed over the period after the target word was uttered in a spoken sentence. The results supported the reordered access model. The subordinate meaning was activated by contextual information at about 500 ms after the acoustic onset of the homograph, when context made its favored meaning available. Soon after the offset of the homograph, the dominant meaning became active. Both meanings of the homograph were active during the 901 to 1300 ms time window, corresponding approximately to the acoustic onset of the post-target word. In sum, our studies demonstrate that the dominant meaning is activated even when contextual information biases toward the subordinate meaning of a homograph. The subordinate bias effect is therefore the result of competition between two meanings, in line with the reordered access model.
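As an illustration of the Experiment 2 analysis described above, the sketch below computes, for each time bin after target-word onset, the proportion of trials in which the eyes are on each of the four printed words. The data layout, function name, bin size, and ROI labels are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def fixation_proportions(trials, roi_labels, t_start=0, t_end=1500, bin_ms=50):
    """Proportion of trials with a fixation on each ROI per time bin.

    trials: list of trials; each trial is a list of (time_ms, roi) samples,
            with time_ms relative to target-word onset and roi one of
            roi_labels or None (hypothetical layout, for illustration only).
    """
    bins = np.arange(t_start, t_end, bin_ms)
    props = {roi: np.zeros(len(bins)) for roi in roi_labels}
    for trial in trials:
        for i, b in enumerate(bins):
            # ROIs hit by any gaze sample falling inside this time bin
            in_bin = {roi for t, roi in trial if b <= t < b + bin_ms}
            for roi in roi_labels:
                if roi in in_bin:
                    props[roi][i] += 1
    for roi in roi_labels:
        props[roi] /= len(trials)
    return bins, props

# The four printed disyllabic words would be the ROIs, e.g.:
# bins, props = fixation_proportions(trials, ["word_a", "word_b", "word_c", "word_d"])
```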
372

THE EFFECTS OF NOISE ON AUTONOMIC AROUSAL AND ATTENTION AND THE RELATIONSHIP TO AUTISM SYMPTOMATOLOGY

Ann Marie Alvar (11820860) 18 December 2021
Experiment One: The Effect of Noise on Autonomic Arousal

In response to the growing demand for research that helps us understand the complex effects of Autonomic Arousal (AA) on behavior and performance, there is an increasing need for robust techniques that use stimuli, such as sound, to vary the level of AA within a study. The goal of this study was to examine the impact of several factors, including sound intensity, order of presentation, and direction of presentation, on skin conductance level, a widely used index of AA. To do this, we had 34 young adults aged 18-34 listen to a series of 2-minute blocks of a sound stimulus based on a heating, ventilation, and air conditioning (HVAC) system. The blocks comprised five single-intensity conditions differing in 10 dBA steps from 35 to 75 dBA. We presented the blocks in both rising and falling order of intensity, with half the participants hearing the rising order first and half the falling order first. The evidence from this study suggests that increasing the sound level plays an important role in increasing AA, and that habituation is an extremely important factor that must be accounted for: in typical young adults it quickly dampens the response to the current and subsequent stimuli. These findings suggest that researchers can most efficiently maximize the usable range of AA, while keeping participants comfortable, by starting with the most intense stimulus and proceeding to less intense ones, working with habituation rather than against it.

Experiment Two: The Effect of Autonomic Arousal on Visual Attention

The goal of this study was to better understand how various levels of autonomic arousal affect different components of attentional control and whether ASD-related traits, indexed by Autism Quotient (AQ) scores, relate to alterations in this relationship. The study had 41 young adult participants (23 women, 17 men, 1 preferred not to say), ranging in age from 18 to 38 years. Participants listened to varying levels of noise to induce changes in AA, which were recorded as changes in skin conductance level (SCL). To evaluate attentional control, participants performed pro- and anti-saccade tasks in a visual gap-overlap paradigm. The findings suggest that increased levels of autonomic arousal improve performance on anti-saccade tasks, which are heavily dependent on top-down attentional control. Additionally, higher AQ scores were related to a smaller benefit of increasing arousal on anti-saccade tasks. Additional interactions were also found and are discussed in this paper.
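One concrete piece of the anti-saccade analysis is detecting saccade onset relative to target appearance in the gap-overlap task. The velocity-threshold sketch below is a generic illustration under assumed parameter values; it is not the detection pipeline actually used in the thesis.

```python
import numpy as np

def saccade_latency(t_ms, gaze_deg, target_onset_ms, vel_thresh=30.0):
    """Latency (ms) of the first saccade after target onset.

    t_ms:       sample timestamps in milliseconds
    gaze_deg:   horizontal gaze position in degrees (same length as t_ms)
    vel_thresh: velocity criterion in deg/s; 30 deg/s is a common default,
                assumed here rather than taken from the thesis.
    """
    t_ms = np.asarray(t_ms, dtype=float)
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    vel = np.abs(np.diff(gaze_deg)) / (np.diff(t_ms) / 1000.0)   # deg/s
    candidates = np.where((t_ms[1:] >= target_onset_ms) & (vel > vel_thresh))[0]
    if candidates.size == 0:
        return None                                              # no saccade detected
    return float(t_ms[1:][candidates[0]] - target_onset_ms)

# Latencies from pro- vs anti-saccade trials could then be compared across
# noise (arousal) levels and related to AQ scores.
```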
373

Analyse visuelle et cérébrale de l’état cognitif d’un apprenant / Visual and cerebral analysis of a learner's cognitive state

Ben Khedher, Asma 02 1900
No description available.
374

How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models

Nuthmann, Antje, Einhäuser, Wolfgang, Schütz, Immo 22 January 2018
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available.
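To make the analysis approach concrete, the sketch below fits a logistic GLMM to grid-parcellated fixation data with a central-bias predictor, a saliency predictor, and by-subject and by-scene random effects. It uses statsmodels rather than the GridFix toolbox mentioned in the abstract, and the file name and column names are assumptions for illustration only.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per subject x scene x grid cell (hypothetical layout):
# fixated:      1 if the cell received at least one fixation, else 0
# central_bias: e.g. (negative) distance of the cell centre from the screen centre
# salience:     mean value of the saliency map under evaluation within the cell
df = pd.read_csv("parcellated_fixations.csv")

model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ central_bias + salience",           # fixed effects
    {"subject": "0 + C(subject)",                  # by-subject random intercepts
     "scene": "0 + C(scene)"},                     # by-item (scene) random intercepts
    df,
)
result = model.fit_vb()       # variational Bayes fit
print(result.summary())       # a reliably positive salience coefficient indicates
                              # predictive value above and beyond the central bias
```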
375

Context Effects in Early Visual Processing and Eye Movement Control

Nortmann, Nora 29 April 2015
There is a difference between the raw sensory input to the brain and our stable perception of entities in the environment. A first approach to investigate perception is to study relationships between properties of currently presented stimuli and biological correlates of perceptual processes. However, it is known that such processes are not only dependent on the current stimulus. Sampling of information and the concurrent neuronal processing of stimulus content rely on contextual relationships in the environment, and between the environment and the body. Perceptual processes dynamically adjust to relevant context, such as the current task of the organism and its immediate history. To understand perception, we have to study how processing of current stimulus content is influenced by such contextual factors. This thesis investigates the influence of such factors on visual processing. In particular, it investigates effects of temporal context in early visual processing and the effect of task context in eye movement control.

To investigate effects of contextual factors on early visual processing of current stimulus content, we study neuronal processing of visual information in the primary visual cortex. We use real-time optical imaging with voltage-sensitive dyes to capture neuronal population activity in the millisecond range across several millimeters of cortical area. To characterize the cortical layout with respect to the mapping of orientation, prior to further investigation, we use smoothly moving grating stimuli. Investigating responses to this stimulus type systematically, we find independent encoding of local contrast and orientation, and a direct mapping of current stimulus content onto cortical activity (Study 1).

To investigate the influence of the previous stimulus as context on processing of current stimulus content, we use abrupt visual changes in sequences of modified natural images. In earlier studies, investigating relatively fast timescales, it was found that the primary visual cortex continuously represents current input (ongoing encoding), with little interference from past stimuli. We investigate whether this coding scheme generalizes to cases in which stimuli change more slowly, as frequently encountered in natural visual input. We use sequences of natural scene contours, comprised of vertically and horizontally filtered natural images, their superpositions, and a blank stimulus, presented at 10 or 33 Hz. We show that at the low temporal frequency, cortical activity patterns do not encode the present orientations but instead reflect their relative changes in time. For example, when a stimulus with horizontal orientation is followed by the superposition of both orientations, the pattern of cortical activity represents the newly added vertical orientations instead of the full sum of orientations. Correspondingly, contour removal from the superposition leads to the representation of orientations that have disappeared rather than those that remain. This is in sharp contrast to more rapid sequences, for which we find an ongoing representation of present input, consistent with earlier studies. In summary, we find that for slow stimulus sequences, populations of neurons in the primary visual cortex are no longer tuned to orientations within individual stimuli but instead represent the difference between consecutive stimuli. Our results emphasize the influence of the temporal context on early visual processing and consequently on information transmission to higher cortical areas (Study 2).

To study effects of contextual factors on the sampling of visual information, we focus on human eye movement control. The eyes are actively moved to sample visual information from the environment. Some traditional approaches predict eye movements solely from simple stimulus properties, such as local contrasts (stimulus-driven factors). Recent arguments, however, emphasize the influence of tasks (task context) and bodily factors (spatial bias). To investigate how contextual factors affect eye movement control, we quantify the relative influences of the task context, spatial biases and stimulus-driven factors. Participants view and classify natural scenery and faces while their eye movements are recorded. The stimuli are composed of small image patches. For each of these patches we derive a measure that quantifies stimulus-driven factors, based on the image content of a patch, and spatial viewing biases, based on the location of the patch. Utilizing the participants’ classification responses, we additionally derive a measure that reflects the information content of a patch in the context of a given task. We show that the effect of spatial biases is highest, that task context is a close runner-up, and that stimulus-driven factors have, on average, a smaller influence. Remarkably, all three factors make independent and significant contributions to the selection of viewed locations. Hence, in addition to stimulus-driven factors and spatial biases, the task context contributes to visual sampling behavior and has to be considered in a model of human eye movements.

Visual processing of current stimulus content, in particular visual sampling behavior and early processing, is inherently dependent on context. We show that already in the first cortical stage, temporal context strongly affects the processing of new visual information, and that visual sampling by eye movements is significantly influenced by the task context, independently of spatial factors and stimulus-driven factors. The empirical results presented provide foundations for an improved theoretical understanding of the role of context in perceptual processes.
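For Study 2, the "vertically and horizontally filtered natural images" can be approximated with a simple Fourier-domain orientation filter, as sketched below. This is a minimal illustration under assumed parameters (orientation bandwidth, DC handling) and is not the exact stimulus-generation procedure used in the thesis.

```python
import numpy as np

def orientation_filter(image, orientation="vertical", bandwidth_deg=20):
    """Keep only Fourier components near one orientation (illustrative).

    Vertical image contours carry their energy along the horizontal frequency
    axis, so orientation="vertical" passes components whose frequency vector
    lies near that axis, and vice versa for "horizontal".
    """
    h, w = image.shape
    f = np.fft.fftshift(np.fft.fft2(image))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0   # 0 deg = horizontal frequency axis
    target = 0.0 if orientation == "vertical" else 90.0
    dist = np.abs(angle - target)
    dist = np.minimum(dist, 180.0 - dist)            # circular distance between orientations
    mask = dist <= bandwidth_deg
    mask[h // 2, w // 2] = True                      # keep the mean luminance (DC) component
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Frames of a hypothetical stimulus sequence (image: 2-D grayscale array):
# vert = orientation_filter(image, "vertical")
# horiz = orientation_filter(image, "horizontal")
# superposition = vert + horiz   # illustrative combination of both orientations
```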
376

Adaptive Eyes: Driver Distraction and Inattention Prevention Through Advanced Driver Assistance Systems and Behaviour-Based Safety

Wege, Claudia 30 January 2014
Technology pervades our daily living, and is increasingly integrated into the vehicle – directly affecting driving. On the one hand, technologies such as cell phones provoke driver distraction and inattention; on the other hand, Advanced Driver Assistance Systems (ADAS) support the driver in the driving task. The question is, can a driver successfully adapt to the ever-growing technological advancements? Thus, this thesis aimed at improving safe driver behaviour by understanding the underlying psychological mechanisms that influence behavioural change. Previous research on ADAS and human attention was reviewed in the context of driver behavioural adaptation. Empirical data from multiple sources such as driving performance data, visual behaviour data, video footage, and subjective data were analyzed to evaluate two ADAS (a brake-capacity forward collision warning system, B-FCW, and a Visual Distraction Alert System, VDA-System). Results from a field operational test (EuroFOT) showed that brake-capacity forward collision warnings lead to immediate attention allocation toward the roadway and braking, yet drivers later change this initial response by directing their eyes toward the warning source in the instrument cluster. A similar phenomenon of drivers changing their initial behaviour was found in a driving simulator study assessing a Visual Distraction Alert System. Analysis showed that a Visual Distraction Alert System successfully assists drivers in redirecting attention to the relevant aspects of the driving task and significantly improves driving performance. The effects are discussed with regard to behavioural adaptation, calibration, and system acceptance. Based on these findings, a novel assessment of the human-machine interaction (HMI) of ADAS was introduced. Based on the contributions of this thesis and previous best practices, a holistic safety-management model of accident prevention strategies (before, during, and after driving) was developed. The DO-IT BEST Feedback Model is a comprehensive feedback strategy comprising driver feedback at various time scales and is therefore expected to provide an added benefit for distraction and inattention prevention. The central contributions of this work are to advance research in the field of traffic psychology in the context of attention allocation strategies, and to improve the ability to design future safety systems with the human factor in focus. The thesis consists of an introduction to the conducted research, six publications in full text, and a comprehensive conclusion of the publications. In brief, this thesis intends to improve safe driver behaviour by understanding the underlying psychological mechanisms that influence behavioural change, thereby resulting in more attention allocated to the forward roadway and improved vehicle control. / Technologie durchdringt unser tägliches Leben und ist zunehmend integriert in Fahrzeuge – das Resultat sind veränderte Anforderungen an Fahrzeugführer. Einerseits besteht die Gefahr, dass er durch die Bedienung innovativer Technologien (z.B. Mobiltelefone) unachtsam wird und visuell abgelenkt ist, andererseits kann die Nutzung von Fahrerassistenzsystemen, die den Fahrer bei der Fahraufgabe unterstützen, einen wertvollen Beitrag zur Fahrsicherheit bieten. Die steigende Aktualität beider Problematiken wirft die Frage auf: "Kann der Fahrer sich erfolgreich dem ständig wachsenden technologischen Fortschritt anpassen?" Das Ziel der vorliegenden Arbeit ist der Erkenntnisgewinn zur Verbesserung des Fahrverhaltens, indem die den Verhaltensänderungen zugrunde liegenden psychologischen Mechanismen untersucht werden. Eine Vielzahl an Literatur zu Fahrerassistenzsystemen und Aufmerksamkeitsverteilung wurde vor dem Hintergrund von Verhaltensanpassung der Fahrer recherchiert. Daten mehrerer empirischer Quellen, z. B. Fahrverhalten, Blickbewegungen, Videomitschnitte und subjektive Daten, dienten zur Datenauswertung zweier Fahrerassistenzsysteme. Im Rahmen einer Feldstudie zeigte sich, dass Bremskapazitäts-Kollisionswarnungen zur sofortigen visuellen Aufmerksamkeitsverteilung zur Fahrbahn und zum Bremsen führen, Fahrer allerdings ihre Reaktion anpassen, indem sie zur Warnanzeige im Kombinationsinstrument schauen. Ein anderes Phänomen der Verhaltensanpassung wurde in einer Fahrsimulatorstudie zur Untersuchung eines Ablenkungswarnsystems, das dabei hilft, die Blicke von Autofahrern stets auf die Straße zu lenken, gefunden. Diese Ergebnisse weisen nach, dass solch ein System unterstützt, achtsamer zu sein und sicherer zu fahren. Die vorliegenden Befunde wurden im Zusammenhang mit Vorbefunden zur Verhaltensanpassung an Fahrerassistenzsysteme, zur Fahrerkalibrierung und zur Akzeptanz von Technik diskutiert. Basierend auf den gewonnenen Erkenntnissen wurde ein neues Vorgehen zur Untersuchung von Mensch-Maschine-Interaktion eingeführt. Aufbauend auf den Resultaten der vorliegenden Arbeit wurde ein ganzheitliches Modell zur Fahrsicherheit und zum Sicherheitsmanagement, das DO-IT BEST Feedback Modell, entwickelt. Das Modell bezieht sich auf multitemporale Fahrer-Feedbackstrategien und soll somit einen entscheidenden Beitrag zur Verkehrssicherheit und zum Umgang mit Fahrerunaufmerksamkeit leisten. Die zentralen Beiträge dieser Arbeit sind die Gewinnung neuer Erkenntnisse in den Bereichen der Angewandten Psychologie und der Verkehrspsychologie in den Kontexten der Aufmerksamkeitsverteilung und der Verbesserung der Gestaltung von Fahrerassistenzsystemen, fokussierend auf den Bediener. Die Dissertation besteht aus einem Einleitungsteil, drei empirischen Beiträgen sowie drei Buchkapiteln und einer abschließenden Zusammenfassung.
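The abstract does not describe how the Visual Distraction Alert System decides when to warn, but such systems are often conceptualized as an attention buffer that drains during off-road glances and refills while the eyes are on the road. The sketch below is a purely hypothetical illustration of that idea; none of the constants or logic are taken from the thesis or from any specific product.

```python
def distraction_alert(gaze_on_road, dt=0.1, buffer_max=2.0, recovery_rate=0.2):
    """Yield a warning flag for each gaze sample using a simple attention buffer.

    gaze_on_road: iterable of booleans, one per sample (True = eyes on the
                  forward roadway). The buffer drains by one second per second
                  of off-road glance and refills at recovery_rate per second
                  while on the road; a warning is issued whenever it is empty.
    """
    buffer = buffer_max
    for on_road in gaze_on_road:
        if on_road:
            buffer = min(buffer_max, buffer + recovery_rate * dt)
        else:
            buffer -= dt
        warn = buffer <= 0.0          # empty buffer -> issue a distraction warning
        buffer = max(0.0, buffer)     # keep the buffer non-negative
        yield warn

# Example: 1 s on the road followed by 3 s off the road, sampled at 10 Hz:
# warnings = list(distraction_alert([True] * 10 + [False] * 30))
```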
