11

Selective listening processes in humans

Tan, Michael Nicholas January 2009 (has links)
This thesis presents data which support cochlear involvement in attentional listening. It has previously been proposed that the descending auditory pathways, in particular the medial olivocochlear system, play a role in reducing the cochlea's response to noise in a process known as antimasking. This hypothesis was investigated in human subjects for its potential impact on the detection of signals in noise following auditory cues. Three experimental chapters (Chapters 3, 4 and 5) are described in this thesis. Experiments in the first chapter measured the effect of acoustic cues on the detection of subsequent tones of equal or different frequency. Results show that changes in the ability to detect signals following auditory cues are the result of both enhanced detection for tones at the cued frequency and suppressed detection for tones at non-cued frequencies. Both effects were measured to be on the order of ~3 dB. This thesis argues that the enhancement of a cued tone is the implicit result of an auditory cue, while suppression of a probe tone results from the expectation of a specific frequency based on accumulated experience of a listening task. The properties of enhancement support the antimasking hypothesis; however, the physiological mechanism for suppression is uncertain. In the second experimental chapter, auditory cues were replaced with visual cues (representing musical notes) whose pitch corresponded to the target frequency, and were presented to musician subjects who possessed absolute or relative pitch. Results from these experiments showed that a visual cue produces the same magnitude of enhancement as an acoustic cue. This finding demonstrates a cognitive influence on the detection of tones in noise, and implicates higher centres such as those involved in template-matching or top-down control of the efferent pathways.
The final experimental chapter repeated several of the experiments from the first chapter on subjects with various forms of hearing loss. The results indicate that subjects with an outer hair cell deficit (concomitant with a sensorineural hearing loss) do not exhibit an enhancement of cued frequencies or a suppression of unexpected frequencies to the same extent as normal-hearing subjects. In addition, one subject with a long-standing conductive hearing loss (with normal cochlear function) produced an enhancement equivalent to that of the normal-hearing subjects. These findings also support the role of the medial olivocochlear system and the outer hair cells in antimasking. It is the conclusion of this thesis that enhancement most likely results from a combination of changes in receptive field characteristics at various levels of the auditory system. The medial olivocochlear system is likely to be involved in unmasking a portion of the signal at the cochlear level, and may be influenced by both acoustic reflex pathways and higher centres of the brain.
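The ~3 dB enhancement and suppression effects reported above can be put in perspective with a quick conversion between decibels and intensity ratios. This is an illustrative sketch of the standard dB arithmetic, not code from the thesis:

```python
import math

def intensity_ratio_to_db(ratio):
    """Convert an intensity ratio to a level change in decibels."""
    return 10 * math.log10(ratio)

def db_to_intensity_ratio(db):
    """Convert a level change in decibels to an intensity ratio."""
    return 10 ** (db / 10)

# A ~3 dB improvement in detection threshold corresponds to roughly
# a doubling of the effective signal-to-noise intensity ratio.
print(round(db_to_intensity_ratio(3), 2))    # 2.0
print(round(intensity_ratio_to_db(2.0), 2))  # 3.01
```

In other words, a 3 dB antimasking effect approximately halves the noise power a cued signal must compete against.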
12

Relationship between Subjective and Objective Measures of Attention in a Clinical Population of Children and Adolescents

Silk, Eric Edward 01 January 2012 (has links)
Attention problems can pose a serious challenge to the academic progress of children and adolescents. Currently, several different assessment methods are utilized in the clinical diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD). Subjective and objective assessment measures purport to measure similar constructs of inattention, hyperactivity, and impulsivity. The present study examines the degree of correlation between the constructs of attention problems, hyperactivity/impulsivity, and signal detection on the Conners' Continuous Performance Test II Version 5 (CPT II) and a parent-report measure of attention problems, hyperactivity/impulsivity, aggression, emotional problems, and learning problems in a general clinical population of children and adolescents. This study also measures the correlation between these measures and the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-III COG) Auditory Attention subtest. No significant correlations were found among the CPT II errors of omission or commission scores and attention problems or hyperactivity/impulsivity. No significant correlations were found among attention problems or hyperactivity/impulsivity and CPT II reaction time (RT), variability, or attentiveness. No significant correlations were found among the CPT II errors of omission or commission scores and emotional problems, aggression, or learning problems. No significant correlations were found among the CPT II errors of omission score, commission score, or RT and the WJ-III COG Auditory Attention subtest. A significant negative correlation was found between the CPT II variability score and the WJ-III COG Auditory Attention subtest. A significant positive correlation was found between the CPT II attentiveness score and the WJ-III COG Auditory Attention subtest. No significant correlations were found between any of the parent-report measures of attention problems and the WJ-III COG Auditory Attention subtest.
In a canonical correlation analysis, there were high loadings on attention problems and hyperactivity/impulsivity in the parent-measure set and on commission errors and RT in the CPT II set. In the multiple-imputation analysis, a modest loading was also found for aggression problems in the parent-measure set. These findings support the overall conclusion that the CPT II does not generally relate to the parent-report measures, indicating that there is little meaningful relationship between these two instruments, both of which are used clinically to assess attention problems.
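The correlational results above rest on Pearson product-moment correlations between score sets. A minimal plain-Python sketch of that statistic, with invented scores (not the study's data) chosen to mirror the reported negative variability-attention relationship:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical scores: higher CPT II variability paired with lower
# auditory-attention subtest scores yields a strong negative r.
variability = [40, 45, 50, 55, 60, 65]
attention   = [30, 27, 26, 22, 20, 18]
print(round(pearson_r(variability, attention), 2))  # -0.99
```

A significance test (e.g. a t-test on r) would be layered on top of this in a real analysis.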
13

Decoding spatial location of attended audio-visual stimulus with EEG and fNIRS

Ning, Matthew H. 17 January 2023 (has links)
When analyzing complex scenes, humans often focus their attention on an object at a particular spatial location in the presence of background noise and irrelevant visual objects. The ability to decode the attended spatial location would facilitate brain-computer interfaces (BCI) for complex scene analysis. Here, we tested two different neuroimaging technologies and investigated their capability to decode audio-visual spatial attention in the presence of competing stimuli from multiple locations. For functional near-infrared spectroscopy (fNIRS), we targeted the dorsal frontoparietal network, including the frontal eye field (FEF) and intra-parietal sulcus (IPS), as well as the superior temporal gyrus/planum temporale (STG/PT). All of these regions were shown in previous functional magnetic resonance imaging (fMRI) studies to be activated by auditory, visual, or audio-visual spatial tasks. We found that fNIRS provides robust decoding of attended spatial locations for most participants and correlates with behavioral performance. Moreover, we found that FEF makes a large contribution to decoding performance. Surprisingly, performance was significantly above chance level 1 s after cue onset, well before the peak of the fNIRS response. For electroencephalography (EEG), while there are several successful EEG-based algorithms, to date all of them have focused exclusively on the auditory modality, where eye-related artifacts are minimized or controlled. Successful integration into more ecologically typical usage requires careful consideration of eye-related artifacts, which are inevitable. We showed that fast and reliable decoding can be done with or without an ocular-artifact removal algorithm. Our results show that EEG and fNIRS are promising platforms for compact, wearable technologies that could be applied to decode attended spatial location and reveal contributions of specific brain regions during complex scene analysis.
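Decoding an attended location from neural features is, at its core, a classification problem. A toy nearest-centroid sketch on simulated two-channel feature vectors illustrates the idea; the feature values, labels, and classifier choice here are all illustrative assumptions, not the study's actual pipeline:

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def decode_location(train, test_x):
    """Classify test_x by the nearest class centroid (Euclidean distance)."""
    by_label = {}
    for features, label in train:
        by_label.setdefault(label, []).append(features)
    centroids = {label: centroid(vs) for label, vs in by_label.items()}
    return min(centroids, key=lambda lab: math.dist(centroids[lab], test_x))

# Simulated responses from two channels for 'left' vs. 'right' attention.
train = [
    ([1.0, 0.2], "left"),  ([0.9, 0.3], "left"),  ([1.1, 0.1], "left"),
    ([0.2, 1.0], "right"), ([0.3, 0.9], "right"), ([0.1, 1.1], "right"),
]
print(decode_location(train, [0.95, 0.25]))  # left
print(decode_location(train, [0.15, 1.05]))  # right
```

Real fNIRS/EEG decoders would add preprocessing, many channels, and cross-validation, but the train-centroids-then-classify structure is the same.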
14

Sound Reconstruction from Human Brain Activity

Park, Jong-Yun 25 September 2023 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / Degree No. Kō 24932 / Jōhaku No. 843 / Shinsei||Jō||141 (University Library) / Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University / (Chief examiner) Professor Yukiyasu Kamitani, Professor Shin'ya Nishida, Associate Professor Kazuyoshi Yoshii / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
15

Sensory-processing sensitivity predicts fatigue from listening, but not perceived effort, in young and older adults

McGarrigle, Ronan, Mattys, S. 24 October 2022 (has links)
Purpose: Listening-related fatigue is a potential negative consequence of challenges experienced during everyday listening, and may disproportionately affect older adults. Contrary to expectation, we recently found that increased reports of listening-related fatigue were associated with better performance on a dichotic listening task (McGarrigle et al., 2021a). However, this link was found only in individuals who reported heightened sensitivity to a variety of physical, social, and emotional stimuli (i.e., increased 'sensory-processing sensitivity'; SPS). The current study examined whether perceived effort may underlie the link between performance and fatigue. Methods: 206 young adults aged 18-30 years (Experiment 1) and 122 older adults aged 60-80 years (Experiment 2) performed a dichotic listening task and were administered a series of questionnaires, including the NASA Task Load Index of perceived effort, the Vanderbilt Fatigue Scale (measuring daily-life listening-related fatigue), and the Highly Sensitive Person Scale (measuring SPS). Both experiments were completed online. Results: SPS predicted listening-related fatigue, but perceived effort during the listening task was not associated with SPS or listening-related fatigue in either age group. We were also unable to replicate the interaction between dichotic listening performance and SPS in either group. Exploratory analyses revealed contrasting effects of age: older adults found the dichotic listening task more effortful but reported lower overall fatigue. Conclusions: These findings suggest that SPS is a better predictor of listening-related fatigue than performance or effort ratings on a dichotic listening task. SPS may be an important factor in determining an individual's likelihood of experiencing listening-related fatigue irrespective of hearing or cognitive ability. / This research was supported by an ESRC New Investigator Award (ES/R003572/1) to Ronan McGarrigle.
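"SPS predicted listening-related fatigue" describes a regression-style relationship between one score and another. A minimal ordinary-least-squares fit in plain Python illustrates what such a prediction means; the numbers are invented for the sketch, not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical SPS scores and fatigue ratings with a positive trend:
# higher sensory-processing sensitivity, higher reported fatigue.
sps     = [1, 2, 3, 4, 5]
fatigue = [2, 4, 6, 8, 10]
slope, intercept = fit_line(sps, fatigue)
print(slope, intercept)  # 2.0 0.0
```

A positive slope is the sketch's analogue of SPS being a predictor of fatigue; the published analysis would report coefficients with standard errors rather than a bare fit.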
16

Characteristics of the crossmodal automatic attentional effect.

Righi, Luana Lira 13 December 2012 (has links)
This work examined the possible contributions of the signal-to-noise ratio, the asynchrony between the onsets of the cue and the target (stimulus onset asynchrony, SOA), and the type of task performed by the observer to the manifestation of crossmodal attentional effects. Experiments 1 and 2 showed that the crossmodal attentional effect appears in the presence of external visual noise, but not in its absence, at an SOA of 133 ms. However, Experiment 3 showed that when the SOA is longer than that used in the previous experiments, the crossmodal effect appears even in the absence of external visual noise. Finally, Experiment 4 showed that in a localization task the crossmodal effect appears even at a short SOA (133 ms). Taken together, the results indicate that crossmodal attentional effects appear both with and without visual noise, and suggest that the mechanism for discriminating the target's frequency takes longer to complete than the mechanism for localizing the target.
18

Speech-in-Noise Processing in Adults with Autism Spectrum Disorder

Anderson, Chelsea D 08 1900 (has links)
Individuals diagnosed with autism spectrum disorder (ASD) often experience difficulty during speech-in-noise (SIN) processing tasks. However, it remains unclear how behavioral and cortical mechanisms of auditory processing explain variability in SIN performance in adults with ASD and their neurotypical counterparts. The present research explored variability in SIN performance as it relates to behavioral, perceptual, and objective measures of auditory processing. Results showed significant differences between groups in SIN thresholds. In addition, neurotypicals outperformed the ASD group on measures of sustained auditory attention characterized by reduced impulsivity, increased inhibition, and increased selective auditory attention. Individuals with ASD showed decreased acceptance of noise compared to neurotypical peers. Overall, the results highlight auditory processing deficits in individuals with ASD that contribute to SIN performance.
19

The Development of Auditory “Spectral Attention Bands” in Children

Youngdahl, Carla L. 15 October 2015 (has links)
No description available.
20

Auditory foreground and background decomposition: New perspectives gained through methodological diversification

Thomaßen, Sabine 11 April 2022 (has links)
A natural auditory scene contains many sound sources, each of which produces complex sounds. These sounds overlap and reach our ears at the same time, but they also change constantly. To still be able to follow the sound source of interest, the auditory system must decide which source each individual tone belongs to and integrate this information over time. For well-controlled investigations of the mechanisms behind this challenging task, sound sources need to be simulated in the lab. This is mostly done with sine tones arranged in certain spectrotemporal patterns. The vast majority of studies simply interleave two sub-sequences of sine tones. Participants report how they perceive these sequences, or they perform a task whose performance measure provides hints about how the scene was perceived. While many important insights have been gained with this procedure, the questions that can be addressed with it are limited, and the commonly used response methods are partly susceptible to distortions or provide only indirect measures. The present thesis increased the complexity of the tone sequences and the diversity of perceptual measures used for investigations of auditory scene analysis. These changes are intended to open up new questions and give new perspectives on our knowledge about auditory scene analysis. Specifically, the thesis established three-tone sequences as a tool for targeted investigations of perceptual foreground and background processing in complex auditory scenes. In addition, it modified an already established approach for indirect measures of auditory perception in a way that enables detailed and unambiguous investigations of background processing. Finally, it developed a new response method, namely a no-report method for auditory perception that uses eye movements as a measurement tool and might also serve to validate subjective report measures.
With the aid of all these methodological improvements, the current thesis shows that auditory foreground formation is more complex than previously assumed, since listeners hold more than one auditory source in the foreground without being forced to do so. In addition, it shows that the auditory system prefers a limited number of specific source configurations, probably to avoid combinatorial explosion. Finally, the thesis indicates that the formation of the perceptual background is also quite complex, since the auditory system holds perceptual organization alternatives in parallel that had been assumed to be mutually exclusive. Thus, both the foreground and the background follow different rules than expected based on two-tone sequences. However, one finding seems to be true for both kinds of sequences: the impact of the tone pattern on subjective perception is marginal, be it in two- or three-tone sequences. Regarding the no-report method for auditory perception, the thesis shows that eye movements and the reported auditory foreground formations were in good agreement, and this approach indeed has the potential to become a first no-report measure for auditory perception.
Contents: Abstract; Acknowledgments; List of Figures; List of Tables; Collaborations; 1 General Introduction (1.1 The auditory foreground: attention and auditory scene analysis, investigating auditory scene analysis with two-tone sequences, multistability; 1.2 The auditory background: investigating auditory background processing; 1.3 Measures of auditory perception: report procedures, performance-based measures, psychophysiological measures; 1.4 Summary and goals of the thesis); 2 The auditory foreground (Study 1: Foreground formation in three-tone sequences; Study 2: Pattern effects in three-tone sequences; Study 3: Pattern effects in two-tone sequences); 3 The auditory background (Study 4: Background formation in three-tone sequences); 4 Audio-visual coupling for investigations on auditory perception (Study 5: Using binocular rivalry to tag auditory perception); 5 General Discussion; 6 Conclusions; References.
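The stimuli described in this entry are interleaved sub-sequences of sine tones. A toy generator for such a repeating three-tone pattern can be sketched as follows; the frequencies, durations, and gap length are assumptions for illustration, not the thesis's actual stimulus parameters:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def sine_tone(freq_hz, dur_s):
    """Samples of a sine tone at the given frequency and duration."""
    n = round(SAMPLE_RATE * dur_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def silence(dur_s):
    """Silent samples of the given duration."""
    return [0.0] * round(SAMPLE_RATE * dur_s)

def triplet_sequence(freqs_hz, tone_dur_s, gap_dur_s, repeats):
    """Interleave three sine tones into a repeating A-B-C triplet pattern."""
    samples = []
    for _ in range(repeats):
        for f in freqs_hz:
            samples += sine_tone(f, tone_dur_s) + silence(gap_dur_s)
    return samples

# Example: 400/600/800 Hz triplets, 100 ms tones separated by 20 ms gaps.
seq = triplet_sequence([400, 600, 800], 0.100, 0.020, repeats=4)
print(len(seq) / SAMPLE_RATE)  # total duration in seconds: 1.44
```

Varying the frequency separation and timing of the three sub-sequences is what lets such experiments probe which tones listeners group into foreground and background streams.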
