81

Vestibular contributions to target-directed reaching movements

Brunke, Kirsten Marie. 2006
Thesis (M.S.)--University of British Columbia, 2006. / Includes bibliographical references (leaves 40-42). Also available online (PDF file) by subscription to the set or by purchase of the individual file.
82

Incorporating Auditory Models in Speech/Audio Applications

January 2011
abstract: Following the success of incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes solutions to the high-complexity issues that hinder use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows a time/frequency domain representation to be synthesized from its equivalent auditory model output. The first problem addresses the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database; experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm developed specifically for sinusoidal signals employs the proposed auditory-pattern-combining technique together with a look-up table of representative auditory patterns. The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming that auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of the auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures that a time/frequency mapping corresponding to the estimated auditory representation can be obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task. / Dissertation/Thesis / Ph.D. Electrical Engineering 2011
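The frequency-pruning idea mentioned in this abstract can be illustrated with a toy loudness computation. The sketch below is not the dissertation's pruned auditory model: it assumes a crude equal-width filterbank and a Zwicker-style compressive exponent (0.23), and the helper names (band_energies, loudness_full, loudness_pruned) and the pruning threshold are invented for illustration. It only shows the general idea of skipping low-energy frequency regions to trade a small loudness error for fewer band computations.

import numpy as np

def band_energies(signal, n_bands=40):
    # Crude stand-in for an auditory filterbank: split the power spectrum
    # into equal-width bands and return the energy in each band.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

def loudness_full(signal):
    # Toy loudness: Zwicker-style compressive power law summed over all bands.
    e = band_energies(signal)
    return np.sum(e ** 0.23)

def loudness_pruned(signal, rel_threshold=1e-4):
    # Frequency pruning: skip bands whose energy is far below the spectral
    # peak, trading a small loudness error for fewer band computations.
    e = band_energies(signal)
    keep = e > rel_threshold * e.max()
    return np.sum(e[keep] ** 0.23), int(keep.sum())

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    sig = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(fs)
    full = loudness_full(sig)
    pruned, n_kept = loudness_pruned(sig)
    print(f"relative loudness error: {abs(full - pruned) / full:.2%}, bands kept: {n_kept}/40")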
83

An investigation into the use of artificial intelligence techniques for the analysis and control of instrumental timbre and timbral combinations

Antoine, Aurélien January 2018
Researchers have investigated harnessing computers as a tool to aid in the composition of music for over 70 years. For the most part, such research has focused on creating algorithms to work with pitch and rhythm, which has resulted in a selection of sophisticated systems. Although the musical possibilities of these systems are vast, they do not directly consider another important characteristic of sound: timbre. Timbre can be defined as all the attributes of a sound, except pitch, loudness and duration, which allow us to distinguish and recognize that two sounds are dissimilar. This feature plays an essential role in combining instruments, as it involves mixing instrumental properties to create unique textures conveying specific sonic qualities. Within this thesis, we explore techniques for the analysis and control of instrumental timbre and timbral combinations. The thesis begins by investigating the link between musical timbre, auditory perception and psychoacoustics for sounds emerging from instrument mixtures. This led to the choice of verbal descriptors of timbral qualities to represent the auditory perception of instrument-combination sounds. The thesis therefore reports on the development of methods and tools designed to automatically retrieve and identify perceptual qualities of timbre within audio files, using specific musical acoustic features and artificial intelligence algorithms. Several perceptual experiments were conducted to evaluate the correlation between the selected acoustic cues and human perception, and the results of these evaluations confirmed the potential and suitability of the presented approaches. Finally, these developments helped in designing a perceptually oriented generative system that harnesses aspects of artificial intelligence to combine sampled instrument notes. The findings of this exploration demonstrate that an artificial intelligence approach can help to harness the perceptual aspects of instrumental timbre and timbral combinations. This investigation suggests that established methods of measuring timbral qualities, based on a diverse selection of sounds, also work for sounds created by combining instrument notes. The development of tools designed to automatically retrieve and identify perceptual qualities of timbre also helped in designing a comparative scale that goes towards standardising metrics for comparing timbral attributes. Finally, this research demonstrates that perceptual characteristics of timbral qualities, represented by verbal descriptors, can be implemented in an intelligent computing system designed to combine sampled instrument notes conveying specific perceptual qualities.
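The abstract refers to "specific musical acoustic features" without naming them; a common example of such a timbre feature is the spectral centroid, which correlates with perceived brightness. The sketch below is only an illustration of that generic feature, not necessarily one used in the thesis; the function name spectral_centroid and the test tones are invented for the example.

import numpy as np

def spectral_centroid(signal, fs):
    # Amplitude-weighted mean frequency of the magnitude spectrum,
    # a standard acoustic correlate of perceived "brightness".
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs
    # Two tones with the same fundamental but different upper-partial weighting:
    dull = np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 1760 * t)
    bright = np.sin(2 * np.pi * 220 * t) + 0.9 * np.sin(2 * np.pi * 1760 * t)
    print(f"dull:   centroid = {spectral_centroid(dull, fs):.0f} Hz")
    print(f"bright: centroid = {spectral_centroid(bright, fs):.0f} Hz")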
84

Psychophysical and Neural Correlates of Auditory Attraction and Aversion

January 2014
abstract: This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support the view that consonance is an important dimension of sound, processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and that consonance is a principal feature of pleasing auditory stimuli. / Dissertation/Thesis / Masters Thesis Psychology 2014
85

Estudo das propriedades acústicas e psicofísicas da cóclea / Study on the acoustical and psychophysical properties of the cochlea

Rebeca Bayeh 27 March 2018
The present work presents a literature review of some of the main acoustic and psychoacoustic concepts associated with human hearing, from Helmholtz to the present day, focusing on the cochlea and connecting the fields of physics, neuroscience and computer music, together with applications directly derived from this review. Starting from the sound pressure distribution in the external acoustic meatus calculated by Couto (COUTO, 2000), the relative sound pressure and the acoustic impedance along the cochlea were computed. An algorithm for minimizing sensory dissonance based on the Cambridge and Munich critical-band models is also presented.
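The dissonance-minimization algorithm is only named in the abstract, so the sketch below is a generic illustration rather than the thesis's method: it scores the sensory dissonance of a set of partials with a Plomp-Levelt-style roughness curve whose frequency axis is scaled by the "Cambridge" (Glasberg and Moore ERB) critical bandwidth; the Munich (Bark) bandwidth could be substituted in erb(). The function names and the 3.5/5.75 decay constants (a common published parameterization of the curve) are assumptions of this example.

import numpy as np

def erb(f):
    # "Cambridge" (Glasberg & Moore) equivalent rectangular bandwidth in Hz.
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def pair_dissonance(f1, a1, f2, a2):
    # Plomp-Levelt-style roughness of two partials; the separation is expressed
    # in critical bandwidths at the lower frequency, so dissonance peaks near a
    # quarter of a critical band and decays for wider separations.
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    x = (f_hi - f_lo) / erb(f_lo)
    return a1 * a2 * (np.exp(-3.5 * x) - np.exp(-5.75 * x))

def total_dissonance(freqs, amps):
    # Sum the pairwise dissonance over all partials of a complex sound.
    return sum(pair_dissonance(freqs[i], amps[i], freqs[j], amps[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

if __name__ == "__main__":
    base = 261.6  # C4
    for name, ratio in [("tritone", 2 ** 0.5), ("perfect fifth", 1.5)]:
        freqs = [base * k for k in range(1, 6)] + [base * ratio * k for k in range(1, 6)]
        amps = [1.0 / k for k in range(1, 6)] * 2
        print(f"{name}: dissonance = {total_dissonance(freqs, amps):.3f}")

A minimization algorithm of the kind the abstract describes would search over tunings or partial amplitudes to reduce a score of this type.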
86

Estudo das emissões otoacústicas e dos potenciais auditivos evocados de tronco cerebral em pacientes com zumbido. / Study of otoacoustic emissions and auditory brainstem response in patients with tinnitus

Alessandra Giannella Samelli 05 December 2000
Tinnitus can be described as the perception of a sound or noise in the absence of any external acoustic stimulation. Although frequent, tinnitus still raises many questions regarding its origin and its treatment in all cases. The aims of this work were to study the suppression of transient otoacoustic emissions under contralateral stimulation, as well as the latencies, interpeak intervals and wave amplitudes of the auditory brainstem response, in patients with tinnitus and sensorineural hearing loss possibly caused by prolonged exposure to high sound pressure levels. Thirty subjects with tinnitus (group Z) and thirty subjects without tinnitus (group C) were evaluated; both groups were male and matched for age, noise exposure time and degree of high-frequency sensorineural hearing loss. The two groups proved homogeneous with respect to age, noise exposure time and hearing thresholds. Suppression of the emissions was weaker in group Z, with a statistically significant difference only for the left ear and a trend toward significance for the right ear. For the auditory brainstem response, group Z showed increased latencies and reduced amplitudes, with significant results for the wave III latency in the right ear and for the wave I and III latencies in the left ear. Based on these findings, it was hypothesized that in patients with tinnitus the medial olivocochlear efferent auditory system may be less efficient, since emission suppression was weaker in these patients. In addition, an alteration of brainstem activity may be present in individuals with tinnitus, as suggested by the prolonged latencies and reduced amplitudes in that group.
87

Activation cérébrales liées aux acouphènes / Tinnitus related Cerebral Activations

Gentil, Anthony 16 December 2016
Chronic subjective tinnitus affects 10 to 15% of the population of industrialized countries. It can cause visible changes in behavior, a marked deterioration in quality of life and, in extreme cases, post-traumatic stress disorder or clinical depression (Berrios & Rose, 1992; Berrios et al., 1988) that can lead to suicidal thoughts (Dobie, 2003). Tinnitus may originate from different relays of the peripheral or central auditory pathways; however, the great majority of chronic tinnitus is associated with hearing loss, presbycusis or noise exposure. Indeed, approximately 90% of chronic tinnitus is associated with hearing loss (Nicolas-Puel et al., 2006). Tinnitus can cause significant changes in the functioning of certain brain networks: even when the lesion is peripheral, the resulting perceptual abnormality can ultimately alter the activity of brain structures through plasticity. Unfortunately, given the discrepancies between the results obtained by the various studies of tinnitus subjects, no consensus has yet been reached on a model of the pathophysiology of tinnitus associated with hearing loss. The general objective of this work is therefore to develop a robust multimodal protocol to detect cerebral activation abnormalities in subjects suffering from unilateral tinnitus associated with hearing loss, compared with normal-hearing subjects. This protocol includes: a paradigm allowing functional acquisitions with and without perceived tinnitus, exploiting residual inhibition of the tinnitus; a resting-state fMRI paradigm; an fMRI paradigm with sound stimulation; and a measurement of regional cerebral blood flow. Our study highlighted the involvement of the default mode network, and in particular the precuneus, in the perception of tinnitus. Indeed, the results show, across several modalities, that this network is hypo-activated in subjects with tinnitus associated with hearing loss. The hypoactivity of these regions could be temporarily reduced by masking the tinnitus with sound stimulation. In addition, residual inhibition of the tinnitus leads to hyper-activation of default mode network regions, in particular the precuneus.
88

Timing cues for azimuthal sound source localization / Indices temporels pour la localisation des sources sonores en azimuth

Benichoux, Victor 25 November 2013
Azimuthal sound localization in many animals relies on processing differences in the time of arrival of low-frequency sounds at the two ears: the interaural time differences (ITDs). In some species this cue has been shown to depend on the spectrum of the signal emitted by the source, yet this variation is often discarded, as humans and animals are assumed to be insensitive to it. The purpose of this thesis is to assess this dependency using acoustical techniques, and to explore the consequences of this additional complexity for the neurophysiology and psychophysics of sound localization. In the vicinity of a rigid sphere the sound field is diffracted, leading to frequency-dependent wave propagation regimes. Therefore, when the head is modeled as a rigid sphere, the ITD for a given position is a frequency-dependent quantity. I show that this is indeed reflected in human ITDs by studying acoustical recordings for a large number of human and animal subjects. Furthermore, I explain the effect of this variation at two scales: locally in frequency, the variation of the ITD introduces different envelope and fine-structure delays in the signals reaching the ears; and globally, the ITD of low-frequency sounds is generally larger than that of high-frequency sounds coming from the same position. In a second part, I introduce and discuss the current views on the binaural ITD-sensitive system in mammals. I show that the heterogeneous responses of such cells are well predicted when it is assumed that they are tuned to frequency-dependent ITDs, and I discuss how these cells can be tuned to a particular position in space regardless of the frequency content of the stimulus. Overall, I argue that the available data in mammals are consistent with the hypothesis that cells are tuned to a single position in space. Finally, I explore the impact of the frequency dependence of ITD on human behavior using psychoacoustical techniques, in which subjects are asked to match the lateral position of sounds with different frequency content. The results suggest that humans perceive sounds with different spectra at the same position provided that they have different ITDs, as predicted from the acoustical data, and the extent to which this occurs is well predicted by a spherical model of the subject's head. Combining approaches from different fields, I show that the binaural system is remarkably adapted to the cues available in its environment. This localization strategy used by animals can be a great inspiration for the design of robotic systems.
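The frequency dependence of the rigid-sphere ITD described in this abstract can be illustrated numerically with the two standard approximations for a spherical head: Woodworth's high-frequency formula and Kuhn's low-frequency limit. The sketch below is not the thesis's acoustic analysis; the head radius A is an assumed value, and the function names are invented for the example.

import numpy as np

A = 0.0875   # assumed head radius in metres
C = 343.0    # speed of sound in air, m/s

def itd_low_freq(azimuth_deg, a=A, c=C):
    # Low-frequency limit for a rigid sphere (Kuhn, 1977): ITD ~ 3(a/c) sin(theta).
    theta = np.radians(azimuth_deg)
    return 3.0 * (a / c) * np.sin(theta)

def itd_high_freq(azimuth_deg, a=A, c=C):
    # High-frequency (Woodworth) formula: ITD ~ (a/c) (theta + sin(theta)).
    theta = np.radians(azimuth_deg)
    return (a / c) * (theta + np.sin(theta))

if __name__ == "__main__":
    for az in (15, 30, 60, 90):
        lo = itd_low_freq(az) * 1e6
        hi = itd_high_freq(az) * 1e6
        print(f"azimuth {az:2d} deg: low-freq ITD = {lo:6.1f} us, high-freq ITD = {hi:6.1f} us")

For every azimuth in this range the low-frequency value exceeds the high-frequency one, matching the frequency dependence reported in the abstract.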
89

Audition et démasquage binaural chez l'homme / Binaural hearing and binaural masking release in human

Lorenzi, Antoine 14 December 2016
Background: Binaural unmasking is an essential process for understanding in noisy environments. This mechanism is thought to involve the comparison of temporal and spectral cues along the auditory pathways. However, there is no real consensus on whether binaural masking release is processed at a subcortical and/or cortical level. The purpose of this work is to investigate the temporal and spectral cues of binaural unmasking through a perceptual study and then through an electroencephalographic (EEG) study. Materials and methods: Normal-hearing listeners were evaluated in a perceptual study to estimate the magnitude of binaural unmasking as a function of 1) the bandwidth of the contralateral noise (1 octave, 3 octaves or broadband), 2) the temporal coherence of the bilateral noises (correlation of 0 or 1) and 3) the frequency of the target stimuli (0.5, 1, 2 and 4 kHz). Binaural unmasking was then evaluated with EEG by studying 1) early latencies (<10 ms, PEA-P), 2) late latencies (<50 ms, PEA-T) and 3) the mismatch wave (PEA-MMN). For these three EEG studies, the influence of the temporal coherence of the bilateral noises was investigated. Results: The perceptual study shows increasing binaural unmasking as the bandwidth of the contralateral noise increases. Adding an uncorrelated contralateral noise (correlation = 0) results in a 1.28 dB detection improvement regardless of the frequency of the target stimuli (antimasking), whereas adding a correlated contralateral noise (correlation = 1) produces a detection improvement that grows as the frequency of the target stimuli decreases (unmasking): 0.97 dB at 4 kHz and 9.25 dB at 0.5 kHz. In the PEA-P recordings, the latencies of waves III and V are shortened (by ≈0.1 ms) when a correlated or uncorrelated contralateral noise is added. In the PEA-T recordings, the amplitudes of the P1 and N1 waves and of the P1N1 and N1P2 complexes increase when a correlated or uncorrelated contralateral noise is added. Finally, the amplitude of the MMN is larger when the added contralateral noise is correlated rather than uncorrelated. Conclusion: The perceptual study shows the importance of spectral cues (antimasking) and temporal cues (unmasking) in improving the perception of an initially masked signal. The EEG study suggests subcortical processing influenced only by spectral cues (antimasking) and more cortical processing influenced by temporal cues (unmasking).
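As a small illustration of the stimulus manipulation described above (bilateral noises with an interaural correlation of exactly 0 or 1), the hypothetical helper below generates the two conditions; it is not the stimulus-generation code used in the study, and the function name and parameters are invented for the example.

import numpy as np

def bilateral_noise(n_samples, correlation, seed=None):
    # Gaussian noise pair with interaural correlation of exactly 1
    # (identical noise at both ears) or 0 (independent noises).
    rng = np.random.default_rng(seed)
    left = rng.standard_normal(n_samples)
    if correlation == 1:
        right = left.copy()
    elif correlation == 0:
        right = rng.standard_normal(n_samples)
    else:
        raise ValueError("only correlations of 0 or 1 are used here")
    return left, right

if __name__ == "__main__":
    for rho in (0, 1):
        left, right = bilateral_noise(48000, rho, seed=0)
        print(f"requested correlation {rho}: measured {np.corrcoef(left, right)[0, 1]:+.3f}")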
90

The Effect of Three Different Levels of Skill Training in Musical Timbre Discrimination on Alphabet Sound Discrimination in Pre-Kindergarten and Kindergarten Children

Battle, Julia Blair 05 1900
The purpose of this study was to investigate the effects of three different levels of skill training in musical timbre discrimination on alphabet sound discrimination in pre-kindergarten and kindergarten children. The findings of prior investigations indicated similarities between aural music and language perception. Psychoacoustic and neurological findings have reported the discrimination of alphabet sound quality and musical timbre to be similar perceptual functions and have provided, through imaging technology, physical evidence of music learning simultaneously stimulating non-musical areas of the brain. This investigator hypothesized that timbre discrimination, the process of differentiating the characteristic quality of one complex sound from another of identical pitch and loudness, might be a common factor between music and alphabet sound discrimination. Existing studies had not explored this relationship or the effects of directly teaching for transfer on the generalization of learning between skills used for the discrimination of musical timbre and alphabet sounds. Variables identified as similar from the literature were the discrimination of same-different musical and alphabet sounds, visual recognition of musical and alphabet pictures as sound sources, and association of alphabet and musical sounds with matching symbols. A randomized pre-post test design with intermittent measures was used to implement the study. There were five instructional groups: Groups 1, 2 and 3 received one, two and three levels of skill instruction, respectively; Group 4 received three levels of skill training with instruction for transfer; and Group 5 received traditional timbre instruction. Students were measured at the 5th (Level 1), 10th (Level 2), 14th (Level 3) and 18th (delayed re-test) weeks of instruction. Results revealed that timbre discrimination instruction had a significant impact on alphabet sound-symbol discrimination achievement in pre-kindergarten and kindergarten children, and that different levels of timbre instruction had different degrees of effectiveness on alphabet sound discrimination. Students who received three levels of timbre discrimination instruction and were taught to transfer skill similarities from musical timbre discrimination to alphabet sound discrimination were significantly more proficient in alphabet sound-symbol discrimination than those who had not received such instruction. Posttest comparisons indicated that skill relationships were strengthened by instruction for transfer, and transfer strategies had a significant impact on the retention of newly learned skills over time.
