61 |
Contextual musicality: vocal modulation and its perception in human social interaction. Leongomez, Juan David, January 2014
Music and language are both deeply rooted in our biology, but scientists have given far more attention to the neurological, biological and evolutionary roots of language than those of music. Probably partly because of this, the purpose of music, in evolutionary terms, remains a mystery. Our brain, physiology and psychology make us capable of producing and listening to music from early infancy; therefore, our biology and behaviour carry some of the clues that need to be revealed to understand what music is “for”. Furthermore, music and language have a deep relationship, particularly in terms of cognitive processing, that can provide clues about the origins of music. Non-verbal behaviours, including voice characteristics during speech, are an important form of communication that enables individual recognition and assessment of the speaker’s physical characteristics (including sex, femininity/masculinity, body size, physical strength, and attractiveness). Vocal parameters, however, can be intentionally varied, for example altering the intensity (loudness), rhythm and pitch during speech. This is classically demonstrated in infant directed speech (IDS), in which adults alter vocal characteristics such as pitch, cadence and intonation contours when speaking to infants. In this thesis, I analyse vocal modulation and its perception in human social interaction, in different social contexts such as courtship and authority ranking relationships. Results show that specific vocal modulations, akin to those of IDS, and perhaps music, play a role in communicating courtship intent. Based on these results, as well as the current body of knowledge, I then propose a model for the evolution of musicality, the human capacity to process musical information, in relation to human vocal communication. I suggest that musicality may not be limited to specifically musical contexts, and may have a role in other domains such as language, which would provide further support for a common origin of language and music. This model supports the hypothesis of a stage in human evolution in which individuals communicated using a music-like protolanguage, a hypothesis first suggested by Darwin.
|
62 |
Terapeutické postupy s akustickým nebo komunikačním základem / Therapeutical procedures based on acoustic and communicative material. Bečvářová, Jana, January 2012
The aim of this thesis was to explain the therapeutic possibilities of sound in all of its contexts. Sound is described in relation to several disciplines. Initially, sound is presented as an acoustic and psychoacoustic phenomenon, followed by a characterization of the physiology of the auditory system and findings from the psychology of music. The focus of the thesis is anchored in the chapter dedicated to the healing and corrective effects of sound, that is, music therapy. After a short historical context is presented, the characteristics and an analysis of the contemporary situation are discussed. Several types of sound - noise, music and speech - are studied with respect to their positive as well as negative influence on human mental and physical health. Current research is represented by a selection of relevant papers (n=9), which are assessed according to the criteria of credibility and methodological rigour. This aspect is also considered essential for future research on the effects of sound on human mental and physical health.
|
63 |
Détection de signaux émergents au sein d'habitacles : mesures et modélisation / Detection of emergent signals in vehicle cabins: measurements and modelling. Dubois, Françoise, 19 July 2011
La caractérisation du ressenti global du bruit intérieur d’habitacle passe par la définition des conditions conduisant à l’audibilité de ces composantes fréquentielles, émergeant du bruit de fond. Mon travail de thèse s’est attaché à décomposer les situations de masquage pouvant apparaître au sein d’automobile ou de train, en complexifiant progressivement les stimuli au cours des mesures de masquage, en laboratoire. Un certain nombre de choix méthodologiques ont dû être effectués, limitant l’étude aux sons stationnaires, sans modulation d’amplitude, sans déphasage entre les oreilles. Nous nous sommes confrontés tout d’abord à la question du mode de reproduction. Nous avons souligné les difficultés rencontrées lors de la mesure de l’étalonnage des casques d’écoute. Nous avons validé l’écoute au casque étalonné au tympan, en comparant les mesures de seuils masqués de sons purs dans un bruit large bande, à une écoute en chambre sourde, face à une enceinte monophonique. Ensuite, nous avons complexifié le contenu spectral du masque en présentant des bruits comportant des tonalités marquées. Plusieurs modèles perceptifs ont été testés de façon à prédire l’élévation des seuils mesurés. Enfin, nous avons étudié l’amélioration à la détection d’un signal multifréquentiel et développé un modèle, issu de la théorie de la détection du signal, applicable aux signaux présentant des différences en niveau entre les composantes. L’influence du bruit masquant a également été révélée par la mesure de seuils de signaux multifréquentiels dans un bruit d’habitacle automobile. Un unique modèle de détection de signaux émergents, applicable aux signaux stationnaires, a été proposé. Ces travaux ouvrent de nombreuses perspectives, comme la poursuite du travail sur les émergences multifréquentielles, la prise en compte de la relation de phase entre les oreilles, l’étude des sons non stationnaires ou les phénomènes attentionnels. / Emergent tonal components contribute to automobile and railway acoustic comfort. These signals are totally or partially masked by the background noise of car or train cabins. Determining the audibility of spectrally complex signals in a complex broadband noise masker, with or without tonalities, remains an open question and an industry need for characterizing the overall sound quality of train and car cabins. The purpose of my PhD thesis was to measure detection thresholds for tones or tone complexes masked by broadband noise, with or without pronounced tonal components. Several methodological choices had to be made, restricting the study to stationary sounds, without amplitude modulation and without inter-aural phase differences. First, different methods of sound reproduction were compared by measuring detection thresholds. We highlighted the difficulties encountered when calibrating headphones, and validated calibration at the eardrum by comparing detection thresholds of pure tones in broadband noise with those obtained in an anechoic room, in front of a monophonic loudspeaker. Then, masking thresholds of pure tones in the presence of maskers with pronounced tonal components were measured, and several perceptual models were tested in order to predict the elevation of the measured thresholds. Finally, we studied the improvement in detection of a multitone complex and developed a model to predict masking thresholds, based on the statistical summation model, applicable to multicomponent signals with level differences between components. The influence of tonalities was also revealed using car cabin noise. A single threshold model, applicable to stationary sounds, is proposed. Several perspectives are discussed, from time-varying signals to inter-aural differences and attention phenomena.
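Editorial note: the abstract above refers to perceptual models that predict masked thresholds of tones in broadband noise. As a rough illustration of one such model, not necessarily the one used in this thesis, the sketch below implements a basic power-spectrum model of masking: the threshold of a pure tone is predicted from the masker power falling within one equivalent rectangular bandwidth (ERB) around the tone frequency, plus a detection criterion K. The ERB formula follows Glasberg and Moore (1990); the criterion value is an assumed placeholder.

```python
import numpy as np

def erb_hz(f_hz: float) -> float:
    """Equivalent rectangular bandwidth at frequency f (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def masked_threshold_db(noise_spectrum_db, freqs_hz, tone_hz, k_db=0.0):
    """Predict the masked threshold (dB SPL) of a pure tone in a noise masker.

    noise_spectrum_db : masker power spectral density in dB SPL per Hz, sampled at freqs_hz
    tone_hz           : frequency of the probe tone
    k_db              : detection criterion (tone-to-masker ratio at threshold), assumed value
    """
    half_bw = erb_hz(tone_hz) / 2.0
    in_band = (freqs_hz >= tone_hz - half_bw) & (freqs_hz <= tone_hz + half_bw)
    # Integrate masker power over the auditory filter centred on the tone.
    density_lin = 10.0 ** (noise_spectrum_db[in_band] / 10.0)   # power per Hz
    df = np.mean(np.diff(freqs_hz))
    band_power_db = 10.0 * np.log10(np.sum(density_lin) * df)
    return band_power_db + k_db

# Example: white noise at 40 dB/Hz, probe tone at 1 kHz.
freqs = np.arange(20.0, 16000.0, 1.0)
white = np.full_like(freqs, 40.0)
print(round(masked_threshold_db(white, freqs, 1000.0), 1))  # ~40 + 10*log10(ERB(1000)) ≈ 61 dB
```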
|
64 |
Ferramenta de áudio conferência espacial implementando conceitos de realidade aumentada. / Spatial audio conference tool implementing augmented reality concepts. Bulla Junior, Romeo, 29 October 2009
Este trabalho apresenta uma ferramenta para conferência de áudio 3D (espacial) implementando conceitos de Realidade Aumentada (RA). O objetivo desta ferramenta é aprimorar a sensação de presença e melhorar a interatividade entre seus participantes remotos, por meio de benefícios proporcionados pela utilização de técnicas de áudio espacial (implementadas em avatares de áudio) pela: maior facilidade de concentração e atenção em um único participante e pelos efeitos positivos na memorização dos conteúdos pelos participantes como conseqüência da melhor inteligibilidade e compreensão. A motivação desta implementação reside em sua utilização como ferramenta de comunicação síncrona no ambiente de aprendizagem eletrônica Tidia-Ae, auxiliando na realização de atividades colaborativas e, possivelmente, nos processos de ensino e aprendizagem à distância. A ferramenta implementada foi integrada ao sistema Tidia-Ae e os resultados dos experimentos realizados demonstraram sua efetividade com relação às melhorias proporcionadas pelo processamento de áudio espacial. / This work presents a 3D (spatial) audio conference tool implementing Augmented Reality (AR) concepts. The main intent of this tool is to enhance the sense of presence and increase the interactivity among remote participants by implementing spatial audio techniques in audio avatars. The use of such techniques makes it easier to focus attention on any one specific participant of the conference and has a positive effect on memory retention, resulting in better intelligibility and comprehension. The motivation for this implementation lies in its use as a synchronous communication tool within the Tidia-Ae e-learning system, thus aiding the realization of collaborative activities and, possibly, distance teaching and learning processes. The implemented tool was integrated into the Tidia-Ae system, and the results of the experiments showed the effectiveness provided by the spatial audio processing when applied in such an environment.
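Editorial note: the abstract above describes positioning each remote participant as a spatial "audio avatar". As a minimal sketch of the underlying idea, and not of the Tidia-Ae implementation itself, the code below spatializes a mono signal to stereo using an interaural time difference (ITD) and an interaural level difference (ILD) derived from an assumed azimuth convention; a production system would typically use HRTF filtering instead.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, approximate

def spatialize(mono: np.ndarray, fs: int, azimuth_deg: float) -> np.ndarray:
    """Pan a mono signal to stereo using simple ITD/ILD cues.

    azimuth_deg: 0 = front, +90 = right, -90 = left (assumed convention).
    Returns an (n, 2) array [left, right].
    """
    az = np.radians(azimuth_deg)
    # Woodworth-style ITD approximation and a crude sine-law ILD.
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    delay = int(round(abs(itd) * fs))
    ild_db = 6.0 * np.sin(az)              # assumed +/-6 dB maximum level difference
    g_right = 10.0 ** (+ild_db / 20.0)
    g_left = 10.0 ** (-ild_db / 20.0)

    left = mono * g_left
    right = mono * g_right
    pad = np.zeros(delay)
    if itd > 0:          # source on the right: sound reaches the left ear later
        left = np.concatenate([pad, left])
        right = np.concatenate([right, pad])
    else:                # source on the left (or front): delay the right ear
        right = np.concatenate([pad, right])
        left = np.concatenate([left, pad])
    return np.stack([left, right], axis=1)

# Example: place a 500 Hz tone 45 degrees to the right.
fs = 16000
t = np.arange(fs) / fs
stereo = spatialize(0.1 * np.sin(2 * np.pi * 500 * t), fs, 45.0)
```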
|
65 |
Estudo das emissões otoacústicas e dos potenciais auditivos evocados de tronco cerebral em pacientes com zumbido. / Study of otoacoustic emissions and auditory brainstem response in patients with tinnitus. Samelli, Alessandra Giannella, 05 December 2000
O zumbido (ou tinnitus) pode ser descrito como a percepção de um som ou ruído sem nenhuma estimulação acústica externa. Apesar de freqüente, ainda existem muitas dúvidas envolvendo o zumbido, no que se refere à sua origem e tratamento para a totalidade dos casos. Os objetivos do presente trabalho foram estudar a supressão das Emissões Otoacústicas Transitórias com estimulação contralateral e as latências, intervalos interpicos, bem como as amplitudes das ondas dos Potenciais Auditivos Evocados de Tronco Cerebral, em pacientes com zumbido e perda auditiva neurossensorial, causada possivelmente por exposição prolongada a níveis de pressão sonora elevados. Foram avaliados 30 sujeitos com zumbido (grupo Z) e 30 sujeitos sem zumbido (grupo C), ambos os grupos do sexo masculino e pareados quanto à faixa etária, tempo de exposição ao ruído e grau de perda auditiva neurossensorial em agudos. Os resultados mostraram homogeneidade dos dois grupos quanto à faixa etária, tempo de exposição ao ruído e limiares auditivos. Observou-se supressão das emissões menores para o grupo Z, com diferença estatística somente para a orelha esquerda e indícios de diferença significante para a orelha direita. Quanto aos Potenciais Auditivos Evocados de Tronco Cerebral, houve um aumento das latências e redução das amplitudes para o grupo Z, com resultados significantes para a latência de onda III da orelha direita e para as latências das ondas I e III da orelha esquerda. Com base nos achados descritos, hipotetizou-se que, nos pacientes com zumbido, o sistema auditivo eferente olivococlear medial seria possivelmente menos eficiente, já que a supressão das emissões foi menor nestes pacientes. Além disso, poder-se-ia supor a existência de uma possível alteração na atividade do Tronco Cerebral em indivíduos com zumbido, evidenciadas pelos prolongamentos das latências e redução das amplitudes. / Tinnitus can be described as the perception of a sound or noise without any external acoustic stimulation. Though frequent, there are still many unanswered questions regarding tinnitus, including its origin and treatment for all cases. The aim of this work was to study the suppression of transient otoacoustic emissions with contralateral stimulation, as well as the latencies, interpeak intervals and amplitudes of auditory brainstem response waves, in patients with tinnitus and sensorineural hearing loss possibly caused by prolonged exposure to high sound pressure levels. For that purpose, 30 individuals with tinnitus (group Z) and another 30 without it (group C) were studied; both groups consisted of males matched for age, noise exposure time and degree of high-frequency sensorineural hearing loss. The results showed that the two groups were homogeneous in age, noise exposure time and hearing thresholds. Weaker suppression of emissions was observed in group Z, with a statistically significant difference only for the left ear and an indication of a significant difference for the right ear. As for the auditory brainstem response, group Z showed increased latencies and reduced amplitudes, with significant results for the wave III latency of the right ear and for the wave I and III latencies of the left ear. Based on these findings, it was hypothesized that the medial olivocochlear efferent auditory system may be less efficient in patients with tinnitus, since the suppression of emissions was weaker in these patients. In addition, a possible alteration of brainstem activity in individuals with tinnitus could be assumed, as evidenced by the prolonged latencies and reduced amplitudes in that group.
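Editorial note: the group comparisons described above (contralateral suppression and ABR latencies compared between tinnitus and control groups) follow a standard between-subjects design. The sketch below, using made-up numbers purely for illustration, shows how such a comparison might be computed: suppression is taken as the TEOAE response level without minus with contralateral noise, and the two groups are compared with an independent-samples t-test (the actual statistical procedure of the study is not specified in the abstract).

```python
import numpy as np
from scipy import stats

def suppression_db(teoae_quiet_db, teoae_contra_db):
    """Contralateral suppression: TEOAE response level without minus with contralateral noise."""
    return np.asarray(teoae_quiet_db) - np.asarray(teoae_contra_db)

# Hypothetical left-ear data (dB SPL), not taken from the study.
group_z_quiet  = [8.1, 7.4, 9.0, 6.8, 7.9]
group_z_contra = [7.8, 7.2, 8.7, 6.7, 7.7]
group_c_quiet  = [8.3, 7.6, 9.1, 7.0, 8.0]
group_c_contra = [7.1, 6.5, 8.0, 6.0, 6.9]

supp_z = suppression_db(group_z_quiet, group_z_contra)   # tinnitus group
supp_c = suppression_db(group_c_quiet, group_c_contra)   # control group

t, p = stats.ttest_ind(supp_z, supp_c)
print(f"mean suppression Z = {supp_z.mean():.2f} dB, C = {supp_c.mean():.2f} dB, p = {p:.3f}")
```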
|
66 |
Individual profiling of perceived tinnitus by developing tinnitus analyzer software. Unknown date
Tinnitus is a conscious perception of phantom sounds in the absence of external acoustic stimuli, and masking is one of the popular ways to treat it. Due to the variation in the perceived tinnitus sound from patient to patient, the usefulness of masking therapy cannot be generalized. Thus, it is important to first determine the feasibility of masking therapy for a particular patient by quantifying the tinnitus sound, and then to generate an appropriate masking signal. This work aims to achieve this kind of individual profiling by developing interactive software, Tinnitus Analyzer, based on a clinical approach. The developed software is proposed as a replacement for traditional clinical methods and, as part of future work, will be evaluated in practical scenarios involving real tinnitus patients. / by Bashali Chaudbury. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
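Editorial note: the masking approach described above involves matching the patient's tinnitus and then generating a masking signal around it. As a rough sketch of that second step, and not of the Tinnitus Analyzer implementation, the code below generates narrowband noise centred on a matched tinnitus frequency by bandpass-filtering white noise; the bandwidth and level are assumed parameters that a clinician would adjust.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def narrowband_masker(center_hz: float, bandwidth_hz: float, fs: int, seconds: float,
                      level_dbfs: float = -20.0) -> np.ndarray:
    """Generate narrowband noise centred on the matched tinnitus frequency."""
    rng = np.random.default_rng(0)
    white = rng.standard_normal(int(fs * seconds))
    low = max(center_hz - bandwidth_hz / 2.0, 1.0)
    high = min(center_hz + bandwidth_hz / 2.0, fs / 2.0 - 1.0)
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    noise = sosfilt(sos, white)
    # Scale to the requested RMS level relative to full scale.
    target_rms = 10.0 ** (level_dbfs / 20.0)
    return noise * (target_rms / np.sqrt(np.mean(noise ** 2)))

# Example: roughly a one-third-octave masker around a tinnitus matched at 6 kHz.
fs = 44100
masker = narrowband_masker(center_hz=6000.0, bandwidth_hz=6000.0 * 0.23, fs=fs, seconds=2.0)
```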
|
67 |
Estudo das propriedades acústicas e psicofísicas da cóclea / Study on the acoustical and psychophysical properties of the cochlea. Bayeh, Rebeca, 27 March 2018
O presente trabalho tem como objetivo apresentar uma revisão bibliográfica de alguns dos principais conceitos acústicos e psicoacústicos associados à audição humana já desenvolvidos na literatura, de Helmholtz aos dias atuais, com foco no órgão da cóclea, relacionando as áreas de física, neurociências e computação musical, bem como aplicações diretamente derivadas de tal revisão. A partir dos cálculos realizados por Couto (COUTO, 2000) de distribuição da pressão sonora no meato acústico externo, foi calculada a pressão sonora relativa e a impedância acústica ao longo do órgão coclear. Também é apresentado um algoritmo de minimização da dissonância sensorial baseado nos modelos de bandas críticas de Cambridge e de Munique. / The present work presents a literature review on some of the most important acoustical and psychoacoustical concepts associated to human hearing, from Helmholtz to the present day, focusing on the cochlea and connecting concepts of physics, neurosciences and computer music, as well as applications directly derived from such concepts. From the sound pressure distribution model developed by Couto (COUTO, 2000), the relative sound pressure and the acoustic impedance along the cochlea were calculated. An algorithm for minimizing sensory dissonance based on Cambridge and Munich models of critical bandwidths is also presented.
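Editorial note: the abstract above mentions an algorithm for minimizing sensory dissonance based on critical-band models. As an illustration of the general idea, using Sethares' well-known parametrization of the Plomp-Levelt dissonance curve rather than the Cambridge and Munich critical-band models cited in the thesis, the sketch below sums pairwise dissonance contributions over the partials of two tones; scanning for the interval that minimizes this sum is the essence of dissonance minimization.

```python
import numpy as np

def pair_dissonance(f1, f2, a1, a2):
    """Plomp-Levelt dissonance of two partials, Sethares (1993) parametrization."""
    d_star, s1, s2, b1, b2 = 0.24, 0.0207, 18.96, 3.51, 5.75
    fmin, fmax = min(f1, f2), max(f1, f2)
    s = d_star / (s1 * fmin + s2)
    x = s * (fmax - fmin)
    return min(a1, a2) * (np.exp(-b1 * x) - np.exp(-b2 * x))

def total_dissonance(freqs, amps):
    """Sum dissonance over all pairs of partials in a spectrum."""
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
    return total

# Example: dissonance of two harmonic tones (6 partials each) as a function of interval.
base = 261.63                      # C4
partials = np.arange(1, 7)
amps = 0.88 ** (partials - 1)      # assumed amplitude roll-off
ratios = np.linspace(1.0, 2.0, 201)
curve = [total_dissonance(np.concatenate([base * partials, base * r * partials]),
                          np.concatenate([amps, amps])) for r in ratios]
# Most consonant interval within the octave, excluding the unison.
print(ratios[int(np.argmin(curve[1:])) + 1])
```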
|
68 |
Stream segregation and pattern matching techniques for polyphonic music databases. January 2003
Szeto, Wai Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 81-86). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgements --- p.vi / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivations and Aims --- p.1 / Chapter 1.2 --- Thesis Organization --- p.6 / Chapter 2 --- Preliminaries --- p.7 / Chapter 2.1 --- Fundamentals of Music and Terminology --- p.7 / Chapter 2.2 --- Findings in Auditory Psychology --- p.8 / Chapter 3 --- Literature Review --- p.12 / Chapter 3.1 --- Pattern Matching Techniques for Music Information Retrieval --- p.12 / Chapter 3.2 --- Stream Segregation --- p.14 / Chapter 3.3 --- Post-tonal Music Analysis --- p.15 / Chapter 4 --- Proposed Method for Stream Segregation --- p.17 / Chapter 4.1 --- Music Representation --- p.17 / Chapter 4.2 --- Proposed Method --- p.19 / Chapter 4.3 --- Application of Stream Segregation to Polyphonic Databases --- p.27 / Chapter 4.4 --- Experimental Results --- p.30 / Chapter 4.5 --- Summary --- p.36 / Chapter 5 --- Proposed Approaches for Post-tonal Music Analysis --- p.38 / Chapter 5.1 --- Pitch-Class Set Theory --- p.39 / Chapter 5.2 --- Sequence-Based Approach --- p.43 / Chapter 5.2.1 --- Music Representation --- p.43 / Chapter 5.2.2 --- Matching Conditions --- p.44 / Chapter 5.2.3 --- Algorithm --- p.46 / Chapter 5.3 --- Graph-Based Approach --- p.47 / Chapter 5.3.1 --- Graph Theory and Its Notations --- p.48 / Chapter 5.3.2 --- Music Representation --- p.50 / Chapter 5.3.3 --- Matching Conditions --- p.53 / Chapter 5.3.4 --- Algorithm --- p.57 / Chapter 5.4 --- Experiments --- p.67 / Chapter 5.4.1 --- Experiment 1 --- p.67 / Chapter 5.4.2 --- Experiment 2 --- p.68 / Chapter 5.4.3 --- Experiment 3 --- p.70 / Chapter 5.4.4 --- Experiment 4 --- p.75 / Chapter 6 --- Conclusion --- p.79 / Bibliography --- p.81 / A Publications --- p.87
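Editorial note: the record above covers stream segregation and pattern matching for polyphonic music databases. As a toy illustration of stream segregation by pitch proximity, a principle from auditory scene analysis and not the specific method proposed in this thesis, the sketch below greedily assigns the notes of a polyphonic passage to streams by nearest pitch.

```python
from typing import List, Tuple

Note = Tuple[float, int]   # (onset time in beats, MIDI pitch)

def segregate(notes: List[Note], max_leap: int = 7) -> List[List[Note]]:
    """Toy stream segregation: assign each note to the stream whose last pitch is
    closest, opening a new stream if every leap exceeds max_leap semitones."""
    streams: List[List[Note]] = []
    for note in sorted(notes):                     # process in temporal order
        onset, pitch = note
        best, best_dist = None, max_leap + 1
        for s in streams:
            last_onset, last_pitch = s[-1]
            dist = abs(pitch - last_pitch)
            if last_onset < onset and dist < best_dist:   # no simultaneous notes in a stream
                best, best_dist = s, dist
        if best is None:
            streams.append([note])
        else:
            best.append(note)
    return streams

# Example: two interleaved voices of a simple polyphonic passage.
notes = [(0, 72), (0, 60), (1, 74), (1, 62), (2, 76), (2, 64)]
for i, s in enumerate(segregate(notes)):
    print("stream", i, [p for _, p in s])   # stream 0: [60, 62, 64], stream 1: [72, 74, 76]
```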
|
70 |
The processing of pitch and temporal information in relational memory for melodies. Byron, Timothy P., University of Western Sydney, College of Arts, School of Psychology, January 2008
A series of experiments investigates the roles of relational coding and expectancy in memory for melodies. The focus on memory for melodies was motivated by an argument that research on the evolutionary psychology of music cognition would be improved by further research in this area. Melody length and the use of transposition were identified in a literature review as experimental variables with the potential to shed light on the cognitive mechanisms in memory for melodies; similarly, pitch interval magnitude (PIM), melodic contour, metre, and pulse were identified as musical attributes that appear to be processed by memory for melodies. It was concluded that neither previous models of verbal short term memory (vSTM) nor previous models of memory for melodies can satisfactorily explain current findings on memory for melodies. The model of relational memory for melodies that is developed here aims to explain findings from the memory for melodies literature. This model emphasises the relationship between: a) perceptual processes – specifically, a relational coding mechanism which encodes pitch and temporal information in a relational form; b) a short term store; and c) the redintegration of memory traces using schematic and veridical expectancies. The relational coding mechanism, which focuses on pitch and temporal accents (cf. Jones, 1993), is assumed to be responsible for the salience of contour direction and note length, while the expectancy processes are assumed to be more responsible for the salience of increases in PIM or deviations from the temporal grid. Using a melody discrimination task, with key transposition within-pairs, in which melody length was manipulated, Experiments 1a, 1b, and 2 investigated the assumption that contour would be more reliant on the relational coding mechanism and PIM would be more reliant on expectancy processes. Experiment 1a confirmed this hypothesis using 8 and 16 note folk melodies. Experiment 1b used the same stimuli as Experiment 1a, except that the within-pair order was reversed in order to reduce the influence of expectancy processes. As expected, while contour was still salient under these conditions, PIM was not. Experiment 2 was similar to Experiment 1b, except that it avoided using the original melodies in same trials in order to specifically reduce the influence of veridical expectancy processes. This led to a floor effect. Overall, the results support the explanation of pitch processing in memory for melodies in the model. Experiments 3 and 4 investigated the assumption in the model that temporal processing in memory for melodies was reliant on the relational coding mechanism. Experiment 3 found that, with key transposition within-pairs, there was little difference between pulse alterations (which deviate more from the temporal grid) and metre alterations (which lengthen the note more) in short melodies, but that pulse alterations were more salient than metre alterations in long melodies. Experiment 4 showed that, with tempo transposition within-pairs, metre alterations were more salient than pulse alterations in short melodies, but that there was no difference in salience in long melodies. That metre alterations are more salient than pulse alterations in Experiment 4 strongly suggests that there is relational coding of temporal information, and that this relational coding uses note length to determine the presence of accents, as the model predicts.
Experiments 5a and 5b, using a Garner interference task, transposition within-pairs, and manipulations of melody length, investigated the hypothesis derived from the model that pitch and temporal information would be integrated in the relational coding mechanism. Experiment 5b demonstrated an effect of Garner interference from pitch alterations on the discrimination of temporal alterations; Experiment 5a found a weaker effect of the same kind. The presence of Garner interference in these tasks when there was transposition within melody pairs suggests that pitch and temporal information are integrated in the relational coding mechanism, as predicted in the model. Seven experiments therefore provide support for the assumption that a relational coding mechanism and LTM expectancies play a role in the discrimination of melodies. This has implications for other areas of research in music cognition. Firstly, theories of the evolution of music must be able to explain why features of these processing mechanisms could have evolved. Secondly, research into acquired amusia should have a greater focus on differences between perceptual, cognitive, and LTM processing. Thirdly, research into similarities between music processing and language processing would be improved by further research using PIM as a variable. / Doctor of Philosophy (PhD)
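Editorial note: the abstract above repeatedly contrasts absolute pitch information with relational codes (contour and pitch interval magnitude) under transposition. As a minimal sketch of why relational codes survive transposition, the code below extracts interval sizes and contour from a MIDI pitch sequence; a transposed melody yields identical relational codes while its absolute pitches differ. This illustrates the concept only, not the stimuli or model from the thesis.

```python
from typing import List

def intervals(pitches: List[int]) -> List[int]:
    """Successive pitch intervals in semitones (signed); their magnitude is the PIM."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def contour(pitches: List[int]) -> List[int]:
    """Melodic contour: +1 up, -1 down, 0 repeat, for each successive pair."""
    return [(i > 0) - (i < 0) for i in intervals(pitches)]

melody = [60, 62, 64, 62, 67, 65]            # C D E D G F
transposed = [p + 5 for p in melody]         # same melody, a fourth higher

assert intervals(melody) == intervals(transposed)   # relational code is transposition-invariant
assert contour(melody) == contour(transposed)
print(intervals(melody))   # [2, 2, -2, 5, -2]
print(contour(melody))     # [1, 1, -1, 1, -1]
```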
|