About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Voicing and voice assimilation in Russian stops

Kulikov, Vladimir 01 July 2012 (has links)
The main objective of this thesis is to investigate the acoustic cues for the voicing contrast in Russian stops and the effects of speaking rate and phonetic environment on them. Although the laryngeal contrast in Russian is assumed to be a [voice] contrast, very few experimental studies have looked at the acoustic properties of Russian voiced and voiceless stops. Most claims about the acoustic properties of stops and the phonological processes that affect them (voice assimilation and final devoicing) have been made on the basis of impressionistic transcriptions. The present study provides evidence that (1) voicing in voiced stops is affected by speaking rate manipulation, (2) stops in Russian retain the underlying voicing contrast in presonorant position, and voice assimilation occurs only in obstruent clusters, and (3) the phonological processes of voice assimilation and final devoicing do not result in complete neutralization. The targets of the investigation are voiced and voiceless intervocalic stops, stops in clusters, and final stops in different prosodic positions within a word and at the phrase level. The acoustic cues to voicing (duration of voicing, stop closure duration, vowel duration, f0, and F1) were measured from the production data of 14 monolingual speakers of Russian recorded in Russia. Speakers produced words and phrases with target stops in three speaking-rate conditions: list reading, slow rate, and fast rate. The data were analyzed in five blocks focusing on (1) word-internal stops, (2) voice assimilation in stops in prepositions, (3) cases of so-called "sonorant transparency", (4) voice assimilation in stops before /v/, and (5) voicing processes across a word boundary. The results of the study present a challenge to the widely held assumption that phonological processes precede phonetic processes at the phonology-phonetics interface. It is shown that the underlying contrast leaves traces on assimilated and devoiced stops.
To account for the findings, a phonology-phonetics interface that allows interaction between the modules is required. In addition, the results show that temporal cues are affected by speaking rate manipulation, but the effect of rate on voicing is found only in voiced stops. Duration of voicing and VOT in voiceless stops are not affected by speaking rate. The results also show that no effect of C2 on voicing in C1 stops is obtained in obstruent-sonorant-obstruent clusters; thus, no "phonological sonorant transparency to voice assimilation" is found in Russian. Rather, the study provides evidence that there is variation in the production of voicing in stops in prepositions, and that voice assimilation in stops before /v/ followed by a voiced obstruent is optional for some speakers.
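The temporal cues measured in this study (stop closure duration, duration of voicing, vowel duration) reduce to interval arithmetic over annotated acoustic landmarks. The following is a hedged sketch of that computation; the landmark names and numeric values are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch: temporal voicing cues from annotated landmarks
# (all times in seconds; landmark names are hypothetical).

def temporal_cues(closure_onset, burst, voicing_onset, vowel_offset):
    """Return closure duration, VOT, and following-vowel duration."""
    closure_duration = burst - closure_onset
    vot = voicing_onset - burst          # negative VOT = prevoicing
    vowel_duration = vowel_offset - voicing_onset
    return closure_duration, vot, vowel_duration

# A prevoiced stop: glottal pulsing begins before the release burst,
# as is typical of voiced stops in a true-voice language like Russian.
closure, vot, vowel = temporal_cues(0.100, 0.180, 0.120, 0.400)
```

Here a negative VOT signals prevoicing, which is one of the cues the study tracks under speaking-rate manipulation.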
2

The acquisition of contrast : a longitudinal investigation of initial s+plosive cluster development in Swedish children

Karlsson, Fredrik January 2006 (has links)
This thesis explores the development of word-initial s+plosive consonant clusters in the speech of Swedish children between the ages of 1;6 and 4;6. Development in the word-initial consonant clusters is viewed as being determined by 1) the children's ability to articulate the target sequence of consonants, 2) their level of understanding of which acoustic features in the adult model production are significant for signalling the intended distinction, and 3) their ability to apply established production patterns only to productions where the acquired feature agrees with the adult target, so as to achieve a contrast between rival output forms. The thesis employs a method in which output forms are contrasted with attempted productions of potential homonym target words. Development is thus quantified as an increase in the manifestation of a phonetic feature where it agrees with the adult norm, coupled with a decrease in the same feature in output forms where it is inappropriate according to the phonological system of the ambient language. Acoustic investigations of cues to voicing, aspiration, place of articulation and syllable-onset complexity, and auditory investigations of place, manner and syllable-onset complexity, were conducted. The thesis has four outcomes. One, a description of the perceptual quality of the productions in terms of place, manner, voicing and syllable-onset complexity is presented. Two, a developmental sequence of stable acquisition of these features is proposed; manner is shown to be acquired first, followed by syllable-onset complexity and place of articulation. Evidence is provided that the voiced/aspirated distinction is still being acquired at the end of the investigated age period. Three, the developmental use of acoustic cues to place and voicing is described. Voice Onset Time and Spectral Skewness are shown to be used by children to increase the likeness to the adult target in terms of voicing and place of articulation. Aspiration Amplitude is shown to be used as an auxiliary cue to Voice Onset Time. The place cues Spectral Tilt Change, F2, Spectral Mean and Spectral Variance were shown to be used to refine already-produced consonants rather than to approach the adult target model. Four, the thesis provides evidence of periods of confusion in the output of children. With the reduction of these patterns of confusion, evidence is provided of children's re-organisation of their internal representation of the consonant to be produced.
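The spectral cues named in this abstract (Spectral Mean, Spectral Variance, Spectral Skewness) are the statistical moments of the magnitude spectrum treated as a probability distribution over frequency. A minimal sketch of that computation follows, assuming the standard moment formulation rather than the thesis's exact procedure.

```python
import numpy as np

def spectral_moments(signal, sr):
    """First three spectral moments of a signal's magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    p = spectrum / spectrum.sum()        # spectrum as a distribution
    mean = np.sum(freqs * p)
    var = np.sum(((freqs - mean) ** 2) * p)
    skew = np.sum(((freqs - mean) ** 3) * p) / var ** 1.5
    return mean, var, skew

# Sanity check: a pure tone concentrates its spectral mean at the
# tone frequency.
sr = 16000
t = np.arange(sr) / sr
mean, var, skew = spectral_moments(np.sin(2 * np.pi * 1000 * t), sr)
```

In a cluster study like this one, such moments would be computed over the frication or burst portion of the stop, which is where place cues concentrate.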
4

Vowel perception in severe noise

Swanepoel, Rikus 05 March 2013 (has links)
A model that can accurately predict speech recognition for cochlear implant (CI) listeners is essential for the optimal fitting of cochlear implants. By implementing a CI acoustic model that mimics CI speech processing, the challenge of predicting speech perception in cochlear implants can be simplified. As a first step toward predicting the recognition of speech processed through an acoustic model, vowel perception in severe speech-shaped noise was investigated in the current study. The aim was to determine the acoustic cues that listeners use to recognize vowels in severe noise and to make suggestions regarding a vowel-perception predictor. Formants are known to play an important role in quiet, while their role in severe noise is still unknown. The relative importance of F1 and F2 is also of interest, since noise masking is not always evenly distributed over the vowel spectrum. The problem was addressed by synthesizing vowels consisting of either detailed spectral-shape or formant information. F1 and F2 were also suppressed to examine the effect in severe noise. The synthetic stimuli were presented to listeners in quiet and at signal-to-noise ratios of 0 dB, -5 dB and -10 dB. Results showed that in severe noise, vowels synthesized according to the whole spectrum were recognized significantly better than vowels containing only formants. Multidimensional scaling and FITA analysis indicated that formants were still perceived and extracted by the human auditory system in severe noise, especially when the vowel spectrum consisted of the whole spectral shape. Although F1 and F2 vary in importance in quiet and in less noisy conditions, the roles of the two cues appear to be similar in severe noise. It was suggested that not only the availability of formants, but also details of the vowel spectral shape, can help to predict vowel recognition in severe noise to a certain degree. / Dissertation (MEng)--University of Pretoria, 2010.
/ Electrical, Electronic and Computer Engineering / unrestricted
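Presenting stimuli at fixed signal-to-noise ratios, as in the 0, -5 and -10 dB conditions above, amounts to scaling the masker so the power ratio hits the target. A hedged sketch of that mixing step, under the usual power-based SNR definition (not necessarily the study's exact calibration procedure):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so 10*log10(P_speech/P_noise) = snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + noise_scaled, noise_scaled

# Example: a tone "vowel" mixed with Gaussian noise at the hardest
# condition, -10 dB SNR (noise power 10x the speech power).
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 200 * np.arange(8000) / 8000)
mixed, n = mix_at_snr(speech, rng.standard_normal(8000), -10.0)
```

The actual study used speech-shaped noise, i.e. noise filtered to match the long-term speech spectrum; white noise is used here only to keep the sketch self-contained.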
5

Perception of Synthetic Speech by a Language-Trained Chimpanzee (Pan troglodytes)

Heimbauer, Lisa A. 10 July 2009 (has links)
The ability of human listeners to understand altered speech has been argued to be evidence of uniquely human processing abilities, but early auditory experience may also contribute to this capability. I tested the ability of Panzee, a language-trained chimpanzee (Pan troglodytes), reared and spoken to from infancy by humans, to recognize synthesized words. Training and testing were conducted with different sets of English words in natural, "harmonics-only" (resynthesized using only voiced components), or "noise-vocoded" (based on amplitude-modulated noise bands) forms, with Panzee choosing from "lexigram" symbols that represented the words. In Experiment 1, performance was equivalent with words in natural and harmonics-only form. In Experiment 2, performance with noise-vocoded words was significantly higher than chance but lower than with natural words. The results suggest that specialized processing mechanisms are not necessary for speech perception in the absence of traditional acoustic cues, and that the more important factor for speech-processing abilities is early immersion in a speech-rich environment.
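Noise-vocoding, as used for the stimuli above, classically works by splitting the signal into frequency bands, extracting each band's amplitude envelope, and using it to modulate band-limited noise. The sketch below illustrates that pipeline; the band edges and filter order are arbitrary choices, not the values from this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, sr, band_edges):
    """Replace each band's fine structure with envelope-modulated noise."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))       # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier              # modulated noise band
    return out

sr = 16000
t = np.arange(sr) / sr
vocoded = noise_vocode(np.sin(2 * np.pi * 500 * t), sr, [100, 1000, 4000])
```

The result preserves the temporal envelope in each band while destroying spectral fine structure, which is exactly why vocoded speech tests envelope-based perception.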
6

Akustické vlastnosti slovního přízvuku ve čtené české anglictině / Acoustic properties of word stress in read Czech English

Liska, Jan January 2011 (has links)
Key words: Czech English, foreign accent, word stress, word accent, stressed syllable, duration, f0, acoustic cues. This study investigates the acoustic properties of word stress in Czech English. The notion of foreign accent is introduced and its drawbacks are presented. Further, the various influences on the perceived degree, or strength, of foreign accent are discussed. Faulty realization of word stress is identified as one of the factors that contribute to the unintelligibility of non-native speech (Benrabah, 1997; Hahn, 2004; Cutler, 1984). In Chapter 2 we compare the results of studies that used speakers of a variety of languages and form a basic theory of the acquisition of acoustic cues to word stress. We are mostly interested in f0 and duration. This theory, based on the feature hypothesis (McAllister et al., 2002 in Lee, Guion & Harada, 2006), states that languages with a stress system similar to that of English (Dutch, Arabic) use their native cues to signal word stress, while non-contrastive languages (Vietnamese, Czech) prefer the cue(s) that are phonologically active at the segmental level in the native language. Speakers of Vietnamese, a tone language, were found to prefer f0 over duration (Nguyen, 2003), so for Czech, a language that uses phonological vowel duration, it is expected that...
7

Etude de l’encodage des sons de parole par le tronc cérébral dans le bruit / Study of brainstem speech in noise processing

Richard, Céline 17 December 2010 (has links)
The major purpose of this thesis was the investigation of the implication of brainstem structures in speech-in-noise processing, in particular by identifying the impact of acoustic cues on normal speech perception. Firstly, we were involved in the engineering of the speech auditory brainstem response (SABR) recording system. SABRs are similar to brainstem auditory evoked responses to clicks, but require different acquisition and signal-processing set-ups, owing to the differences between the French stimuli used here and the American stimuli used by the American reference team. The different studies presented here made it possible to emphasize the role of brainstem structures in the subcortical processing of acoustic cues, such as the temporal envelope and voicing, with possible evidence of a corticofugal effect on the SABR. These experiments led us to a more fundamental question about the best conditions required for SABR collection, in particular the best stimulation intensity. The results of the intensity experiment showed a non-linear relation between stimulation intensity and SABR characteristics. Even if an intensity of only 20 dB SL seems sufficient for SABR recording, individual responses remain highly variable, so that diagnostic application of the SABR, for example in children with language-learning problems or in subjects suffering from speech-in-noise perception impairment, remains difficult.
8

Son et posture : le rôle de la perception auditive spatiale dans le maintien de l'équilibre postural / Sound and posture : the role of the spatial auditory perception in maintaining balance

Gandemer, Lennie 12 December 2016 (has links)
Postural control is known to be the result of the integration by the central nervous system of several sensory modalities. In the literature, visual, proprioceptive, plantar-touch and vestibular inputs are generally mentioned, and the role of audition is often neglected, even though sound is a rich and broad source of information about the whole surrounding 3D space. In this PhD work, we focused on the specific role of sound in posture. The first part of this work concerns the design, set-up and perceptual evaluation of a fifth-order ambisonics sound spatialization system. This system makes it possible to generate and move sound sources in the 3D space surrounding the listener, and also to synthesize immersive and realistic sound environments. This spatialization system was then used as a tool to generate the sound stimuli for five postural experiments, in which we studied the static upright stance of young, healthy subjects. The results of these studies show that spatial auditory information can be integrated into the postural control system, allowing subjects to reach better stability. Two complementary interpretations are proposed for these stabilizing effects. First, the spatial acoustic cues can contribute to the building of a mental representation of the surrounding environment, relative to which the subjects can stabilize themselves. Second, multisensory integration phenomena may be at play: the auditory component could facilitate the integration of the other modalities involved in postural control.
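To give a flavor of what an ambisonics spatialization system encodes: in the horizontal-only (2D) analogue of a fifth-order system, a plane-wave source at azimuth theta is represented by circular-harmonic channel gains. This sketch is an assumption for illustration only, not the thesis's 3D implementation (a full 3D fifth-order system uses spherical harmonics and 36 channels).

```python
import numpy as np

def encode_2d_ambisonics(theta, order=5):
    """Channel gains for a plane wave at azimuth theta (radians),
    2D circular harmonics up to the given order."""
    gains = [1.0]                        # order-0 (omnidirectional) term
    for n in range(1, order + 1):
        gains += [np.cos(n * theta), np.sin(n * theta)]
    return np.array(gains)               # 2*order + 1 channels in 2D

g = encode_2d_ambisonics(np.pi / 2)      # source at the listener's left
```

Decoding then mixes these channels to a loudspeaker array, which is what lets such a system move virtual sources smoothly around the listener.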
9

Vocal Expression of Emotion : Discrete-emotions and Dimensional Accounts

Laukka, Petri January 2004 (has links)
This thesis investigated whether vocal emotion expressions are conveyed as discrete emotions or as continuous dimensions. Study I consisted of a meta-analysis of decoding accuracy of discrete emotions (anger, fear, happiness, love-tenderness, sadness) within and across cultures. Also, the literature on acoustic characteristics of expressions was reviewed. Results suggest that vocal expressions are universally recognized and that there exist emotion-specific patterns of voice-cues for discrete emotions. In Study II, actors vocally portrayed anger, disgust, fear, happiness, and sadness with weak and strong emotion intensity. The portrayals were decoded by listeners and acoustically analyzed with respect to 20 voice-cues (e.g., speech rate, voice intensity, fundamental frequency, spectral energy distribution). Both the intended emotion and intensity of the portrayals were accurately decoded and had an impact on voice-cues. Listeners’ ratings of both emotion and intensity could be predicted from a selection of voice-cues. In Study III, listeners rated the portrayals from Study II on emotion dimensions (activation, valence, potency, emotion intensity). All dimensions were correlated with several voice-cues. Listeners’ ratings could be successfully predicted from the voice-cues for all dimensions except valence. In Study IV, continua of morphed expressions, ranging from one emotion to another in equal steps, were created using speech synthesis. Listeners identified the emotion of each expression and discriminated between pairs of expressions. The continua were perceived as two distinct sections separated by a sudden category boundary. Also, discrimination accuracy was generally higher for pairs of stimuli falling across category boundaries than for pairs belonging to the same category. This suggests that vocal expressions are categorically perceived. Taken together, the results suggest that a discrete-emotions approach provides the best account of vocal expression. Previous difficulties in finding emotion-specific patterns of voice-cues may be explained in terms of limitations of previous studies and the coding of the communicative process.
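The "equal steps" morphing in Study IV can be pictured as linear interpolation between two emotions' acoustic parameter vectors before resynthesis. The sketch below is hypothetical: the parameter names and values are invented, and the actual study morphed synthesized speech, not raw parameter tuples.

```python
import numpy as np

def morph_continuum(params_a, params_b, steps=7):
    """Equal-step interpolation between two acoustic parameter vectors."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * params_a + a * params_b for a in alphas]

# Illustrative endpoints: mean f0 (Hz), voice intensity (dB), speech
# rate (syllables/s) for an "angry" and a "sad" portrayal.
angry = np.array([260.0, 72.0, 6.1])
sad = np.array([180.0, 58.0, 3.4])
continuum = morph_continuum(angry, sad)
```

Categorical perception then predicts that listeners' identification flips abruptly somewhere along this physically uniform continuum, rather than changing gradually.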
