1.
Physiological assessment of lingual function in adults with apraxia of speech
Meyer, Carly. Unknown date.
Apraxia of speech (AOS) is a neurogenic speech disorder characterised by deficits in the articulatory and prosodic domains of speech production. A range of physiological assessment techniques has been employed in an attempt to elucidate the physiological underpinnings of articulatory and prosodic deficits in AOS. However, despite the advancement of electromagnetic articulography (EMA), a technique that facilitates safe, non-invasive assessment of intra-oral structures, little research has investigated lingual kinematics during speech production in participants with AOS. Tongue-to-palate contact patterns, by contrast, have been investigated in AOS; however, most of this research relied upon descriptive analysis rather than instrumental techniques such as electropalatography (EPG). The present thesis therefore aimed to utilise EMA and EPG to provide a comprehensive assessment of lingual movement and tongue-to-palate contact patterns during word-initial consonant singletons and consonant clusters, in mono- and multisyllabic words, in AOS. The strength of coupling between the tongue and jaw and between the tongue-tip and tongue-back was also examined, as was consonant cluster coarticulation.

Five participants (three females and two males) with AOS and a concomitant non-fluent aphasia participated in the project. The mean age of the group at the time of the EMA assessment was 53.6 years (SD = 12.60; range 35–67 years). At the time of initial assessment, all participants were a minimum of 12 months post onset of stroke (M = 1.67 years; SD = 0.72). Perceptual analysis indicated that each of the five participants with AOS presented with the following mandatory characteristics: sound distortions, sound prolongations, syllabic speech output, and dysprosody. A control group of 12 neurologically unimpaired participants (8 male, 4 female; M = 52.08 years; SD = 12.52; age range 29–70 years) also participated in the study.

The apraxic speakers’ tongue-tip and tongue-back movements were initially profiled during monosyllabic word production using EMA. Movement duration, distance, maximum velocity, maximum acceleration and deceleration, and velocity profile index values were recorded during word-initial consonant singletons (i.e., /t, s, l, k/) and consonant clusters (i.e., /kl, sk/). Results indicated that the participants with AOS evidenced significantly prolonged movement durations and, in some instances, significantly greater articulatory distances relative to the control group. All measures pertaining to speed appeared to be relatively unimpaired. Phonetic complexity had a variable impact on the articulation of word-initial consonants. These results help account for the overall slow rate of speech exhibited by the participants with AOS.

In a subsequent study, EMA was employed to investigate the impact of increasing word length on lingual kinematics for the five participants with AOS. Target consonant singletons and consonant clusters were embedded in the word-initial position of one-, two-, and three-syllable words (e.g., tar, target, targeting). Movement duration appeared to be most sensitive to the effect of word length during consonant singleton production; however, word length effects were absent during consonant cluster production. The data were discussed in the context of motor theories of speech production.
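For illustration, kinematic measures of the kind reported in these EMA studies can be derived from a sampled coil trajectory roughly as follows. This is a minimal Python sketch, not the thesis's documented procedure: the array layout, sampling rate, and the peak-to-mean-speed definition of the velocity profile index are all assumptions.

import numpy as np

def kinematic_profile(pos, fs):
    """Kinematic parameters for one segmented EMA gesture.
    pos: (N, 2) midsagittal (x, y) positions in mm; fs: sampling rate in Hz."""
    dt = 1.0 / fs
    vel = np.gradient(pos, dt, axis=0)       # per-axis velocity, mm/s
    speed = np.linalg.norm(vel, axis=1)      # tangential speed
    accel = np.gradient(speed, dt)           # signed tangential acceleration
    return {
        "duration_s": len(pos) * dt,
        "distance_mm": np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1)),
        "max_velocity": speed.max(),
        "max_acceleration": accel.max(),
        "max_deceleration": accel.min(),
        # One common definition of the velocity profile index: the ratio of
        # peak to mean speed (about 1.57 for a sinusoidal, bell-shaped profile).
        "velocity_profile_index": speed.max() / speed.mean(),
    }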
The final EMA investigation examined the strength of coupling between the tongue and jaw and between the tongue-tip and tongue-back during /ta, sa, la, ka/ syllable repetitions in the group of five participants with AOS. In comparison to the control group, four participants with AOS exhibited significantly stronger articulatory coupling for alveolar and/or velar targets, indicative of decreased functional movement independence. The reduction in functional movement independence was thought to reflect an attempt to simplify articulatory control or, alternatively, a decrease in the ability to differentially control distinct articulatory regions.

To complement the EMA data, EPG was employed to investigate the spatial characteristics of linguopalatal contact during word-initial consonant singletons (i.e., /t, s, l, k/) and consonant clusters (i.e., /kl, sk/) in three participants with AOS. Through the use of quantitative and qualitative analysis techniques, misdirected articulatory gestures (e.g., double articulation patterns), distorted linguopalatal contact patterns (alveolar fricatives), lingual overshoot, and, for one participant, significantly greater spatial variability were identified in the linguopalatal contact data. Pattern of closure appeared to be relatively unimpaired during alveolar plosive and approximant productions, and lingual undershoot and true omission errors were absent. The results were discussed in relation to their impact on phonetic distortion.

A subsequent EPG study examined the temporal and spatial aspects of consonant cluster coarticulation in three participants with AOS. Target stimuli included ‘scar’ and ‘class’. In contrast to what was expected, each of the participants with AOS appeared able to coproduce elements within a consonant cluster. Notably, the pattern of linguopalatal contact did not appear to be influenced by coproduction, although the amount of linguopalatal contact did differ significantly on occasion. Coarticulatory effects were appropriately absent for each of the participants with AOS during alveolar fricative production in ‘scar’; however, the control group and each of the apraxic speakers exhibited place-of-articulation assimilation during velar stop production. The control group and two participants with AOS produced discrete velar and alveolar articulations during ‘class’; one participant with AOS evidenced coarticulatory effects during the /kl/ cluster. The research findings indicated that consonant cluster coarticulation was generally maintained in word-onset position, and it was postulated that future research should investigate consonant cluster coarticulation in consonant sequences that span a syllable boundary.

The EMA and EPG findings presented in this thesis shed light on the underlying physiological nature of articulatory disturbances in AOS and are discussed in the context of contemporary theories of speech motor control.
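As a sketch of the kind of quantitative EPG analysis described above, the amount of linguopalatal contact and an across-repetition spatial variability index might be computed as follows. Both the (R, 8, 8) frame layout and the particular variability index are illustrative assumptions, not necessarily the measures used in the thesis.

import numpy as np

def percent_contact(frame):
    """Amount of linguopalatal contact: % of electrodes activated in one
    8x8 boolean frame (the 62-electrode Reading palate leaves two cells
    unused; ignored here for simplicity)."""
    return 100.0 * frame.mean()

def spatial_variability(frames):
    """Illustrative variability index over R repetitions (frames: (R, 8, 8)):
    for each electrode, the probability of deviating from its majority on/off
    state, averaged over the palate. 0 means identical contact patterns on
    every repetition."""
    p = frames.mean(axis=0)          # per-electrode contact probability
    return float(np.mean(np.minimum(p, 1.0 - p)))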
2.
Kinematic and Acoustic Adaptation in Response to Electromagnetic Articulography Sensor Perturbation
Bartholomew, Emily Adelaide. 18 June 2020.
This study examined kinematic and acoustic adaptation following the placement of electromagnetic articulography (EMA) sensors, which measure speech articulator movements. Sixteen typical native English speakers had eight EMA sensors attached to obtain kinematic data: three on the tongue (front, mid, and back at midline), one on the lower incisors (jaw), two on the lips (one on each lip at midline), and two reference sensors on eyeglass frames worn by the participants. Participants repeated the same sentence stimuli five times at each of four time points (0, 2, 4, and 6 minutes post-attachment) while both acoustic and kinematic data were recorded. Global kinematic measures of tongue activity were computed using articulatory stroke metrics, while point measures were gathered from one syllable in the target sentence. The first two formant frequencies of that syllable were also measured. Statistical analysis revealed several significant changes over time and differences between genders. Syllable speed increased and sentence duration decreased significantly over time. The first formant was significantly lower over time and correlated with a decreased hull area, indicating a higher tongue position and smaller movements as speakers adapted to the sensors. Tongue displacement during syllable production decreased over time, with no significant gender difference for displacement measures. The number of articulatory strokes decreased over time, suggesting improved articulatory steadiness. It can be concluded that participants produced faster, smaller movements over time, but it is not clear how much of the change resulted from kinematic adaptation as opposed to task familiarity. Future research is needed to examine the direct relationship between kinematic, acoustic, and perceptual measures in response to the attachment of these EMA sensors.
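A rough Python sketch of such global tongue-activity measures, under the assumptions that articulatory strokes are delimited by slow local speed minima and that "hull area" refers to the convex hull of the positional trace. Both are plausible readings rather than the study's documented definitions, and the threshold values are placeholders.

import numpy as np
from scipy.signal import argrelmin
from scipy.spatial import ConvexHull

def stroke_count(pos, fs, min_speed=10.0):
    """Count articulatory strokes in an (N, 2) sensor trace (mm) sampled at
    fs Hz, treating near-stationary local speed minima as stroke boundaries."""
    speed = np.linalg.norm(np.gradient(pos, 1.0 / fs, axis=0), axis=1)
    minima = argrelmin(speed, order=max(1, int(0.02 * fs)))[0]
    boundaries = minima[speed[minima] < min_speed]   # near-stops only
    return len(boundaries) + 1

def hull_area(pos):
    """Convex hull area (mm^2) of the positional trace; for 2-D input,
    ConvexHull's .volume attribute is the enclosed area. A shrinking area
    over sessions would indicate a smaller articulatory working space."""
    return ConvexHull(pos).volume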
3.
Contrôle de têtes parlantes par inversion acoustico-articulatoire pour l’apprentissage et la réhabilitation du langage / Control of talking heads by acoustic-to-articulatory inversion for language learning and rehabilitation
Ben Youssef, Atef. 26 October 2011.
Speech sounds may be complemented by displaying the shapes of the speech articulators on a computer screen, producing augmented speech, a signal that is potentially useful in all instances where the sound itself might be difficult to understand, for physical or perceptual reasons. This thesis introduces a system called visual articulatory feedback, in which the visible and hidden articulators of a talking head are controlled from the speaker's speech sound. The motivation of this research was to develop such a system for application to Computer Aided Pronunciation Training (CAPT) in foreign-language learning, or in the domain of speech therapy.

Our approach to this mapping problem is based on statistical models built from acoustic and articulatory data. Two statistical learning methods were developed and evaluated, trained on parallel, synchronous acoustic and articulatory data recorded from a French speaker by means of an electromagnetic articulograph (EMA). The hidden Markov model (HMM) approach combines HMM-based acoustic recognition and HMM-based articulatory synthesis to estimate the articulatory trajectories from the acoustic signal; the Gaussian mixture model (GMM) approach estimates articulatory features directly from the acoustic ones. Evaluation of these models was based on several criteria: the root mean square error (RMSE) between the original and recovered EMA coordinates, the Pearson product-moment correlation coefficient, displays of the articulatory spaces and trajectories, and acoustic and articulatory recognition rates. Experiments indicate that state tying and multiple Gaussians per state in the acoustic HMMs improve the recognition stage, and that updating the articulatory HMM parameters with the minimum generation error (MGE) criterion yields a more accurate inversion than conventional maximum likelihood estimation (MLE) training. In addition, GMM mapping using the MLE criterion is more effective than using the minimum mean square error (MMSE) criterion. Overall, the HMM-based inversion system proved more accurate than the GMM-based one. Furthermore, experiments using the same statistical methods and data showed that the face-to-tongue inversion problem, i.e., predicting tongue shapes from face and lip shapes, cannot be solved in the general case and is impossible for some phonetic classes.

In order to extend the single-speaker system to a multi-speaker speech inversion system, a speaker adaptation method based on maximum likelihood linear regression (MLLR) was implemented. In MLLR, a linear-regression-based transform that adapts the original acoustic HMMs to those of the new speaker is calculated so as to maximise the likelihood of the adaptation data. This speaker adaptation stage was evaluated using an articulatory phonetic recognition system, since no original articulatory data are available for the new speakers. Finally, using this adaptation procedure, a complete visual articulatory feedback demonstrator was developed, which can work for any speaker. This system should be assessed by perceptual tests in realistic conditions.
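The two numerical evaluation criteria named above are standard and simple to state; a minimal sketch of both, computed per EMA channel between original and recovered trajectories (the (T, D) array layout is an assumption for illustration):

import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error per articulatory channel, in that channel's
    units (e.g. mm). y_true, y_pred: (T, D) arrays of T frames, D coords."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))

def pearson(y_true, y_pred):
    """Pearson product-moment correlation per channel."""
    a = y_true - y_true.mean(axis=0)
    b = y_pred - y_pred.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))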