  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A comparative study of language use and its effect on communication interaction patterns in two groups of children with cerebral palsy (speaking and nonspeaking) from Hindi-speaking families in Calcutta

Kaul, Sudha January 1999 (has links)
This study examines the effect of language use on the communication interaction patterns of ten children with cerebral palsy from Hindi-speaking families: five children using AAC and five using speech as their primary mode of communication. Children in both groups were within a chronological age band of six to eleven years, with receptive language levels ranging from the three-word to the four-plus-word level. The study was conducted over a nineteen-month period. Data were collected in four phases by video recording interactions between the children and their facilitators. Two intervention strategies were applied. The first was the reformatting of individual display boards to include vocabulary within the receptive repertoire of each AAC user. The second was a facilitator workshop suggesting strategies that speaking communication partners might use to support and/or augment AAC users' communicative and linguistic skills. A three-tier model of data analysis (Light, in press) was found to be an effective way of examining the data. The diversity of the data provided empirical evidence on the relationship between language use and communicative competence, and on the efficacy of the interventions used. The results confirmed existing evidence in the literature that social intentions and the functional needs of speakers affect communicative functions. In addition, the data provided evidence that communicative context and communication partners have a direct impact on the communicative interaction patterns of AAC users. Children in the study showed developmental trends similar to those of typically developing children, which has important implications for language learning theories in AAC and for classroom practice. The major outcome of the study is empirical evidence that communicative competence in AAC users is enhanced by access to opportunities for developing and using linguistic skills.
This research has added to the existing knowledge base on the communicative competence of children who use AAC by providing evidence from a different cultural (Indian) and linguistic (Hindi) setting.
2

Design and Evaluation of a Vocalization Activated Assistive Technology for a Child with Dysarthric Speech

Thalanki Anantha, Nayanashri 28 November 2013 (has links)
Communication disorders affect one in ten Canadians, and the incidence is particularly high among people with cerebral palsy. A vocalization-activated switch is often explored as an alternative means of communication. However, most commercial speech recognition tools to date have limited capability to accommodate dysarthric speech and are thus often prematurely abandoned. We developed and evaluated a novel vocalization-based access technology as a writing tool for a pediatric participant with cerebral palsy. It consists of a high-quality condenser head-mounted microphone and a custom classifier based on Gaussian mixture modelling (GMM) with mel-frequency cepstral coefficients (MFCCs) as features. The system was designed to discriminate among five vowel sounds while interfaced to an on-screen keyboard. We used response efficiency theory to assess this technology in terms of goal attainment and satisfaction. The participant's primary goal of reducing switch activation time was achieved, with increased satisfaction and lower physical effort compared to her previous access pathway.
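The classification scheme this abstract describes — one Gaussian mixture model per vowel class over MFCC features, with the incoming frames assigned to the best-scoring model — can be sketched as follows. This is an illustrative reconstruction, not the thesis's implementation: it assumes scikit-learn is available and substitutes synthetic "MFCC-like" vectors for real framed audio features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 12-dimensional "MFCC-like" vectors for three vowel classes.
# Real MFCCs would come from framed audio; these only stand in for them.
def make_class_data(mean, n=200, dim=12):
    return rng.normal(loc=mean, scale=0.5, size=(n, dim))

classes = {"a": 0.0, "i": 5.0, "u": -5.0}
train = {v: make_class_data(m) for v, m in classes.items()}

# One GMM per vowel; classification picks the model with the highest
# average log-likelihood over the incoming frames.
models = {v: GaussianMixture(n_components=2, random_state=0).fit(X)
          for v, X in train.items()}

def classify(frames):
    scores = {v: gmm.score(frames) for v, gmm in models.items()}
    return max(scores, key=scores.get)

test_frames = make_class_data(5.0, n=20)  # should look like vowel "i"
print(classify(test_frames))  # → i
```

Per-class GMMs fit naturally here because each vowel needs only a small amount of speaker-specific training data, which matters when the speaker's productions are dysarthric and variable.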
4

Listener Strategies in the Perception of Dysarthric Speech: A thesis submitted in partial fulfilment of the requirements for the Degree of Master of Speech Language Therapy, Department of Communication Disorders, University of Canterbury

Broadmore, Sharon January 2011 (has links)
When listeners are presented with stimuli from multiple speakers rather than a single speaker in a perception experiment, speech recognition accuracy decreases and response time increases. These findings have been demonstrated in studies employing normal (Creelman, 1957; Mullennix & Pisoni, 1990; Nygaard & Pisoni, 1998) and accented speech (Bradlow & Bent, 2008). Perceptual normalisation processes are thought to be partly responsible for this perceptual cost (Bladon, Henton, & Pickering, 1984; Johnson, 2009; Magnuson & Nusbaum, 2007; Mullennix, Pisoni, & Martin, 1989). Interestingly, studies have yet to examine whether the same findings occur when listeners encounter dysarthric speech, a naturally degraded speech signal associated with neurological disorder or disease. It has also been found that when listeners are exposed to multiple speakers with dysarthria, they generally adapt to the dysarthric signal over time, resulting in an improved ability to decipher it (Liss, Spitzer, Caviness, & Adler, 2002; Tjaden & Liss, 1995a). However, the rate of this adaptation when listeners are exposed to a single speaker has yet to be examined. This study aimed to determine: (1) whether the intelligibility of dysarthric speech (in this case, hypokinetic dysarthria associated with Parkinson's disease) varied across single-speaker versus multi-speaker conditions; and (2) whether intelligibility increased over time when a listener was exposed to a single speaker with dysarthria. To answer these questions, sixty young healthy listeners were randomly allocated to one of four experimental conditions: one multiple-speaker and three single-speaker conditions. Each listener transcribed 60 three- to five-word phrases over one session, and the results were scored for percent words correct.
Contrary to expectations, there was no significant difference in percent intelligibility scores between the multi-speaker listener group and the single-speaker listener conditions. In addition, perceptual learning effects across the rating period were identified for only two of the three single-speaker listener groups. The absence of significant findings in the multi-speaker versus single-speaker comparison may be explained by further analysis of within-speaker variability. Acoustic analysis of the speakers may also shed light on the reduced perceptual learning that occurred in one of the single-speaker groups. Larger numbers of speakers and experimental phrases would be beneficial in exploring the trends seen in the intelligibility of the single-speaker groups.
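The percent-words-correct score used above is a simple intelligibility metric. A minimal sketch of one common way to compute it — counting target words reproduced anywhere in the transcription, which is a simplification of formal scoring protocols that align words positionally:

```python
def percent_words_correct(target: str, transcription: str) -> float:
    """Percentage of target words the listener reproduced.

    Position-insensitive matching: each target word is credited if an
    unused copy of it appears anywhere in the transcription. Formal
    scoring protocols may instead align the two word sequences.
    """
    target_words = target.lower().split()
    remaining = transcription.lower().split()
    correct = 0
    for w in target_words:
        if w in remaining:
            correct += 1
            remaining.remove(w)  # each heard word can match only once
    return 100.0 * correct / len(target_words)

print(percent_words_correct("the cat sat on the mat",
                            "the cat sat on a mat"))  # → 83.33…
```

For a study like this one, each listener's 60 phrase scores would simply be averaged per condition before group comparison.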
5

Production Knowledge in the Recognition of Dysarthric Speech

Rudzicz, Frank 31 August 2011 (has links)
Millions of individuals have acquired, or were born with, neuro-motor conditions that limit the control of their muscles, including those that manipulate the articulators of the vocal tract. These conditions, collectively called dysarthria, result in speech that is very difficult to understand despite being generally syntactically and semantically correct. This difficulty is not limited to human listeners; it also adversely affects the performance of traditional automatic speech recognition (ASR) systems, which in some cases are completely unusable by the affected individual. This dissertation describes research into improving ASR for speakers with dysarthria by incorporating knowledge of their speech production. The document first introduces theoretical aspects of dysarthria and of speech production, and outlines related work in these combined areas within ASR. It then describes the acquisition and analysis of the TORGO database of dysarthric articulatory motion and demonstrates several consistent behaviours among speakers in this database, including predictable pronunciation errors. Articulatory data are then used to train augmented ASR systems that model the statistical relationships between vocal tract configurations and their acoustic consequences. I show that dynamic Bayesian networks augmented with instantaneous theoretical or empirical articulatory variables outperform even discriminative alternatives. This leads to work incorporating a more rigid theory of speech production, task-dynamics, which models the high-level and long-term aspects of speech production. For this task, I devised an algorithm for estimating articulatory positions given only acoustics that significantly outperforms the state of the art. Finally, I present ongoing work on the transformation and re-synthesis of dysarthric speech in order to make it more intelligible to human listeners.
This research represents definitive progress towards the accommodation of dysarthric speech within modern speech recognition systems. However, much research remains to be undertaken, and I conclude with some thoughts on which paths we might now take.
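The acoustic-to-articulatory inversion task mentioned above — estimating articulator positions given only acoustics — can be illustrated with a toy baseline. The thesis's task-dynamic estimator is far richer; this sketch only shows the shape of the problem, using a synthetic linear forward model and an ordinary least-squares inverse, all names and dimensions being illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: articulatory positions (e.g. tongue/lip coordinates)
# generate acoustic features linearly, plus a little noise.
n, n_artic, n_acoust = 500, 6, 13
A_true = rng.normal(size=(n_artic, n_acoust))      # forward map
artic = rng.normal(size=(n, n_artic))              # "measured" positions
acoust = artic @ A_true + 0.01 * rng.normal(size=(n, n_acoust))

# Inversion: learn the mapping acoustics -> articulation by least squares.
W, *_ = np.linalg.lstsq(acoust, artic, rcond=None)

artic_est = acoust @ W
rmse = float(np.sqrt(np.mean((artic_est - artic) ** 2)))
print(rmse)  # small: the linear inverse recovers positions closely
```

Real inversion is much harder because the acoustics-to-articulation mapping is nonlinear and one-to-many, which is exactly why constraining it with a production theory such as task-dynamics helps.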
7

Experiences of communication problems after a stroke: An analysis of autobiographies (original title: Upplevelser av kommunikationsproblem efter en stroke: En analys av självbiografier)

Hindebo, Malin, Niklasson, Lisa January 2011 (has links)
Every year about 30,000 people suffer a stroke. A stroke is caused by either a bleed or a blood clot, and depending on where the damage occurs, the patient can be left with different types of difficulties. About 40% of all stroke patients suffer from some kind of speech and/or communication difficulty, such as aphasia or dysarthria. For hospital staff it is important to know how to communicate with these patients and to gain a deeper understanding of their needs. Aim: The aim of this study is to describe, from a patient's perspective, experiences of the communication difficulties that can follow a stroke. Method: A content analysis with a qualitative approach was used to analyse five autobiographies. Results: The results showed how stroke patients experienced their communication difficulties and how they felt about the hospital staff's treatment of those difficulties. During the analysis two major themes emerged: suffering and wellbeing. Eight subthemes also emerged: encounters with hospital staff with a poor manner, feeling locked in, frustration, loss of identity, sorrow, shock, encounters with hospital staff with a good manner, and gratefulness. Conclusion: For hospital staff it is important to know how to communicate with stroke patients suffering from communication difficulties in order to give them good care. It is necessary to be aware of these patients' own experiences of their communication difficulty in order to communicate with them appropriately.
8

Objective Assessment of Dysarthric Speech Intelligibility

Hummel, Richard 28 September 2011 (has links)
The de facto standard for dysarthric intelligibility assessment is a subjective intelligibility test performed by an expert. Subjective tests are often costly, biased, and inconsistent because of their perceptual nature. Automatic objective assessment methods, in contrast, are repeatable and relatively cheap. Objective methods can be divided into two subcategories: reference-free and reference-based. Reference-free methods employ estimation procedures that do not require information about the target speech material. This potentially makes the problem more difficult, and consequently there is a deficit of research into reference-free dysarthric intelligibility estimation. In this thesis, we focus on the reference-free intelligibility estimation approach. To make the problem more tractable, we focus on the dysarthrias of cerebral palsy (CP). First, a popular standard for blind speech quality estimation, the ITU-T P.563 standard, is examined for possible application to dysarthric intelligibility estimation. The internal structure of the standard is discussed, along with the relevance of its internal features to intelligibility estimation. Afterwards, several novel features expected to relate to some of the acoustic properties of dysarthric speech are proposed. The proposed features are based on the high-order statistics of parameters derived from linear prediction (LP) analysis and a mel-frequency filterbank. To gauge the complementarity of the P.563 and proposed features, a linear intelligibility model is proposed and tested. Intelligibility is expressed as a linear combination of acoustic features, which are selected from a feature pool using speaker-dependent and speaker-independent validation methods. An intelligibility estimator constructed with only P.563 features serves as the baseline.
When the proposed features are added to the feature pool, performance improves substantially for both speaker-dependent and speaker-independent methods compared to the baseline. The results also compare favourably with those reported in the literature. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-09-28
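The "high-order statistics of frame-level parameters" idea behind the proposed features can be sketched generically. This is not the thesis's exact feature recipe — the input here is any (frames × coefficients) matrix, such as LP coefficients or mel-filterbank energies, and the feature vector is the per-coefficient skewness and excess kurtosis across frames:

```python
import numpy as np

def high_order_stats(frames: np.ndarray) -> np.ndarray:
    """Per-coefficient skewness and excess kurtosis across frames.

    `frames` has shape (n_frames, n_coeffs); the output concatenates the
    third- and fourth-order standardized moments of each coefficient,
    giving 2 * n_coeffs features. Dysarthric speech is expected to shift
    these distributions relative to typical speech.
    """
    mu = frames.mean(axis=0)
    sd = frames.std(axis=0)
    z = (frames - mu) / sd
    skew = (z ** 3).mean(axis=0)
    kurt = (z ** 4).mean(axis=0) - 3.0  # excess kurtosis: 0 for Gaussian
    return np.concatenate([skew, kurt])

rng = np.random.default_rng(2)
feats = high_order_stats(rng.normal(size=(1000, 10)))
print(feats.shape)  # (20,) -- two statistics per coefficient
```

Features like these would then join the P.563 outputs in the feature pool from which the linear intelligibility model selects its terms.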
9

Free Classification of Dysarthric Speech: A Taxonomical Approach

January 2012 (has links)
Often termed the "gold standard" in the differential diagnosis of dysarthria, the etiology-based Mayo Clinic classification approach has been used nearly exclusively by clinicians since the early 1970s. However, this descriptive method results in a distinct overlap of perceptual features across etiologies, limiting the clinical utility of the system for differential diagnosis. Acoustic analysis may provide a more objective measure for improving the overall reliability (Guerra & Lovely, 2003) of classification. This study investigates the potential use of a taxonomical approach to dysarthria. Its purpose was to identify a set of acoustic correlates of the perceptual dimensions used to group similarly sounding speakers with dysarthria, irrespective of disease etiology. The study used a free classification auditory-perceptual task to identify a set of salient speech characteristics displayed by speakers with varying dysarthria types and perceived by listeners, which was then analyzed using multidimensional scaling (MDS), correlation analysis, and cluster analysis. In addition, discriminant function analysis (DFA) was conducted to establish the feasibility of using the dimensions underlying perceptual similarity in dysarthria to classify speakers into both listener-derived clusters and etiology-based categories. The hypothesis was that, because of the presumed predictive link between the acoustic correlates and listener-derived clusters, the DFA classification results should resemble the perceptual clusters more closely than the etiology-based (Mayo system) classifications.
The MDS revealed three dimensions, significantly correlated with (1) metrics capturing rate and rhythm, (2) intelligibility, and (3) all of the long-term average spectrum metrics in the 8000 Hz band, which have been linked to degree of phonemic distinctiveness (Utianski et al., February 2012). A qualitative examination of listener notes supported the MDS and correlation results, with listeners overwhelmingly referring to speaking rate/rhythm, intelligibility, and articulatory precision while performing the free classification task. Additionally, the acoustic correlates revealed by the MDS and subjected to DFA indeed predicted listener group classification. These results support the use of acoustic measurement as representative of listener perception, and represent the first phase in supporting a perceptually relevant taxonomy of dysarthria. / Dissertation/Thesis / M.S. Communication Disorders 2012
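The MDS step above takes a listener-derived dissimilarity matrix and embeds speakers in a low-dimensional perceptual space. A minimal sketch of classical (Torgerson) MDS, assuming the free-classification data have already been converted to a dissimilarity matrix (e.g. from how often two speakers were grouped together):

```python
import numpy as np

def classical_mds(D: np.ndarray, dims: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed points so that Euclidean
    distances approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]        # keep largest eigenvalues
    L = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * L                    # coordinates, one row per point

# Points on a line: MDS recovers their spacing up to rotation/sign.
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, dims=1)
print(np.abs(X[2, 0] - X[1, 0]))  # ≈ 2, the original spacing
```

In the study's pipeline, the recovered dimensions are then correlated against acoustic metrics (rate/rhythm, intelligibility, spectral measures) to interpret what each perceptual axis captures.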
10

Audiovisual Perception of Dysarthric Speech in Older Adults Compared to Younger Adults

January 2014 (has links)
Everyday speech communication typically takes place face-to-face. Accordingly, perceiving speech is a multisensory phenomenon involving both auditory and visual information. This investigation examines how visual information influences recognition of dysarthric speech, and whether that influence depends on age. Forty adults participated in a study that measured intelligibility (percent words correct) of dysarthric speech in auditory versus audiovisual conditions. Participants were separated into two groups, older adults (ages 47 to 68) and young adults (ages 19 to 36), to examine the influence of age. Findings revealed that all participants, regardless of age, improved their ability to recognize dysarthric speech when visual speech was added to the auditory signal. The magnitude of this benefit, however, was greater for older adults than for younger adults. These results inform our understanding of how visual speech information influences the understanding of dysarthric speech. / Dissertation/Thesis / M.S. Speech and Hearing Science 2014
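The audiovisual benefit in a design like this is typically computed as the per-participant gain from the auditory-only to the audiovisual condition, compared across age groups. A sketch with entirely hypothetical percent-correct scores (not the study's data):

```python
import numpy as np

# Hypothetical per-participant percent-words-correct scores in the
# auditory-only (A) and audiovisual (AV) conditions, by age group.
scores = {
    "young": {"A": np.array([52.0, 48.0, 55.0, 50.0]),
              "AV": np.array([58.0, 55.0, 60.0, 57.0])},
    "older": {"A": np.array([45.0, 40.0, 47.0, 44.0]),
              "AV": np.array([60.0, 57.0, 63.0, 58.0])},
}

# Audiovisual benefit: mean AV minus A gain within each group.
benefit = {g: float(np.mean(s["AV"] - s["A"])) for g, s in scores.items()}
print(benefit)  # a larger value for "older" mirrors the reported pattern
```

In the actual study this group difference would be tested inferentially (e.g. an age-by-condition interaction), not just compared descriptively.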