111

Perceptions of working memory use in communication by users of AAC

Danielson, Priscilla M. 22 June 2016 (has links)
Augmentative and alternative communication (AAC) is defined as "all forms of communication (other than oral speech)…used to express thoughts, needs, wants and ideas" ("Augmentative and Alternative Communication (AAC)," 2012). Working memory is a temporary cognitive process that briefly maintains and manipulates information while it is being encoded as part of long-term memory (Engle, Nations, & Cantor, 1990; "Introduction to Working Memory," 2007). It has been suggested that, given the unique skill set and needs of users of AAC systems, the design of these systems should reflect knowledge gleaned from the cognitive sciences (Light & Lindsay, 1991), with training and implementation of AAC incorporating an understanding of the cognitive processes impacting memory, learning, and visual processing (Light & Lindsay, 1991; Wilkinson & Jagaroo, 2004). This study examined how users of AAC managed and perceived the cognitive load associated with working memory demands while communicating, and which specific strategies and/or design features users of AAC perceived they used during conversation. Results revealed a large amount of variability in participants' responses. Length of symbol/word sequences, word prediction, seeing the message as it is being created, attention to the conversational topic, and attempting to remember what the conversational partner said were judged as most important for the use of a speech-generating device and for success and message completion in conversation. Errors in conversation while using a speech-generating device, and stressors during the conversational process, appeared to be most closely related to the reported lack of time to create messages and the time it takes to create them. Users of AAC did not report a high frequency of active attention to working memory processes and design features.
112

Self-reported and partner-reported functional communication and their relation to language and non-verbal cognition in mild to moderate aphasia

Messamer, Paula J. 03 June 2016 (has links)
Purpose: Non-verbal cognition and language functions were examined in adult stroke survivors with aphasia. The purpose of the study was twofold: 1) to examine the relationship between self-reported outcomes from people with aphasia (PwA), measures of non-verbal cognition (Delis-Kaplan Executive Function System (D-KEFS); Delis, Kaplan, & Kramer, 2001), and measures of language (Western Aphasia Battery-Revised (WAB-R); Kertesz, 2007; Boston Naming Test, Second Edition (BNT-2); Kaplan, Goodglass, & Weintraub, 2001), and 2) to examine these same relationships using partner-reported outcomes for the same group of PwA. The study used the Aphasia Communication Outcome Measure (ACOM; Doyle et al., 2013) to gather both self-reported ACOM data and partner-reported ACOM data (ratings of the person with aphasia's communication made by a regular conversation partner).

Method: Seventeen participants with aphasia underwent an extensive test battery including measures of functional communication, non-verbal cognition, and language impairment. In addition, 16 of their regular communication partners rated the participants' functional communication performance.

Results: Self-reported functional communication was strongly related to the number of errors committed on the D-KEFS design fluency test (r = .81, p = .001). Furthermore, on a modified form of the D-KEFS design fluency test (in which the examinee is allowed unlimited time), the proportion of errors contributed significantly to a two-predictor linear regression model; together, the two predictors accounted for 66% of the variance in self-reported functional communication ratings. These results suggest that non-verbal cognition may play an important role in functional communication for people with mild to moderate aphasia. By contrast, self-reported functional communication was uncorrelated with aphasia severity (r = .04, p = .88), naming performance on either the WAB-R (r = .059, p = .823) or the BNT-2 (r = .097, p = .713), and category fluency (r = .086, p = .741). Partner-reported functional communication was highly correlated with the WAB-R naming subtest (r = .71, p = .02) and with performance on the BNT-2 (r = .56, p = .026).

Partner-reported functional communication was also strongly predicted by the number of animals named during the WAB-R category fluency task (r = .782, p = .000). A linear regression model including WAB-R category fluency accounted for 61.1% of the variance in partner-reported ratings; adding naming as a second predictor did not significantly improve the model (F change = 2.18, p = .163). By contrast, none of the non-verbal cognition measures were useful predictors of partner-reported functional communication. These results suggest that aphasia severity plays an important role in partner ratings of functional communication, whereas non-verbal cognition does not.

Taken together, these results suggest that PwA and their partners rely on different aspects of communication when judging functional communication.

Further work is needed to explore the use of patient-reported outcome (PRO) measures and to identify factors that contribute to self-reported functional communication. The discussion addresses the appropriateness of using PRO measures in aphasia and the use of surrogate reports.
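The two-predictor regression reported above can be illustrated with a minimal sketch. This is not the study's analysis or data: the variable names and values below are hypothetical placeholders standing in for the D-KEFS design-fluency error measures and self-reported ACOM scores.

```python
# Hypothetical illustration only; not the study's data or exact model.
import numpy as np
import statsmodels.api as sm

# Placeholder per-participant measures (invented values)
dkefs_errors = np.array([2, 5, 1, 7, 3, 6, 4, 8])            # design-fluency errors
modified_error_prop = np.array([0.10, 0.30, 0.05, 0.35,
                                0.20, 0.25, 0.15, 0.45])      # error proportion, untimed version
acom_self_report = np.array([55, 48, 60, 40, 52, 45, 50, 38]) # self-reported ACOM scores

# Two-predictor ordinary least squares regression with an intercept
X = sm.add_constant(np.column_stack([dkefs_errors, modified_error_prop]))
model = sm.OLS(acom_self_report, X).fit()

# R-squared is the proportion of variance in the self-ratings explained
# by the two predictors (the thesis reports roughly .66 for its data).
print(model.rsquared)
print(model.params)
```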
113

The impact of deep-brain stimulation on speech comprehensibility and swallowing in patients with idiopathic Parkinson's disease

Ryder, David E. 03 May 2016 (has links)
Objective: This pilot study was designed to assess speech and swallowing characteristics of participants with idiopathic Parkinson's disease (IPD) before deep brain stimulation surgery of the subthalamic nucleus (DBS-STN), after the DBS-STN surgery, and at follow-up evaluation sessions.

Method: A within-participant, single-subject experimental A-B-A-A design was used to measure changes in the dependent variables for each participant. The primary dependent variables were intelligibility scores for words and sentences; vowel space area (VSA); vocal sound pressure level (dB SPL) of sustained vowels, single words, and contextual speech; Multi-Dimensional Voice Program (MDVP) analysis of the phonatory stability of sustained vowel phonation; lip pressure; tongue tip to alveolar ridge pressure; maximum inspiratory pressure (MIP); maximum expiratory pressure (MEP); and diadochokinetic (DDK) rate. The secondary dependent variables were duration of sustained vowel phonation, visual analog scales (VAS) for communicative and swallowing difficulties, the EAT-10 swallowing questionnaire, and a qualitative narrative of life with IPD before and after the DBS-STN surgery.

Results: DBS-01 had significant declines in intelligibility for individual words, but no statistically significant changes for complete sentences. The VSA declined over the course of the study. The MDVP analyses indicated general, though not significant, declines in phonatory stability. There was a statistically significant increase in dB SPL for sustained vowel phonation, but overall declines in loudness for connected speech. The duration of sustained vowel phonation increased, and the DDK rate varied across the experiment. Left lip and tongue pressures showed overall declines, but right and center lip pressures increased. The VAS for communicative difficulties revealed worsening of symptoms. The VAS and the EAT-10 questionnaire for swallowing difficulties both recorded worsening of symptoms after surgery, with symptom improvements later on. The timed swallow test did not show any meaningful impairment in drinking or eating.

DBS-02 had statistically significant gains in intelligibility for individual words after the DBS-STN surgery, but statistically significant declines later on. The changes in the intelligibility of complete sentences were not significant. The VSA contracted after the surgery but increased afterwards. The MDVP analyses indicated an overall significant increase in phonatory stability. The dB SPL showed a statistically significant increase for sustained vowel phonation, but connected speech loudness had mixed results. The duration of sustained vowel phonation increased after surgery but declined later on. The DDK rate varied across the experiment. Lip and tongue pressures showed overall increases. The VAS for communication difficulties revealed an overall increase in communicative abilities. The VAS and the EAT-10 questionnaire for swallowing difficulties both recorded a decrease in symptoms after surgery, and an increase later on. The timed swallow test did not show any meaningful impairment in drinking or eating.

Conclusions: For DBS-01, the overall result was that the DBS-STN surgery and electrode adjustments were not apparently beneficial to speech and swallowing symptoms, although the delay in assessment after the surgery made it difficult to distinguish the effects of the surgery from progressive IPD symptoms. For DBS-02, the overall result was that the DBS-STN surgery was beneficial to speech and swallowing symptoms in the short term, although later progression of IPD symptoms, as well as electrode adjustments, likely caused later declines.
114

Listener Ratings and Acoustic Characteristics of Intonation Contours Produced by Children with Cochlear Implants and Children with Normal Hearing

Barbu, Ioana 26 July 2016 (has links)
Cochlear implants (CIs), although effective in restoring auditory sensation for deaf individuals, convey limited fundamental frequency (F0, or pitch) and temporal fine structure information. Consequently, many aspects of speech perception are significantly compromised. It is reasonable, then, to suspect that with limited access to the F0 and fine temporal structure of speech, the ability of children with cochlear implants (CWCI) to produce intonation patterns would be affected as well. Therefore, perceptual and acoustic analyses were conducted to examine the production of intonation patterns by CWCI to signal yes/no question and statement contrasts, as compared to an age-matched control group of children with normal hearing (CWNH). Fourteen CWCI participated in the study, ranging in age from 3;7 to 7;5 years; the 14 CWNH were between the ages of 3;4 and 7;4 years. Statements and questions were elicited using an innovative methodology during a role-play session and were digitally recorded. The elicited productions were parsed, separate files were created for each utterance, and the utterances were then randomly presented to a group of 10 normal-hearing adult listeners via headphones. Listeners rated the intonation pattern of each production as ranging from falling to rising using a visual analog scale displayed on a computer screen; these ratings constituted the listener judgment data. For the acoustic analysis, the final two syllables of each utterance were identified, and the beginning and end of each vocalic portion of the syllable (VPS) were marked using Praat software, version 5.3.51 (Boersma and Weenink, 2013). Mean F0 and intensity measures of the VPS were extracted. The results from the listener judgment task revealed that both CWCI and CWNH could distinctively produce rising and falling intonation contrasts to signal a question or a statement. Results from the acoustic analyses suggested a systematic distinction in F0, and to a lesser extent in intensity, between statements and questions. Examination of the relation between acoustic characteristics and adult listeners' judgments of intonation revealed large, significant relationships between listener judgments and final-syllable F0, as well as F0 and intensity changes between the final and penultimate syllables. Future research directions and clinical implications for prosody evaluation and intervention are discussed.
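For a concrete picture of the acoustic measurements described above, the following sketch shows how mean F0 and intensity might be extracted from a marked vocalic portion. The thesis used Praat 5.3.51 directly; this example instead uses the Python parselmouth wrapper around Praat, and the file name and segment boundaries are hypothetical placeholders, not the study's materials.

```python
# Hypothetical illustration of extracting mean F0 and intensity for a
# marked vocalic portion of a syllable (VPS); not the study's script.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("utterance_01.wav")   # hypothetical recording
vps_start, vps_end = 0.82, 1.05               # hypothetical VPS boundaries, in seconds

# Isolate the marked vocalic portion
segment = snd.extract_part(from_time=vps_start, to_time=vps_end)

# Mean F0 over the segment (unvoiced frames are ignored by Praat's "Get mean")
pitch = segment.to_pitch()
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")

# Mean intensity over the segment, energy-averaged as in Praat's default
intensity = segment.to_intensity()
mean_intensity = call(intensity, "Get mean", 0, 0, "energy")

print(f"mean F0: {mean_f0:.1f} Hz, mean intensity: {mean_intensity:.1f} dB")
```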
115

The use of microcomputers in home-based remediation of dysphasic stroke victims

Petheram, Brian Leslie January 1992 (has links)
No description available.
116

The effects of augmented input on receptive and expressive language for native augmentative and alternative communication (AAC) users during shared storybook readings

Chipinka, Megan 04 January 2017 (has links)
This pre-experimental, pre- and post-treatment single-case study evaluated the effects of augmentative and alternative communication (AAC) modeling during shared storybook readings. Aided AAC modeling interventions provide verbal and visual language models to support language comprehension and use for children with complex communication needs (CCN). The study measured four aspects of change before and after the AAC modeling phase: a) the number of communicative turns taken by the AAC user; b) the complexity and length of the initiations and responses made by the AAC user; c) accuracy in responding to comprehension questions following the story; and d) the parent participant's perceptions of the intervention. The results indicated that when aided AAC modeling was implemented, the child participant demonstrated increases in the number of communicative turns, accuracy in answering comprehension questions, comprehension of story grammar terminology, and production of story retells.
117

An investigation of the effectiveness of language retraining methods with aphasic stroke patients

Lincoln, Nadina B. January 1980 (has links)
Four main experiments were conducted to investigate the effectiveness of language treatment methods with aphasic stroke patients. Experiment 1 was designed to compare an operant speech training procedure devised by Goodkin (1966) with speech therapy and with an attention placebo treatment. Twenty-four patients with moderate aphasia (35th to 65th percentile on the PICA) received four weeks of speech therapy and four weeks of either operant training or non-specific treatment. Results indicated no significant differences between the treatments. Patients showed significant improvement in language abilities, but this was unrelated to age, months post onset, or handedness. Experiment 2 was a preliminary investigation of speech therapy with eighteen severe aphasics (below the 35th percentile on the PICA). Patients showed significant improvement in language abilities, but this was unrelated to age, months post onset, or amount of speech therapy received. In Experiment 3, operant training and an attention placebo were each given for four weeks, in addition to speech therapy, to twelve severe aphasics. No significant differences occurred between treatments, and patients showed significant change that was unrelated to age or months post onset. Experiment 4 compared the treated patients in Experiments 1 and 2 with a no-treatment control group. Results indicated no significant differences between the groups over a four-week interval. Three subsidiary experiments were carried out to assess the reliability of some of the assessment procedures used: the shortened version of the Token Test, the Object Naming Test, and the Speech Questionnaire. Language retraining methods, as used at Rivermead Rehabilitation Centre, were shown not to improve language abilities more than attention placebo treatments or no treatment. Patients' language abilities improved, but this was unrelated to biographical variables such as age, months post onset, and handedness.
118

Language modality during interactions between hearing parents learning ASL and their deaf/hard of hearing children

Brown, Lillian Mayhew 19 June 2019 (has links)
Research regarding language and communication modality in deaf or hard-of-hearing children and their parents is limited. Previous research has often treated modality as any visual, gestural, or tactile communication, rather than as distinct languages of different modalities. This study examined language and communication modality in hearing parents who have made a commitment to learning American Sign Language (ASL) and who use both ASL and spoken English to communicate with their deaf or hard-of-hearing children. Nine hearing parents and their deaf/hard-of-hearing children participated in naturalistic play sessions. The play sessions were recorded and transcribed for ASL, spoken English, and communicative interactions. Analysis of results indicated a positive correlation between the amount of ASL (tokens and duration of time) used by parents and by their children. No relationship was found between the amount of spoken English (tokens and duration of time) used by parents and their children, nor between the amount (frequency and percentage) of bimodal utterances used by parents and their children. Furthermore, there was no relationship between families using the same versus different dominant language modality and their sustained interactions (frequency, duration, and number of turns). Findings indicated a relationship between parent and child language in a visually accessible language, ASL, but not in spoken language. Data regarding bimodal utterances suggested that parents and children successfully kept ASL and spoken English separate during play. Finally, analysis of communicative interactions demonstrated similarities between parent-child dyads with the same dominant communication modality and those with different dominant modalities, suggesting that successful communication is possible despite language modality differences. Overall, findings from this study illustrated that hearing parents can successfully learn and use languages of different modalities with their deaf/hard-of-hearing children.
119

A thematic apperception comparison of stuttering and non-stuttering children

Isserow, Rachelle R. January 1957 (has links)
Thesis (Ed.M.)--Boston University
120

Three approaches to articulation errors of kindergarten children

Haage, Constance Lynn January 2010 (has links)
Digitized by Kansas Correctional Industries
