11

Single-route and dual-route approaches to reading aloud difficulties associated with dysphasia

Mack, S. K. January 1999 (has links)
The study of reading aloud is currently informed by two main types of theory: modular dual-route and connectionist single-route. One difference between the two theories is the type of word classification system which they favour. Dual-route theory employs the regular-irregular dichotomy of classification, whereas single-route theory considers body neighbourhoods to be a more informative approach. This thesis explores the reading aloud performance of a group of people with dysphasia from the two theoretical standpoints by employing a specifically prepared set of real and pseudoword stimuli. As well as being classified according to regularity and body neighbourhood, all the real word stimuli were controlled for frequency. The pseudowords were divided into two groups, common pseudowords and pseudohomophones, and classified according to body neighbourhood. There were two main phases to the study. In the first phase, the stimuli were piloted and the response time performances of a group of people with dysphasia and a group of matched controls were compared. In the second phase, a series of tasks was developed to investigate which means of word classification best explained the visual lexical decision and reading aloud performance of people with dysphasia. The influence of word knowledge was also considered. The data were analysed both quantitatively and qualitatively. The quantitative analysis of the number of errors made indicated that classification of items by body neighbourhood and frequency provided the more comprehensive explanation of the data. Investigation of the types of errors that were made did not find a significant relationship between word type and error type, but again the results indicated that the influence of frequency and body neighbourhood was stronger than that of regularity. The findings are discussed both in terms of their implications for the two theories of reading aloud and their relevance to clinical practice.
12

Communication in young people with intellectual impairments : the influence of partnership

Walton, Anne P. January 2002 (has links)
Adults with intellectual impairments experience frequent communication breakdown in their everyday interactions. This can result from impairment of the linguistic skills required for effective communication and/or difficulties dealing with non-verbal information. Problems also exist, however, in the way that some non-impaired speakers, such as care providers, approach these communicative episodes. This thesis investigates communication in young adults with intellectual impairments with three different communication partners. These were a care provider, a student and a peer with intellectual impairments. Student partners were previously unknown to the main participants and not experienced in communicating with people with intellectual impairments. Communication structure and process are investigated according to the number of words and turns used to complete a co-operative problem-solving task and the types of conversational acts used by speakers and listeners. Non-verbal communication is investigated through the use of one non-verbal signal, gaze, during the task dialogues. An interactionist approach is taken to communication, where outcome or success is viewed as a product of the collaborative efforts of speakers and listeners. Communication is seen as multi-modal and involving the exchange of information via the verbal and non-verbal channels. The results show that when both parties were intellectually impaired, performance was poorest. More surprisingly, dyads including a student partner communicated more effectively and efficiently than those in which the partner was a carer. One reason for this may be that carers used more complex, open questions to introduce new information into the task, and these were distracting rather than useful. Overusing open questions may be problematic for this population and less effective at establishing shared understanding than when listeners check their own interpretation of previous messages, a strategy preferred by student partners. Non-verbal signals can help to ease constraints on communication by providing interlocutors with feedback information on the levels of mutual understanding.
13

An investigation of motor control for speech in phonologically delayed children, normally developing children and adults

Waters, Daphne Margaret January 1992 (has links)
Difficulty with phonological acquisition in children is currently widely regarded as a linguistic/cognitive disability but, since speech is a motor as well as a linguistic activity, speech motor control abilities must have a bearing on acquisition of the speech sound system. On the basis of previous studies, measures of speech rate and temporal variability are regarded as indices of level of speech motor control ability. Evidence was sought concerning the possibility that slow maturation of speech motor control abilities may underlie phonological delay in children. Speech timing characteristics were compared in 12 adult speakers (Group A), 12 normal preschool children (Group N, aged 3;8 to 4;10 years, mean age 4;3 years) and 12 age-matched phonologically delayed children (Group P). Measurements were made of phrase and segment durations and temporal variability in multiple tokens of an experimental phrase. The phonological structure of the speech data was also analysed and a measure of speech rate (in segments/second) was derived. The N Group were found to exhibit slower speech rates, generally longer mean phrase and segment durations and higher levels of temporal variability than the A Group. The P Group exhibited significantly slower speech rates than the N Group and there was a trend towards longer phrase and segment durations in the P Group data. With one marginal exception, no significant differences were found between the two child groups on measures of temporal variability. The weight of evidence indicated that speech motor control was less mature in the P Group than in the N Group. The findings lend some support to the view that differences in speech motor maturity may be implicated in phonological acquisition differences. Some implications for the design of therapy procedures are explored. The importance of analysing and taking account of the phonological form of speech data in investigations of speech rate is highlighted.
14

An instrumental study of alveolar to velar assimilation in slow and fast speech using EPG and EMA techniques

Ellis, Lucy A. January 2000 (has links)
This thesis evaluates the widely-held notion that place assimilation is (i) more frequent at faster rates of speech and (ii) a gradual phonetic process. The latter view is based on previous small-scale EPG studies which showed evidence of partial alveolar assimilations lacking complete stop closure on the alveolar ridge but with a residual tongue body gesture. For the present study, EPG data from 10 speakers were collected. Two experimental sequences, /n#k/ and /ŋ#k/, embedded in meaningful sentences, were produced by subjects 10 times each in a slow/careful style and 10 times each in a fast/casual style. The first sequence captures the potential site of assimilation and the second is a lexical velar-velar sequence with which cases of complete assimilation can be compared. The results showed that, overall, assimilation was more frequent in fast speech than in careful speech, although timing analysis revealed that assimilation is not the automatic consequence of rate-induced changes in intergestural timing of /n#k/. In fast speech, six of the ten speakers showed relatively consistent assimilatory preferences: they either produced only complete assimilations or they never assimilated. However, four speakers showed considerable intra-speaker variability. Two of the four produced either full alveolars or complete assimilations in the manner of a categorical opposition (complete assimilations were indistinguishable from control /ŋ#k/ sequences). The other two speakers produced a continuum of forms that could be ranked from full alveolars to complete assimilations via partial assimilations. Using the same stimuli, a follow-up combined EPG/EMA study was carried out, the purpose of which was to look for reduced coronal gestures undetectable in tongue-palate contact-only data. Two 'categorical' assimilators were re-recorded and these gestures were not found. This supports the interpretation that for some speakers assimilation is determined at a higher level through the application of a cognitive rule, while for others variation is 'computed on-line' during speech production itself. Current phonological models of assimilation are found to be unable to capture both gradient effects and more radical feature-sized substitutions under a single framework.
15

An investigation into the ability of adults with post-stroke aphasia to learn new vocabulary

McGrane, Helen January 2006 (has links)
Recent studies have established that adults with post-stroke aphasia can learn to establish connections between familiar words and abstract images, and between nonwords and familiar objects. What had not been investigated was whether adults with aphasia could learn nonwords with abstract images/novel meanings, i.e. new vocabulary. The main objective of this study was to investigate whether adults with post-stroke aphasia could learn ‘novel’ word forms with ‘novel’ word meanings, despite phonological and/or semantic impairment. Specific research questions included: Can post-stroke adults with aphasia learn new vocabulary? If so, what factors affect their capacity to learn? Is it possible to predict which individuals will learn most successfully? The methodology was developed using preliminary studies with both adults with normal language and cognitive functioning and post-stroke non-aphasic and aphasic adults. It incorporated learning theory and a cognitive neuropsychological model of language. A range of assessments was used to facilitate the capture of new learning. ‘New learning’ was measured not only in terms of the accurate production of the new stimuli but also the recognition and knowledge of the word forms and meanings of this new vocabulary. In the main investigation, 20 novel word forms with 20 novel meanings were taught to 12 aphasic adults (< 65 years), over a four-day period, using an errorless learning paradigm. Immediate recall of these newly learnt representations was investigated as well as delayed recall. Quantitative and qualitative results from a case series of 12 participants are presented and discussed. Despite semantic and phonological difficulties, all but three participants demonstrated substantial learning of the new vocabulary. The participants’ range of learning ability (from both immediate and delayed recall data) was analysed in relation to severity of aphasia, cognitive factors (including attention, memory and executive function), as well as variables such as age, months post-stroke and number of years in education. With an intensive training period, these participants with aphasia demonstrated varying degrees of ability for new learning. Possible influencing factors and implications for speech and language therapy rehabilitation are discussed.
16

Language development and its relationship to theory of mind in children with high-functioning autism

Carroll, Lianne January 2007 (has links)
Impairments in language, prosody and theory of mind (ToM) ability in individuals with high-functioning autism (HFA) have been widely reported. However, this PhD study is the first to investigate changes in receptive and expressive prosody skills over time. This is also the first study to report on the relationship between prosody and ToM, independent of language ability. Additionally, this study presents a new adaptation of a ToM assessment, in which prosodic and verbal input are carefully controlled. Language, prosody and ToM skills in 24 children aged 9 to 16 years with HFA were assessed approximately 2 1/2 years after participation in a study of language and prosody conducted at Queen Margaret University College (McCann, Peppe, Gibbon, O'Hare and Rutherford, 2006). The current study reports the skills and abilities of the children with HFA in the follow-up, using a battery of speech and language assessments, as well as assessments of expressive and receptive prosody and ToM abilities. The majority of the children with HFA continue to show expressive and receptive language impairments, with expressive language ability continuing to be the most impaired language skill, mirroring results at Time point 1. Children with HFA are developing language along the same, but delayed, developmental trajectory as children with typical development. Strong growth was noted in prosodic ability within structured tasks, as measured by the total score on the prosody assessment, as compared to verbal-age-matched typically developing children. The statistical gap that was present between groups in the earlier study no longer remains. However, children with HFA continue to perform worse on the understanding and use of contrastive stress. Children who showed atypical-sounding expressive prosody in conversational speech in the earlier study continue to do so in the follow-up. Children with HFA are developing early ToM abilities with the same developmental progression as typically developing children, but at a chronological age approximately seven years behind. However, children with HFA struggle with second-order ToM tasks. Results show that language, prosody and ToM abilities are highly correlated. Prosody and ToM show a relationship independent of language ability. Implications of these findings for theoretical understanding, future research, and speech and language assessment and intervention are presented.
17

When your native language sounds foreign : a phonetic investigation into first language attrition

de Leeuw, Esther January 2008 (has links)
The research presented in this thesis comprises two experiments which investigated whether the domain of phonetics can undergo first language attrition, or be lost, when a second language is acquired in adulthood in a migrant context. Experiment I investigated the native speech of 57 German migrants to Anglophone Canada and the Dutch Netherlands. The bilingual migrants had grown up in a monolingual German environment and moved abroad in adolescence or adulthood. Their semispontaneous German speech was globally assessed for foreign accent by native German speakers in Germany. It was revealed that 14 bilingual migrants were perceived to be non-native speakers of German. Age of arrival to Canada or the Netherlands and contact with one’s native language played the most significant roles in determining whether the German speech of the migrants was assessed to be foreign accented. Crucially, it was not only the amount of contact, but also the type of contact which influenced foreign accented native speech. Monolingual settings, in which little language mixing was assumed to occur, were most conducive to maintaining non-foreign accented native German speech. These findings prompted Experiment II, in which the speech of 10 German migrants to Anglophone Canada was examined in fine phonetic detail. The participants in this experiment had similarly grown up in a German speaking environment and migrated to Canada in late adolescence or adulthood. Segmental and prosodic elements of speech, which generally differ between German and English, were selected for acoustic analyses. Given that each phonetic element was measured according to two dimensions, it was possible to determine that in the lateral phoneme /l/, the frequency of F1 was more likely to evidence first language attrition than the frequency of F2; and that in the prenuclear rise, the alignment of the start of the rise was more likely to display first language attrition than the alignment of the end. In addition to intrapersonal variation within the same phonetic variable, interpersonal variation was observed. Two participants evidenced no first language attrition, whilst one participant realised both dimensions of the lateral phoneme /l/ and prenuclear tonal alignment according to the English monolingual norm in his German. When extralinguistic variables were investigated, age of arrival (and neither amount nor type of language contact) had a significant impact on determining first language attrition, although this effect was only observed in the alignment of the prenuclear rise. While the experiments revealed stability in the native speech of late consecutive bilingual migrants, first language attrition in the domain of phonetics was observed at both the level of perception and performance. Taken together, these findings challenge the traditional concept of native speech by revealing that indeed native speakers diverge from the norms of native (monolingual) speech.
18

A cross-linguistic study of affective prosody production by monolingual and bilingual children

Grichkovtsova, Ioulia January 2006 (has links)
The main objective of the research reported in the dissertation was to investigate the production of affective speech by monolingual and simultaneous bilingual children in Scottish English and French. The study was designed to address several important issues with respect to affective speech. First, the possibility of identifying and comparing acoustic correlates of affective speech in productions of monolingual children was explored in a cross-linguistic perspective. Second, affective speech of bilingual children was examined in their two languages and compared to that of their monolingual peers. Third, vocal emotions encoded by monolingual and bilingual children were tested through identification by French and Scottish monolingual adults. Five bilingual and twelve monolingual children were recorded for a cross-linguistically comparable corpus of affective speech. Children played four emotions (anger, fear, sadness and happiness) on one token utterance with the help of visual materials, which served as the reference of the expressed emotions and as an affect-inducing material. A large number of child speakers brings a better understanding of cross-language and within-language variability in vocal affective expressions. The corpus was acoustically analysed and used in a cross-linguistic perception test with Scottish and French monolingual adults. The results of the perception test support the existing view in the cross-cultural research on emotions: even if people from different cultural groups could identify each other’s emotions, an in-group advantage was generally present. Another important finding was that some affective states were more successfully identified in one of the languages by the two groups of listeners. Specifically, French anger, as expressed by bilingual and monolingual children, was identified more successfully by both French and Scottish listeners than anger encoded by bilinguals and monolinguals in Scottish English, thus suggesting that children showed some emotions more in one of the languages. The joint analysis of production and perception data confirmed the association of the studied acoustic correlates with affective states, but also showed the variability of different strategies in their usage. While some speakers used all the measured acoustic correlates to a significantly large extent, other speakers used only some of them. Apparently, the usage of all the possible acoustic correlates is not obligatory for successful identification. Moreover, one of the studied affective states (fear) was characterised by more variable usage of acoustic correlates than others. Cross-linguistic differences were attested in the usage of some acoustic correlates and in the preferred strategies for the realisation of affective states. Simultaneous bilingual children could encode affective states in their two languages; moreover, on average, their affective states are identified even better than those of monolingual children. This ability to successfully encode vocal emotions can be interpreted as a signal of high social competence in bilingual children. Production results show that all bilingual children realise some cross-linguistic differences in their affective speech. Nevertheless, interaction between the languages in the affective speech was discovered both in the production and perception data for bilinguals. This finding supports other studies which identify language interaction as a characteristic feature of bilingual phonetic acquisition.
The specific pattern of affective speech realisation is individual for each bilingual child, depending on the affective state and on the language used. In this context, the theory of integrated continuum, developed by Cook (2003), is discussed for its ability to describe the paralinguistic organisation in the bilingual mind. This thesis thus contributes to a better understanding of phonetic learning by monolingual and bilingual children in the context of affective speech. It also gives a detailed analysis of cross-language and within-language variability present in affective speech. This new data will be of interest to researchers working in speech sciences, psycholinguistics, developmental and cross-cultural psychology.
19

Ultrasound and acoustic analysis of lingual movement in teenagers with childhood apraxia of speech, control adults and typically developing children

Kocjancic, Tanja January 2010 (has links)
Childhood apraxia of speech (CAS) is a neurological motor speech disorder affecting spatiotemporal planning of speech movements. Speech characteristics of CAS are still not well defined and the main aim of this thesis was to reveal them by analysing acoustic and articulatory data obtained by ultrasound imaging. Ultrasound recording provided temporal and articulatory measurements of syllable and segment durations, the amount and rate of tongue movement over the syllables, and observation of the patterns of tongue movement. Data were provided by three teenagers with CAS and two control groups, one of ten typically developing children and the other of ten adults. Results showed that, as a group, speakers with CAS differed from the adults but not from the typically developing children in syllable duration and in rate of tongue movement. They did not differ from either of the control groups in amount of tongue movement. Individually, speakers with CAS showed similar or even greater consistency on these features than the control speakers but displayed different abilities to adapt them to changes in syllable structure. While all three adapted syllable duration and rate of tongue movement in an adult-like way, only two showed mature adaptation of segment durations and of the amount of tongue movement. Observing patterns of tongue movement showed that speakers with CAS produce different patterns from speakers in the control groups but are at the same time, like adults, very stable in their articulations. Also, speakers with CAS may move their tongues less in the oral space than speakers in the control groups. The differences between the control groups were similar to those found in previous studies. The results provide support for the validity of the methods used, new information about CAS and a promising direction for future research in differential diagnosis and therapy procedures.
20

An investigation of coarticulation resistance in speech production using ultrasound

Zharkova, Natalia January 2007 (has links)
Sound segments show considerable influence from neighbouring segments, which is described as being the result of coarticulation. None of the previous reports on coarticulation in vowel-consonant-vowel (VCV) sequences has used ultrasound. One advantage of ultrasound is that it provides information about the shape of most of the midsagittal tongue contour. In this work, ultrasound is employed for studying symmetrical VCV sequences, like /ipi/ and /ubu/, and methods for analysing coarticulation are refined. The use of electropalatography (EPG) in combination with ultrasound is piloted in the study. A unified approach is achieved to describing lingual behaviour during the interaction of different speech sounds, by using the concept of Coarticulation Resistance, which implies that different sounds resist coarticulatory influence to different degrees. The following research questions were investigated: how does the tongue shape change from one segment to the next in symmetrical VCV sequences? Do the vowels influence the consonant? Does the consonant influence the vowels? Is the vocalic influence on the consonant greater than the consonantal influence on the vowels? What are the differences between lingual and non-lingual consonants with respect to lingual coarticulation? Does the syllable/word boundary affect the coarticulatory pattern? Ultrasound data were collected using the QMUC ultrasound system, and in the final experiment some EPG data were also collected. The data were Russian nonsense VCVs with /i/, /u/, /a/ and bilabial stops; English nonsense VhV sequences with /i/, /u/, /a/; English /aka/, /ata/ and /iti/ sequences, forming part of real speech. The results show a significant vowel influence on all intervocalic consonants. Lingual consonants significantly influence their neighbouring vowels. The vocalic influence on the consonants is significantly greater than the consonantal influence on the vowels. Non-lingual consonants exhibit varying coarticulatory patterns. Syllable and word boundary influence on VCV coarticulation is demonstrated. The results are interpreted and discussed in terms of the Coarticulation Resistance theory: Coarticulation Resistance of speech segments varies, depending on segment type, syllable boundary, and language. A method of quantifying Coarticulation Resistance based on ultrasound data is suggested.
