  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Validity of Seven Syntactic Analyses Performed by the Computerized Profiling Software

Minch, Stacy Lynn, 11 June 2009
The Computerized Profiling (CP) software extracts several quantitative measures from a transcribed sample of a client's language. These analyses include the Mean Length of Utterance in Words (MLU-W) and in Morphemes (MLU-M), the Mean Syntactic Length (MSL), the Syntactic Complexity Score (SCS), Developmental Sentence Scoring (DSS), the Index of Productive Syntax (IPSyn), and the Picture-Elicited Screening Procedure for LARSP (PSL). The validity of these measures was examined by comparing them to the number of finite nominal, adverbial, and relative clauses contained in samples from 54 first-, 48 third-, and 48 fifth-grade students and 24 young adults. The DSS and SCS correlated highly with the frequency of complex constructions; MLU-W, MLU-M, and MSL correlated moderately; and IPSyn and PSL correlated minimally at best.
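For readers unfamiliar with these length-based measures, the sketch below shows how MLU-W and MLU-M could be computed from a pre-segmented transcript. The hyphen-as-morpheme-boundary convention and the sample data are assumptions for illustration only; they do not reflect the Computerized Profiling software's actual parsing rules.

```python
# Illustrative sketch of MLU-W and MLU-M; assumes utterances are
# pre-segmented and bound morphemes are marked with "-" (e.g.,
# "dog-s bark-ed"). Not the Computerized Profiling implementation.

def mlu_words(utterances):
    """Mean Length of Utterance in Words (MLU-W)."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def mlu_morphemes(utterances):
    """Mean Length of Utterance in Morphemes (MLU-M); each hyphen
    marks one additional bound morpheme."""
    return sum(len(u.split()) + u.count("-") for u in utterances) / len(utterances)

sample = ["the dog-s bark-ed", "he run-s fast", "stop"]
print(round(mlu_words(sample), 2))      # (3 + 3 + 1) / 3 = 2.33
print(round(mlu_morphemes(sample), 2))  # (5 + 4 + 1) / 3 = 3.33
```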
2

Comparing Relative and Absolute Reliability of Short Versus Long Narrative Retells

Hollis, Jenna, 24 May 2022
The purpose of the current study was to examine and compare relative and absolute reliability estimates between brief, linguistically compact narrative retells and longer, more linguistically diffuse narrative retells. The participants included 190 school-age children in first through sixth grade from Utah, Arizona, and Colorado. Participants completed two brief narrative retells using the Narrative Language Measures (NLM) Listening subtest of the CUBED assessment and one longer narrative retell using the wordless picture book Frog, Where Are You? (FWAY). These language samples were then analyzed for language productivity, complexity, and story grammar elements using the Systematic Analysis of Language Transcripts software program and the NLM Flow Chart. Analyses of relative reliability reveal significant differences across all measures, when controlled for length, except for mean length of utterance in words. The language measures are higher in the shorter NLM condition, while inclusion of story grammar is higher in the longer FWAY narrative retell. Additionally, all productivity and complexity measures have moderate to strong correlations between the NLM and FWAY narrative retells. Analyses of absolute reliability show that the FWAY narrative retell demonstrates less variance across all measures than the NLM, indicating that measures are more stable in the longer sample. Although the brief narrative retells do not demonstrate a sufficient degree of relative or absolute reliability, this study indicates that clinicians may be able to elicit brief narrative retells from school-age children without losing meaningful information on language complexity and productivity measures.
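To make the two reliability notions concrete, the sketch below indexes relative reliability with a Pearson correlation between paired measures from the short (NLM) and long (FWAY) retells, and one facet of absolute reliability with the variance of each condition. The scores are invented for illustration; the study's actual analyses were more involved.

```python
# Sketch: relative vs. absolute reliability for paired language
# measures from a short (NLM) and a long (FWAY) retell.
# Scores are invented; the study's analyses were more involved.
from statistics import mean, variance

nlm_mluw  = [5.2, 6.1, 4.8, 7.0, 5.5]  # hypothetical MLU-words, short retell
fway_mluw = [5.0, 6.4, 4.5, 6.8, 5.9]  # hypothetical MLU-words, long retell

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Relative reliability: do children rank similarly across conditions?
print("r =", round(pearson_r(nlm_mluw, fway_mluw), 3))

# One facet of absolute reliability: is one condition more variable?
print("var NLM: ", round(variance(nlm_mluw), 3))
print("var FWAY:", round(variance(fway_mluw), 3))
```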
3

Precoding and the Accuracy of Automated Analysis of Child Language Samples

Winiecke, Rachel Christine, 01 May 2015
Language sample analysis is accepted as the gold standard in child language assessment. Unfortunately, it is often viewed as too time-consuming for the practicing clinician. Over the last 15 years, a great deal of research has been invested in the automated analysis of child language samples to make the process more time efficient. One step in the analysis process may be precoding the sample, as is done in the Systematic Analysis of Language Transcripts (SALT) software. However, a claim has been made (MacWhinney, 2008) that such precoding in fact leads to lower accuracy because of manual coding errors. No data on this issue have been published. The current research measured the accuracy of language samples analyzed with and without SALT precoding. This study also compared the accuracy of current software to that of an older version called GramCats (Channell & Johnson, 1999). The results support the use of precoding schemes such as SALT and suggest that the accuracy of automated analysis has improved over time.
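The underlying accuracy comparison reduces to scoring automated output, token by token, against a manually assigned gold standard. A minimal sketch, with invented tag names that do not match SALT's actual codes:

```python
# Sketch: token-level accuracy of automated grammatical tags
# against a manual gold standard. Tag names are invented and
# do not match SALT's actual coding scheme.

def tag_accuracy(gold_tags, auto_tags):
    """Proportion of tokens whose automated tag matches the gold tag."""
    assert len(gold_tags) == len(auto_tags)
    correct = sum(g == a for g, a in zip(gold_tags, auto_tags))
    return correct / len(gold_tags)

gold = ["pro", "v", "det", "n", "prep", "det", "n"]
auto = ["pro", "v", "det", "n", "adv",  "det", "n"]
print(f"accuracy = {tag_accuracy(gold, auto):.2%}")  # 85.71%
```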
4

The Test of English as a Foreign Language Sample Test as a Measure of Adolescent Language Ability

Osborn, Paul Gardiner, January 1988
Thesis (M.S.), Brigham Young University, Dept. of Educational Psychology. Subjects taken from Timpview High School Seminary classes. Bibliography: leaves 27-31, 45-49.
5

Automated Grammatical Tagging of Clinical Language Samples with and Without SALT Coding

Hughes, Andrea Nielson, 01 June 2015
Language samples are naturalistic sources of information that circumvent many of the limitations of standardized test administration. Although language samples have clinical utility in evaluation and treatment, clinicians may not perform language sample analyses because of the necessary time commitment. Researchers have developed language sample analysis software that automates this process. Coding schemes such as that used by the Systematic Analysis of Language Transcripts (SALT) software were developed to provide more information to guide appropriate grammatical tag selection. The present study evaluated the usefulness of SALT precoding in improving automated grammatical tagging accuracy. Results indicate consistent overall improvement at the tag level over an earlier version of the software. The software was adept at coding samples from both developmentally normal and language-impaired children. No significant differences in tagging accuracy were found between SALT-coded and non-SALT-coded samples. As the accuracy of automated tagging software advances, the clinical usefulness of automated grammatical analyses improves, and the benefits of time savings may be realized.
6

A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring

Janis, Sarah Elizabeth, 01 May 2016
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase efficiency and reduce the time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tags as input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category and point-value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
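Because each sample is scored twice, once per tag-input condition, the comparison is a paired design. A hedged sketch of one such test, with invented scores (the abstract does not specify the exact statistical procedure used):

```python
# Sketch: paired comparison of DSSA 2.0 scores computed from
# manually vs. automatically assigned tags. Scores are invented;
# the abstract does not specify the exact statistical procedure.
from scipy.stats import ttest_rel

dssa_manual_tags = [8.1, 6.4, 7.7, 9.0, 5.9, 7.2]
dssa_auto_tags   = [8.0, 6.6, 7.5, 9.1, 5.8, 7.1]

t_stat, p_value = ttest_rel(dssa_manual_tags, dssa_auto_tags)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A non-significant difference would suggest, as the study found,
# that tagging is not the primary error source.
```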
7

A comparison of language sample elicitation methods for dual language learners

Toscano, Jacqueline, January 2017
Language sample analysis has come to be considered the “gold standard” approach for cross-cultural language assessment, and speech-language pathologists (SLPs) assessing individuals from multicultural or multilingual backgrounds have been advised to use it in these evaluations (e.g., Pearson, Jackson, & Wu, 2014; Heilmann & Westerveld, 2013). Language samples can be elicited with a variety of tasks, and selecting a specific method is often a major part of the assessment process. The present study aims to facilitate that selection by identifying the method that elicits maximal performance and variation in children’s oral language samples. Analyses were performed on Play, Tell, and Retell methods across 178 total samples. Retell elicited higher measures (i.e., TTR, SI, MLUw) than Play, as well as a higher TTR (lexical diversity) and SI (clausal density) than Tell; however, no difference was found between Tell and Retell for MLUw (syntactic complexity/productivity), nor between Tell and Play for TTR. Additionally, the two narrative methods elicited higher DDM (frequency of dialectal variation) than the Play method, with no significant difference between Tell and Retell. Implications for the continued use of language sampling in the assessment of speech and language are discussed.
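As a concrete reference point, TTR, the lexical diversity measure compared across elicitation methods above, can be sketched as follows; the whitespace tokenization is a simplification of how analysis software actually segments words.

```python
# Sketch of TTR (type-token ratio), the lexical diversity measure
# referenced above. Whitespace tokenization is a simplification.

def ttr(utterances):
    """Unique word types divided by total word tokens."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)

retell = ["the frog jumped out", "the boy looked for the frog"]
print(f"TTR = {ttr(retell):.2f}")  # 7 types / 10 tokens = 0.70
```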
8

Accuracy of Automated Developmental Sentence Scoring Software

Judson, Carrie Ann, 14 July 2006
Developmental Sentence Scoring (DSS; Lee, 1974) is a well-established, structured method for analyzing a child's expressive syntax within the context of a conversational speech sample. Automated DSS programs may increase the efficiency of DSS analysis; however, the program must be accurate in order to yield valid and reliable results. A recent study by Sagae, Lavie, and MacWhinney (2005) proposed a new method for analyzing the accuracy of automated language analysis programs. This method was used in addition to previously established methods to analyze the accuracy of a new automated DSS program, DSSA (Channell, 2006). Previously collected language samples from 118 children between 3 and 11 years of age were coded for DSS both manually and automatically. The overall accuracy of DSSA was about 86%, and the mean point difference was approximately 0.7. DSSA generally scored the samples of children with lower manual DSS scores or with language impairment less accurately than those of other children. While some precautions may need to be taken, accuracy levels are sufficiently high to allow the fully automated use of DSSA as an alternative to manual DSS scoring.
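The mean point difference reported here is a simple sample-level comparison between automated and manual scores. A sketch with invented values:

```python
# Sketch: mean absolute point difference between automated (DSSA)
# and manual DSS scores across samples. Values are invented.

def mean_point_difference(manual_scores, auto_scores):
    """Average absolute difference between paired DSS scores."""
    diffs = [abs(m - a) for m, a in zip(manual_scores, auto_scores)]
    return sum(diffs) / len(diffs)

manual = [9.2, 7.5, 8.8, 6.1]
auto   = [9.0, 8.3, 8.1, 6.0]
print(round(mean_point_difference(manual, auto), 2))  # 0.45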
9

Automated Identification of Noun Clauses in Clinical Language Samples

Manning, Britney Richey, 09 August 2009
The identification of complex grammatical structures including noun clauses is of clinical importance because differences in the use of these structures have been found between individuals with and without language impairment. In recent years, computer software has been used to assist in analyzing clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as noun clauses. The present study investigated the accuracy of new software, called Cx, in identifying finite wh- and that-noun clauses. Two sets of language samples were used. One set included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. The second set included 40 adults with mental retardation. Levels of agreement between computerized and manual analysis were similar for both sets of language samples; Kappa levels were high for wh-noun clauses and very low for that-noun clauses.
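Kappa, the agreement statistic reported here, corrects raw percent agreement for chance. A minimal sketch for two coders making a binary clause-present/absent judgment per utterance, with invented data:

```python
# Sketch of Cohen's kappa for agreement between manual and
# computerized identification of noun clauses (1 = clause present,
# 0 = absent, per utterance). Data are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """(observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

manual   = [1, 0, 1, 1, 0, 0, 1, 0]
computer = [1, 0, 1, 0, 0, 0, 1, 0]
print(round(cohens_kappa(manual, computer), 3))  # 0.75
```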
10

The Test of English as a Foreign Language Sample Test as a Measure of Adolescent Language Ability

Osborn, Paul Gardiner, 01 January 1988
Relative performance on the Test of English as a Foreign Language Sample Test (TOEFL-ST) was explored in 60 native English-speaking high school students. Subjects were also administered the Fullerton Language Test for Adolescents and the Peabody Picture Vocabulary Test-Revised. The TOEFL-ST was not difficult for this population, indicating that TOEFL tests taken by non-native English-speaking college students probably assess a level of native English competency well below the high school level. The three tests, including subtests, appear to measure a wide array of subdomains of language competency. The data do not support the conclusion that any of these tests could be substituted for another in assessing language competency.
