1 |
Validity of Seven Syntactic Analyses Performed by the Computerized Profiling Software. Minch, Stacy Lynn. 11 June 2009.
The Computerized Profiling (CP) software extracts several quantitative measures from a transcribed sample of a client's language. These analyses include the Mean Length of Utterance in Words (MLU-W) and in Morphemes (MLU-M), the Mean Syntactic Length (MSL), the Syntactic Complexity Score (SCS), Developmental Sentence Scoring (DSS), the Index of Productive Syntax (IPSyn), and the Picture-Elicited Screening Procedure for LARSP (PSL). The validity of these measures was examined by comparing them to the number of finite nominal, adverbial, and relative clauses contained in samples from 54 first-, 48 third-, and 48 fifth-grade students and 24 young adults. The DSS and SCS correlated highly with the frequency of complex constructions; MLU-W, MLU-M, and MSL correlated moderately; and IPSyn and PSL correlated minimally at best.
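The length-based measures above can be illustrated with a short sketch. This is a toy version, not CP's implementation: it assumes bound morphemes are pre-marked with "-" in the transcript, whereas CP applies its own parsing rules.

```python
# Minimal sketch of MLU in words (MLU-W) and morphemes (MLU-M), assuming
# morpheme boundaries are pre-marked with "-" (illustrative only; the CP
# software's actual segmentation rules are more elaborate).

def mlu(utterances, unit="words"):
    """Average utterance length over a list of utterance strings."""
    total = 0
    for utt in utterances:
        tokens = utt.split()
        if unit == "words":
            total += len(tokens)
        else:  # morphemes: each "-" marks one extra bound morpheme
            total += sum(1 + t.count("-") for t in tokens)
    return total / len(utterances)

sample = ["the dog-s run-ing", "he jump-ed"]
print(mlu(sample, "words"))      # 5 words / 2 utterances -> 2.5
print(mlu(sample, "morphemes"))  # 8 morphemes / 2 utterances -> 4.0
```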
2 |
A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring. Janis, Sarah Elizabeth. 01 May 2016.
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase the efficiency and reduce the amount of time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tag input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
3 |
Comparing Relative and Absolute Reliability of Short Versus Long Narrative Retells. Hollis, Jenna. 24 May 2022.
The purpose of the current study was to examine and compare relative and absolute reliability estimates between brief, linguistically compact narrative retells and longer, more linguistically diffuse narrative retells. The participants included 190 school-age children in first through sixth grade from Utah, Arizona, and Colorado. Participants completed two brief narrative retells using the Narrative Language Measures (NLM) Listening subtest of the CUBED assessment and one longer narrative retell using the wordless picture book Frog, Where Are You? (FWAY). These language samples were then analyzed for language productivity, complexity, and story grammar elements using the Systematic Analysis of Language Transcripts software program and the NLM Flow Chart. Analyses of relative reliability revealed significant differences across all measures when controlled for length, except for mean length of utterance in words. The language measures were higher in the shorter NLM condition, while inclusion of story grammar was higher in the longer FWAY narrative retell. Additionally, all productivity and complexity measures showed moderate to strong correlations between the NLM and FWAY narrative retells. Analyses of absolute reliability showed that the FWAY narrative retell demonstrated less variance across all measures than the NLM, indicating that measures are more stable in the longer sample. Although the brief narrative retells do not demonstrate a sufficient degree of relative or absolute reliability, this study indicates that clinicians may be able to elicit brief narrative retells from school-age children without losing meaningful information on language complexity and productivity measures.
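The absolute-reliability comparison above rests on how much a measure's scores spread across children in each condition. A minimal sketch of one such stability check uses the coefficient of variation; the scores below are invented for illustration and are not the study's data, and the study's actual reliability statistics may differ.

```python
# Hedged sketch: comparing the stability of a language measure (e.g., MLU)
# across two elicitation conditions via the coefficient of variation.
# Lower CV = less spread relative to the mean = more stable.
from statistics import mean, stdev

def coef_variation(scores):
    """Sample standard deviation as a fraction of the mean."""
    return stdev(scores) / mean(scores)

nlm_mlu  = [5.1, 7.9, 4.2, 8.8, 5.5]   # brief retells: more spread (hypothetical)
fway_mlu = [6.4, 7.0, 6.1, 7.3, 6.7]   # longer retell: tighter (hypothetical)

# Consistent with the finding that the longer sample is more stable:
print(coef_variation(nlm_mlu) > coef_variation(fway_mlu))  # True
```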
4 |
A comparison of language sample elicitation methods for dual language learners. Toscano, Jacqueline. January 2017.
Language sample analysis has come to be considered the “gold standard” approach for cross-cultural language assessment, and speech-language pathologists assessing individuals from multicultural or multilingual backgrounds have been advised to use it in these evaluations (e.g., Pearson, Jackson, & Wu, 2014; Heilmann & Westerveld, 2013). Language samples can be elicited with a variety of tasks, and selecting a specific method is often a major part of the assessment process. The present study aims to facilitate that selection by identifying the method that elicits maximal performance of language abilities and variation in children’s oral language samples. Analyses were performed on Play, Tell, and Retell methods across 178 total samples. Retell elicited higher measures of syntactic complexity (i.e., TTR, SI, MLUw) than Play, as well as a higher TTR (i.e., lexical diversity) and SI (i.e., clausal density) than Tell; however, no difference was found between Tell and Retell for MLUw (i.e., syntactic complexity/productivity), nor between Tell and Play for TTR. Additionally, the two narrative methods elicited higher DDM (i.e., frequency of dialectal variation) than the Play method; no significant difference was found between Tell and Retell for DDM. Implications for the continued use of language sampling in the assessment of speech and language are discussed.
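Of the measures compared above, type-token ratio (TTR) is the simplest to state precisely. The toy sketch below illustrates the computation on a single invented utterance string; real analyses run on full SALT- or CLAN-style transcripts with their own tokenization conventions.

```python
# Minimal sketch of type-token ratio (TTR), the lexical-diversity measure
# compared across the Play, Tell, and Retell conditions. Illustrative only:
# tokenization here is just whitespace splitting plus lowercasing.

def ttr(sample_words):
    """Unique word types divided by total word tokens."""
    tokens = [w.lower() for w in sample_words]
    return len(set(tokens)) / len(tokens)

retell = "the frog jumped and the boy chased the frog".split()
print(round(ttr(retell), 2))  # 6 types / 9 tokens -> 0.67
```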
5 |
Automated Identification of Noun Clauses in Clinical Language Samples. Manning, Britney Richey. 09 August 2009.
The identification of complex grammatical structures including noun clauses is of clinical importance because differences in the use of these structures have been found between individuals with and without language impairment. In recent years, computer software has been used to assist in analyzing clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as noun clauses. The present study investigated the accuracy of new software, called Cx, in identifying finite wh- and that-noun clauses. Two sets of language samples were used. One set included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. The second set included 40 adults with mental retardation. Levels of agreement between computerized and manual analysis were similar for both sets of language samples; Kappa levels were high for wh-noun clauses and very low for that-noun clauses.
6 |
Mean Length of Utterance and Developmental Sentence Scoring in the Analysis of Children's Language Samples. Chamberlain, Laurie Lynne. 01 June 2016.
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that uses complete sentences to evaluate and score a child’s use of standard American-English grammatical rules. Automated DSS software can potentially increase efficiency and decrease the time needed for DSS analysis. This study examines the accuracy of one automated DSS software program, DSSA Version 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). The overall accuracy of DSSA 2.0 was 86%. Additionally, the present study sought to determine the relationships among DSS, DSSA Version 2.0, mean length of utterance (MLU), and age. MLU is a measure of linguistic ability in children and a widely used indicator of language impairment. MLU, DSS, and DSSA all correlated strongly and significantly with age (r = .605, r = .723, and r = .669, respectively, all p < .001). The correlation between MLU and DSS was high and statistically significant, r = .873, p < .001, indicating that it is not simply an artifact of both measures being correlated with age; likewise, the high correlation between MLU and DSSA, r = .794, suggests that their relationship is not merely an age artifact. Lastly, the relationship between DSS and age while controlling for MLU was moderate but still statistically significant, r = .501, p = .006. Therefore, DSS appears to add information beyond MLU.
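The "controlling for MLU" result above is a first-order partial correlation, which can be computed directly from the three pairwise correlations the abstract reports. The sketch below assumes the standard partial-correlation formula was used; reassuringly, plugging in the reported values recovers r = .50.

```python
# Partial correlation of DSS with age, controlling for MLU, from the
# pairwise r values reported in the abstract.
# Formula: r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_dss_age = 0.723   # DSS vs age (reported)
r_mlu_age = 0.605   # MLU vs age (reported)
r_dss_mlu = 0.873   # DSS vs MLU (reported)

print(round(partial_r(r_dss_age, r_mlu_age, r_dss_mlu), 2))  # 0.5
```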
7 |
Scoring Sentences Developmentally: An Analog of Developmental Sentence Scoring. Seal, Amy. 01 January 2002.
A variety of tools have been developed to assist in the quantification and analysis of naturalistic language samples, and in recent years computer technology has been employed in language sample analysis. This study compares a new automated index, Scoring Sentences Developmentally (SSD), to two existing measures. Eighty samples from three corpora were manually analyzed using DSS and MLU and then processed by the automated software. Results show all three indices to be highly correlated, with correlations ranging from .62 to .98. The high correlations among scores support further investigation of the psychometric characteristics of the SSD software to determine its clinical validity and reliability. Results of this study suggest that SSD has the potential to complement other analysis procedures in assessing the language development of young children.