A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring

Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to reduce the time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tag input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
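The manual-versus-automated accuracy comparison described above can be illustrated with a minimal sketch. This is not DSSA 2.0's actual logic (which the abstract does not describe); the function name and sample scores are hypothetical, showing only how sentence-level agreement between two scoring conditions might be computed as a percentage:

```python
def score_agreement(manual, automated):
    """Percentage of sentences whose automated DSS score matches the manual score.

    Both arguments are equal-length lists of per-sentence scores
    (hypothetical data; real DSS scores sum weighted grammatical
    category points plus a sentence point).
    """
    if len(manual) != len(automated):
        raise ValueError("score lists must be the same length")
    matches = sum(1 for m, a in zip(manual, automated) if m == a)
    return 100.0 * matches / len(manual)


# Toy example: the automated scorer disagrees on one of five sentences.
manual_scores = [8, 5, 11, 7, 9]
automated_scores = [8, 5, 10, 7, 9]
print(score_agreement(manual_scores, automated_scores))  # 80.0
```

A study like this one would compute such agreement once against manually tagged input and once against automatically tagged input, then test whether the two agreement rates differ significantly.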

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-6891
Date: 01 May 2016
Creators: Janis, Sarah Elizabeth
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: All Theses and Dissertations
Rights: http://lib.byu.edu/about/copyright/