211 |
INVESTIGATING THE LINK BETWEEN CURRENT CLASSROOM TEACHERS’ CONCEPTIONS, LITERACY, AND PRACTICES OF ASSESSMENT
Snyder, Mark Richard, January 2017 (has links)
Teachers’ assessment conceptions, assessment literacy, and self-reported assessment practices were investigated using a single-administration survey of U.S. classroom teachers. These phenomena were investigated both individually and in their interrelationships. Assessment conceptions were measured with the Teachers’ Conceptions of Assessment III – abridged survey and assessment literacy with the Assessment Literacy Inventory. Self-reported classroom assessment practices were analyzed with factor analysis to derive a set of five assessment practice factors, each indicating a cluster of classroom assessment behaviors. Analysis suggested that certain assessment conceptions held by teachers, as well as aspects of their assessment literacy, were significant predictors of certain assessment practice factors. One such relationship was that the degree to which teachers held the conceptions that assessment holds schools accountable and that it aids student improvement predicted the frequency with which they reported using tests and quizzes in their classrooms. There were also significant differences in self-reported assessment practices based on the grade level of students instructed, years of teaching experience, and other demographic variables. These findings suggest that studying these three assessment phenomena would inform practitioners about what influences classroom teachers’ assessment practices and how those practices can best be remediated. / Educational Psychology
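For readers curious about the shape of such an analysis, the following Python sketch mirrors the pipeline the abstract describes: factor-analyzing self-reported practice items, then regressing the resulting factor scores on conception and literacy measures. All data, variable names, and the five-factor structure are illustrative assumptions, not the study's materials.

```python
# Hypothetical sketch of the analysis pipeline described above. Data and
# variable names are invented for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_teachers = 300
practice_items = rng.integers(1, 6, size=(n_teachers, 20)).astype(float)  # Likert responses
conceptions = rng.normal(size=(n_teachers, 2))  # e.g., accountability, improvement
literacy = rng.normal(size=(n_teachers, 1))     # assessment literacy total

# Extract five practice factors, mirroring the five-factor solution reported.
fa = FactorAnalysis(n_components=5, random_state=0)
practice_scores = fa.fit_transform(practice_items)

# Predict each practice factor (e.g., frequency of tests and quizzes)
# from conception and literacy measures.
predictors = np.hstack([conceptions, literacy])
for k in range(5):
    model = LinearRegression().fit(predictors, practice_scores[:, k])
    print(f"factor {k}: R^2 = {model.score(predictors, practice_scores[:, k]):.3f}")
```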
|
212 |
An Investigation of the Content and Concurrent Validity of the School-wide Evaluation Tool
Bloomfield, Alison Elizabeth, January 2015 (has links)
The School-wide Evaluation Tool (SET) is a commonly used measure of the implementation fidelity of school-wide positive behavior interventions and supports (SWPBIS) programs. The current study examined the content and concurrent validity of the SET to establish whether an alternative approach to weighting and scoring the SET might provide a more accurate assessment of SWPBIS implementation fidelity. Twenty published experts in the field of SWPBIS completed online surveys rating the relative importance of each SET item to sustainable SWPBIS implementation. Using the experts' mean ratings, four novel SET scoring approaches were developed: unweighted, reweighted using mean ratings, unweighted dropping lowest-quartile items, and reweighted dropping lowest-quartile items. SET 2.1 data from 1,018 schools were used to compare the four novel and two established SET scoring methods and to examine their concurrent validity with the Team Implementation Checklist 3.1 (TIC; across a subsample of 492 schools). Correlational data indicated that the two novel SET scoring methods with dropped items were both significantly stronger predictors of TIC scores than the established SET scoring methods. Continuous SET scoring methods also showed greater concurrent validity with the TIC overall score and greater sensitivity than the dichotomous SET 80/80 Criterion. Because the unweighted and reweighted SET with dropped items showed equivalent concurrent validity with the TIC, this study recommends that schools and researchers use the simpler unweighted SET with dropped items, which yields a more cohesive and prioritized set of SWPBIS elements than the existing scoring methods or the other methods developed in this study. / School Psychology
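The four novel scoring approaches lend themselves to a compact illustration. The sketch below assumes SET items scored on a 0-2 scale and expert importance ratings on a 1-5 scale; the numbers are invented and the item count is reduced for brevity.

```python
# A minimal sketch of the four novel SET scoring approaches described above,
# assuming 0-2 item scores and expert mean importance ratings. All values
# here are illustrative, not data from the study.
import numpy as np

item_scores = np.array([2, 1, 2, 0, 2, 1, 2, 2])  # one school's SET items
expert_ratings = np.array([4.8, 3.1, 4.5, 2.0, 4.9, 2.3, 4.0, 4.6])

# 1. Unweighted: simple proportion of possible points across all items.
unweighted = item_scores.mean() / 2

# 2. Reweighted: items weighted by expert mean importance ratings.
reweighted = np.average(item_scores, weights=expert_ratings) / 2

# Flag items whose importance ratings fall above the lowest quartile.
keep = expert_ratings > np.percentile(expert_ratings, 25)

# 3. Unweighted, dropping lowest-quartile items.
unweighted_dropped = item_scores[keep].mean() / 2

# 4. Reweighted, dropping lowest-quartile items.
reweighted_dropped = np.average(item_scores[keep], weights=expert_ratings[keep]) / 2

print(unweighted, reweighted, unweighted_dropped, reweighted_dropped)
```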
|
213 |
Assessing English Environment Personality and its Role in Oral Proficiency
Karlin, Omar Christopher, January 2015 (links)
The general areas of research for this study are personality and second language acquisition. The three goals of this study are to (a) develop a personality instrument (the Questionnaire of English Environment Personality [QuEEP]) that accounts for second language influences on personality and more effectively captures personality than an established personality instrument (the International Personality Item Pool Big Five Factor Markers [IPIP BFFM]), (b) determine if personality changes after studying abroad for a month, and (c) determine if certain personality types are likely to improve oral proficiency when studying abroad. In relation to the study’s first goal, 262 items using a five-point Likert scale were created and administered to 287 Japanese university students to measure five personality factors based on the extraversion, emotional stability, openness, agreeableness, and conscientiousness factors of the Big Five model of personality (McCrae & Costa, 1987). These items were then culled to 50 by examining their suitability through factor analysis and Rasch analysis. Two 50-item versions of the QuEEP were drawn from the same 262 items, one based on three factor analyses and the other based on Rasch analysis. Both versions of the QuEEP included 10 items for each of the five personality factors in the Big Five. Both versions of the QuEEP outperformed the IPIP BFFM on four aspects of validity (content, structural, external, and generalizability), while the IPIP BFFM outperformed both versions of the QuEEP on the substantive aspect of construct validity. As a result, it was concluded that the QuEEP, specifically the version derived from the Rasch analysis, was more effective than the IPIP BFFM at capturing personality as influenced by a second language. In relation to the study’s second goal, the personalities of 38 study-abroad students were assessed through pre-departure and post-return administrations of the QuEEP and IPIP BFFM to determine if the participants’ personalities changed after one month abroad. The results indicated that extraversion and emotional stability increased significantly after one month abroad, as measured by the QuEEP. The IPIP BFFM did not indicate any significant personality changes. In relation to the study’s third goal, the 38 study-abroad students also completed a pre-departure and post-return interview test to determine if certain personality types benefited more from studying abroad in terms of oral proficiency, which included eight measures of fluency, complexity, and accuracy. The results indicated that when the participants were divided into high and low groups for each personality dimension (e.g., a high extraversion and a low extraversion group), the only significant differences between the groups in measures of oral proficiency involved the pauses fluency variable (low QuEEP emotional stability group), the words per second fluency variable (high IPIP BFFM extraversion group), the pauses fluency variable (high IPIP BFFM extraversion group), and the accuracy variable (low IPIP BFFM openness group). After Bonferroni adjustments, these findings were rendered not significant. However, when analyzed cross-sectionally rather than longitudinally, there were several significant correlations involving the QuEEP pretest and pre-interview test data, most notably between oral proficiency and extraversion and emotional stability.
The IPIP BFFM posttest also indicated significant correlations between oral proficiency and agreeableness and openness. The QuEEP posttest and post-interview test data, and the IPIP BFFM pretest and pre-interview test data indicated fewer significant correlations with oral proficiency. / Language Arts
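The Bonferroni step mentioned above can be shown briefly. The sketch below assumes eight oral-proficiency comparisons per grouping and uses placeholder p-values; none of these numbers come from the study.

```python
# Illustration of a Bonferroni adjustment: with several oral-proficiency
# measures compared across high/low personality groups, the per-test alpha
# is divided by the number of comparisons. P-values are placeholders.
raw_p_values = {
    "pauses (low QuEEP emotional stability)": 0.03,
    "words per second (high IPIP extraversion)": 0.02,
    "pauses (high IPIP extraversion)": 0.04,
    "accuracy (low IPIP openness)": 0.045,
}

alpha = 0.05
n_comparisons = 8  # assumed: eight oral proficiency measures per grouping
adjusted_alpha = alpha / n_comparisons  # 0.00625

for label, p in raw_p_values.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{label}: p = {p} -> {verdict} at adjusted alpha {adjusted_alpha:.5f}")
```

Run this way, each nominally significant comparison fails the adjusted threshold, matching the abstract's report that the findings were rendered not significant.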
|
214 |
An Examination of English Language Proficiency and Achievement Test Outcomes
Mojica, Tammy Christina, January 2013 (links)
The purpose of the study was to examine the relationship between grade eight English language proficiency, as measured by the ACCESS for ELLs assessment (Assessing Comprehension and Communication in English State to State for English Language Learners), and achievement test outcomes on the Pennsylvania System of School Assessment (PSSA), a state-mandated test. The ACCESS for ELLs is an annual, large-scale English language proficiency assessment given to kindergarten through grade twelve students who have been identified as English language learners; it is administered in English. Data from the Nation's Report Card (U.S. Department of Education, National Center for Education Statistics, 2007a, 2007b) show that ELL students lag behind their English-proficient peers on standardized tests of reading. The inclusion of English language learners in state assessments has prompted issues regarding the validity and equity of assessment practices (Abedi, 2004). The data for the study were gathered from an analysis of 8th grade ELL students' scores on the 2011 PSSA standardized assessment administered in the Philadelphia, Pennsylvania public school district, along with their ACCESS assessments for the 2010-2011 school year. The study also assessed the predictive value of the criterion variables and the moderating effects of categorical variables by school: ethnicity (Black, White, Hispanic), ELL status (English Language Learner), students with disabilities status (SWD), and socioeconomic status (SES), which contribute to Pennsylvania's Adequate Yearly Progress (AYP) status. The study showed strong evidence of a significant relationship between PSSA performance and language background as measured by the ACCESS assessment. The implications of these data for the testing and assessment of ELL learners were discussed. / Educational Leadership
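A moderated regression of the general kind described might look like the following hedged sketch, with simulated data and hypothetical column names standing in for the PSSA, ACCESS, and demographic variables.

```python
# A sketch, not the study's analysis: regress simulated PSSA reading scores
# on an ACCESS proficiency composite, with categorical moderators entered
# as interaction terms. All names and values are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
access = rng.uniform(1.0, 6.0, n)   # ACCESS composite proficiency level
swd = rng.integers(0, 2, n)         # students with disabilities indicator
low_ses = rng.integers(0, 2, n)     # socioeconomic status indicator
pssa = 900 + 90 * access - 40 * swd - 30 * low_ses + rng.normal(0, 60, n)

df = pd.DataFrame({"pssa": pssa, "access": access, "swd": swd, "low_ses": low_ses})

# Main effect of proficiency plus moderation by SWD and SES status.
model = smf.ols("pssa ~ access * swd + access * low_ses", data=df).fit()
print(model.summary())
```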
|
215 |
Predicting Success: An Examination of the Predictive Validity of a Measure of Motivational-Developmental Dimensions in College Admissions
Paris, Joseph, January 2018 (links)
Although many colleges and universities use a wide range of criteria to evaluate and select admissions applicants, much of the variance in college student success remains unexplained. Thus, success in college, as defined by academic performance and student retention, may be related to variables or combinations of variables beyond those traditionally used in college admissions (high school grade point average and standardized test scores). The current study investigated the predictive validity of a measure of motivational-developmental dimensions as a predictor of the academic achievement and persistence of college students, as measured by cumulative undergraduate grade point average and retention. These dimensions are based on social-cognitive (self-concept, self-set goals, causal attributions, and coping strategies) and developmental-constructivist (self-awareness and self-authorship) perspectives. Motivational-developmental constructs remain under-explored as predictors of admission applicants’ ability to succeed and persevere despite the academic and social challenges of postsecondary participation. Therefore, the current study aimed to generate new understandings to benefit the participating institution and other institutions of higher education that seek new methodologies for evaluating and selecting college admission applicants. This dissertation describes two studies conducted at a large, urban public university located in the Northeastern United States. Participants included 10,149 undergraduate students who enrolled as first-time freshmen for the Fall 2015 (Study 1) and Fall 2016 (Study 2) semesters. Prior to matriculation, participants applied for admission using one of two methods: standard admissions or test-optional admissions. Standard admission applicants submitted standardized test scores (e.g., SAT), whereas test-optional applicants responded to four short-answer essay questions, each of which measured a subset of the motivational-developmental dimensions examined in the current study. Trained readers evaluated the essays to produce a “test-optional essay rating score,” which served as the primary predictor variable in the current study. Quantitative analyses were conducted to investigate the predictive validity of the “test-optional essay rating score” and its relationship to cumulative undergraduate grade point average and retention, which served as the outcome variables in the current study. The results revealed statistically significant group differences between test-optional applicants and standard applicants. Test-optional admission applicants were more likely to be female, of lower socioeconomic status, and members of ethnic minority groups than standard admission applicants. Given these group differences, Pearson product-moment correlation coefficients were computed to determine whether the test-optional essay rating score differentially predicted success across racial and gender subgroups. There was inconclusive evidence regarding whether the test-optional essay rating score differentially predicts cumulative undergraduate grade point average and retention across student subgroups.
The results revealed a weak correlation between the test-optional essay rating score and cumulative undergraduate grade point average (Study 1: r = .11, p < .01; Study 2: r = .07, p < .05) and retention (Study 1: r = .08, p < .05; Study 2: r = .10, p < .01), particularly in comparison to the relationship between these outcome variables and the criteria most commonly considered in college admissions (high school grade point average, SAT Verbal, SAT Quantitative, and SAT Writing). Consistent with these findings, the test-optional essay rating score contributed only nominal value (R² = .07) in predicting academic achievement and persistence beyond the explanation provided by traditional admissions criteria. Additionally, a ROC analysis determined that the test-optional essay rating score does not predict student retention meaningfully better than chance and therefore is not an accurate binary classifier of retention. Further research should investigate the validity of other motivational-developmental dimensions and the fidelity of other methods for measuring them in an attempt to account for a greater proportion of variance in college student success. / Educational Leadership
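The two validity analyses reported here, Pearson correlation and ROC classification, can be sketched as follows with simulated data; an AUC near .50 corresponds to the chance-level prediction the study found.

```python
# A minimal sketch of the two validity checks described above: a Pearson
# correlation between an essay rating score and cumulative GPA, and a ROC
# analysis of the score as a binary classifier of retention. Data are
# simulated for illustration only.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
essay_score = rng.normal(3.0, 0.7, n)                  # trained-reader rating
gpa = 2.8 + 0.05 * essay_score + rng.normal(0, 0.5, n)  # weak true relationship
retained = rng.integers(0, 2, n)                        # 1 = retained to year two

r, p = pearsonr(essay_score, gpa)
auc = roc_auc_score(retained, essay_score)
print(f"r = {r:.2f} (p = {p:.3f}); ROC AUC = {auc:.2f}")
# An AUC near 0.50 means the score separates retained from non-retained
# students no better than chance, matching the study's conclusion.
```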
|
216 |
Constructing a Polysemous Academic Vocabulary Extent Test Via Polytomous Rasch Model Measurement Analyses
Rowles, Phillip Bruce, January 2015 (links)
Educational measurement research faces an unresolved dilemma: competently meeting the longstanding demand for improved assessments of the vocabulary strength (depth) aspect. My original contribution to knowledge in the written receptive vocabulary knowledge construct research domain is twofold. My first contribution is proposing an a priori metasynonymy awareness hypothesis based on a vocabulary strength aspect extension of O’Connor’s (1940) written receptive vocabulary acquisition developmental stage theory. My second contribution is designing and constructing a test of vocabulary extent, the nexus between the vocabulary size (breadth) and strength aspects. The test, called the Polysemous Academic Vocabulary Extent Test, utilizes ordered triple rank (OTR) responses and a complementary six-tier incremental scoring guide rubric. An example test item includes a sentence stem with a bold keyword and three options, such as: “All the reviews of the movie were positive.” positive: a) sure b) good c) enviro / Language Arts
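Since the abstract does not reproduce the six-tier rubric, the following sketch only shows one plausible way to grade an ordered triple rank response: counting option pairs the response ranks in the same relative order as a keyed ordering. The key and the scoring rule are assumptions for illustration, not the test's actual rubric.

```python
# A hypothetical illustration of grading an ordered triple rank (OTR)
# response. Credit here is the number of option pairs a response ranks in
# the same relative order as an assumed key (0-3). A real six-tier rubric
# would distinguish all 3! = 6 orderings, which this distance alone cannot.
from itertools import permutations

KEY = ("b", "c", "a")  # assumed keyed order, best synonym first

def agreements(resp):
    # Count option pairs ranked in the same relative order as the key.
    return sum(
        resp.index(KEY[i]) < resp.index(KEY[j])
        for i in range(3)
        for j in range(i + 1, 3)
    )

for resp in permutations("abc"):
    print("".join(resp), "->", agreements(resp), "pairwise agreements")
```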
|
217 |
The positive and negative effects of testing in lifelong learning
January 2011 (links)
Formal classroom learning is a lifelong pursuit. Many older adults return to school to advance their careers, learn new skills, or simply for personal fulfillment. As such, methods for improving learning should be considered in relation to both younger and older learners in order to properly assess their ultimate usefulness. A technique that has been demonstrably effective at improving learning and memory in younger students is testing. Testing improves memory more than mere exposure to material (e.g., restudying), a benefit known as the positive testing effect. However, recognition tests, where learners are exposed to correct and incorrect information (e.g., multiple-choice tests), also introduce false information to test-takers. While evidence shows that testing improves memory for tested material, this can extend to the incorrect material presented on recognition tests, manifesting as increased reproduction of incorrect answers (lures), a phenomenon known as the negative testing effect. These effects of testing, however, have only been studied in younger learners. Older learners may show decreased positive testing effects and increased negative testing effects because of poorer long-term episodic and source memory, perhaps making them less receptive to the positive effects of testing and more susceptible to its negative effects. Therefore, this study examined the positive and negative effects of testing on learning in 60 younger university students aged 18-25, 60 younger community adults aged 18-25, and 60 older community adults aged 55-65. This research also scrutinized how individual differences, including intelligence, previous knowledge, initial performance, and source memory, were related to the positive and negative effects of testing. All groups showed positive testing effects, but these were larger for younger adults, for individuals with higher initial performance, and for people with more previous knowledge of the topics. Additionally, though no age group showed reliable negative testing effects, these effects increased for individuals with lower initial performance and previous knowledge and, surprisingly, for learners with higher nonverbal reasoning and verbal intelligence scores. These findings have important implications for the education of people of all ages and show that testing can be a beneficial learning tool for both younger and older learners.
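One way to make the two effects concrete is as per-participant difference scores, as in this minimal sketch; all proportions are invented, not the study's data.

```python
# Illustrative operationalization of the two effects described above:
# the positive testing effect as final recall of tested minus restudied
# material, and the negative testing effect as lure reproduction above a
# baseline error rate. Values are placeholders.
tested_recall = 0.72        # proportion correct for previously tested items
restudied_recall = 0.58     # proportion correct for restudied-only items
lure_intrusions = 0.09      # proportion of answers reproducing earlier lures
baseline_intrusions = 0.04  # same errors on never-tested control items

positive_effect = tested_recall - restudied_recall       # 0.14
negative_effect = lure_intrusions - baseline_intrusions  # 0.05
print(f"positive testing effect: {positive_effect:+.2f}")
print(f"negative testing effect: {negative_effect:+.2f}")
```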
|
218 |
Native American Students' Perceptions of High-Stakes Testing in New Mexico
January 2012 (links)
abstract: Given the political and public demands for accountability, this study used the voices of students on the front lines to investigate whether, in their perception, New Mexico's high-stakes testing program was taking public schools in the right direction. Did the students perceive the program as having an impact on retention, dropouts, or graduation requirements? What were the perceptions of Navajo students in Navajo reservation schools as to the impact of high-stakes testing on their emotional, physical, social, and academic well-being? The specific tests examined were the New Mexico High School Competency Exam (NMHSCE) and the New Mexico Standards Based Assessment (SBA/High School Graduation Assessment) as administered to Native American students. In interviews published by the Daily Times of Farmington, New Mexico, our local newspaper, some students reported that the testing program was not taking schools in the right direction, that the test was used improperly, and that one-time test scores were not an accurate assessment of student learning. They also cited negative and positive effects on the curriculum, teaching and learning, and student and teacher motivation. Based on the survey results, the students' positive and negative concerns and praises of high-stakes testing were categorized into themes. The positive effects cited included the fact that testing held students, educators, and parents accountable for their actions. The students were not opposed to accountability, but rather to the manner in which it was currently implemented. Several implications of these findings were examined: (a) the requirements to pass the New Mexico High School Competency Exam; (b) what high-stakes testing meant for the emotional well-being of the students; (c) the impact of sanctions under New Mexico's high-stakes testing program; and (d) the effects of high-stakes tests on students' perceptions, experiences, and attitudes. Student voices are not commonly heard in meetings and discussions about K-12 education policy. Yet the adults who control policy could learn much from listening to what students have to say about their experiences. / Dissertation/Thesis / Ed.D. Educational Administration and Supervision 2012
|
219 |
Investigating variability in student performance on DIBELS Oral Reading Fluency third grade progress monitoring probes: Possible contributing factors
Briggs, Rebecca N., 06 1900 (links)
xv, 109 p. : col. ill. / The current study investigated variability in student performance on DIBELS Oral Reading Fluency (DORF) progress monitoring passages for third grade and sought to determine to what extent the variability in weekly progress monitoring scores is related to passage-level factors (e.g., type of passage [narrative or expository], readability of the passage, reading rate for words in lists, passage-specific comprehension, background knowledge, and interest in the topic of the passage) and student-level factors (e.g., the student's initial skill and variability across benchmark passages).
In light of recent changes in IDEIA legislation allowing for the use of Response to Intervention models and formative assessment practices in the identification of specific learning disabilities, it was the intent of this study to identify factors associated with oral reading fluency that could potentially be altered or controlled during progress monitoring and decision-making, allowing for more defensible educational decisions.
The sample for analysis included 70 third grade students from one school in Iowa. Results of two-level HLM analyses indicated significant effects for background knowledge, interest in the passage, type of passage, retell fluency, readability, and word reading, with type of passage and readability demonstrating the largest effects. Magnitude of effect was gauged by the proportional reduction in level-1 residual variance. At level 2, initial risk status demonstrated a significant effect on a student's initial oral reading fluency score, while benchmark variability demonstrated a significant effect on a student's growth over time.
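As a rough illustration of the modeling approach, the sketch below fits a two-level growth model with a passage-level predictor to simulated data and computes the proportional reduction in level-1 residual variance; variable names and values are assumptions, not the study's data.

```python
# Hedged sketch of a two-level model: weekly DORF scores nested within
# students, with a passage-level predictor at level 1. The magnitude of the
# level-1 effect is gauged by the proportional reduction in residual
# variance. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_students, n_weeks = 70, 10
rows = []
for s in range(n_students):
    intercept = rng.normal(90, 15)  # student-specific starting fluency
    for w in range(n_weeks):
        expository = rng.integers(0, 2)  # passage type indicator
        wcpm = intercept + 1.5 * w - 8 * expository + rng.normal(0, 7)
        rows.append({"student": s, "week": w, "expository": expository, "wcpm": wcpm})
df = pd.DataFrame(rows)

# Unconditional growth model, then a model adding the passage-type predictor.
m0 = smf.mixedlm("wcpm ~ week", df, groups=df["student"]).fit()
m1 = smf.mixedlm("wcpm ~ week + expository", df, groups=df["student"]).fit()

# Proportional reduction in level-1 (residual) variance due to passage type.
prv = (m0.scale - m1.scale) / m0.scale
print(f"proportional reduction in residual variance: {prv:.2%}")
```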
Results demonstrate support for readability as an indicator of passage difficulty as it relates to predicting oral reading fluency for students and suggest that consideration of the type of passage may be warranted when interpreting student ORF scores. Additionally, results indicated possible student-level effects of variables, such as background knowledge and word list reading, that were not investigated within the current study. Limitations of the study, considerations for future research, and implications for practice are discussed. / Committee in charge: Roland Good, Chairperson/Advisor; Laura Lee McIntyre, Member; Joe Stevens, Member; Robert Davis, Outside Member; Scott Baker, Member
|
220 |
Integration of Traditional Assessment and Response to Intervention in Psychoeducational Evaluations of Culturally and Linguistically Diverse Students
January 2014 (links)
abstract: The popularity of response-to-intervention (RTI) frameworks of service delivery has increased in recent years. Scholars have speculated that RTI may be particularly relevant to the special education assessment process for culturally and linguistically diverse (CLD) students, due to its suspected utility in ruling out linguistic proficiency as the primary factor in learning difficulties. The present study explored how RTI and traditional assessment methods were integrated into the psychoeducational evaluation process for students suspected of having specific learning disabilities (SLD). The content of psychoeducational evaluation reports completed on students found eligible for special education services under the SLD category from 2009-2013 was analyzed. Two main research questions were addressed: how RTI influenced the psychoeducational evaluation process, and how this process differed for CLD and non-CLD students. Findings indicated variability in the incorporation of RTI in evaluation reports, with an increase across time in the tendency to reference the prereferral intervention process. However, actual RTI data were present in a minority of reports, and the inclusion of such data was more common for reading than for other academic areas, as well as more likely for elementary students than secondary students. Contrary to expectations, RTI did not play a larger role in evaluation reports for CLD students than in reports for non-CLD students. Evaluations of CLD students also did not demonstrate greater variability in the use of traditional assessments, and were more likely than evaluations of non-CLD students to rely on nonverbal cognitive measures. Methods by which practitioners addressed linguistic proficiency were variable, with parent input, educational history, and individually administered proficiency test data commonly used. Assessment practices identified in this study are interpreted in the context of best-practice recommendations. / Dissertation/Thesis / Doctoral Dissertation Educational Psychology 2014
|