11.
THE POLITICS OF PLACEMENT: A HISTORICAL EXAMINATION OF STUDENT, FACULTY, AND ADMINISTRATOR PERSPECTIVES OF PLACEMENT PRACTICES
Davis-Cosby, Nicki, 07 August 2015
No description available.
12.
VISIONER OM FORMATIVA PRAKTIKER: Lärares och elevers levda erfarenheter av formativ bedömning och bedömningsmatriser i skolans fysikundervisning / VISIONS OF FORMATIVE ASSESSMENT: Teachers’ and Students’ Lived Experiences of Formative Assessment and Rubric Use in Physics Education
Hallström, Henrik, January 2023
In the wake of declining student performance and interest in science education, efforts to improve the quality of science teaching, including physics teaching, have intensified. A recurring proposal for improving physics teaching is the use of formative assessment. Policy reforms tend to treat the implementation of formative assessment as straightforward, but studies indicate that integrating these strategies into teachers’ practices can be challenging. Using a phenomenological approach and hermeneutic reflections, the present study explores the opportunities and challenges that teachers and students experience when implementing formative assessment in the physics classroom. For example, teachers may encounter resistance from students and colleagues with different expectations of physics teaching, limiting teachers’ opportunities to ‘break free’ from established traditions. However, the study also highlights opportunities for physics teachers to evolve by taking risks and embracing formative assessment as an overarching approach to assessing learning. Furthermore, the present study confirms the results of previous research indicating that students may see assessment rubrics in a positive light, as their use can clarify teachers’ expectations and reduce uncertainty about them. However, the results of the present study also show that students may approach rubrics merely as mechanical, strategic tools for obtaining their desired grades, which risks conveying to students the message that physics knowledge is quantitative in nature. The students’ experiences also showed that the use of rubrics could cause stress and anxiety, limiting the formative potential of rubrics. The results of the study are discussed in relation to the support that teachers and students need when implementing formative assessment and rubric use, and they have implications for teachers’ assessment literacy, including their ability to implement formative assessment in relation to different purposes of physics teaching. One conclusion is that teachers’ and students’ lived experiences of formative assessment and rubric use need to be understood in relation to the wider context of their lifeworlds, which are marked by an increased focus on performance and results. This is crucial so that teachers and students are not cast as the problem when investments in formative assessment fail to meet expectations.
13.
Exploring Teacher Assessment Literacy through the Process of Training Teachers to Write Assessment Items
Wright, Heather Peltier, 29 March 2017
The purpose of this study was to examine the process and impact of assessment training content and delivery mode on the quality of assessment items developed by teachers in a two-year assessment development project. Teacher characteristics were examined as potential moderating factors. Four delivery modes were employed in the project: synchronous online, asynchronous online, in-person workshop, and blended (a combination of online and in-person training). The quality of the assessment items developed by participating teachers was measured by 1) item acceptance rate, 2) number of item reviews (an indicator of how many times an accepted item was rejected before being approved), and 3) the psychometric properties of the items (item difficulty and item discrimination) in the field test data.
A teacher perception survey with quantitative and qualitative components was used to explore teachers’ perceptions of the training across the four modes and the impact they expected project participation to have on their classroom assessment practices.
Multilevel modeling and multiple regression were used to examine the quality of items developed by participants, while constant comparative analysis, a chi-square test, and ANOVA were employed to analyze participants’ responses to a participation survey.
No pre-existing teacher variables were found to have a significant impact on the item discrimination values, though prior assessment development experience beyond that of the classroom level was found to have a significant relationship with the number of reviews per item. After controlling for prior assessment development experience, participant role was found to have a significant (p < .01) impact on the number of reviews per item. Items written by participants who served as both item writers and reviewers had a significantly lower number of reviews per item, meaning their items were rejected less frequently than items written by participants who served as item writers only. No differences in item quality were found based on the mode of training in which item writers participated.
Responses to the training evaluation survey differed significantly by mode of training at p < .001. The in-person trained group had the lowest total rating, followed by the online asynchronous group, while the online synchronous group had the highest overall rating of the training. Participant responses to open-ended questions also differed significantly by mode of training.
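For readers less familiar with the classical item-quality indices referenced in this abstract, the sketch below shows one conventional way to compute item difficulty (proportion correct) and a corrected point-biserial discrimination from a 0/1-scored response matrix. It is an illustrative example on simulated data; the function name and the simulation are assumptions, not the project’s actual field-test scoring pipeline.

```python
import numpy as np

def item_statistics(scores: np.ndarray):
    """Classical item statistics for a 0/1-scored response matrix.

    scores: examinees x items array of 0 (incorrect) / 1 (correct).
    Returns per-item difficulty (proportion correct) and discrimination
    (corrected point-biserial: correlation of each item with the total
    score excluding that item).
    """
    n_examinees, n_items = scores.shape
    difficulty = scores.mean(axis=0)          # p-value per item
    total = scores.sum(axis=1)
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - scores[:, j]           # total score without item j
        discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return difficulty, discrimination

# Example with simulated field-test data (200 examinees, 10 items)
rng = np.random.default_rng(0)
sim = (rng.random((200, 10)) < rng.uniform(0.3, 0.9, 10)).astype(int)
p, r_pb = item_statistics(sim)
print(np.round(p, 2), np.round(r_pb, 2))
```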
14.
THE USE OF LANGUAGE PROFICIENCY TEST SCORES IN GRADUATE ADMISSIONS
Sharareh Taghizadeh Vahed (11185131), 26 July 2021
The purpose of this research is to reveal and compare the language proficiency profiles of Purdue’s Chinese and Indian graduate applicants in various disciplines, as a step towards developing the Language Proficiency Literacy (LPL) of graduate admissions decision-makers. The study argues that before being able to offer LPL development opportunities to admissions decision-makers, language testers need to gain admissions literacy in their specific academic context. One way this can be achieved is by analyzing graduate admissions data to see patterns of test score use in each discipline and to reveal the language proficiency profiles of graduate applicants. Providing admissions decision-makers with information about the linguistic characteristics of their applicants can be a very helpful step towards enhancing LPL in the context of graduate admissions.
One of the analyses conducted towards the goal of LPL development in the context of graduate admissions was a cluster analysis followed by a chi-square analysis to compare the language proficiency profiles of graduate applicants from various L1 backgrounds based on scores on the Test of English as a Foreign Language (TOEFL). The study found three language proficiency profiles in graduate applicants’ TOEFL data: 1) the ‘unbalanced’ profile, consisting of applicants with higher scores on the subskills of reading and listening and comparatively lower scores on speaking and writing; 2) the ‘balanced medium’ profile, representing students with moderate scores across all four subskills; and 3) the ‘balanced high’ profile, consisting of applicants with high scores across all four subskills. The study found evidence of an interaction between graduate applicant test-takers’ L1 background and membership in a balanced or unbalanced language proficiency profile, which highlights the importance of considering subskill scores in addition to the total score when using language proficiency test scores to select graduate students from specific L1 backgrounds.
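To make the analytic sequence above concrete, here is a minimal sketch of one way such a profile analysis could be run: clustering the four TOEFL subskill scores (illustrated with k-means, one of several possible cluster analysis procedures) and then testing profile membership against applicant group with a chi-square test. The simulated data, the choice of k-means, and the three-cluster setting are assumptions for illustration and are not taken from the dissertation.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Simulated TOEFL iBT subskill scores (0-30) for two applicant groups; the
# real study used Purdue admissions data, which is not reproduced here.
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "group": rng.choice(["Chinese", "Indian"], size=n),
    "reading": rng.integers(15, 31, n),
    "listening": rng.integers(15, 31, n),
    "speaking": rng.integers(15, 31, n),
    "writing": rng.integers(15, 31, n),
})

# Cluster the four subskill scores into three candidate profiles
X = StandardScaler().fit_transform(df[["reading", "listening", "speaking", "writing"]])
df["profile"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Chi-square test of association between applicant group and profile membership
table = pd.crosstab(df["group"], df["profile"])
chi2, p, dof, _ = chi2_contingency(table)
print(table, f"\nchi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```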
15.
The development of assessment literacy in Chinese pre-service primary teachers
Yan, Bing, 01 January 2015
Over the past decades, there has been a growing consensus among researchers and teacher educators that more support and training should be provided for pre-service and in-service teachers in order to help them acquire basic assessment knowledge and competence. Using a quasi-experimental research design, this dissertation study examined the effectiveness of a backward-designed assessment training course for improving the assessment literacy of pre-service primary teachers participating in college-level teacher preparation programs in Shanghai. Two existing, naturally formed classes, comprising eighty pre-service primary teachers from XT, a private pre-service teacher education institution in Shanghai, who met the participant recruitment criteria, served as the treatment and control groups. Framed by the Understanding by Design (UbD) approach developed by Grant Wiggins and Jay McTighe (2005), an assessment training program was developed and provided to the treatment group over a 12-week period; the control group was not provided with any assessment-related courses. All participants’ assessment literacy was measured twice, before and after the intervention, using the Chinese version of the Assessment Literacy Inventory (Mertler & Campbell, 2005), which I further modified to better fit the context of this study. Results of the study suggest that: 1) among the courses (excluding the intervention itself) provided for the pre-service primary teachers involved in this study, limited effort had been made to prepare them for their future assessment tasks; 2) owing to this inadequacy of assessment training, most of the Chinese pre-service teachers tested were not initially sufficiently literate in assessment knowledge or practice; and 3) participation in the assessment training course was a statistically significant predictor of pre-service teachers’ assessment literacy, controlling for their prior assessment literacy. In other words, with its embedded theoretical framework of UbD, the assessment literacy training course appears to have had a large positive impact on pre-service teachers’ assessment competency (F(1, 77) = 135.91, p < .001, partial η² = .638).
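As a quick check, the reported effect size is consistent with the reported F statistic: for a test with numerator and denominator degrees of freedom df1 and df2, partial eta squared can be recovered as shown below. The identity is standard, and only the values quoted in the abstract are used.

```latex
\eta^2_{\mathrm{partial}}
  = \frac{F \, df_1}{F \, df_1 + df_2}
  = \frac{135.91 \times 1}{135.91 \times 1 + 77}
  = \frac{135.91}{212.91}
  \approx 0.638
```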
16.
INVESTIGATING THE LINK BETWEEN CURRENT CLASSROOM TEACHERS’ CONCEPTIONS, LITERACY, AND PRACTICES OF ASSESSMENT
Snyder, Mark Richard, January 2017
Teachers’ assessment conceptions, assessment literacy, and self-reported assessment practices were investigated using a single-administration survey of U.S. classroom teachers. These phenomena were investigated both individually and in their interrelationships. Assessment conceptions were measured with the Teachers’ Conceptions of Assessment III – abridged survey, and assessment literacy with the Assessment Literacy Inventory. Self-reported classroom assessment practices were analyzed with factor analysis to determine a set of five assessment practice factors indicating a set of classroom assessment practice behaviors. Analysis suggested that certain assessment conceptions held by teachers and aspects of their assessment literacy were significant predictors of their loadings on certain assessment practice factors. One of these significant relationships was that the degree to which teachers held the conceptions that assessment holds schools accountable and that it aids in student improvement predicted the frequency with which they reported using tests and quizzes in their classrooms. There were also significant differences in self-reported assessment practices based on the grade level of students taught, years of teaching experience, and other demographic variables. These findings suggest that study and use of the three assessment phenomena would inform practitioners about what may influence classroom teachers’ assessment practices, and how those practices can best be remediated. / Educational Psychology
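For readers unfamiliar with the data-reduction step described above, the sketch below shows a generic way to extract a five-factor structure from Likert-type survey responses and obtain per-respondent factor scores. The simulated responses, item count, and varimax rotation are illustrative assumptions and are not taken from the dissertation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Simulated Likert-type responses (1-5) from 400 teachers on 20 practice items;
# the study's actual survey items and sample are not reproduced here.
rng = np.random.default_rng(7)
responses = rng.integers(1, 6, size=(400, 20)).astype(float)

# Standardize items, then extract five factors with a varimax rotation,
# mirroring the idea of reducing many practice items to a few practice factors.
X = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0).fit(X)

loadings = fa.components_.T          # items x factors loading matrix
scores = fa.transform(X)             # per-teacher factor scores
print(np.round(loadings[:5], 2))     # loadings for the first five items
```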
17.
TOWARD ASSESSMENT LEADERSHIP: STUDY OF ASSESSMENT PRACTICES AMONG SCHOOL AND CLASSROOM LEADERS
Eubank Morris, Carrie Elizabeth, 01 January 2017
Traditionally, models of instructional leadership espouse data-informed decision making in response to student assessment outcomes as one of the core school leader behaviors. In recent years, rising expectations from accountability policies and related assessment practices have myriad implications for school districts, specifically in the areas of standards-driven reform, student assessment systems, and professional development models. As a result, demands on schools to collect and use student assessment data to inform curricular and instructional decisions have expanded. While principals are typically held responsible for school improvement efforts, more contemporary models of instructional leadership incorporate teachers as classroom-based leaders of assessment practices in forums such as professional learning communities.
School and classroom assessment leaders engage in behaviors such as (a) identifying an assessment vision, (b) fostering group goals, (c) providing a model of data-informed decision making, (d) promoting teachers’ job-embedded professional learning experiences, (e) evaluating instructional practices with specific feedback, and (f) strategically aligning resources to school improvement goals. Unfortunately, school districts face many challenges with assessment leadership due to barriers in beliefs about assessments, time with and access to tools and training, and knowledge and skills about how to operationalize effective assessment practices that yield positive student outcomes.
The purpose of this study was to explore assessment leadership as a construct among P-12 school and classroom leaders in one large district in Florida. Data were collected using an Internet-based survey constructed from existing qualitative and quantitative measures of key components of assessment leadership established in the literature. A series of descriptive and inferential analyses were conducted to (a) explore the factor structure of the instrument and (b) evaluate the influence of assessment learning experiences, beliefs, and knowledge on assessment practices. Relationships among variables were examined when considering moderating variables for school role (i.e., school-level administrator or classroom teacher as professional learning communities facilitator) and school type (elementary or secondary). Limitations were discussed to inform future research in this critical area of school improvement.
18.
Characterizing Multiple-Choice Assessment Practices in Undergraduate General Chemistry
Jared B Breakall (8080967), 04 December 2019
Assessment of student learning is ubiquitous in higher education chemistry courses because it is the mechanism by which instructors assign grades, alter teaching practice, and help their students succeed. One type of assessment that is popular in general chemistry courses, yet difficult to create effectively, is the multiple-choice assessment. Despite its popularity, little is known about the extent to which multiple-choice general chemistry exams adhere to accepted design practices, or about the processes that general chemistry instructors engage in while creating these assessments. Further understanding of multiple-choice assessment quality and the design practices of general chemistry instructors could inform efforts to improve the quality of multiple-choice assessment practice in the future. This work attempted to characterize multiple-choice assessment practices in undergraduate general chemistry classrooms by 1) conducting a phenomenographic study of general chemistry instructors’ assessment practices and 2) designing an instrument that can detect violations of item writing guidelines in multiple-choice chemistry exams.
The phenomenographic study of general chemistry instructors’ assessment practices included 13 instructors from the United States who participated in a three-phase interview. They were asked to describe how they create multiple-choice assessments, to evaluate six multiple-choice exam items, and to create two multiple-choice exam items using a think-aloud protocol. It was found that the participating instructors considered many appropriate assessment design practices yet did not utilize, or were not familiar with, all the appropriate assessment design practices available to them.
Additionally, an instrument was developed that can be used to detect violations of item writing guidelines in multiple-choice exams. The instrument, known as the Item Writing Flaws Evaluation Instrument (IWFEI), was shown to be reliable across users. Once developed, the IWFEI was used to analyze 1,019 general chemistry exam items. The instrument provides a tool for researchers to study item writing guideline adherence, as well as a tool for instructors to evaluate their own multiple-choice exams. It is hoped that use of the IWFEI will improve multiple-choice item writing practice and quality.
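The abstract notes that the IWFEI was shown to be reliable across users. One common way to quantify that kind of between-rater reliability for dichotomous flaw flags is Cohen's kappa, sketched below with invented ratings; this does not reproduce the study's actual reliability analysis.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical flaw flags (1 = guideline violation present, 0 = absent) assigned
# by two independent raters to the same 12 multiple-choice items; these data are
# invented for illustration and are not from the IWFEI study.
rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
rater_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # chance-corrected agreement between raters
```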
The results of this work provide insight into the multiple-choice assessment design practices of general chemistry instructors and an instrument that can be used to evaluate multiple-choice exams for item writing guideline adherence. Conclusions, recommendations for professional development, and recommendations for future research are discussed.