71. An Analysis of the Relationships Between State Mandates for Financial Education and Young Adults' Financial Literacy and Financial Capability
Carlson, Elise, 01 January 2020
The study sought to understand the relationship between the type of state mandate for financial education and 18-24-year-olds' financial literacy and financial capability. Using extant data from national surveys of financial literacy and financial capability in 2015 and 2018, this study found that there was rarely a significant difference in young adults' financial literacy and financial capability as related to the level of financial education they received in high school. For 2015 literacy, the education mandate as a main effect within ethnicity was significant, p = .025. Certain demographic main effects were significant. In 2015, ethnicity and educational attainment were each significant for financial literacy, p = .000. In 2018, gender, ethnicity, and educational attainment were each significant for financial literacy, p = .000, while income was significant, p = .005. In 2015, ethnicity was significant for financial capability, p = .001, while educational attainment and income were each significant, p = .000. In 2018, gender was significant for financial capability, p = .016, while ethnicity, educational attainment, and income were each significant, p = .000. Interaction effects existed in some cases: for 2015 financial literacy, gender by education mandate, p = .008, and income by education mandate, p = .040; for 2015 capability, gender by education mandate, p = .019; for 2018 capability, educational attainment by education mandate, p = .024. Understanding how demographic factors influence financial literacy and financial capability can help policymakers and educators address these differences and provide effective financial education for all students.
72. A Priori Analysis of Error and Bias in Value-Added Models
Lavery, Matthew, 01 January 2016
Over the past 20 years, value-added models (VAMs) have become increasingly popular in educational assessment and accountability policies because of the sophisticated statistical controls these models use to purportedly isolate the effect of a single teacher on the learning gains of his or her students. The present research uses a Monte Carlo simulation study design to investigate whether VAMs can provide accurate estimates of teacher effectiveness when all assumptions are met, and to determine how robust the models are to endogenous peer effects and nonrandom assignment of students to classrooms. The researcher generates three years of simulated achievement data for 18,750 students taught by 125 teachers, and analyzes these data with a linear mixed model similar to the SAS EVAAS Multivariate Response Model (MRM; M1), a basic covariate adjustment model (M2), and variations on these models designed to estimate random classroom effects. Findings indicate that the modified EVAAS may be too computationally onerous to be of practical use, and that modified covariate adjustment models do not perform significantly differently from the basic covariate adjustment model. When all assumptions are met, M1 is more accurate than M2, but both models perform reasonably well, misclassifying fewer than 5% of teachers on average. M1 is more robust to endogenous peer effects than M2; however, both models misclassify more teachers than when all assumptions are met. M2 is more robust to nonrandom assignment of students than M1. Assigning teachers a balanced schedule of nonrandom classes with low, medium, and high prior achievement seemed to mitigate the problems that nonrandom assignment caused for M1, but made M2 less accurate. Implications for practice and future research are discussed.
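The core Monte Carlo logic behind such a study can be illustrated with a minimal, hypothetical sketch (this is not the author's SAS EVAAS code, and the effect and noise variances are invented for illustration): simulate each student's gain as a true teacher effect plus student-level noise, estimate each teacher's effect as the classroom mean gain, and check how well the estimates recover the truth under random assignment.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N_TEACHERS = 50
STUDENTS_PER_CLASS = 25

# True teacher effects, drawn from a normal distribution (SD = 1).
true_effect = [random.gauss(0.0, 1.0) for _ in range(N_TEACHERS)]

estimated = []
for t in range(N_TEACHERS):
    # Each student's gain = teacher effect + idiosyncratic noise (SD = 3).
    gains = [true_effect[t] + random.gauss(0.0, 3.0)
             for _ in range(STUDENTS_PER_CLASS)]
    # Naive value-added estimate: the classroom mean gain.
    estimated.append(sum(gains) / len(gains))

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

# How well do the estimates track the true effects?
r = pearson(true_effect, estimated)
```

Under random assignment the correlation between true and estimated effects is high; what degrades such estimates in the scenarios the study examines is violating that assumption, e.g., sorting students into classes by prior achievement or adding endogenous peer effects.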
73. Pooling Correlation Matrices Corrected for Selection Bias: Implications for Meta-analysis
Matthews, Kenneth, 01 January 2019
Selection effects systematically attenuate correlations and must be considered when performing meta-analyses. No research domain is immune to selection effects, which are evident whenever self-selection or attrition takes place. In educational research, selection effects are unavoidable in studies of postsecondary admissions, placement testing, or teacher selection. While methods to correct for selection bias are well documented for univariate meta-analyses, they have gone unexamined in multivariate meta-analyses, which synthesize more than one correlation from each study (i.e., a correlation matrix). Multivariate meta-analyses of correlations provide opportunities to explore complex relationships, and correcting for selection effects improves the summary effect estimates. I used Monte Carlo simulations to test two methods of correcting for selection effects and to evaluate a method for pooling the corrected matrices. First, I examined the performance of Thorndike's corrections (for both explicit and incidental selection) and Lawley's multivariate correction for selection on correlation matrices when explicit selection takes place on a single variable. Simulation conditions included a wide range of selection ratios, sample sizes, and population correlations. The results indicated that the univariate and multivariate correction methods perform equivalently; I provide practical guidelines for choosing between the two. In a second Monte Carlo simulation, I examined the confidence interval coverage rates of a robust variance estimation (RVE) procedure when it is used to pool correlation matrices corrected for selection effects under a random-effects model. The RVE procedure empirically estimates the standard errors of the corrected correlations and has the advantage of making no distributional assumptions. Simulation conditions included tau-squared ratio, within-study sample size, number of studies, and selection ratio. The results were mixed, with RVE performing well under higher selection ratios and larger unrestricted sample sizes. RVE performed consistently across values of tau-squared. I discuss applications of the results, especially for educational research, and opportunities for future research.
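For reference, Thorndike's Case II correction for explicit selection (the univariate method examined here) has a simple closed form, sketched below in Python; the selection scenario in the example is hypothetical.

```python
import math

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Disattenuate a correlation observed after explicit selection on x.

    r_restricted:    correlation in the selected (range-restricted) sample
    sd_unrestricted: SD of the selection variable x before selection
    sd_restricted:   SD of x after selection
    """
    u = sd_unrestricted / sd_restricted   # u > 1 under range restriction
    r = r_restricted
    return (r * u) / math.sqrt(1.0 - r ** 2 + (r ** 2) * (u ** 2))

# Hypothetical example: r = .30 is observed after selecting the upper half
# on a standard-normal x, whose restricted SD is roughly 0.603.
r_corrected = thorndike_case2(0.30, 1.0, 0.603)  # ≈ .46, larger than .30
```

Lawley's multivariate correction generalizes this idea to a whole correlation matrix, handling the incidental restriction that selection on one variable imposes on the others.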
74. Faculty Perceptions as a Foundation for Evaluating Use of Student Evaluations of Teaching
Baker, Scott Hamilton, 01 January 2014
Amidst ever-growing demands for accountability and increased graduation rates to help justify the rising costs of higher education, few topics in undergraduate education elicit a broader range of responses than student evaluations of teaching (SETs). Despite debates over their efficacy, SETs are increasingly used as formative (pedagogical practices) and summative (employee reviews) assessments of faculty teaching. Proponents contend that SETs are a necessary component in measuring the quality of education a student receives, that they enable educators to reflect upon their own pedagogy and thus inform best practices, and that they are a valid component in summative evaluations of faculty. Skeptics argue that SETs are ineffective because the measurements themselves are invalid and unreliable, students are not qualified evaluators of teaching, and faculty may lower educational standards due to pressure for higher ratings in summative evaluations. This study delves more deeply into this debate by exploring faculty perceptions of SETs.
Through surveys of 27 full- and part-time faculty within one division at a private, four-year, teaching-focused college, this study explored faculty perceptions of SETs, primarily as an initial step in a larger process seeking to evaluate the perceived and potential efficacy of SETs. Both quantitative and qualitative data were collected and analyzed using Patton's (2008) Utilization-Focused Evaluation (UFE) framework for engaging evidence, a four-stage process in which evaluation findings are analyzed, interpreted, and judged, and recommendations for action are generated, with all steps involving intended users. Overall, the study data suggest that faculty were generally very supportive of SETs for formative assessment and strongly reported their importance and use in evaluating their own pedagogy. Findings also indicated that faculty relied primarily upon students' written qualitative comments rather than the quantitative reports generated by externally determined scaled questions on the SETs. Faculty also reported the importance of SETs as part of their own summative evaluations, yet expressed concern about overreliance upon them and indicated a desire for a more meaningful process.
The utility of the UFE framework for SETs has implications beyond the institution studied: nearly every higher education institution faces increasing demands from multiple stakeholders for accountability of student learning. Additionally, many institutions are grappling with policies on SETs in summative and formative evaluation, and the extent to which faculty and administrators do (and perhaps should) utilize SETs in measuring teaching effectiveness is a pertinent question for any institution of higher education to examine. Thus, the study suggests that the extent to which faculty reflect upon SETs, and utilize their feedback, is a salient issue at any institution; Patton's model has the potential to maximize the utility of SETs for many relevant stakeholders, especially faculty.
75. Rate of Advanced Placement (AP) Exam Taking Among AP-Enrolled Students: A Study of New Jersey High Schools
Fithian, Ellen C., 01 January 2003
No description available.
76. Use and Effectiveness of Test Accommodations for Students with Learning Disabilities on the Stanford Achievement Test, Ninth Edition
Rullman, Deborah Whisler, 01 January 2003
No description available.
77. An Experimental Study of the Effect of Puppetry on Pupil Growth in School Achievement, Personal Adjustment, and Manipulative Skill
Haak, Albert Edward, 01 January 1952
No description available.
78. The Origins and Development of Virginia's Student Assessment Policy: A Case Study
Sims, Serbrenia J., 01 January 1989
The purpose of this study was to review the historical origins and chronology of the student assessment movement in the United States and to describe and analyze the development of Virginia's higher education student assessment policy within that movement. "Student assessment," the process of determining whether or not students have met educational goals set by their programs of study, institutions of higher education, or the state, is a relatively new undertaking in Virginia. Major participants involved in the passage and implementation of Virginia's policy were identified from historical documents and interviewed based on their specific areas of knowledge.

From the interviews and document analysis, it was found that the historical origins of Virginia's student assessment policy were intertwined with the history of accrediting agencies. A second possible origin for student assessment was the response to periods of expansion and curriculum development that occurred from 1918-1928 and again from 1952-1983.

The recent push for student assessment was spurred in the mid-1980s by the release of several national studies on the condition of the curriculum, instruction, and student achievement in higher education in the United States. These reports caused the states to question the credibility of regional accrediting agencies as a means of ensuring educational quality. As a result, at least two-thirds of the states have instituted some form of student assessment legislation since 1984.

Virginia's student assessment policy began in 1985 with the passage of Senate Joint Resolution 125, which called on the State Council of Higher Education for Virginia (SCHEV) to investigate means by which student achievement could be measured to assure the citizens of Virginia of the continuing high quality of higher education in the state. The study was conducted and presented to the 1986 General Assembly of Virginia as Senate Document No. 14 and was accepted in Senate Joint Resolution 83. This resolution requested the state-supported institutions of higher education to establish student assessment programs in consultation with SCHEV. In 1989, Senate Bill 534 amended the Code of Virginia, giving SCHEV formal authority to oversee student assessment activities.

After completing the case study, its findings were compared for fit with six models of policy formulation (elite, rational, incremental, group, systems, and institutional) as proposed by Thomas Dye in his 1972 book, Understanding Public Policy. The systems model was the best fit of the six. However, since vestiges of the other models existed within Virginia's student assessment policy formulation process, the study proposed a revised systems model that incorporated each of Dye's six models.
79. A Comparison of the Performance of Five Randomly Selected Groups of 1978-1979 Eighth Grade Students on Five Different Stanford Achievement Test Batteries Standardized in 1929, 1940, 1952, 1964, and 1973
Chambers, Vaughn D., 01 December 1979
The purpose of this study was to examine the test performance of five randomly selected groups of 1978 students on five different versions of the Stanford Achievement Test. Three types of comparisons were made. First, the test scores of the five groups of 1978 students in grade 8.1 were compared with each other on the 1929, 1940, 1952, 1964, and 1973 Stanford Achievement Tests. Second, the test scores of each 1978 test group were compared with the test scores of the 8.1 norming group for each test. Last, the test scores of 1978 students were compared with the test scores of students of the same age in the norming groups for the five different tests. A total of 236 subjects from one middle school in Upper East Tennessee were used. The 236 subjects were randomly assigned to five groups. The five groups were randomly paired with the five different Stanford Achievement Tests and were tested under the same testing conditions. A computer comparison of the past achievement of the five 1978 test groups showed the groups to be equal in ability at the time of testing. In making the comparisons, it was found that students in the 1978 test groups were not achieving less than students in the past in all subjects. Reading and language achievement scores were as high as or higher than in the past. Mathematics scores were lower than in the past, except on the 1973 test. Recommendations for future research were given.