1

Making the objective subjective: a sociopsychometric exploration of fairness and standardized testing

Yates, Kristin E. (2006)
Thesis (Ph.D.), University of Nebraska-Lincoln, 2006. Title from title screen (site viewed August 28, 2006). PDF text of dissertation: 116 p., ill., 1.29 MB. UMI publication number: AAT 3208122. Includes bibliographical references. Also available in microfilm, microfiche, and paper formats.
2

A proposed placement program for freshman chemistry students at Kansas State College

Homman, Guy Burger. January 2011
Typescript, etc. Digitized by Kansas State University Libraries.
3

Meta-analysis of the predictive validity of Scholastic Aptitude Test (SAT) and American College Testing (ACT) scores for college GPA

Curabay, Muhammet. 04 January 2017
College admission systems in the United States require the Scholastic Aptitude Test (SAT) and American College Testing (ACT) examinations. Although some sources suggest that SAT and ACT scores give meaningful information about academic success, others disagree. The objective of this study was to determine whether SAT and ACT exams have significant predictive validity for college success. The study examined the effectiveness of SAT and ACT scores for predicting college students' first-year GPAs with a meta-analytic approach. Most of the studies were retrieved from the Academic Search Complete and ERIC databases and were published between 1990 and 2016. In total, 60 effect sizes were obtained from 48 studies. The average correlation between test score and college GPA was 0.36 (95% confidence interval: .32, .39) under a random-effects model, indicating a significant positive relationship between exam score and college success. The moderators examined were publication status and exam type; no effect was found for publication status, while a significant effect of exam type was found, with a slightly higher average correlation between SAT score and college GPA than between ACT score and college GPA. No publication bias was found in the study.
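To make the pooling step concrete, here is a minimal random-effects sketch in Python. The correlations and sample sizes are hypothetical, and the Fisher-z transformation with a DerSimonian-Laird estimate of between-study variance is one standard approach, not necessarily the exact procedure used in this thesis.

```python
import numpy as np

def random_effects_pooled_r(rs, ns):
    """Pool correlations under a DerSimonian-Laird random-effects model.

    rs: per-study correlations; ns: per-study sample sizes.
    Works on the Fisher-z scale, where var(z) is approximately 1/(n - 3).
    """
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)            # Fisher z-transform
    v = 1.0 / (ns - 3.0)          # within-study variances
    w = 1.0 / v                   # fixed-effect weights
    z_fe = np.sum(w * z) / np.sum(w)
    # DerSimonian-Laird estimate of between-study variance tau^2
    q = np.sum(w * (z - z_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)
    # Random-effects weights, pooled estimate, and 95% CI (back-transformed)
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.tanh(z_re), (np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se))

# Hypothetical correlations between admission-test score and first-year GPA
r_pooled, (lo, hi) = random_effects_pooled_r(
    rs=[0.30, 0.41, 0.35, 0.38], ns=[250, 400, 180, 520])
print(f"pooled r = {r_pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```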
4

From Cribs to Crayons: A Study on the Use of Universal Curriculum and Assessment of Preschool Students and Teachers in the Classroom

Williams, Karen. 01 December 2016
Current research indicates a correlation between participation in an early childhood program and a student's performance on later standardized measures, including the challenge of using early learning standards (Feldman, 2010). This study focused on state initiatives and on student participation in an early childhood preschool model centered on universal curriculum and assessment designed to measure student outcomes aligned to learning targets outlined in state preschool curriculum standards. Research shows that learning lags for students who have not participated in an early childhood program, while those who have participated in some kind of early childhood program show progress (Heckman, 2011). Young children come to school with varying experiences, which may or may not enhance their learning. Educators are responsible for providing positive experiences and academic activities that develop academic awareness and social/emotional skills, along with appropriate behavioral skills. Participation in preschool should also build a student's independence and competency skills. This study examined state initiatives, curriculum materials, and assessment tools related to the importance of early childhood education programming and teacher practices, and the impact of universal curriculum and assessment implemented in the classroom during the school year. It further explored teacher perspectives on educational programming, Louisiana's early childhood initiatives, and the use of universal curriculum and assessment in their classrooms.
5

Making test anxiety a laughing matter: A quantitative study

Repass, Jim T. 04 April 2017
Approaches to relieving test anxiety range from relaxation exercises to prescription medication. Humor can be a simple method of relief. The current study examined whether humor, in the form of a cartoon placed on the splash page of an online exam, improved the test scores of students with high test anxiety. Two theories guided the research: the interference theory of Ralf Schwarzer and Matthias Jerusalem, which holds that students have difficulty separating competing thoughts during an exam, and the adult learning theory of Malcolm Knowles, which differentiates the learning of children and adults and explains how adults learn. A quasi-experimental quantitative design was used to test for an association between humor and test anxiety relief. The sample comprised equal numbers of students with high and low test anxiety, with the low-test-anxiety group serving as the control. A two-sample t test was used to test for an association between the cartoon and exam scores. Intended benefits of the study included: (a) relief for students with test anxiety, (b) reliable assessments of such students for instructors, and (c) confident, well-educated graduates. Results were the opposite of those expected: the high-test-anxiety group did worse on the exam with the cartoon, with the two-sample t test showing a decline of 6.222 points between the midterm and final exams for that group.
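The analysis above hinges on a two-sample t test. A minimal sketch using scipy follows; the group sizes, score distributions, and variable names are illustrative assumptions, not the study's actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical final-exam scores under the cartoon condition for
# high- and low-test-anxiety groups of equal size
high_anxiety = rng.normal(loc=74, scale=10, size=30)
low_anxiety = rng.normal(loc=80, scale=10, size=30)

# Welch's two-sample t test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(high_anxiety, low_anxiety, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print(f"mean difference = {high_anxiety.mean() - low_anxiety.mean():.2f}")
```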
6

Assessment services at a university education clinic

Dangor, Zubeda. 20 February 2015
No description available.
7

The Effect of Population Shifts on Teacher VAM Scores

Unknown Date
Value-Added Models (VAMs) require consistent longitudinal data that include student test scores from sequential years. However, longitudinal data are usually incomplete for several reasons, including year-to-year changes in student populations. This study explores the implications of yearly population changes for teacher VAM scores. I used the North Carolina End of Grade student data sets, created artificial sub-samples, and ran separate VAMs for each sub-sample. Results indicate that changes in student population can affect teacher VAM scores.

A dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Fall Semester 2015. November 12, 2015. Keywords: hierarchical linear modeling, value-added models. Includes bibliographical references. Committee: Russell Almond, Professor Directing Dissertation; Elizabeth Jakubowski, University Representative; Betsy Jane Becker; Insu Paek.
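For readers new to the model class, here is a minimal covariate-adjustment VAM sketch: current-year scores are regressed on prior-year scores plus teacher indicators, and the teacher coefficients are read as value-added estimates. The simulated data, the plain OLS specification, and the subsampling step are illustrative assumptions; the dissertation's models and the North Carolina data are far richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Hypothetical two-year panel: prior score, teacher assignment, current score
df = pd.DataFrame({
    "prior": rng.normal(50, 10, n),
    "teacher": rng.choice(["A", "B", "C"], n),
})
true_effect = df["teacher"].map({"A": 0.0, "B": 2.0, "C": -1.5})
df["score"] = 0.8 * df["prior"] + true_effect + rng.normal(0, 5, n)

# Covariate-adjustment VAM: teacher coefficients are the value-added estimates
full_fit = smf.ols("score ~ prior + C(teacher)", data=df).fit()
print(full_fit.params.filter(like="teacher"))

# Dropping students changes each teacher's roster; re-estimating on the
# reduced sample mimics the population-shift mechanism the study examines
subsample = df.sample(frac=0.7, random_state=1)
sub_fit = smf.ols("score ~ prior + C(teacher)", data=subsample).fit()
print(sub_fit.params.filter(like="teacher"))
```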
8

The Comparison of Standard Error Methods in the Marginal Maximum Likelihood Estimation of the Two-Parameter Logistic Item Response Model When the Distribution of the Latent Trait Is Nonnormal

Unknown Date
A Monte Carlo simulation study investigated the accuracy of several item parameter standard error (SE) estimation methods in item response theory (IRT) when marginal maximum likelihood (MML) estimation was used and the distribution of the underlying latent trait was nonnormal in the two-parameter logistic (2PL) model. The manipulated between-subject factors were sample size (N), test length (TL), and the shape of the latent trait distribution (Shape). The within-subject factor was the SE estimation method: the expected Fisher information method (FIS), the empirical cross-product method (XPD), the supplemented-EM method (SEM), the forward difference method (FDM), the Richardson extrapolation method (REM), and the sandwich-type covariance method (SW). The commercial IRT software flexMIRT was used for item parameter and SE estimation. Results showed that, other factors held equal, all of the SE methods studied produced less accurate SE estimates when the distribution of the underlying trait was positively skewed or positively skewed-bimodal than when it was normal. The degree of inaccuracy of each method for an individual item parameter depended on the magnitudes of the relevant a and b parameters, and was affected more by the magnitude of the b parameter. At the test level, the overall average performance of the SE methods interacted with N, TL, and Shape. The FIS method was not viable when TL = 40 and was run only when TL = 15; for such a short test it remained the "gold standard," estimating SEs most accurately among all the methods, although it required relatively long run times. The XPD method was the least time-consuming option and generally performed very well when Shape was normal, but it tended to produce positively biased results when a short test was paired with a small sample. The SW method did not outperform the other SE methods when Shape was nonnormal, contrary to what theory suggests. The FDM showed somewhat larger variation when N = 1500 and N = 3000. The SEM and REM were the most accurate methods in this study and appeared to be good choices for both normal and nonnormal cases. For each simulated condition, the average shape of the raw-score distribution is presented to help practitioners infer the shape of the underlying latent trait distribution when the truth is unknown, leading to more informed choices of SE methods based on these results. Implications, limitations, and future directions are discussed.

A dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Spring Semester 2018. April 9, 2018. Includes bibliographical references. Committee: Insu Paek, Professor Directing Dissertation; Fred Huffer, University Representative; Betsy Jane Becker; Yanyun Yang.
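For orientation on the 2PL model and Fisher-information-based SEs, here is a minimal sketch. To stay compact it conditions on known abilities rather than marginalizing over the latent trait, so it simplifies the MML setting the study investigates; the skewed, standardized theta distribution mirrors the nonnormal conditions.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL item response function: P(correct | theta, a, b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(7)
n = 1000
# Positively skewed latent trait, standardized (a nonnormal condition)
theta = rng.gamma(shape=2.0, scale=1.0, size=n)
theta = (theta - theta.mean()) / theta.std()

a_true, b_true = 1.2, 0.5
p = p2pl(theta, a_true, b_true)

# Expected Fisher information for (a, b), conditioning on known theta:
#   I = sum_i P_i (1 - P_i) g_i g_i^T,  where g_i = (theta_i - b, -a)
# is the gradient of the logit a * (theta_i - b) with respect to (a, b).
g = np.column_stack([theta - b_true, np.full(n, -a_true)])
fisher = ((p * (1 - p))[:, None, None] * (g[:, :, None] @ g[:, None, :])).sum(axis=0)
ses = np.sqrt(np.diag(np.linalg.inv(fisher)))
print(f"SE(a) = {ses[0]:.4f}, SE(b) = {ses[1]:.4f}")
```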
9

Critical Issues in Survey Meta-Analysis

Unknown Date
In research synthesis, researchers may aim to summarize people's attitudes toward, and perceptions of, phenomena that have been assessed using different measures. Self-report rating scales are among the most commonly used measurement tools for quantifying such latent constructs in education and psychology. However, self-report rating-scale questions measuring the same construct may differ from each other in many ways: scale format, number of response options, wording of questions, and labeling of response-option categories may all vary across questions. Consequently, variation across measures of the same construct raises the issue of the comparability of results across studies in meta-analytic investigations. In this study, I examine the complexities of summarizing the results of different survey questions about the same construct in a meta-analytic fashion. More specifically, the study focuses on the practical problems that arise when combining survey items that differ in the wording of question stems, the number of response-option categories, scale direction (i.e., unipolar vs. bipolar scales), response-scale labeling (i.e., fully labeled vs. endpoint-labeled scales), and response-option labeling (e.g., "extremely happy" vs. "completely happy" vs. "most happy"; "pretty happy" vs. "quite happy" vs. "moderately happy"; and "not at all happy" vs. "least happy" vs. "most unhappy"). In addition, I propose practical solutions for handling the issues that arise from such variations when conducting a meta-analysis, and I discuss the implications of the proposed solutions from the perspective of meta-analysis. Examples are drawn from the collection of studies in the World Happiness Database (Veenhoven, 2006), which includes various single-item happiness measures.

A dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Fall Semester 2018. November 9, 2018. Keywords: meta-analysis, scale transformations, survey. Includes bibliographical references. Committee: Betsy J. Becker, Professor Directing Dissertation; Fred W. Huffer, University Representative; Yanyun Yang; Insu Paek.
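One common device for the comparability problem described above is a linear stretch of each response scale onto a shared metric (often 0-10 in the happiness literature). The sketch below shows that transformation; it is a standard approach offered as an assumption here, not necessarily the solution the dissertation proposes.

```python
def rescale(rating, scale_min, scale_max, target_min=0.0, target_max=10.0):
    """Linearly map a rating from its native response scale onto a common
    0-10 metric so that means from different scales become comparable."""
    span = scale_max - scale_min
    return target_min + (rating - scale_min) / span * (target_max - target_min)

# Hypothetical happiness items on 4-, 7-, and 11-point scales
print(rescale(3, 1, 4))    # -> 6.67 on the common metric
print(rescale(5, 1, 7))    # -> 6.67
print(rescale(7, 0, 10))   # -> 7.0
```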
10

The Impact of Unbalanced Designs on the Performance of Parametric and Nonparametric DIF Procedures: A Comparison of Mantel-Haenszel, Logistic Regression, SIBTEST, and IRTLR Procedures

Unknown Date
The current study examined the impact of unbalanced sample sizes between focal and reference groups on the Type I error rates and DIF detection rates (power) of five DIF procedures: MH, LR, general IRTLR, IRTLR-b, and SIBTEST. Five simulation factors were used. Four generated the simulation data: sample size, DIF magnitude, group mean ability difference (impact), and studied item difficulty. The fifth was the DIF method factor (MH, LR, general IRTLR, IRTLR-b, and SIBTEST). A repeated-measures ANOVA, with the DIF method factor as the within-subjects variable, was performed to compare the performance of the five procedures and to discover their interactions with the other factors. For each data-generation condition, 200 replications were made. Type I error rates for the MH and IRTLR procedures were close to or below the 5% nominal level across sample-size levels. On average, the Type I error rates for IRTLR-b and SIBTEST were 5.7% and 6.4%, respectively. In contrast, the LR procedure had a higher Type I error rate, ranging from 5.3% to 8.1% with an average of 6.9%. As for the rejection rate under DIF conditions (the DIF detection rate), IRTLR-b showed the highest rate, followed by SIBTEST, with averages of 71.8% and 68.4%, respectively. Overall, the impact of unbalanced sample sizes between reference and focal groups on DIF detection showed a similar tendency for all methods, with detection rates generally increasing as total sample size increased. In practice, IRTLR-b, which showed the best DIF detection rates while controlling the Type I error rate, should be the method of choice when model-data fit is reasonable. If non-IRT DIF methods are considered, MH or SIBTEST could be used, depending on which type of error (Type I or Type II) is considered more serious.

A dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Fall Semester 2017. November 6, 2017. Includes bibliographical references. Committee: Insu Paek, Professor Directing Dissertation; Fred Huffer, University Representative; Betsy Jane Becker; Yanyun Yang.
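As a reference point for the simplest of the compared procedures, here is a minimal Mantel-Haenszel DIF sketch: examinees are stratified on a matching score, a 2x2 group-by-correct table is formed within each stratum, and the common odds ratio is mapped onto the ETS delta scale. The simulated data and the coarse stratification are illustrative assumptions.

```python
import numpy as np

def mantel_haenszel_dif(correct, group, strata):
    """Mantel-Haenszel common odds ratio for one studied item.

    correct: 0/1 responses to the studied item
    group:   0 = reference, 1 = focal
    strata:  matching variable (e.g., banded total test score)
    """
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    alpha = num / den
    return alpha, -2.35 * np.log(alpha)  # odds ratio and ETS delta

rng = np.random.default_rng(3)
n = 2000
group = rng.integers(0, 2, n)              # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)
strata = np.clip(np.round(theta * 5 + 20), 0, 40).astype(int) // 5
logit = theta - 0.5 * group                # item is harder for the focal group
correct = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

alpha, delta = mantel_haenszel_dif(correct, group, strata)
print(f"MH odds ratio = {alpha:.2f}, ETS delta = {delta:.2f}")  # delta < 0 flags DIF
```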
