  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

The Effects of a Web-Based Instructional Program: Promoting Student Growth in Reading and Mathematics Achievement

Hill, Penelope Pritchett, 14 December 2018
As advances in technology allowed national and state education assessments to be administered digitally, many school districts transitioned to computer-based instructional programs and assessments to improve student achievement and better prepare students for high-stakes computerized assessments. One such rural public school district in Mississippi implemented a supplemental web-based instructional program, i-Ready, for the first time in the 2017-2018 school year. The purpose of this study was (a) to investigate the effects of the i-Ready program on student achievement in Grades 4-5 reading/language arts and mathematics and (b) to determine whether there were significant differences in growth (from pretest to posttest) among performance levels of students in Grades 4-5 on the 2017 state assessment in reading/language arts and mathematics. A quantitative research design using existing data was employed: the paired-samples t-test served as the primary analysis for research questions one and two (the effect of the i-Ready program on student achievement), and the one-way analysis of variance (ANOVA) served as the primary analysis for research questions three and four (differences in growth among students across Performance Levels 1-5). The results showed that the i-Ready program had a positive impact on student achievement in reading and math for Grades 4-5. No statistically significant differences were found in student growth among the performance-level groups, indicating that students at all performance levels benefited from the program.
Recommendations for future research include: (a) conducting longitudinal studies to determine the long-term effects of participation in the i-Ready program, (b) analyzing methods of implementation by classroom teachers, (c) measuring i-Ready's ability to predict proficiency and growth on state assessments, and (d) conducting studies of other online instructional programs using control groups.
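The two analyses this study describes can be sketched on synthetic data; the scores, growth effect, and level labels below are invented for illustration and are not the study's data — only the analysis structure (paired-samples t-test, then one-way ANOVA on growth across performance levels) mirrors the abstract.

```python
# Illustrative sketch of the study's two analyses on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120
pretest = rng.normal(500, 50, size=n)            # simulated scale scores
posttest = pretest + rng.normal(15, 20, size=n)  # simulated growth

# Research questions 1-2: paired-samples t-test, pretest vs. posttest.
t_stat, t_p = stats.ttest_rel(posttest, pretest)

# Research questions 3-4: one-way ANOVA on growth across Performance Levels 1-5.
growth = posttest - pretest
levels = rng.integers(1, 6, size=n)              # random level labels (illustrative)
groups = [growth[levels == k] for k in range(1, 6)]
f_stat, f_p = stats.f_oneway(*groups)

print(f"paired t-test: t = {t_stat:.2f}, p = {t_p:.2g}")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {f_p:.2g}")
```

A significant t-test with a non-significant ANOVA is the pattern the abstract reports: scores grew overall, with no detectable difference in growth between performance-level groups.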
32

A Comparison of the Effectiveness of Computer Adaptive Testing and Computer Administered Testing

Fielder, Patrick J. (Patrick Joseph), 08 1900
This study concerns the effectiveness of a computer adaptive test compared with administering the entire test. The study has a twofold purpose. The first is to determine whether the two test versions generate equivalent scores, despite their different lengths. The second is to determine whether the computer adaptive test takes significantly less time than the computer-administered full test.
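The comparison can be pictured with a small simulation, sketched here under a Rasch model with hypothetical item difficulties (this is not the study's instrument): ability is estimated once from all items and once from an adaptive subset of the same responses, and the two estimates are compared.

```python
# Minimal sketch: estimate ability (theta) from a full 60-item test and from
# a 15-item adaptive subset of the same responses, under a Rasch model with
# invented item difficulties, using EAP estimation on a grid.
import numpy as np

rng = np.random.default_rng(1)
n_items, cat_len = 60, 15
b = rng.normal(0, 1, n_items)          # hypothetical item difficulties
theta_true = 0.5
grid = np.linspace(-4, 4, 161)         # ability grid for EAP
prior = np.exp(-grid**2 / 2)           # standard-normal prior (unnormalized)

def p_correct(theta, b):
    return 1 / (1 + np.exp(-(theta - b)))

def eap(items, responses):
    """Expected-a-posteriori ability estimate from a set of item responses."""
    post = prior.copy()
    for i, u in zip(items, responses):
        p = p_correct(grid, b[i])
        post *= p**u * (1 - p)**(1 - u)
    return np.sum(grid * post) / np.sum(post)

# One simulated response string, reused by both test forms.
responses = (rng.random(n_items) < p_correct(theta_true, b)).astype(int)

theta_full = eap(range(n_items), responses)    # full test: all 60 items

# Adaptive form: repeatedly give the most informative remaining item, which
# for the Rasch model is the item with difficulty nearest the current estimate.
admin, est = [], 0.0
for _ in range(cat_len):
    nxt = min((i for i in range(n_items) if i not in admin),
              key=lambda i: abs(b[i] - est))
    admin.append(nxt)
    est = eap(admin, responses[admin])

print(f"full-test theta: {theta_full:.2f}, CAT theta: {est:.2f}")
```

Here the item count (15 vs. 60) stands in for testing time; the study's two questions become whether the estimates agree and how much shorter the adaptive form is.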
33

An Empirical Evaluation of Student Learning by the Use of a Computer Adaptive System

Belhumeur, Corey T, 19 April 2013
Numerous methods of assessing student knowledge are present throughout every step of a student's education. Skill-based assessments include homework, quizzes, and tests, while curriculum exams include the SAT and GRE. The latter indicate how well a student has retained a learned national curriculum, but they cannot identify how well a student performs at a fine-grained skill level. The former home in on a specific skill or set of skills but require an excessive amount of time to collect curriculum-wide data. We have developed a system that assesses students at a fine-grained level in order to identify non-mastered skills within each student's zone of proximal development. "PLACEments" is a graph-driven computer adaptive test that not only provides thorough student feedback to educators but also delivers a personalized remediation plan to each student based on his or her identified non-mastered skills. Rather than predicting state test scores, PLACEments' objective is to personalize learning for students and encourage teachers to employ formative assessment techniques in the classroom. We have conducted a randomized controlled study to evaluate the learning value PLACEments provides in comparison to traditional methods of targeted skill mastery and retention.
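The graph-driven idea can be illustrated with a toy prerequisite graph; the skills, graph edges, and mastery set below are invented for illustration and are not PLACEments' actual content.

```python
# Toy illustration of graph-driven adaptive assessment: when a student misses
# an item for a skill, the test descends into that skill's prerequisites to
# locate the specific non-mastered skills. Graph and student are hypothetical.
prereqs = {
    "fractions": ["multiplication", "division"],
    "multiplication": ["addition"],
    "division": ["subtraction"],
    "addition": [],
    "subtraction": [],
}
mastered = {"addition", "subtraction", "multiplication"}  # simulated student

def assess(skill, gaps):
    """Probe a skill; on a miss, record it and recurse into its prerequisites."""
    if skill in mastered:        # simulated correct answer: stop descending
        return
    gaps.add(skill)              # missed item: record and test prerequisites
    for p in prereqs[skill]:
        assess(p, gaps)

gaps = set()
assess("fractions", gaps)
print(sorted(gaps))              # the remediation plan targets these skills
```

In this toy run the student misses "fractions", passes the "multiplication" branch, and fails down the "division" branch, so the personalized remediation plan targets exactly the non-mastered skills rather than everything upstream.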
34

Can a computer adaptive assessment system determine, better than traditional methods, whether students know mathematics skills?

Whorton, Skyler, 19 April 2013
Schools use commercial systems specifically for mathematics benchmarking and longitudinal assessment. However, these systems are expensive, and their results often fail to indicate a clear path for teachers to differentiate instruction based on students' individual strengths and weaknesses in specific skills. ASSISTments is a web-based Intelligent Tutoring System used by educators to drive real-time, formative assessment in their classrooms. The software is used primarily by mathematics teachers to deliver homework, classwork, and exams to their students. We have developed a computer adaptive test called PLACEments as an extension of ASSISTments to allow teachers to perform individual student assessment and, by extension, school-wide benchmarking. PLACEments uses a form of graph-based knowledge representation by which the exam results identify the specific mathematics skills that each student lacks. The system additionally provides differentiated practice determined by each student's performance on the adaptive test. In this project, we describe the design and implementation of PLACEments as a skill-assessment method and evaluate it in comparison with a fixed-item benchmark.
35

规则空间模型在诊断性计算机化自适应测验中的应用 (Application of the Rule Space Model in Computerized Adaptive Testing for Diagnostic Assessment)

Wen Jianbing (文剑冰), January 2003
Thesis (Ph.D.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (p. 138-147). / Abstracts in Chinese and English. / Electronic reproduction: Hong Kong, Chinese University of Hong Kong, [2012]. Available via the World Wide Web.
36

Optimizing design of incorporating off-grade items for constrained computerized adaptive testing in K-12 assessment

Liu, Xiangdong, 01 August 2019
Incorporating off-grade items within an on-grade item pool is common in K-12 testing programs. Doing so may improve measurement precision, test length, and content-blueprint fulfillment, especially for high- and low-performing examinees, but it may also raise concerns when too many off-grade items appear on tests primarily designed to measure grade-level standards. This dissertation investigates how practical constraints such as the number of on-grade items, the proportion and range of off-grade items, and the stopping rules affect item pool characteristics and item pool performance in adaptive testing. The simulation crosses four study factors: (1) three on-grade pool sizes (150, 300, and 500 items), (2) three proportions of off-grade items in the item pool (small, moderate, and large), (3) two ranges of off-grade items (one grade level and two grade levels), and (4) two stopping rules (variable- and fixed-length) with two SE threshold levels. All results are averaged across 200 replications per condition. Item pool characteristics are summarized using descriptive statistics and histograms of item difficulty (the b-parameters), descriptive statistics and plots of test information functions (TIFs), and the standard errors of the ability estimates (SEEs). Item pool performance is evaluated on measurement precision, test length and exposure properties, content-blueprint fulfillment, and the mean proportion of off-grade items per test. The results show situations in which incorporating off-grade items is beneficial; for example, a testing organization with a small item pool can improve pool performance for high- and low-performing examinees.
The results also show that the practical constraints of incorporating off-grade items, ordered here from most to least impact on item pool characteristics and performance, are: (1) incorporating off-grade items into a small versus a large baseline pool; (2) broadening the range of off-grade items from one grade level to two; (3) increasing the proportion of off-grade items in the pool; and (4) applying a variable- versus fixed-length CAT. Broadening the range of off-grade items yields greater improvements in measurement precision and content-blueprint fulfillment than increasing their proportion. This study can serve as guidance for testing organizations weighing the benefits and limitations of incorporating off-grade items into on-grade item pools.
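The interaction between the variable-length stopping rule and off-grade items can be sketched under a Rasch model; the difficulty pools, examinee ability, and SE threshold below are invented assumptions, not the dissertation's design.

```python
# Sketch of a variable-length stopping rule: administer the most informative
# items until the standard error (SE) of the ability estimate falls below a
# threshold, then compare an on-grade-only pool with one augmented by easier
# off-grade items for a low-performing examinee. All pools are hypothetical.
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1 / (1 + math.exp(-(theta - b)))
    return p * (1 - p)

def items_needed(theta, pool, se_threshold, max_len=50):
    """Count items administered until SE <= threshold (or the pool runs out)."""
    info, used = 0.0, 0
    for b in sorted(pool, key=lambda b: -rasch_info(theta, b)):
        info += rasch_info(theta, b)
        used += 1
        if used >= max_len or math.sqrt(1 / info) <= se_threshold:
            break
    return used, math.sqrt(1 / info)

# On-grade pool (b in [-1.5, 1.5]) vs. the same pool plus easier off-grade
# items (b in [-3.0, -2.1]), for a low-performing examinee at theta = -2.
on_grade = [i / 10 - 1.5 for i in range(31)]
off_grade = on_grade + [-3.0 + i / 10 for i in range(10)]
n_on, se_on = items_needed(-2.0, on_grade, se_threshold=0.45)
n_off, se_off = items_needed(-2.0, off_grade, se_threshold=0.45)
print(f"on-grade only: {n_on} items, SE = {se_on:.2f}")
print(f"with off-grade: {n_off} items, SE = {se_off:.2f}")
```

In this toy setup the on-grade pool is exhausted without ever reaching the SE threshold for the low-ability examinee, while the off-grade items are informative near theta = -2 and let the variable-length test stop sooner — the kind of precision/length benefit the dissertation examines.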
37

Training Set Design for Test Removal Classification in IC Test

Hassan Ranganath, Nagarjun, 20 October 2014
This thesis reports the performance of a simple classifier as a function of its training data set. The classifier is used to remove analog tests and is named the Test Removal Classifier (TRC). The thesis proposes seven different training data set designs that vary in the number of wafers in the data set, the source of the wafers, and the replacement scheme of the wafers. The training data set size ranges from a single wafer to a maximum of five wafers. Three of the training data sets include wafers from the Lot Under Test (LUT). The training wafers in the data set are either fixed across all lots, partially replaced by wafers from the new LUT, or fully replaced by wafers from the new LUT. The TRC's training is based on rank correlation and selects a subset of tests that may be bypassed. After training, the TRC identifies the dies that bypass the selected tests. The TRC's performance is measured by the reduction in over-testing and the number of test escapes after testing is completed. The effect of the different training data sets on the TRC's performance is evaluated using production data for a mixed-signal integrated circuit. The results show that the TRC's performance is controlled by a single parameter: the rank correlation threshold.
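The rank-correlation selection idea can be sketched on synthetic wafer data; the data, threshold, and greedy selection loop below are illustrative assumptions, not the thesis's TRC implementation.

```python
# Hedged sketch of rank-correlation-based test selection: a test whose
# per-die measurements rank-correlate strongly with a test already kept is
# marked bypassable, since the kept test orders the dies almost identically.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_dies = 200
shared = rng.normal(size=n_dies)        # common underlying device parameter
# Synthetic data: tests 0-2 track the shared parameter (nearly redundant),
# tests 3-5 measure independent quantities.
data = np.column_stack(
    [shared + rng.normal(scale=0.05, size=n_dies) for _ in range(3)]
    + [rng.normal(size=n_dies) for _ in range(3)]
)

def select_bypassable(data, threshold=0.95):
    """Greedily keep tests; bypass any test rank-correlated with a kept one."""
    keep, bypass = [], []
    for t in range(data.shape[1]):
        rho = [abs(spearmanr(data[:, t], data[:, k])[0]) for k in keep]
        (bypass if rho and max(rho) >= threshold else keep).append(t)
    return keep, bypass

keep, bypass = select_bypassable(data)
print("keep:", keep, "bypass:", bypass)
```

The threshold plays the role the thesis attributes to its single controlling parameter: raising it keeps more tests (fewer test escapes, more over-testing), lowering it bypasses more.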
38

A feasibility study of a computerized adaptive test of the international personality item pool NEO

McClarty, Katie Larsen, January 1900 (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
39

Modeling differential pacing trajectories in high stakes computer adaptive testing using hierarchical linear modeling and structural equation modeling

Thomas, Marie Huffmaster. January 1900 (PDF)
Thesis (Ph. D.)--University of North Carolina at Greensboro, 2006. / Title from PDF title page screen. Advisor: Richard Luecht; submitted to the School of Education. Includes bibliographical references (p. 89-94).
40

The application of cognitive diagnosis and computerized adaptive testing to a large-scale assessment

McGlohen, Meghan Kathleen, 28 August 2008
Abstract not available.
