  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Optimizing design of incorporating off-grade items for constrained computerized adaptive testing in K-12 assessment

Liu, Xiangdong 01 August 2019 (has links)
Incorporating off-grade items within an on-grade item pool is common in K-12 testing programs. Doing so may improve measurement precision, test length, and content blueprint fulfillment, especially for high- and low-performing examinees, but it may also raise concerns when too many off-grade items appear on tests primarily designed to measure grade-level standards. This dissertation investigates how practical constraints such as the number of on-grade items, the proportion and range of off-grade items, and the stopping rules affect item pool characteristics and item pool performance in adaptive testing. The simulation study crosses four factors: (1) three on-grade pool sizes (150, 300, and 500 items); (2) three proportions of off-grade items in the item pool (small, moderate, and large); (3) two ranges of off-grade items (one grade level and two grade levels); and (4) two stopping rules (variable- and fixed-length) with two SE threshold levels. All results are averaged across 200 replications per condition. Item pool characteristics are summarized with descriptive statistics and histograms of item difficulty (the b-parameters), descriptive statistics and plots of test information functions (TIFs), and the standard errors of the ability estimates (SEEs). Item pool performance is evaluated with descriptive statistics of measurement precision, test length and exposure properties, content blueprint fulfillment, and the mean proportion of off-grade items per test. The results show that there are situations in which incorporating off-grade items is beneficial; for example, a testing organization with a small item pool can improve performance for high- and low-performing examinees.
The results also show that the practical constraints of incorporating off-grade items, ordered here from most to least impact on item pool characteristics and performance, are: 1) incorporating off-grade items into a small versus a large baseline pool; 2) broadening the range of off-grade items from one grade level to two; 3) increasing the proportion of off-grade items in the item pool; and 4) applying variable- versus fixed-length CAT. Broadening the range of off-grade items yields greater improvements in measurement precision and content blueprint fulfillment than increasing their proportion. This study can serve as guidance for testing organizations weighing the benefits and limitations of incorporating off-grade items into on-grade item pools.
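The variable-length stopping rule and maximum-information item selection discussed in the abstract can be sketched as a minimal 2PL CAT simulation. This is an illustrative sketch only: the pool size, parameter distributions, SE threshold, and EAP scoring below are assumptions for the example, not the dissertation's actual design.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def simulate_cat(true_theta, a, b, se_stop=0.30, max_len=40):
    """Administer items by maximum information; stop when SE < se_stop
    (variable-length rule) or after max_len items (fixed-length cap)."""
    grid = np.linspace(-4, 4, 161)               # ability grid for EAP scoring
    loglik = np.zeros_like(grid)
    administered, theta_hat, se = [], 0.0, np.inf
    for _ in range(max_len):
        # maximum-information selection among unused items
        avail = [i for i in range(len(a)) if i not in administered]
        nxt = max(avail, key=lambda i: item_info(theta_hat, a[i], b[i]))
        administered.append(nxt)
        answered_right = rng.random() < p_correct(true_theta, a[nxt], b[nxt])
        p = p_correct(grid, a[nxt], b[nxt])
        loglik += np.log(p if answered_right else 1.0 - p)
        post = np.exp(loglik - loglik.max()) * np.exp(-grid ** 2 / 2)  # N(0,1) prior
        post /= post.sum()
        theta_hat = float(np.sum(grid * post))   # EAP ability estimate
        se = float(np.sqrt(np.sum((grid - theta_hat) ** 2 * post)))
        if se < se_stop:                         # variable-length stopping rule
            break
    return theta_hat, se, len(administered)

# hypothetical 300-item on-grade pool
a = rng.uniform(0.8, 2.0, 300)
b = rng.normal(0.0, 1.0, 300)
theta_hat, se, n_items = simulate_cat(true_theta=1.2, a=a, b=b)
```

A study like this one would wrap such a loop in replications per condition and vary the pool composition and stopping rule rather than the scoring method.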
32

Training Set Design for Test Removal Classification in IC Test

Hassan Ranganath, Nagarjun 20 October 2014 (has links)
This thesis reports the performance of a simple classifier as a function of its training data set. The classifier is used to remove analog tests and is named the Test Removal Classifier (TRC). The thesis proposes seven training data set designs that vary in the number of wafers in the set, the source of the wafers, and the replacement scheme of the wafers. The training data set size ranges from a single wafer to a maximum of five wafers. Three of the training data sets include wafers from the Lot Under Test (LUT). The training wafers are either fixed across all lots, partially replaced by wafers from the new LUT, or fully replaced by wafers from the new LUT. The TRC's training is based on rank correlation and selects a subset of tests that may be bypassed. After training, the TRC identifies the dies that bypass the selected tests. The TRC's performance is measured by the reduction in over-testing and the number of test escapes after testing is completed. The effect of the different training data sets on the TRC's performance is evaluated using production data from a mixed-signal integrated circuit. The results show that the TRC's performance is controlled by a single parameter: the rank correlation threshold.
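The rank-correlation-based test selection the abstract describes can be sketched as a greedy redundancy pass over a training matrix of wafer measurements. This is an illustrative sketch under assumed data shapes, not the thesis's actual TRC implementation; the function names and threshold value are made up for the example.

```python
import numpy as np

def spearman_matrix(X):
    """Spearman rank correlation between columns of X (no tie correction;
    continuous parametric test measurements rarely tie)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def select_bypassable_tests(train, threshold=0.95):
    """Greedy test-removal pass: a test is flagged as bypassable when its
    ranks are nearly redundant with an earlier, retained test on the
    training wafers.

    train: (n_dies, n_tests) matrix of measurements.
    Returns (retained, bypassed) lists of test indices."""
    rho = spearman_matrix(train)
    retained, bypassed = [], []
    for j in range(train.shape[1]):
        if any(abs(rho[j, k]) >= threshold for k in retained):
            bypassed.append(j)     # predictable from a kept test: skip it
        else:
            retained.append(j)
    return retained, bypassed

# synthetic training set: four independent tests plus one near-copy of test 0
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 4))
shadow = base[:, 0] * 1.02 + rng.normal(scale=1e-3, size=500)
train = np.column_stack([base, shadow])
retained, bypassed = select_bypassable_tests(train)
```

Raising or lowering `threshold` trades over-testing reduction against test escapes, which matches the thesis's finding that the rank correlation threshold is the controlling parameter.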
33

A feasibility study of a computerized adaptive test of the international personality item pool NEO

McClarty, Katie Larsen, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
34

Modeling differential pacing trajectories in high stakes computer adaptive testing using hierarchical linear modeling and structural equation modeling

Thomas, Marie Huffmaster. January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of North Carolina at Greensboro, 2006. / Title from PDF title page screen. Advisor: Richard Luecht; submitted to the School of Education. Includes bibliographical references (p. 89-94).
35

The application of cognitive diagnosis and computerized adaptive testing to a large-scale assessment

McGlohen, Meghan Kathleen 28 August 2008 (has links)
Not available / text
36

A feasibility study of a computerized adaptive test of the international personality item pool NEO

McClarty, Katie Larsen 28 August 2008 (has links)
Not available / text
37

Adaptive selection of personality items to inform a neural network predicting job performance

Thissen-Roe, Anne. January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 87-91).
38

A preliminary investigation into the patterns of performance on a computerized adaptive test battery: implications for admissions and placement

Vorster, Marlene January 2002 (has links)
The fallibility of human judgment in decision making requires the use of tests to enhance decision-making processes. Although testing is surrounded by issues of bias and fairness, it remains a better means of facilitating decisions than more subjective alternatives. As a country in transition, all facets of South African society are being transformed. The changes taking place within the tertiary education system to redress the legacy of Apartheid coincide with an international trend of transforming higher education. One important area being transformed relates to university entrance requirements and admissions procedures. In South Africa, these were traditionally based on matriculation performance, which has been found to be a more variable predictor of academic success for historically disadvantaged students. Alternative or revised admissions procedures have been implemented at universities throughout the country, in conjunction with academic development programmes. However, this dissertation argues that a paradigm shift is necessary to conceptualise admissions and placement assessment in a developmentally oriented way. Furthermore, it argues that test development should keep abreast of advances in theory, such as item response theory (IRT), and in technology, such as computerized adaptive testing (CAT), to enhance the effectiveness of selecting and placing learners in tertiary programmes. This study investigates the use of the Accuplacer Computerized Placement Tests (CPTs), an adaptive test battery developed in the USA, to facilitate unbiased and fair admissions, placement, and development decisions in the transforming South African context.
The battery has been implemented at a university in the Eastern Cape, and its usefulness was investigated for 193 participants, divided into two groups of degree programmes depending on whether admission required mathematics as a matriculation subject. Mathematics-based degree programme learners (n = 125) wrote three tests and non-mathematics-based degree programme learners (n = 68) wrote two tests of the Accuplacer battery. Correlations were computed between the Accuplacer scores and matriculation performance, and between the Accuplacer scores, matriculation performance, and academic results. All yielded significant positive relationships, except for one Accuplacer subtest with academic performance in the non-mathematics-based degree group. Multiple correlations for both groups indicated that the Accuplacer scores and matriculation results each contribute unique information about academic performance. Cluster analysis for both groups yielded three underlying patterns of performance in the data sets. An attempt was made to validate the cluster groups internally through a MANOVA and single-factor ANOVAs. Accuplacer subtests and matriculation results were found to discriminate to an extent among clusters of learners in both groups of degree programmes. Clusters were described in terms of demographic information, and it was determined that the factors of culture and home language, and how they relate to cluster group membership, need further investigation. The main suggestion flowing from these findings is that the results be confirmed with a larger sample size and for different cultural and language groups.
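The multiple-correlation finding above — Accuplacer scores and matriculation results each contributing unique information about academic performance — can be illustrated on synthetic data. All numbers below (score distributions, coefficients, noise level) are invented for the sketch; only the sample size echoes the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for the placement data: two Accuplacer subtest scores
# and a matriculation mark predicting a first-year academic average
n = 193
X = rng.normal(60.0, 12.0, size=(n, 3))       # three hypothetical predictors
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0.0, 8.0, n)

def multiple_R(X, y):
    """Multiple correlation: the correlation between y and the fitted
    values of an ordinary least squares regression with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.corrcoef(y, A @ beta)[0, 1])

R_full = multiple_R(X, y)              # Accuplacer subtests + matric together
R_matric = multiple_R(X[:, 2:], y)     # matriculation mark alone
# R_full exceeding R_matric is what "unique information" means here: the
# Accuplacer scores explain variance the matriculation mark does not
```

The same comparison run on the study's two degree-programme groups would reproduce the multiple-correlation analysis the abstract reports.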
39

Die ontwikkeling van 'n aanlegtoets vir die leerarea rekenaarstudie as hulpmiddel by voorligting [The development of an aptitude test for the learning area Computer Studies as an aid in guidance] (Afrikaans)

Grobbelaar, Rika 03 November 2005 (has links)
Please read the abstract in the section 00front of this document / Thesis (PhD (Educational Guidance and Counseling))--University of Pretoria, 2005. / Educational Psychology / unrestricted
40

Stratified item selection and exposure control in unidimensional adaptive testing in the presence of two-dimensional data.

Kalinowski, Kevin E. 08 1900 (has links)
It is not uncommon to use unidimensional item response theory (IRT) models to estimate ability from multidimensional data. It is therefore important to understand the implications of summarizing multiple dimensions of ability into a single parameter estimate, especially if effects are confounded when applied to computerized adaptive testing (CAT). Previous studies have investigated the effects of different IRT models and ability estimators by manipulating the relationships between item and person parameters; in all cases, however, the maximum information criterion was used as the item selection method. Because maximum information is heavily influenced by the item discrimination parameter, investigating a-stratified item selection methods is tenable. The current Monte Carlo study compared maximum information, a-stratification, and a-stratification with b blocking item selection methods, both alone and in combination with the Sympson-Hetter exposure control strategy. The six testing conditions were crossed with three levels of interdimensional item difficulty correlations and four levels of interdimensional examinee ability correlations. Measures of fidelity, estimation bias, error, and item usage were used to evaluate the effectiveness of the methods. Results showed that either stratified item selection strategy is warranted if the goal is to obtain precise estimates of ability when using unidimensional CAT in the presence of two-dimensional data. If the goal also includes limiting bias of the estimate, Sympson-Hetter exposure control should be included. Results also confirmed that Sympson-Hetter is effective in optimizing item pool usage. Given these results, existing unidimensional CAT implementations might consider employing a stratified item selection routine plus Sympson-Hetter exposure control, rather than recalibrating the item pool under a multidimensional model.
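The a-stratification with b blocking design the abstract compares against maximum information can be sketched as a pool-partitioning routine: block items by difficulty first, then spread each block's items across strata by discrimination, so low-a items are spent early while theta estimates are still rough. The pool size, parameter distributions, and stratum count below are assumptions for illustration, not this study's exact setup.

```python
import numpy as np

def a_stratify_b_block(a, b, n_strata=4):
    """Partition an item pool for a-stratified selection with b blocking.

    Items are first blocked by difficulty b; each block then contributes its
    lowest-a item to stratum 0, its next-lowest to stratum 1, and so on, so
    every stratum spans the full difficulty range. A CAT would draw from
    stratum 0 at the start of the test and move to higher-a strata as the
    ability estimate stabilizes. Returns one index array per stratum."""
    order_b = np.argsort(b)                      # block items by difficulty
    blocks = np.array_split(order_b, len(a) // n_strata)
    strata = [[] for _ in range(n_strata)]
    for blk in blocks:
        by_a = blk[np.argsort(a[blk])]           # within a block, sort by a
        for s in range(min(n_strata, len(by_a))):
            strata[s].append(by_a[s])
    return [np.array(s) for s in strata]

# hypothetical 200-item pool
rng = np.random.default_rng(3)
a = rng.uniform(0.5, 2.5, 200)
b = rng.normal(0.0, 1.0, 200)
strata = a_stratify_b_block(a, b)
# mean discrimination rises from the first stratum to the last, while each
# stratum keeps a comparable spread of difficulties
```

Sympson-Hetter exposure control would then be layered on top: each selected item is actually administered only with its calibrated exposure probability, otherwise the next-best item in the current stratum is tried.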
