171

Assessment of Competencies Among Doctoral Trainees in Psychology

Price, Samantha 08 1900
The recent shift to a culture of competence has permeated several areas of professional psychology, including competency identification, competency-based education and training, and competency assessment. A competency framework has also been applied to various programs and specialty areas within psychology, such as clinical, counseling, clinical health, school, cultural diversity, neuro-, gero-, child, and pediatric psychology. Despite the spread of the competency focus throughout psychology, few standardized measures of competency assessment have been developed. To the author's knowledge, only four published studies on measures of competency assessment in psychology currently exist. While these measures represent significant steps in advancing the assessment of competence, three of them were designed for use with individual programs, two of those international (i.e., UK and Taiwan). The current study applied the seminal Competency Benchmarks, via a recently adapted benchmarks form (i.e., the Practicum Evaluation Form; PEF), to practicum students at the University of North Texas. In addition to traditional supervisor ratings, the present study also collected self-, peer-supervisor, and peer-supervisee ratings to provide 360-degree evaluations. Item response theory (IRT) was used to evaluate the psychometric properties of the PEF and inform potential revisions of the form. Supervisor ratings of competency were found to fit the specified Rasch model, lending support to use of the benchmarks framework as assessed by this form. Self- and peer ratings were significantly correlated with supervisor ratings, indicating that there may be some utility to 360-degree evaluations. Finally, as predicted, foundational competencies were rated significantly higher than functional competencies, and competencies improved significantly with training.
Results of the current study provide clarity about the utility of the PEF and inform our understanding of practicum-level competencies.
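The Rasch model mentioned above relates the probability of a positive rating to the gap between person ability and item difficulty. A minimal sketch of that relationship (in Python, with invented ability and difficulty values purely for illustration):

```python
import math

def rasch_probability(theta, b):
    """Rasch (1PL) model: probability of a positive response given
    person ability theta and item difficulty b, both on the logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustrative values: a trainee whose ability exactly matches the
# item's difficulty has a 50% chance of a positive rating.
print(rasch_probability(0.0, 0.0))   # 0.5
print(rasch_probability(1.0, 0.0))   # ~0.73; higher ability, higher probability
```

Fit to this one-parameter form is what licenses treating the supervisor ratings as measures on a common competency scale.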
172

Rule-based Risk Monitoring Systems for Complex Datasets

Haghighi, Mona 28 June 2016
In this dissertation we present rule-based machine learning methods for solving problems with high-dimensional or complex datasets. We apply decision tree methods to blood-based biomarkers and neuropsychological tests to predict Alzheimer's disease in its early stages. We also use tree-based methods to identify disparities in dementia-related biomarkers among three female ethnic groups. In another part of this research, we use rule-based methods to identify homogeneous subgroups of subjects who share the same risk patterns within a heterogeneous population. Finally, we apply a network-based method to reduce the dimensionality of a clinical dataset while capturing the interactions among variables. The results show that the proposed methods are efficient and easier to use than current machine learning methods.
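The core of a decision-tree split on a biomarker can be illustrated in miniature (a generic sketch with hypothetical data, not the dissertation's actual models): scan candidate cutoffs and keep the one that minimizes weighted Gini impurity.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_threshold(values, labels):
    """Find the biomarker cutoff whose split minimizes weighted Gini impurity,
    as a decision tree does at each node."""
    pairs = sorted(zip(values, labels))
    best_score, best_thr = float("inf"), None
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [y for x, y in pairs if x <= thr]
        right = [y for x, y in pairs if x > thr]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best_score:
            best_score, best_thr = score, thr
    return best_thr

# Hypothetical biomarker levels and diagnoses (1 = impaired).
marker = [0.9, 1.1, 1.2, 2.8, 3.0, 3.4]
label = [0, 0, 0, 1, 1, 1]
print(best_threshold(marker, label))  # 2.0, a cleanly separating cutoff
```

A full tree repeats this search recursively on each side of the chosen split; the resulting if-then rules are what make tree methods easy to read off as risk-monitoring rules.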
173

Item Analysis for the Development of the Shirts and Shoes Test for 6-Year-Olds

Tucci, Alexander January 2017
The development of a standardized assessment can, in general, be broken into multiple stages. In the first, items are generated according to the skills and abilities to be assessed and the needs of the developers. These items are then, ideally, field-tested on members of the population for which the assessment is intended. Item response theory (IRT) analysis is used to reveal items in the pool that are unusable due to measurement error, redundancy in the level of item difficulty, or bias. More potential items may be generated and tested until there is a set of valid items with which the developers can move forward. The present study focused on the steps of item tryout and analysis for the establishment of demonstrable item-level validity. Fifty-one potential test items were analyzed for a version of the Shirts and Shoes Test (Plante & Vance, 2012) for 6-year-olds. A total of 23 items were discarded due to error on one or more of the measures mentioned above, and one item was discarded due to its low difficulty. The remaining 27 items were deemed suitable for the 6-year-old population.
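A common first pass in item tryout can be sketched with classical item statistics (a simplified stand-in for the study's IRT analysis, using invented response data): compute each item's difficulty as the proportion correct and its discrimination as the item-total correlation, then flag items whose values make them unusable.

```python
def corr(x, y):
    """Pearson correlation; returns 0.0 when either variable is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def item_stats(responses):
    """responses: one 0/1 vector per examinee, one entry per item.
    Returns (difficulty, discrimination) for each item."""
    totals = [sum(r) for r in responses]
    stats = []
    for j in range(len(responses[0])):
        item = [r[j] for r in responses]
        stats.append((sum(item) / len(item), corr(item, totals)))
    return stats

# Hypothetical tryout data: 4 examinees x 3 items.
data = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [0, 0, 1]]
for p, r in item_stats(data):
    print(round(p, 2), round(r, 2))
# The third item is answered correctly by everyone (difficulty 1.0),
# so it carries no discriminating information and would be flagged.
```

An IRT analysis replaces these raw proportions and correlations with model-based difficulty and discrimination parameters, but the screening logic is the same: discard items that add error or redundancy rather than information.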
174

Dimensionality Analysis of the PALS Classroom Goal Orientation Scales

Tombari, Angela K. 01 January 2017
Achievement goal theory is one of the most broadly accepted theoretical paradigms in educational psychology, with over 35 years of influence on research and educational practice. The longstanding use of this construct has led to two consequences of importance for this research: 1) many different dimensionality representations have been debated, and 2) the methods used to confirm the dimensionality of the scales have fallen behind current best practice. A further issue is that goal orientations are used to inform classroom practice, whereas most measurement studies focus on the structure of the personal goal orientation scales rather than the classroom-level structure. This study aims to provide an updated understanding of one classroom goal orientation scale using the modern psychometric techniques of multidimensional item response theory and bifactor analysis. The most commonly used scale with K-12 students is the Patterns of Adaptive Learning Scales (PALS); thus, the PALS classroom goal orientation scales are the subject of this study.
175

Stratified Item Selection and Exposure Control in Unidimensional Adaptive Testing in the Presence of Two-Dimensional Data

Kalinowski, Kevin E. 08 1900
It is not uncommon to use unidimensional item response theory (IRT) models to estimate ability in multidimensional data. It is therefore important to understand the implications of summarizing multiple dimensions of ability into a single parameter estimate, especially if effects are confounded when applied to computerized adaptive testing (CAT). Previous studies have investigated the effects of different IRT models and ability estimators by manipulating the relationships between item and person parameters. However, in all cases, the maximum information criterion was used as the item selection method. Because maximum information is heavily influenced by the item discrimination parameter, investigating a-stratified item selection methods is tenable. The current Monte Carlo study compared maximum information, a-stratification, and a-stratification with b blocking item selection methods, alone as well as in combination with the Sympson-Hetter exposure control strategy. The six testing conditions were conditioned on three levels of interdimensional item difficulty correlations and four levels of interdimensional examinee ability correlations. Measures of fidelity, estimation bias, error, and item usage were used to evaluate the effectiveness of the methods. Results showed that either stratified item selection strategy is warranted if the goal is to obtain precise estimates of ability when using unidimensional CAT in the presence of two-dimensional data. If the goal also includes limiting bias of the estimate, Sympson-Hetter exposure control should be included. Results also confirmed that Sympson-Hetter is effective in optimizing item pool usage. Given these results, existing unidimensional CAT implementations might consider employing a stratified item selection routine plus Sympson-Hetter exposure control, rather than recalibrating the item pool under a multidimensional model.
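The a-stratified selection and Sympson-Hetter control described above can be sketched as follows (a toy illustration with invented item parameters, not the study's simulation code): items are binned by discrimination a, early test stages draw from the low-a stratum, and within a stratum the item whose difficulty b is closest to the current ability estimate is administered only with its exposure-control probability.

```python
import random

def stratify(items, n_strata):
    """Sort items by discrimination a and split into n_strata bins,
    so early test stages draw from low-a strata (a-stratification)."""
    ranked = sorted(items, key=lambda it: it["a"])
    size = len(ranked) // n_strata
    return [ranked[i * size:(i + 1) * size] for i in range(n_strata)]

def select_item(stratum, theta, exposure_p, rng):
    """Within a stratum, prefer the item whose difficulty b is closest to
    theta (b-matching), but administer it only with its Sympson-Hetter
    exposure probability; otherwise fall through to the next candidate."""
    for item in sorted(stratum, key=lambda it: abs(it["b"] - theta)):
        if rng.random() <= exposure_p.get(item["id"], 1.0):
            return item
    return None

# Invented pool: discrimination a rising from 0.5, difficulty b from -2.
pool = [{"id": i, "a": 0.5 + 0.1 * i, "b": -2 + 0.4 * i} for i in range(10)]
strata = stratify(pool, 2)
item = select_item(strata[0], theta=0.0, exposure_p={}, rng=random.Random(0))
print(item["id"])  # 4: the low-a item whose b (-0.4) is nearest theta = 0
```

Holding the high-a items back for later stages preserves the pool's most informative items for when the ability estimate is stable, while the exposure probabilities cap how often any single item is seen.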
176

An Item Response Theory Analysis of the Rey-Osterrieth Complex Figure Task

Everitt, Alaina 12 1900
The Rey-Osterrieth Complex Figure Task (ROCFT) has been a standard in neuropsychological assessment for six decades. Many researchers have contributed administration procedures, additional scoring systems, and normative data to improve its utility. Despite this abundance of research, the original 36-point scoring system still predominates among clinicians, in spite of documented problems with ceiling and floor effects and poor discrimination between levels of impairment. This study is an attempt to provide a new method, based upon item response theory, that will allow clinicians to better describe the impairment levels of their patients. Through estimation of item characteristic curves, underlying traits can be estimated while taking into account varying levels of difficulty and discrimination across the individual items. The ultimate goal of the current research is identification of a subset of ROCFT items that can be examined in addition to total scores to provide an extra level of information for clinicians, particularly when they need to discriminate between severely and mildly impaired patients.
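An item characteristic curve with varying difficulty and discrimination, and the item information function that tells a clinician where on the trait scale an item is most useful, can be written down directly (a generic two-parameter logistic sketch, not the scoring system developed in the study):

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | theta) with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P).
    It peaks at theta = b, so an item is most informative for patients
    whose impairment level matches the item's difficulty."""
    p = icc_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

print(icc_2pl(0.0, 1.5, 0.0))           # 0.5 exactly when theta = b
print(item_information(0.0, 1.5, 0.0))  # 0.5625, the peak (a^2 / 4)
print(item_information(2.0, 1.5, 0.0) < item_information(0.0, 1.5, 0.0))  # True
```

Selecting a subset of ROCFT items then amounts to picking items whose information curves cover the severely and mildly impaired regions of the trait scale.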
177

Effects of test administrations on general, test, and computer anxiety, and efficacy measures

Kiskis, Susan 01 January 1991
No description available.
178

Decision consistency and accuracy indices for the bifactor and testlet response theory models

LaFond, Lee James 01 July 2014
The primary goal of this study was to develop a new procedure for estimating decision consistency and accuracy indices using the bifactor and testlet response theory (TRT) models. This study is the first to investigate decision consistency and accuracy from a multidimensional perspective; the results showed that the bifactor model at least behaved in a way that met the author's expectations and represents a potentially useful procedure. The TRT model, on the other hand, did not meet the author's expectations and generally showed poor model performance. The multidimensional decision consistency and accuracy indices proposed in this study appear to perform well, at least for the bifactor model, in the case of a substantial testlet effect. For practitioners examining a test containing testlets for decision consistency and accuracy, a recommended first step is to check for dimensionality. If the testlets show a significant degree of multidimensionality, then the proposed multidimensional indices can be recommended, as the simulation study showed improved performance over unidimensional IRT models. However, if there is not a significant degree of multidimensionality, then the unidimensional IRT models and indices would perform as well as, or even better than, the multidimensional models. Another goal of this study was to compare methods of numerical integration used in calculating decision consistency and accuracy indices. This study investigated a new method (the M method) that samples ability estimates through a Monte Carlo approach. In summary, the M method appears to be just as accurate as the other commonly used methods of numerical integration, while holding practical advantages over the D and P methods: it is not nearly as computationally intensive as the D method, and unlike the P method it does not require large sample sizes.
In addition, the P method has a conceptual disadvantage in that the conditioning variable, in theory, should be the true theta, not an estimated theta. The M method avoids both of these issues and seems to provide equally accurate estimates of decision consistency and accuracy indices, which makes it a strong option, particularly in multidimensional cases.
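The Monte Carlo idea behind the M method can be illustrated in miniature (a hedged sketch under simplifying unidimensional Rasch assumptions with invented item difficulties; the study's indices are more elaborate): sample ability values, simulate two parallel administrations for each, and estimate decision consistency as the proportion classified the same way both times.

```python
import math
import random

def simulate_score(theta, difficulties, rng):
    """Simulate one Rasch test administration: number-correct score."""
    score = 0
    for b in difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        if rng.random() < p:
            score += 1
    return score

def decision_consistency(difficulties, cut, n_examinees, rng):
    """Monte Carlo estimate of decision consistency: draw theta from a
    standard normal, classify two independent replications against the
    cut score, and return the agreement rate."""
    agree = 0
    for _ in range(n_examinees):
        theta = rng.gauss(0.0, 1.0)
        s1 = simulate_score(theta, difficulties, rng)
        s2 = simulate_score(theta, difficulties, rng)
        agree += (s1 >= cut) == (s2 >= cut)
    return agree / n_examinees

rng = random.Random(42)
items = [-1.0, -0.5, 0.0, 0.5, 1.0] * 4   # 20 invented Rasch difficulties
print(decision_consistency(items, cut=10, n_examinees=2000, rng=rng))
```

Because the ability values are sampled rather than laid out on a fixed quadrature grid, the same loop extends to multidimensional models by drawing theta vectors instead of scalars, which is the practical appeal over grid-based integration.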
179

The Ability-weighted Bayesian Three-parameter Logistic Item Response Model for the Correction of Guessing

Zhang, Jiaqi 01 October 2019
No description available.
180

Férovost didaktických testů a jejich položek / Test and Item Fairness

Vlčková, Katarína January 2015
No description available.
