1. An investigation of attentional bias in test anxiety. Buck, Robert. January 2018.
Test anxiety is an individual personality trait that results in elevated state anxiety in situations of performance evaluation. For school-age children, such evaluation occurs most frequently in high-stakes examinations at the culmination of programmes of study. Alongside its impact on an individual's wellbeing, heightened test anxiety has been reliably linked to deficits in performance on examinations and assessments. Attentional bias has been shown to be an aspect of many forms of anxiety and is considered to have a role in the maintenance of state anxiety, though the mechanisms underlying this are not fully clear. However, Attentional Control Theory (Eysenck, Derakshan, Santos, & Calvo, 2007) implicates preferential allocation of attention to threat in its explanation of the performance deficits associated with test anxiety. The presence of attentional bias in test anxiety therefore appears theoretically plausible and has some empirical support (e.g. Putwain, Langdale, Woods, & Nicholson, 2011); however, its reliability is in question. This study aims to investigate the presence of attentional bias in test anxiety, with a view to further understanding its underlying mechanisms and informing the development of interventions to ameliorate its effects. To ensure ecological validity, the study was conducted in schools and colleges with a sample of 16- to 18-year-olds following high-stakes programmes of study. Full investigation of test anxiety requires individuals to experience heightened state anxiety through performance evaluation threat; hence, the Trier Social Stress Test (TSST) was modified to make it applicable to this context and population. The study was conducted in two experimental phases, both of which adopted a mixed methodological approach to provide quantitative and qualitative data. The preliminary phase evaluated the materials and anxiety manipulation protocols.
The main phase employed the modified TSST in combination with a dot-probe task to investigate participants' attentional bias under high performance evaluation threat. No patterns of attentional bias were uncovered that indicated a consistent relationship with either trait test anxiety or attentional control. However, there was some congruence between how individuals described themselves in evaluative situations and the attentional bias they displayed. Further investigation employing mixed methodological approaches, such as Single Case Experimental Design, is recommended to identify and address attentional bias in test anxiety.
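The attentional bias measured by a dot-probe task is conventionally quantified as a reaction-time difference score. As a hypothetical illustration (this is the standard bias index from the dot-probe literature, not code or data from the study):

```python
from statistics import mean

# Hypothetical illustration of the standard dot-probe bias index.
# The probe appears at the location of either the threat word
# (congruent trial) or the neutral word (incongruent trial); a
# positive index indicates vigilance toward threat.
def bias_index(rt_incongruent, rt_congruent):
    """Mean RT on incongruent trials minus mean RT on congruent trials."""
    return mean(rt_incongruent) - mean(rt_congruent)

# Fabricated reaction times in milliseconds, for illustration only.
incongruent = [520, 540, 530]  # probe replaced the neutral word
congruent = [500, 510, 505]    # probe replaced the threat word
print(bias_index(incongruent, congruent))  # → 25.0, i.e. faster to probes at threat
```

A positive score means the participant was quicker to respond when the probe replaced the threat stimulus, consistent with attention being drawn toward threat.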
2. Evaluating the USDA's Farm Balance Sheet Forecasts. Diaz Cachay, Pedro Antonio. 26 July 2023.
The United States Department of Agriculture (USDA) forecasts the Farm Balance Sheet each year. The Farm Balance Sheet provides an estimate of the value of physical and financial assets in the United States agriculture sector over time (USDA, 2023). The forecasts evaluated in this paper relate to assets and debt in the farm sector: total farm assets, farm real estate assets, total farm debt, farm real estate debt, and farm non-real estate debt. These forecasts anticipate growth in the agricultural sector and help stakeholders such as policy makers, USDA program administrators, and agricultural lenders make important decisions. Given their importance, the main objective of this research is to examine the degree to which the Farm Balance Sheet forecasts are optimal (unbiased and efficient). Forecasts from the Farm Balance Sheet over the 1986-2021 period are found to be unbiased using the Holden and Peel (1990) test. However, applying the efficiency tests of Nordhaus (1987), the forecasts are found to be inefficient, suggesting that not all available information is efficiently incorporated when the forecasts are produced.
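The two optimality tests named in the abstract can be sketched as simple regressions. The following is a minimal illustration assuming the textbook forms of the tests, with the Holden-Peel unbiasedness test reducing to a zero-mean test on forecast errors and the Nordhaus weak-efficiency test checking that forecast revisions are serially uncorrelated; the thesis's exact specifications may differ:

```python
import numpy as np

def holden_peel(actual, forecast):
    """Holden-Peel unbiasedness test: regress the forecast error
    (actual - forecast) on a constant only; forecasts are unbiased
    if the intercept is statistically indistinguishable from zero."""
    e = np.asarray(actual, float) - np.asarray(forecast, float)
    alpha = e.mean()                       # intercept of the constant-only regression
    se = e.std(ddof=1) / np.sqrt(len(e))   # standard error of the mean error
    return alpha, alpha / se               # estimate and its t-statistic

def nordhaus(revisions):
    """Nordhaus weak-efficiency test: regress each forecast revision
    on the previous revision; efficient forecasts have a zero slope,
    i.e. revisions carry no predictable pattern."""
    r = np.asarray(revisions, float)
    y, x = r[1:], r[:-1]
    X = np.column_stack([np.ones_like(x), x])
    intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
    return slope
```

A significantly nonzero slope in the Nordhaus regression indicates that later revisions were predictable from earlier ones, which is the sense in which the abstract reports inefficiency.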
3. Detection and Classification of DIF Types Using Parametric and Nonparametric Methods: A Comparison of the IRT-Likelihood Ratio Test, Crossing-SIBTEST, and Logistic Regression Procedures. Lopez, Gabriel E. 01 January 2012.
The purpose of this investigation was to compare the efficacy of three methods for detecting differential item functioning (DIF). The performance of the crossing simultaneous item bias test (CSIBTEST), the item response theory likelihood ratio test (IRT-LR), and logistic regression (LOGREG) was examined across a range of experimental conditions, including different test lengths, sample sizes, DIF and differential test functioning (DTF) magnitudes, and mean differences in the underlying trait distributions of the comparison groups, herein referred to as the reference and focal groups. In addition, each procedure was implemented using both an all-other anchor approach, in which the IRT-LR baseline model, CSIBTEST matching subtest, and LOGREG trait estimate were based on all test items except the one under study, and a constant anchor approach, in which the baseline model, matching subtest, and trait estimate were based on a predefined subset of DIF-free items. Response data for the reference and focal groups were generated using known item parameters based on the three-parameter logistic item response theory model (3-PLM). Various types of DIF were simulated by shifting the generating item parameters of selected items to achieve the desired DIF and DTF magnitudes, based on the area between the groups' item response functions. Power, Type I error, and Type III error rates were computed for each experimental condition based on 100 replications, and effects were analyzed via ANOVA. Results indicated that the procedures varied in efficacy, with LOGREG, when implemented using the all-other approach, providing the best balance of power and Type I error. However, none of the procedures were effective at identifying the type of DIF that was simulated.
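The LOGREG procedure the abstract describes is commonly specified as a logistic regression of item response on a matching variable, a group indicator, and their interaction, where the group main effect indexes uniform DIF and the interaction indexes nonuniform (crossing) DIF. The sketch below assumes that common specification and a plain Newton-Raphson fit; it is illustrative, not the study's actual code:

```python
import numpy as np

def logistic_fit(X, y, iters=50):
    """Fit a logistic regression by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))  # predicted probabilities
        w = p * (1.0 - p) + 1e-10            # IRLS weights, floored for stability
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    return beta

def dif_coefficients(score, group, item):
    """Model: logit P(item = 1) = b0 + b1*score + b2*group + b3*score*group.
    b2 indexes uniform DIF; b3 indexes nonuniform (crossing) DIF."""
    X = np.column_stack([np.ones_like(score), score, group, score * group])
    return logistic_fit(X, item.astype(float))
```

Under the all-other anchor approach described above, `score` would be the trait estimate computed from all items except the one under study; under the constant anchor approach, from the predefined DIF-free subset.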