1.
Examinees' Perceptions of the Physical Aspects of the Testing Environment During the National Physical Therapy Examination
Donald, Ellen Kroog, 04 July 2016 (has links)
Despite the increasing number of individuals taking computer-based tests, little is known about how examinees perceive computer-based testing environments and the extent to which these environments are perceived to affect test performance. The purpose of the present study was to assess the testing environment as perceived by individuals taking the National Physical Therapy Examination (NPTE), a high-stakes licensure examination. Perceptions of the testing environments were assessed using an examinee self-report questionnaire. The questionnaire included items measuring individuals' preferences and perceptions of specific characteristics of the environment, along with demographic information and one open-ended item. Questionnaires were distributed by email to the 210 physical therapy programs accredited at the time, with a request that each program forward the instrument by email to its most recent class of physical therapy graduates. Two hundred and sixteen respondents completed the study, representing 101 testing centers in 31 states.
Data from these 216 examinees were used to answer four research questions. The first research question focused on examinees' environmental preferences for the NPTE testing environment and the relation between these preferences and examinees' background characteristics (e.g., sex, program GPA, age, online experience, online testing experience, comfort level with online testing, and preferred testing time). Clear preferences were observed for a quiet room and for a desktop area with a great deal of adjustability. Examinees' preferences and their demographic characteristics were not strongly related, with the seven demographic variables accounting for less than 7% of the variability in examinees' environmental preferences.
The second research question used data from multiple examinees nested within the same testing center to examine the within- and between-center variability in examinees' perceptions of the testing environment and their satisfaction with the environment. Results indicated that the majority of the variance in these variables was within testing centers, with average between-center variability equal to .032 for the perception ratings and .078 for the satisfaction ratings. Research questions (RQ) three and four explored whether examinees' background characteristics (RQ 3) and center characteristics (RQ 4) were significantly related to the 12 environmental perception ratings, the 12 satisfaction ratings, and two items representing examinees' perceptions of the effect of the testing environment on their performance and the likelihood they would choose the same center again. In terms of examinee characteristics, age, online testing experience, and comfort with online testing were the most consistent predictors of the various examinee ratings, and the same three variables were the most consistent predictors of the satisfaction ratings. For center characteristics, the newness of the center and the room density of the center were the most consistent predictors of examinee ratings; for satisfaction ratings, the most consistent predictor was the newness of the center. Center newness was significantly related to the outcome variables concerning the size, lighting, and sound of the center, which may reflect changes in building standards and materials.
The results of the study suggest the need for further exploration of the environmental and human factors that may affect individuals taking high-stakes examinations in testing centers. Although the testing environment may not affect all examinees, there may be subsets of individuals who are more sensitive to its effects on performance. Further exploration of the uniformity of testing environments is also needed to minimize measurement error and reduce potential threats to test security.
2.
Automatizuoto žinių patikrinimo ir vertinimo priemonių lyginamoji analizė / Comparative analysis of the automatic mark testing and assessment tools
Gasporovič, Marija, 16 August 2007 (has links)
Lately, distance learning has been actively developed as an alternative to correspondence learning. One of the essential elements of distance learning is an effective knowledge examination system, and testing is its most promising form. This paper compares automated knowledge testing and assessment tools in order to identify effective tools that meet all assessment criteria and are suitable for assessing exact-science knowledge and its application.
The tasks of this paper are to assess several virtual learning environments widely used in Lithuania; to select, based on the results of this analysis, an environment suitable for a distance learning course and for knowledge examination and assessment; and to extend the selected virtual learning environment with a graphical testing environment. A comparison of automated knowledge testing and assessment tools with the tools available in virtual learning environments leads to the conclusion that extending a virtual learning environment with a graphical testing system is suitable for examining and assessing exact-science knowledge and skills. Such a system is suitable for the automated examination and assessment of exact-science knowledge.