1. Use of Evidence-Based Test Development in Pre-Licensure Nursing Programs: A Descriptive Study of Faculty Beliefs, Attitudes and Values
Berrick, Richild (1 January 2019)
Background: Effective testing in pre-licensure nursing programs is a challenge in nursing education. Implementing evidence-based test development is essential to successful assessment of students' competence and preparation for licensure. Purpose: Identifying the beliefs, attitudes and values of nursing faculty will contribute to the use of best practices in student assessment, ultimately contributing to increased retention of competent students and an expanded healthcare workforce. Theoretical Framework: This study is based on Rokeach's theory of beliefs, attitudes and values. Methods: A quantitative descriptive design with survey data collection was used. The sampling strategy was a purposive, non-probability convenience sample. The instrument had been developed and validated in a previous study, and additional researcher-developed items were added; these additional items were field tested for readability and structure by current nursing educators. Results: The results revealed that nursing faculty do not consistently use evidence-based test development practices within their nursing programs. The beliefs and attitudes identified in the data indicate concerns about faculty understanding of, and confidence in, evidence-based practices. Several challenges to implementing test development practices were identified, such as addressing linguistic and cultural biases, faculty time constraints, and utilization of test banks. Conclusions: Identifying faculty beliefs, attitudes, and values regarding evidence-based test development practices offers insight into the challenges facing nursing faculty, nursing programs and nursing students. These challenges affect the retention and persistence of nursing students in pre-licensure programs, which ultimately affects diversity in the nursing workforce.
2. Exploring Teacher Assessment Literacy through the Process of Training Teachers to Write Assessment Items
Wright, Heather Peltier (29 March 2017)
The purpose of this study was to examine the process and impact of assessment training content and delivery mode on the quality of assessment items developed by teachers in a two-year assessment development project. Teacher characteristics were examined as potential moderating factors. Four delivery modes were employed in the project: synchronous online, asynchronous online, in-person workshop, and blended (a combination of online and in-person training). The quality of assessment items developed by participating teachers was measured via three indicators: 1) item acceptance rate; 2) number of item reviews (an indicator of how many times an eventually accepted item was rejected before being approved); and 3) psychometric properties of the items (item difficulty and item discrimination) in the field-test data.
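In classical test theory terms, item difficulty is the proportion of examinees answering an item correctly, and item discrimination is commonly computed as a point-biserial correlation between the item score and the total score. The sketch below is a minimal illustration of those two calculations on a small hypothetical response matrix; it is not the project's actual scoring code.

```python
import numpy as np

# Hypothetical scored field-test responses: rows = examinees, columns = items (1 = correct, 0 = incorrect).
responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])

# Classical item difficulty: proportion of examinees answering each item correctly.
difficulty = responses.mean(axis=0)

# Corrected point-biserial discrimination: correlation between each item score and the
# total score on the remaining items (so an item is not correlated with itself).
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```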
A teacher perception survey with quantitative and qualitative items was used to explore teachers' perceptions of the training across the four modes and the impact they expected project participation to have on their classroom assessment practices.
Multilevel modeling and multiple regression were used to examine the quality of items developed by participants, while constant comparative analysis, a chi-square test, and ANOVA were employed to analyze participants’ responses to a participation survey.
No pre-existing teacher variables were found to have a significant impact on the item discrimination values, though prior assessment development experience beyond that of the classroom level was found to have a significant relationship with the number of reviews per item. After controlling for prior assessment development experience, participant role was found to have a significant (p < .01) impact on the number of reviews per item. Items written by participants who served as both item writers and reviewers had a significantly lower number of reviews per item, meaning their items were rejected less frequently than items written by participants who served as item writers only. No differences in item quality were found based on the mode of training in which item writers participated.
Responses to the training evaluation survey differed significantly by mode of training at p < .001. The in-person trained group had the lowest total rating, followed by the online asynchronous group, while the online synchronous group had the highest overall rating of the training. Participant responses to open-ended questions also differed significantly by mode of training.
3. The development and evaluation of Africanised items for multicultural cognitive assessment
Bekwa, Nomvuyo Nomfusi
"Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less." (Marie Curie)
Debates about how best to test people from different contexts and backgrounds continue to hold the spotlight in testing and assessment. In an effort to contribute to these debates, the purpose of the study was to develop and evaluate the viability and utility of nonverbal figural reasoning ability items inspired by African cultural artefacts such as African material prints, art, decorations, beadwork and paintings. The research was conducted in two phases: phase 1 focused on the development of the new items, while phase 2 was used to evaluate the new items. The aims of the study were to develop items inspired by African art and cultural artefacts in order to measure general nonverbal figural reasoning ability; to evaluate the viability of the items in terms of their appropriateness in representing African art and cultural artefacts, specifically to determine the face and content validity of the items from a cultural perspective; and to evaluate the utility of the items in terms of their psychometric properties.
These elements were investigated using an exploratory sequential mixed methods research design, with the quantitative strand embedded in phase 2. For sampling purposes, a sequential mixed methods sampling design and non-probability sampling strategies were used, specifically purposive and convenience sampling. The data collection methods included interviews with a cultural expert and a colour-blind person, open-ended questionnaires completed by school learners, and test administration to a group of 946 participants undergoing a sponsored basic career-related training and guidance programme. Content analysis was used for the qualitative data, while statistical analysis based mainly on the Rasch model was used for the quantitative data.
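For context, the dichotomous Rasch model used in the quantitative analysis expresses the probability of a correct response as a logistic function of the difference between person ability and item difficulty. The following is a minimal illustrative sketch in Python with hypothetical ability and difficulty values, not the study's own analysis code or data.

```python
import numpy as np

def rasch_probability(theta, b):
    """Probability of a correct response under the dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical person abilities and item difficulties on a common logit scale.
abilities = np.array([-1.0, 0.0, 1.5])
item_difficulties = np.array([-0.5, 0.0, 0.8, 2.0])

# Expected probability of success for every person-item pair (rows = persons, columns = items).
expected = rasch_probability(abilities[:, None], item_difficulties[None, :])
print(np.round(expected, 2))
```

In these terms, adequate targeting means the item difficulties fall in roughly the same logit range as the person abilities, so most expected probabilities are neither near 0 nor near 1.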
The results of phase 1 were positive and provided support for further development of the new items, and based on this feedback, 200 new items were developed. This final pool of items was then used for phase 2, the evaluation of the new items. The statistical analysis of the new items indicated acceptable psychometric properties of the general reasoning ("g" or fluid ability) construct. The item difficulty values (p-values) for the new items were determined using classical test theory (CTT) analysis and ranged from 0.06 (most difficult item) to 0.91 (easiest item). Rasch analysis showed that the new items were unidimensional and that they were adequately targeted to the level of ability of the participants, although there were elements that would need to be improved. The reliability of the new items was determined using the Cronbach alpha reliability coefficient (α) and the person separation index (PSI), and both methods indicated similar indices of internal consistency (α = 0.97; PSI = 0.96). Gender-related differential item functioning (DIF) was investigated, and the majority of the new items did not indicate any significant differences between the gender groups. Construct validity was determined from the relationship between the new items and the Learning Potential Computerised Adaptive Test (LPCAT), which uses traditional item formats to measure fluid ability. The correlation results for the total score of the new items and the pre- and post-tests were 0.616 and 0.712 respectively. The new items were thus confirmed to be measuring fluid ability using nonverbal figural reasoning ability items. Overall, the results were satisfactory in indicating the viability and utility of the new items.
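As a point of reference, the Cronbach alpha reported above can be computed directly from a scored item-response matrix. The sketch below shows the standard formula applied to a small hypothetical data set; it is not the study's 200-item data or analysis code.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical dichotomously scored responses (rows = test takers, columns = items).
scores = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
])
print(round(cronbach_alpha(scores), 3))
```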
The main limitation of the research was that, because the sample was not representative of the South African population, the scope for generalisation was limited. This led to a further limitation, namely that it was not possible to conduct important DIF analyses for various other subgroups. Further research has been recommended to build on this initiative.
Industrial and Organisational Psychology