461

Examining the equivalence of the PIRLS 2016 released texts in South Africa across three languages

Roux, Karen January 2020 (has links)
The Progress in International Reading Literacy Study (PIRLS) is a large-scale reading comprehension assessment, which assesses Grade 4 learners’ reading literacy achievement. The findings from the last cycle, PIRLS 2016, indicated that South African Grade 4 and 5 learners performed poorly in reading comprehension. This finding confirms the previous cycles’ results, in which South African learners achieved the lowest results across the participating countries. Approximately eight out of ten Grade 4 learners cannot read for meaning in any of the tested languages. Due to the poor results in PIRLS, the President of South Africa stated that every ten-year-old child should be able to read for meaning, thus cementing reading literacy as a national aim. The aim of this mixed methods research was to determine whether the PIRLS Literacy 2016 and PIRLS 2016 limited release texts are equivalent across languages, specifically English, Afrikaans and isiZulu. Four research sub-questions were explored to assist in addressing the main research question posed by this study: To what extent are the PIRLS 2016 released texts in English, Afrikaans and isiZulu equivalent in Grade 4 and Grade 5? As this study took the form of a sequential explanatory mixed methods approach, the first phase investigated the South African Grade 4 and 5 results by first looking at descriptive statistics, such as percentages and means. After the initial exploration of the data, I conducted Rasch analyses to determine whether the items from the limited release texts showed measurement invariance – in other words, whether any items behaved differently for different groups of learners. As part of the Rasch analyses, individual item-fit statistics were examined and differential item functioning (DIF) analyses were conducted using RUMM2030. In phase two, the limited release texts were analysed by experts who attended workshops and completed open-ended questionnaires regarding the equivalence of the identified texts. The qualitative phase was conducted in order to complement and extend the quantitative findings of phase one. The findings revealed that the limited release texts, with their accompanying items, were not equivalent across the different languages. However, the items that displayed DIF showed no clear pattern: the items did not universally favour one language, nor did the texts universally discriminate against a particular language. An in-depth look at the texts and items themselves revealed that the Flowers on the Roof text is considered the poorest translation into Afrikaans and isiZulu. Overall, all the texts were considered to be appropriate for South African learners, as the texts made use of rich vocabulary and introduced the learners to new ideas and concepts. Thus, this study offers new insights into the equivalence of the PIRLS assessments as well as possible reasons for the non-equivalence of each of the limited release texts. Based on the findings of this study, recommendations and directions for further research are provided. / Thesis (PhD)--University of Pretoria, 2020. / Science, Mathematics and Technology Education / PhD / Unrestricted
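The DIF analysis above was run in RUMM2030, which is proprietary software. As a rough illustration of the general idea only, the sketch below runs a logistic-regression DIF check (in the style of Swaminathan and Rogers) in Python; the data file, the column names (item7, total_score, language) and the choice of method are assumptions made for illustration, not the thesis’s actual procedure.

```python
# Illustrative logistic-regression DIF check for one dichotomous item.
# The thesis used Rasch-based DIF in RUMM2030; this sketch only shows the general idea.
# The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("pirls_item_responses.csv")
# expected columns: item7 (0/1), total_score, language ("English", "Afrikaans", "isiZulu")

# Model 1: ability (total score) only
m1 = smf.logit("item7 ~ total_score", data=df).fit(disp=0)
# Model 2: add language group (uniform DIF) and its interaction with ability (non-uniform DIF)
m2 = smf.logit("item7 ~ total_score + C(language) + total_score:C(language)", data=df).fit(disp=0)

# A significant improvement in fit from the added terms flags the item for DIF.
lr_stat = 2 * (m2.llf - m1.llf)
extra_df = m2.df_model - m1.df_model
p_value = stats.chi2.sf(lr_stat, extra_df)
print(f"LR statistic = {lr_stat:.2f} on {extra_df:.0f} df, p = {p_value:.4f}")
```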
462

Families Perceptions of Care Related to End-Of-Life Care Visits

Gatian, Rebecca Ann 12 April 2022 (has links)
No description available.
463

Exploring how objects used in a Picture Vocabulary Test influence validity

De Bruin, Ilse 03 June 2011 (has links)
Multilingualism in the classroom is one of the many challenges found in the cumbersome bag that the South African education system is carrying over its shoulders at present. Globalisation and migration have added to the burden, bringing further diversity to already diverse classrooms. In South Africa the spotlight is focused on equality. Equality is expected in the education system, in the classroom, and especially in tests. With 11 official languages, excluding the additional languages of foreign learners, it has become a daunting task to create tests that are fair to multilingual learners in one classroom. Items in tests that function differently from one group to another can produce biased marks. An investigation was conducted to detect any biased items present in a Picture Vocabulary Test. The study was led by the main research question: How do objects used in a Picture Vocabulary Test influence the level of validity? The first sub-question was: How do objects used in a Picture Vocabulary Test influence the level of validity? The next sub-question was: To what extent is a unidimensional trait measured by a Picture Vocabulary Test? The final sub-question was: To what extent do the items in a Picture Vocabulary Test perform the same for the different language groups? The Picture Vocabulary Test was administered to Grade 1 learners in Afrikaans-, English- or Sepedi-speaking schools within Pretoria, Gauteng, with a sample totalling 1361 learners. The process involved a statistical procedure known as Rasch analysis. With the help of the Rasch model, a differential item functioning (DIF) analysis was conducted to investigate whether biased items were present in the test. The aim of this study is to create greater awareness of how biased items in tests can be detected and resolved. The results showed that the items in the Picture Vocabulary Test all tested vocabulary, although items were detected that performed differently across the three language groups participating in the study. / Dissertation (MEd)--University of Pretoria, 2010. / Science, Mathematics and Technology Education / unrestricted
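For comparison with the Rasch-based DIF used in the dissertation, a Mantel–Haenszel check is another common way to flag items that behave differently across language groups of comparable ability. The sketch below is an illustration only: the arrays are randomly generated placeholders, not the study’s 1361 learners, and the function is a bare-bones implementation.

```python
# Minimal Mantel-Haenszel DIF sketch for one dichotomous item and two language groups,
# stratified by total test score. Illustration only; arrays are random placeholders.
import numpy as np

def mantel_haenszel_or(correct, group, total):
    """Common odds ratio (reference vs focal group) pooled across score strata."""
    num, den = 0.0, 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum(correct[m] & (group[m] == "ref"))     # reference group, correct
        b = np.sum(~correct[m] & (group[m] == "ref"))    # reference group, incorrect
        c = np.sum(correct[m] & (group[m] == "focal"))   # focal group, correct
        d = np.sum(~correct[m] & (group[m] == "focal"))  # focal group, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")

# Hypothetical data: 0/1 responses to one item, group labels, and total scores per learner.
rng = np.random.default_rng(0)
correct = rng.integers(0, 2, 500).astype(bool)
group = rng.choice(["ref", "focal"], 500)
total = rng.integers(0, 21, 500)

# An odds ratio far from 1 suggests the item favours one group after matching on ability.
print("MH odds ratio:", round(mantel_haenszel_or(correct, group, total), 2))
```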
464

The Effect of Item Format on Computation Subtest Scores of Standardized Mathematics Achievement Tests

Carcelli, Larry 01 May 1981 (has links)
The effect on children's scores of different item formats used in standardized mathematics achievement tests was investigated. Second grade students were given a mathematics computation test using formats derived from five standardized achievement tests. Identical content was tested with each format. Differences in test scores between types of formats were statistically significant at p < .001 (F = 45.25). These results indicate that what a student appears to know is substantially influenced by the format of the particular test used in measuring achievement. These differences are not accounted for by the normative scaling of the different tests. Greater attention should be given to the effect of test item format in selecting and administering achievement tests.
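The abstract reports only the overall test (F = 45.25, p < .001). Since each student answered identical content under formats drawn from five tests, one plausible way to run such an analysis today is a repeated-measures ANOVA; the sketch below assumes hypothetical column names and may differ from the original 1981 analysis.

```python
# Illustrative repeated-measures ANOVA: each student's computation score under each
# of the five item formats. Column names and the data file are hypothetical, and the
# original study's exact analysis may have differed.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("format_scores.csv")  # expected columns: student, format, score
res = AnovaRM(df, depvar="score", subject="student", within=["format"]).fit()
print(res.anova_table)                 # F statistic and p-value for the format effect
```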
465

Macromorality and Mormons: A Psychometric Investigation and Qualitative Evaluation of the Defining Issues Test-2

Winder, Daniel R. 01 May 2009 (has links)
In 1988, P. Scott Richards' dissertation research at the University of Minnesota asserted that the Defining Issues Test (DIT), a widely accepted paper-and-pencil test of moral reasoning, exhibited item bias against religiously orthodox persons. Since 1988 (when Richards' data were reported), new methods of differential item functioning (DIF) analysis have been developed, a new DIT has emerged (the DIT-2), and a Neo-Kohlbergian framework has been built upon moral schemas derived from Kohlberg's Piagetian-like six stages. With new methods, new tests, and unanswered questions, this study's results imply: (1) that DIT-2 items exhibit differential item functioning for religiously orthodox persons in ways that are statistically significant but not as practically significant as Richards' earlier findings suggested, (2) that religious orthodoxy does influence macromoral reasoning as measured by the DIT-2, and (3) that the maintaining norms schema is insufficient to explain the variables that contribute to why religiously orthodox persons score the way they do. This study implies that the maintaining norms schema may be misnamed, because it appears to be measuring a different construct than maintaining norms macromoral reasoning.
466

Use of Evidence-Based Test Development in Pre-Licensure Nursing programs: A Descriptive Study of Faculty Beliefs, Attitudes and Values

Berrick, Richild 01 January 2019 (has links)
Background: Effective testing in pre-licensure nursing programs is a challenge in nursing education. Implementing evidence-based test development is essential to successful assessment of students’ competence and preparation for licensure. Purpose: Identifying the beliefs, attitudes and values of nursing faculty will contribute to the use of best practices in student assessment, ultimately supporting increased retention of competent students and growth of the healthcare workforce. Theoretical Framework: This study is based on Rokeach’s theory of beliefs, attitudes and values. Methods: A quantitative descriptive research methodology with survey data collection was used. A purposive, non-probability convenience sample was the sampling strategy. The instrument utilized was developed and validated in a previous study, and additional researcher-developed items were added. These additional items were field tested for readability and structure by current nursing educators. Results: The results revealed that nursing faculty do not consistently utilize evidence-based test development practices within their nursing programs. The beliefs and attitudes identified from the data indicate concerns about understanding of, and confidence in, evidence-based practices. Several challenges to implementing test development practices were identified, such as addressing linguistic and cultural biases, faculty time constraints, and utilization of test banks. Conclusions: Identifying faculty beliefs, attitudes, and values regarding evidence-based test development practices offers insight into the challenges facing nursing faculty, nursing programs and nursing students. These challenges affect the retention and persistence of nursing students in pre-licensure programs, which ultimately affects diversity in the nursing workforce.
467

Does a Single Item Alcohol Screening Test Improve Rates of Diagnosis/Referral of Alcohol Use Disorder in a Medicare Population with Diagnosis of Depression or Anxiety?

Larsen, Jack, Winegar, Bruce, Gilreath, Jesse, Hewitt, Sarah 18 March 2021 (has links)
Screening, Brief Intervention, and Referral to Treatment (SBIRT) for alcohol use has been shown to reduce rates of alcohol use across multiple clinical settings, and is routinely recommended by the United States Preventive Services Task Force (USPSTF). In 2005, the National Institute on Alcohol Abuse and Alcoholism (NIAAA) recommended implementing a single item screening question (SISQ) for this purpose. Since then, the SISQ has been well validated against other tools, such as the Alcohol Use Disorders Identification Test (AUDIT). It has not, however, been well studied in particular populations, such as those with comorbid anxiety and/or depressive disorders. Medicare Annual Wellness Visits present a unique opportunity to study the SISQ because, while they do inquire about alcohol use, they do not routinely include a SISQ. Our study seeks to investigate the efficacy of implementing a SISQ during Medicare Annual Wellness Visits in a residency clinic population with anxiety and/or depressive disorders. Data collection is ongoing and will measure rates of referral to treatment before and after the SISQ is implemented, as well as rates of brief interventions given.
468

Proxy Reliability of the 12-Item World Health Organization Disability Assessment Schedule II Among Adult Patients With Mental Disorders

Zhou, Wei, Liu, Qian, Yu, Yu, Xiao, Shuiyuan, Chen, Lizhang, Khoshnood, Kaveh, Zheng, Shimin 01 August 2020 (has links)
Purpose: Despite the wide usage of the World Health Organization Disability Assessment Schedule II (WHODAS 2.0) in psychiatry research and clinical practice, there is limited knowledge of its proxy reliability among people with mental disorders. This paper aimed to compare the 12-item WHODAS 2.0 responses of adult patients with mental disorders to those of their family caregivers. Methods: In this study, 205 pairs of patients with mental disorders and primary family caregivers were consecutively recruited from one inpatient mental health department in a large hospital in China. All participants completed the 12-item version of the WHODAS 2.0 to assess patients’ functioning in the 30 days prior to hospitalization. Measurement invariance, including configural, metric and scalar invariance, was tested across patient and proxy groups using multi-group confirmatory factor analysis. Agreement between patients and proxies was examined by paired Wilcoxon tests and intraclass correlation coefficients (ICC). Subgroup analyses for proxy reliability were conducted within strata of proxy kinship and patient psychiatric diagnosis. Results: The 12-item WHODAS 2.0 achieved configural, metric and partial scalar invariance across patient and proxy groups. Unsatisfactory consistency was found for most items (ICC < 0.75, P < 0.05), especially for items on Cognition, Getting along, Life activities, and Participation in society (ICC < 0.4, P < 0.05). Spouses agreed with patients more often than parents (ICC ≥ 0.4, P < 0.05). The paired Wilcoxon tests found that proxies tended to overestimate the impairment of patients with psychotic disorders and to underestimate the impairment of patients with mood disorders. Conclusion: Our study reveals inconsistency between self and proxy reports on the 12-item WHODAS 2.0 among adult patients with mental disorders. When proxy reports are needed, spouses are preferable to parents. We should be aware that proxies tend to overestimate impairment among patients with psychotic disorders and to underestimate it among patients with mood disorders.
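The paper does not state which ICC form was used, so the sketch below assumes a two-way random-effects, absolute-agreement, single-measure coefficient (Shrout & Fleiss ICC(2,1)) for the patient-proxy agreement; the ratings are simulated placeholders, not the study’s data.

```python
# Sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure)
# for patient vs. proxy ratings of one WHODAS 2.0 item. The ICC form is an assumption;
# the ratings below are simulated placeholders.
import numpy as np

def icc_2_1(x):
    """x: n_targets x k_raters matrix of ratings."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # one mean per patient
    col_means = x.mean(axis=0)   # one mean per rater (self-report, proxy)
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings for 205 patient/proxy pairs on a 1-5 item.
rng = np.random.default_rng(1)
patient = rng.integers(1, 6, 205).astype(float)
proxy = np.clip(patient + rng.integers(-1, 2, 205), 1, 5)
print("ICC(2,1):", round(icc_2_1(np.column_stack([patient, proxy])), 3))
```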
469

The Strength of Multidimensional Item Response Theory in Exploring Construct Space that is Multidimensional and Correlated

Spencer, Steven Gerry 08 December 2004 (has links) (PDF)
This dissertation compares the parameter estimates obtained from two item response theory (IRT) models: the 1-PL IRT model and the MC1-PL IRT model. Several scenarios were explored in which both unidimensional and multidimensional item-level and person-level data were used to generate the item responses. The Monte Carlo simulations mirrored the real-life application of the two correlated dimensions of Necessary Operations and Calculations in the basic mathematics domain. In all scenarios, the MC1-PL IRT model showed greater precision in the recovery of the true underlying item difficulty values and person theta values along each primary dimension as well as along a general second-order factor. The fit statistics that are generally applied to the 1-PL IRT model were not sensitive to the multidimensional item-level structure, reinforcing the requisite assumption of unidimensionality when applying the 1-PL IRT model.
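A Monte Carlo study of this kind starts by generating item responses from a known model. The sketch below simulates dichotomous responses under a unidimensional 1-PL (Rasch) model only; the dissertation’s multidimensional MC1-PL generating conditions and parameter values are not reproduced here, and all values are arbitrary.

```python
# Minimal Monte Carlo sketch: simulate dichotomous responses under a unidimensional
# 1-PL (Rasch) model, P(correct) = 1 / (1 + exp(-(theta - b))). The dissertation's
# multidimensional MC1-PL conditions are not reproduced; all parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items = 1000, 30
theta = rng.normal(0.0, 1.0, size=n_persons)   # person abilities
b = rng.uniform(-2.0, 2.0, size=n_items)       # item difficulties

logits = theta[:, None] - b[None, :]           # person-by-item matrix of theta - b
p = 1.0 / (1.0 + np.exp(-logits))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

# Sanity check: easier items (lower b) should show a higher proportion correct.
print("correlation of b with proportion correct:",
      round(np.corrcoef(b, responses.mean(axis=0))[0, 1], 2))
```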
470

Assessing Item and Scale Sensitivity to Therapeutic Change on the College Adjustment Scales: Working Toward a Counseling Center Specific Outcome Questionnaire

Wimmer, Christian L. 04 June 2008 (has links) (PDF)
Many college counseling centers use outcome measures to track therapeutic change for their clientele. These questionnaires have traditionally looked primarily at a client's symptom distress (e.g. depression, anxiety, suicidality, etc.) and are used to detect changes in the client's life that are due to therapy. Unfortunately, there is no measure that has been exclusively created and validated for use with college students. The College Adjustment Scales (CAS) form a multidimensional psychological measure designed specifically for use in college and university settings. Even though the CAS was created as a screening tool, it contains items that provide insight into changes that are possibly taking place for college students in therapy that are not measured by current outcome questionnaires. The purpose of this study was to determine which items and scales on the CAS were sensitive to therapeutic change for college students, thus assessing the validity of the test as an outcome measure and providing data for the development of future college counseling specific outcome questionnaires. This study used hierarchical linear modeling (HLM) to generate slopes that represent change over time for treatment and control groups. These slopes were compared to each other in order to determine whether each item and scale was sensitive to therapeutic change. The control sample consisted of 127 student participants that were not in therapy. The treatment sample was archival and consisted of 409 student clients. Seven of the nine scales were found to be sensitive to therapeutic change. However, 45 of the 108 individual items did not meet the set criteria. Because of these findings, the creators of the CAS are encouraged to revise the measure if it is to be used as an outcome questionnaire. In addition, researchers and clinicians should consider these results and take care not to treat this measure as an instrument that is wholly sensitive to therapeutic change for the college population. Items found to be sensitive to therapeutic change can be used to create a new outcome measure specifically for counseling centers.
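HLM growth models of the kind described here are commonly fit as linear mixed models with a random intercept and slope for each client, where the time-by-group interaction tests whether the treatment and control slopes differ. The sketch below uses hypothetical column names and is not the study’s exact specification.

```python
# Sketch of an HLM-style growth model: random intercept and slope per client, with a
# session-by-group interaction testing whether treatment and control slopes differ.
# Column names and the data file are hypothetical; the study's exact model may differ.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cas_longitudinal.csv")  # expected columns: client, group, session, scale_score
model = smf.mixedlm("scale_score ~ session * group",
                    data=df,
                    groups=df["client"],
                    re_formula="~session")  # random intercept and slope for session
result = model.fit()
print(result.summary())                     # inspect the session:group interaction term
```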
