731

Evaluating the effects of several multi-stage testing design variables on selected psychometric outcomes for certification and licensure assessment

Zenisky, April L 01 January 2004 (has links)
Computer-based testing is becoming popular with credentialing agencies because new test designs are possible and the evidence is clear that these new designs can increase the reliability and validity of candidate scores and pass/fail decisions. Research on multi-stage testing (MST) to date suggests that the measurement quality of MST results is comparable to full-fledged computer-adaptive tests and improved over computerized fixed-form tests. MST's promise lies in this potential for improved measurement with greater control than other adaptive approaches for constructing test forms. Recommending use of the MST design and advising how best to set up the design, however, are two different things. The purpose of the current simulation study was to advance an established line of research on MST methodology by enhancing understanding of how several important design variables affect outcomes for high-stakes credentialing. Modeling of the item bank, the candidate population, and the statistical characteristics of test items reflected the conditions of an operational credentialing exam. Studied variables were module arrangement (4 designs), amount of overall test information (4 levels), distribution of information over stages (2 variations), strategies for between-stage routing (4 levels), and pass rates (3 levels), for 384 conditions in total. Results showed that high levels of decision accuracy (DA) and decision consistency (DC) were consistently observed, even when test information was reduced by as much as 25%. No differences due to the choice of module arrangement were found. With high overall test information, results were optimal when test information was divided equally among stages; with reduced test information, gathering more test information at Stage 1 provided the best results. Generalizing simulation study findings is always problematic: in practice, psychometric models never completely explain candidate performance, and with MST there is always a potential psychological impact on candidates if shifts in test difficulty are noticed. At the same time, two findings stand out in this research: (1) with limited amounts of overall test information, it may be best to capitalize on available information with accurate branching decisions early, and (2) there may be little statistical advantage in raising test information much above 10, as gains in reliability and validity appear minimal.
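The last finding follows from standard IRT arithmetic: the conditional standard error of an ability estimate is the inverse square root of test information, so information near 10 already implies a marginal reliability of roughly 0.90 when the ability distribution has unit variance. Below is a minimal sketch of that arithmetic under an assumed 2PL model; the item parameters are illustrative, not taken from the study's bank.

```python
import numpy as np

def item_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Illustrative item parameters (NOT from the study's item bank).
a = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 1.4, 0.8])
b = np.array([-0.5, 0.0, 0.3, 0.8, -0.2, 0.5, -0.8, 1.0])

theta = 0.0                                # ability near the cut score region
info = item_info_2pl(theta, a, b).sum()    # test information = sum of item information
se = 1.0 / np.sqrt(info)                   # conditional standard error
# With a N(0, 1) ability distribution, reliability is roughly 1 - SE^2,
# so I(theta) = 10 already corresponds to reliability near 0.90.
print(f"I={info:.2f}  SE={se:.3f}  approx. reliability={1.0 - se**2:.2f}")
```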
732

An item modeling approach to descriptive score reports

Huff, Kristen Leigh 01 January 2003 (has links)
One approach to bridging the gap between cognitively principled assessment, instruction, and learning is to provide the score user with meaningful details about the examinee's test performance. Several researchers have demonstrated the utility of modeling item characteristics, such as difficulty, in light of item features and the cognitive skills required to solve the item, as a way to link assessment and instructional feedback. The next generation of the Test of English as a Foreign Language (TOEFL) will be launched in 2005, with new task types that integrate listening, reading, writing, and speaking (the four modalities of language). Evidence-centered design (ECD) principles are being used to develop tasks for the new TOEFL assessment. ECD provides a framework within which to design tasks, to link information gathered from those tasks back to the target of inference through the statistical model, and to evaluate each facet of the assessment program in terms of its connection to the test purpose. One of the primary goals of the new exam is to provide users with a score report that describes the English language proficiencies of the examinee. The purpose of this study was to develop an item difficulty model as the first step in generating descriptive score reports for the new TOEFL assessment. Task model variables resulting from the ECD process were used as the independent variables, and item difficulty estimates were used as the dependent variable. Tree-based regression was used to estimate the nonlinear relationships among the item and stimulus features and item difficulty. The proposed descriptive score reports capitalized on the item features that accounted for the most variance in item difficulty. The validity of the resulting proficiency statements was supported theoretically by the links between the task model variables and student model variables evidenced in the ECD task design shells, and empirically by the item difficulty model. Future research should focus on improving the predictors in the item difficulty model, determining the most appropriate proficiency estimate categories, and comparing item difficulty models across major native language groups.
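The abstract names tree-based regression as the link between task-model features and item difficulty. The following is a rough sketch of that idea, not the study's actual model: it fits a shallow regression tree to hypothetical task features. The feature definitions and data are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical task-model variables for 200 items (the study's actual ECD
# variables are not reproduced): e.g., vocabulary load, number of inference
# steps, and stimulus length band, each coded 0-3.
X = rng.integers(0, 4, size=(200, 3)).astype(float)
# Simulated IRT difficulty estimates driven mostly by two of the features.
y = 0.6 * X[:, 1] + 0.9 * X[:, 2] + rng.normal(0.0, 0.3, 200)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(X, y)

# Each leaf groups items of similar predicted difficulty; the splits that
# define a leaf can be read off as feature-based descriptions of performance.
print(tree.feature_importances_)
```

Reading off the splits that define each leaf is what turns a fitted tree like this into candidate proficiency statements for a descriptive score report.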
733

Educating for Democratic Citizenship: An Analysis of the Role of Teachers in Implementing Civic Education Policy in Madagascar

Unknown Date (has links)
In democratizing states around the world, civic education programs have long formed a critical component of government and donor strategy to support the development of civil society and strengthen citizens' democratic competencies, encompassing the knowledge, attitudes and skills required for them to become informed and actively engaged participants in the economic and social development of their country. Such programs, however, have had limited success. Despite research that has identified critical components of successful democratic civic education programs, including the use of learner-centered methods and experiential civic learning opportunities rooted in real-world contexts, these programs continue to produce weak results. This study targets an under-examined link in the policy-to-practice chain: the teachers themselves. By applying a qualitative, grounded theory approach to analyze interview and observation data collected from public primary schools, teacher training institutes and other key sites in Madagascar where best practices in civic education have recently been adopted, this research presents original insight into the ways in which teachers conceptualize and execute their role as civic educator in a democratizing state. The impact of training and the diverse obstacles emerging from political and economic underdevelopment are examined and analyzed. Emerging from this analysis, a new approach to conceptualizing civic education programs is proposed in which a direct ('front-door') and an indirect ('back-door') approach to the development of democracy through civic education are assigned equal credence as legitimate, situationally appropriate alternatives for strengthening political institutions, civil society, and citizen participation in developing democracies around the world. / A Dissertation submitted to the Department of Educational Leadership and Policy Studies in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2010. / October 27, 2010. / Democracy, Civic Education, Citizenship, Teacher Training, Madagascar, Learner-Centered Pedagogy, Active Methods, Democratization, Sub-Saharan Africa / Includes bibliographical references. / Peter Easton, Professor Directing Dissertation; Jim Cobbe, University Representative; Sande Milton, Committee Member; Jeff Milligan, Committee Member.
734

Weighting procedures for robust ability estimation in item response theory

Skorupski, William P 01 January 2004 (has links)
Methods of ability parameter estimation in educational testing are subject to the biases inherent in various estimation procedures. This is especially true for tests whose properties do not meet the asymptotic assumptions of estimation procedures such as maximum likelihood estimation. The item weighting procedures in this study were developed as a means to improve the robustness of such ability estimates. A series of procedures to weight the contribution of items to examinees' scores is described and empirically tested in a simulation study under a variety of reasonable conditions. Item weights are determined so as to minimize the contribution of some items while simultaneously maximizing the contribution of others, accounting for either (1) the amount of information with respect to trait estimation or (2) the relative precision of item parameter estimates. Results indicate that weighting by item information produced ability estimates that were moderately less biased at the tails of the ability distribution and had substantially lower standard errors than scores derived from a traditional item response theory framework. Areas for future research using this scoring method are suggested.
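As a hedged sketch of the first weighting idea, the snippet below weights each item's log-likelihood contribution by its Fisher information at a provisional ability value and finds the maximizing ability by grid search. The 2PL items and response pattern are invented, and this illustrates the general approach rather than the dissertation's exact procedures.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def weighted_theta(resp, a, b, w):
    """Ability estimate maximizing an item-weighted log-likelihood (grid search)."""
    grid = np.linspace(-4.0, 4.0, 801)
    p = p_2pl(grid[:, None], a, b)                     # shape: (grid points, items)
    ll = resp * np.log(p) + (1 - resp) * np.log(1 - p)
    return grid[np.argmax((w * ll).sum(axis=1))]

# Illustrative items and one response pattern (1 = correct).
a = np.array([1.4, 0.7, 1.0, 1.8])
b = np.array([0.0, -1.0, 0.5, 1.2])
resp = np.array([1, 1, 0, 0])

# Weight each item by its information at a provisional theta of 0 --
# the first of the two weighting ideas the abstract describes.
info = a**2 * p_2pl(0.0, a, b) * (1.0 - p_2pl(0.0, a, b))
print(weighted_theta(resp, a, b, info / info.sum()))
```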
735

Principals' perceptions of the MCAS: The impact of high stakes testing in Massachusetts

McCall, Darryll Andrew 01 January 2003 (has links)
The primary goal of this study was to investigate principals' perceptions of the MCAS (Massachusetts Comprehensive Assessment System) and the “high stakes” nature of the exam. Starting in 2003, twelfth-grade students who have not passed either the English Language Arts or Mathematics sections of the MCAS will not receive a diploma. A decade after the signing of the Massachusetts Education Reform Act of 1993, educators are still grappling with the ever-changing educational landscape and how best to increase the amount of learning occurring in schools. The MCAS serves as the formal educational assessment system in Massachusetts. This qualitative study involved individually interviewing twelve middle and elementary school principals from Massachusetts, all of whom had at least ten years of experience as a building administrator. The principals were categorized by MCAS results as well as school demographic setting (urban, suburban, or rural) in order to provide a sample representative of the state. An interview guide with a specific set of twelve predetermined questions was used for the semi-structured interviews. The first five questions were previously used in D. F. Brown's 1993 study of principals' perceptions in Illinois, New York, and Tennessee. The remaining questions were geared toward eliciting responses specific to the MCAS. Responses were analyzed using an inductive process that allowed themes to emerge from the data. Three themes emerged from the data analysis: principals from higher-performing schools spoke favorably about the MCAS, principals from all categories were concerned about the public release of scores, and principals from lower-scoring schools felt there was too much pressure to improve their MCAS scores. Further analysis compared the themes from this study with those of Brown's 1993 study.
736

Standard setting methods for complex licensure examinations

Pitoniak, Mary Jean 01 January 2003 (has links)
As the content and format of educational assessments evolve, the need for valid and workable standard setting methods grows as well. Although there are numerous standard setting methods available for multiple-choice items, there is a much smaller pool of methods from which to choose when constructed-response items or performance assessments are considered. In this study, four standard setting methods were evaluated. Two of the methods were used with the simulation component of a licensing examination, and two were used with the multiple-choice component. The two methods used with the simulations were the Work Classification method and the Analytic method. With the multiple-choice items, the Item Cluster method and Direct Consensus method were employed. The Item Cluster and Direct Consensus methods had each been the subject of research on two previous occasions, and the aims of the current study were to make modifications suggested by earlier findings and to seek replication of trends found earlier. The Work Classification and Analytic methods, while bearing some similarity to existing methods, are seen as new approaches specially configured to reflect the features of the simulations under consideration in the study. The results for each method were evaluated in terms of three sources of validity evidence—procedural, internal, and external—and the methods for each item type were contrasted with each other to assess their relative strengths and weaknesses. For the methods used with the simulations, the Analytic method has an advantage procedurally due to time factors, but panelists felt more positively about the Work Classification method. Internally, interrater reliability for the Analytic method was lower. Externally, the consistency of cut scores between methods was good in two of the three simulations; the larger difference on the third simulation may be explainable by other factors. For the methods used with the multiple-choice items, this study's findings support most of those found in earlier research. Procedurally, the Direct Consensus method is more efficient. Internally, there was less consistency across panels with the Direct Consensus method. Externally, the Direct Consensus method produced higher cut scores. Suggestions for future research for all four methods are given.
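The abstract does not spell out the mechanics of the four methods, so the sketch below shows only the generic final step most standard-setting studies share: aggregating panelist judgments into a panel cut score and comparing panels for consistency. All ratings are invented for illustration.

```python
import numpy as np

# Invented panelist judgments on a 0-100 score scale; none of the four
# named methods is reproduced here. Each value is one panelist's
# recommended minimum passing score.
panel_a = np.array([62.0, 58.0, 65.0, 60.0, 63.0])
panel_b = np.array([70.0, 66.0, 69.0, 72.0, 68.0])

# Panel cut score = mean of panelist judgments; the between-panel gap is
# one crude external-consistency check.
cut_a, cut_b = panel_a.mean(), panel_b.mean()
print(f"cut scores: {cut_a:.1f} vs {cut_b:.1f} (difference {abs(cut_a - cut_b):.1f})")
# Within-panel spread as a rough internal-consistency indicator.
print(f"panelist SDs: {panel_a.std(ddof=1):.1f}, {panel_b.std(ddof=1):.1f}")
```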
737

A quasi-experimental analysis of second graders with dyslexia using the motor markers in the cerebellar deficit hypothesis

Stark, Sandra Kathleen 01 January 2013 (has links)
Developmental dyslexia is a specific impairment of reading ability in the presence of normal intelligence and adequate reading instruction. Current research has linked dyslexia to identifiable genetic underpinnings. Furthermore, cognitive processes shaped by uniquely genetically programmed neural networks determine the manner in which a dyslexic child learns to read, and breakdowns in these cognitive processes can be detected with measurable assessments. The resulting constellation of measurable symptoms, or markers, can differentiate the dyslexic child from children who are typically developing readers and from those who are poor readers for reasons unrelated to genetic pre-programming. Identifying children with dyslexia is critical to providing appropriate services and remedial models, as early intervention in the classroom is of the utmost importance. This study investigates one dimension purported to be associated with a breakdown in reading acquisition: motor function and motor processes. According to the Cerebellar Deficit Hypothesis, motor function is a valid and salient feature by which true dyslexia can be identified in children during their second-grade year, by which time most typically developing children have acquired the fundamentals of reading. Early identification allows appropriate intervention to be targeted as soon as possible, helping to ensure long-term success and quality of life for these individuals.
738

How Do Data Dashboards Affect Evaluation Use in a Knowledge Network? A Study of Stakeholder Perspectives in the Centre for Research on Educational and Community Services (CRECS)

Alborhamy, Yasmine 02 November 2020 (has links)
Since there is limited research on the use of data dashboards in the evaluation field, this study explores the integration of a data dashboard in a knowledge network, the Centre for Research on Educational and Community Services (CRECS), as part of its program evaluation activities. The study used three phases of data collection and analysis. Through interviews and focus group discussions, it investigates the process of designing a dashboard for a knowledge network and the different uses of a data dashboard in a program evaluation context. Four members of the CRECS team participated in one focus group; two other members participated in individual interviews. Data were analyzed for thematic patterns. Results indicate that the process of designing a data dashboard consists of five steps, reflecting the iterative nature of design and the need for sufficient consultation with stakeholders. Moreover, the data dashboard has the potential to be used internally, within CRECS, and externally with other stakeholders. In a program evaluation context, the dashboard is also believed to be beneficial as a monitoring tool, for evaluability assessment, and for evaluation capacity building. In addition, it can be used externally for accountability, reporting, and communication. The study sheds light on the potential of data dashboards in organizations, though longer and broader studies are needed to confirm these uses and their sustainability.
739

Satisfaction and service quality in the quantity surveying profession

Procter, Carol Jane 14 April 2020 (has links)
This thesis investigates client satisfaction and service quality in the quantity surveying profession. Whilst many reasons abound for dissatisfaction with the construction industry, this thesis focuses on client satisfaction with the provision of quantity surveyors' services. To this end, a greater understanding of the psychological processes involved in making a satisfaction decision is required and is achieved by the presentation of the theory of consumer satisfaction. It was found that consumer satisfaction is the result of meeting or exceeding expectation with performance. Furthermore, performance is not measured in technical terms, but as a result of client perceptions. Perceptions are at the heart of this thesis. This study investigates the relationship between client perceptions and quantity surveyors' perceptions of the same.
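The finding that satisfaction results from meeting or exceeding expectations matches the expectancy-disconfirmation idea from the consumer satisfaction literature. A minimal formal statement, offered as an assumption about the kind of model the thesis draws on rather than a quotation from it:

```latex
S = f(P - E), \qquad f' > 0
```

where $E$ is the client's prior expectation, $P$ is perceived (rather than technical) performance, and satisfaction $S$ rises with positive disconfirmation $P - E$ and falls when performance falls short of expectation.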
740

Comparative analysis of dynamic assessment using a nonverbal standardized intelligence test and a verbal curriculum-based test

Lolwana, Peliwe P 01 January 1991 (has links)
The purpose of this study was a comparative analysis of dynamic assessment procedures using two types of tests. Specifically, the aim was to find out whether instruction in basic cognitive skills would improve students' performance on specific standardized tests. The tests used were a verbal educational test (Standardized Test of Essential Writing Skills) and a non-verbal intelligence test (Raven Progressive Matrices). Fifty-two subjects were randomly selected from the 7th-grade population of a middle school in Western Massachusetts, representing slightly more than 35% of the school's 7th-grade population (N = 148). Two of the five seventh-grade classes were selected by the principal and the researcher: one a low academic performance class and the other a high academic performance class. Prior academic performance and achievement scores were collected from school records. Participation in the study was voluntary. The pretest instruments (Raven Progressive Matrices and Standardized Test of Essential Writing Skills) were administered in group sessions. Students were divided into two treatment groups, and each group received two sessions of graduated prompting instruction, each lasting 30-40 minutes. The same instruments were then administered as the posttest. Individual student data were kept confidential and aggregated for group statistical analysis. The findings suggest that dynamic assessment improved the subjects' performance on the verbal educational test (Standardized Test of Essential Writing Skills) but not on the non-verbal intelligence test (Raven Progressive Matrices). The type of instruction received did not appear to have a significant effect on posttest performance on either the Standardized Test of Essential Writing Skills or the Raven Progressive Matrices. However, a comparison of the highest and lowest academic groups (as defined by the teachers) showed that the lowest group improved its scores on all test measures relative to the highest academic group.
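The pre/post design described above is typically analyzed with paired comparisons. A minimal sketch with simulated data (the study's actual scores are not reproduced, and the gain size here is an assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated pre/post scores for one group of 26 students; each pair is one
# student's pretest and posttest score on the writing-skills measure.
pre = rng.normal(50.0, 10.0, 26)
post = pre + rng.normal(4.0, 6.0, 26)   # assumed modest average gain

# Paired t-test mirroring the pretest/posttest design.
t, p = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.1f}, t = {t:.2f}, p = {p:.4f}")
```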
