  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
771

The Relationship between Teacher Evaluation Ratings and Student Achievement in a Rural, Midwest School District

Mathus, Margaret A. 20 April 2017 (has links)
While many factors have been identified as influencing student academic performance, previous studies have consistently identified effective teaching as the most significant factor, within the control of educators, leading to improved student achievement. Nonetheless, educational experts, statisticians, and policy-makers alike acknowledge the complexity of isolating the contributions of individual teachers to their students' achievement. Converging with these changing beliefs about teaching and learning, the landscape of education faced an additional challenge: an increased demand for schools and individual teachers to be held accountable for the academic growth of their students. Local districts have been empowered to create and implement teacher evaluation systems, with the caveat that they maintain student achievement data as one measure of teacher effectiveness.

While research has investigated the relationship between performance-based teacher evaluation systems and student achievement, studies have been limited to the most common large-scale models. This study was unique because it focused on a specific teacher evaluation system, created by and for a rural Missouri school district, during its first two years of implementation. The purpose of this mixed-methods research study was two-fold: (1) to investigate the relationship between teachers' annual evaluation ratings (as measured by the researched district's teacher evaluation tool) and their students' academic performance (as measured by the MAP and i-Ready assessments), and (2) to analyze teacher and administrator perceptions of the impact of the new teacher evaluation system on improving student achievement and teachers' instructional performance.

This study's analysis took both math and reading achievement scores into account, considering two different standardized assessments: the state-mandated Missouri Assessment Program (MAP) and a locally administered i-Ready Benchmark Assessment. The student achievement data showed an increase in student achievement over the two years of the study. However, the results did not establish a correlation between the two variables, teacher quality and student achievement. More sensitive evaluation methods are needed to isolate the effect of teacher evaluation ratings on student achievement.
772

The effects on calculations of reading in a vicinity of clinical optometric measurements

27 October 2008 (has links)
D.Phil. / none / Prof. W.F. Harris
773

Exploring early childhood classroom teachers' experiences with administrative support in the implementation of the DRDP as an authentic assessment tool

Krause, Judith Anne 16 August 2016 (has links)
Purpose. The purpose of this qualitative study was to explore early childhood classroom teachers' experiences with administrative support in the implementation of the Desired Results Developmental Profile (DRDP) as an authentic assessment tool.

Methodology. The participants included 10 Head Start and 10 State Preschool teachers implementing the DRDP. The researcher conducted and transcribed one-on-one participant interviews. The questions were pilot tested, and a member check was conducted. An inductive analysis approach, in which both the researcher and a second rater independently examined the data, was employed to identify common themes.

Findings. Results reflected the participants' experiences regarding the administrative support provided in DRDP implementation. The findings revealed six themes relevant to the research questions: (a) reflecting on DRDP results is challenging due to time constraints, (b) time off the floor with children aids in reflecting on DRDP results, (c) the Center for Child and Family Studies at WestEd (WestEd) DRDP training is encouraged, (d) the WestEd website is helpful in implementing the DRDP, (e) program-specific DRDP resources are provided, and (f) time is a valuable resource to aid in DRDP implementation.

Conclusions. The study's results indicated that administrative support is important in DRDP implementation. A major finding exposed the need for time off the floor with children for both reflection on DRDP results and completion of the required paperwork. The data from the study will aid early childhood administrators in future planning.

Recommendations. The researcher recommends that additional early childhood program types be studied. Further recommendations include a quantitative study on the same topic and exploration of support regarding authentic assessment tools other than the DRDP.
774

IP Router Testing, Isolation and Automation

Peddireddy, Divya January 2016 (has links)
Context. Test automation is a technique used by software development organizations to reduce the time and effort invested in manual testing. Automating existing manual tests has now gained popularity in the telecommunications industry as well; telecom companies are looking for ways to improve their existing test methods with automation and to quantify its benefit. At the same time, existing industry methods for throughput calculation involve measurements on a larger timescale, such as one second. The ability to measure the throughput of network elements such as routers on smaller timescales gives a better understanding of forwarding capability, resource sharing, and traffic isolation in these devices.

Objectives. In this research, we develop a framework for automatically evaluating the performance of routers on multiple timescales: one second, one millisecond, and less. The benefit of introducing test automation is expressed in terms of return on investment, by comparing manual and automated testing. The throughput of a physical router is measured for varying frame sizes and at multiple timescales.

Methods. The benefit of test automation is assessed quantitatively, while router throughput on multiple timescales is evaluated experimentally and quantitatively using passive measurements. A framework is developed for automatically conducting the given test, enabling the user to test the performance of network devices with minimal user intervention and improved accuracy.

Results. The results of this thesis work include the benefit of test automation, in terms of return on investment compared to manual testing, followed by the performance of the router on multiple timescales. The results indicate that test automation can improve existing manual testing methods by introducing greater accuracy. The throughput results indicate that the performance of a physical router varies across timescales such as one second and one millisecond. The throughput of the router is evaluated for varying frame sizes: the difference in the coefficient of variation between the egress and ingress of the router is larger for smaller frame sizes than for larger ones, and larger on smaller timescales than on larger ones.

Conclusions. This thesis work concludes that the developed test automation framework can be used and extended to automate several test cases at the network layer. The framework reduces execution time and improves accuracy compared to manual testing. The throughput results are in line with the hypothesis that the performance of a physical router varies on multiple timescales; throughput is expressed using a previously suggested performance metric. The difference in coefficient-of-variation values at the egress and ingress of the router is greater on smaller timescales than on larger ones, and greater for smaller frame sizes than for larger ones.
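As a rough illustration of the measurement idea in this abstract, the sketch below bins a packet trace into intervals of a chosen timescale and compares the coefficient of variation (CV) of per-interval throughput. The timestamps, frame sizes, and binning scheme are invented for illustration; this is not the thesis's framework or data.

```python
from statistics import mean, stdev

def throughput_per_interval(timestamps, sizes_bytes, interval_s):
    """Sum the bytes that fall into each interval of length interval_s seconds."""
    if not timestamps:
        return []
    start = min(timestamps)
    n_bins = int((max(timestamps) - start) / interval_s) + 1
    bins = [0.0] * n_bins
    for t, size in zip(timestamps, sizes_bytes):
        bins[int((t - start) / interval_s)] += size
    # Convert per-interval byte counts to bits per second
    return [b * 8 / interval_s for b in bins]

def coefficient_of_variation(samples):
    """CV = standard deviation / mean; a higher CV means burstier throughput."""
    m = mean(samples)
    return stdev(samples) / m if m else float("inf")

# Example: the same synthetic packet stream measured on 1 s and 1 ms timescales.
ts = [i * 0.0007 for i in range(5000)]             # arrival times in seconds
sz = [1500 if i % 3 else 64 for i in range(5000)]  # frame sizes in bytes

cv_coarse = coefficient_of_variation(throughput_per_interval(ts, sz, 1.0))
cv_fine = coefficient_of_variation(throughput_per_interval(ts, sz, 0.001))
```

Comparing `cv_fine` against `cv_coarse` for the same capture makes the timescale effect visible: averaging over one second can hide burstiness that one-millisecond bins expose.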
775

Assessing Job Negotiation Competencies of College Students Using Evidence-Centered Design and Branching Simulations

Unknown Date (has links)
The study explored the development of a valid assessment tool for job negotiation competencies using the Evidence Centered Design framework. It involved the creation of a competency model, evidence models, and task models that guided the development of a branching simulation tool to quickly diagnose college students' knowledge and skills in job negotiation. The online tool utilized three scenarios where students play the role of job seekers negotiating with their potential future employers. This study focused on two key behaviors in negotiation – making counteroffers and making reasonable concessions. A preliminary competency model was first developed based on a literature review of negotiation research. This model was then validated by a panel of experts. The experts also validated the evidence model (how to score performance on the simulation) and the task model (what tasks should be performed to elicit evidence of performance). These activities and the experts' feedback for improving the prototype simulation provided content validity for the tool. A total of 86 undergraduate and 51 graduate students participated in the study. The students completed an online tutorial, the scenarios in the simulation, a demographics survey, and two other survey instruments that provided alternative measures of negotiation abilities. Their performance on the assessment simulation was determined by their overall competency score and value of the negotiated outcome. Students were classified as experts or novices based on their negotiation experience and knowledge of negotiation strategies. Results from the study indicated that experts performed better than novices in terms of overall competency and negotiated outcome. 
The study also compared the outcomes of the assessment tool with outcomes from the alternative measures of negotiation ability (a survey on preference for competing, collaborating, compromising, and accommodating negotiation strategies and a survey to determine self-confidence in using distributive and integrative negotiation tactics). I hypothesized that students with a high preference for competing and collaborating strategies would also have higher scores from the assessment tool. On the other hand, students who indicated a high preference for accommodating and compromising strategies would have lower scores. The results from the Preferred Negotiation Strategies survey supported my hypothesis that students who highly prefer accommodating and compromising strategies would have lower scores on overall competency and negotiated outcome. But the mixed findings for competing and collaborating preferences only partially supported my hypotheses. I also hypothesized that students who were highly confident in the use of distributive and integrative negotiation tactics would have higher scores on the assessment compared to those who have low self-confidence. The results did not support my hypotheses because there were no significant relationships between confidence and the assessment outcomes. Finally, the study also found that gender, expertise, and negotiation training have an effect on overall competency score and the negotiated outcome. This dissertation provided a case study on how to develop an assessment tool that diagnoses negotiation competencies using the ECD framework. It also provided evidence of validity for the tool by demonstrating its ability to distinguish different levels of performance by expert and novice negotiators. / A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2015. / October 30, 2015. 
/ Includes bibliographical references. / Valerie Shute, Professor Directing Dissertation; Paul Marty, University Representative; Robert Reiser, Committee Member; James Klein, Committee Member.
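The branching structure described in this abstract can be pictured with a minimal sketch: each node presents a prompt, and each choice carries evidence weights for the two target behaviors (making counteroffers and making reasonable concessions). The scenario text, choice names, and scoring weights below are invented for illustration and are not the study's competency, evidence, or task models.

```python
# A toy branching negotiation scenario as a nested dict; `next` is None at leaves.
SCENARIO = {
    "prompt": "The employer offers $45,000. What do you do?",
    "choices": {
        "accept": {"score": {"counteroffer": 0, "concession": 0}, "next": None},
        "counter_55k": {"score": {"counteroffer": 2, "concession": 0}, "next": {
            "prompt": "The employer counters with $50,000.",
            "choices": {
                "accept_50k": {"score": {"counteroffer": 0, "concession": 2},
                               "next": None},
                "hold_firm": {"score": {"counteroffer": 1, "concession": 0},
                              "next": None},
            },
        }},
    },
}

def play(scenario, decisions):
    """Walk the branching scenario, accumulating evidence for each competency."""
    totals = {"counteroffer": 0, "concession": 0}
    node = scenario
    for choice in decisions:
        selected = node["choices"][choice]
        for skill, pts in selected["score"].items():
            totals[skill] += pts
        node = selected["next"]
        if node is None:
            break
    return totals

result = play(SCENARIO, ["counter_55k", "accept_50k"])
```

In evidence-centered design terms, the per-choice weights play the role of the evidence model and the node prompts the task model; the accumulated totals feed the competency estimates.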
776

The Use of a Meta-Analysis Technique in Equating and Its Comparison with Several Small Sample Equating Methods

Unknown Date (has links)
The main objective of this study was to investigate the improvement of the accuracy of small sample equating, which typically occurs in teacher certification/licensure examinations due to a low volume of test takers per test administration, under the Non-Equivalent Groups with Anchor Test (NEAT) design by combining previous and current equating outcomes using a meta-analysis technique. The proposed meta-analytic score transformation procedure was called "meta-equating" throughout this study. To conduct meta-equating, the previous and current equating outcomes obtained from the chosen equating methods (ID (Identity Equating), Circle-Arc (CA) and Nominal Weights Mean (NW)) and synthetic functions (SFs) of these methods (CAS and NWS) were used, and then, empirical Bayesian (EB) and meta-equating (META) procedures were implemented to estimate the equating relationship between test forms at the population level. The SFs were created by giving equal weight to each of the chosen equating methods and the identity (ID) equating. Finally, the chosen equating methods, the SFs of each method (e.g., CAS, NWS, etc.), and also the META and EB versions (e.g., NW-EB, CA-META, NWS-META, etc.) were investigated and compared under varying testing conditions. These steps involved manipulating some of the factors that influence the accuracy of test score equating. In particular, the effect of test form difficulty levels, the group-mean ability differences, the number of previous equatings, and the sample size on the accuracy of the equating outcomes were investigated. The Chained Equipercentile (CE) equating with 6-univariate and 2-bivariate moments log-linear presmoothing was used as the criterion equating function to establish the equating relationship between the new form and the base (reference) form with 50,000 examinees per test form. 
To compare the performance of the equating methods, small samples of examinees were randomly drawn from examinee populations with different ability levels in each simulation replication. Each pair of new and base test forms was randomly and independently selected from all available condition-specific test form pairs; those forms were then used to obtain previous equating outcomes. Purposeful selections of the examinee ability and test form difficulty distributions were made to obtain the current equating outcomes in each replication. The previous equating outcomes were later used in the implementation of both the META and EB score transformation procedures. The effects of the study factors and their possible interactions on each accuracy measure were investigated along the entire score range and the cut (reduced) score range using a series of mixed-factorial ANOVA (MFA) procedures. The performances of the equating methods were also compared using post-hoc tests. Results show that the behavior of the equating methods varies with the level of group ability difference, test form difficulty difference, and new-group sample size. The use of both the META and EB procedures improved the accuracy of equating results on average. The META and EB versions of the chosen equating methods therefore may offer a solution for equating test forms that are similar in their psychometric characteristics and taken by new-form samples of fewer than 50 examinees. However, since many factors affect equating results in practice, one should always expect that equating methods and score transformation procedures, or, more generally, estimation procedures, may function differently, to some degree, depending on the conditions in which they are implemented.

Therefore, the recommendations for the use of the proposed equating methods in this study should be treated as a piece of information, not an absolute guideline or rule of thumb, for practicing small-sample test equating in teacher certification/licensure examinations. / A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2015. / October 28, 2015. / Collateral Information, Empirical Bayesian, Meta-Analysis, NEAT design, Small Samples, Test Equating / Includes bibliographical references. / Insu Paek, Professor Directing Dissertation; Victor Patrangenaru, University Representative; Russell Almond, Committee Member; Alysia Roehrig, Committee Member.
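The core idea of pooling previous and current equating outcomes can be sketched as a fixed-effect, inverse-variance combination in the spirit of a standard meta-analysis. The weighting scheme and the numbers below are illustrative assumptions only, not the dissertation's actual META or EB procedures.

```python
def meta_equate(estimates, variances):
    """Fixed-effect meta-analytic pooling: weight each equated score by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Current small-sample equating of one raw score, plus two previous equatings
# of the same raw score from earlier administrations (all numbers invented).
estimates = [27.1, 26.4, 26.8]   # equated scores for the same raw score
variances = [1.6, 0.9, 1.1]      # larger variance = smaller sample, less weight
pooled, pooled_var = meta_equate(estimates, variances)
```

The pooled estimate lands between the individual outcomes, and its variance is smaller than any single outcome's, which is the sense in which borrowing strength from previous equatings can stabilize small-sample results.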
777

Use of intrinsic and payoff criteria to evaluate the effectiveness of instructional materials and their impact on instructor-led training

Unknown Date (has links)
The primary purpose of this study was to determine whether an instructional materials formative evaluation model incorporating both payoff and intrinsic criteria resulted in more effective training materials in an instructor-led environment than a model relying on intrinsic criteria alone. Two revised versions of the materials were developed and delivered in a classroom setting: Version X1 was revised using intrinsic criteria only, and Version X2 on the basis of both intrinsic criteria and student data. / The study focused on the effects of these two versions in both a highly controlled and a less-controlled environment. The dependent measures were learner performance and attitude, trainer attitude, trainer effectiveness, and trainee intent to use the skills on the job. / Due to low test reliability in both environments, the posttest results were not interpretable. Participant intent to use course skills was not interpretable due to a ceiling effect. / Trainers' attitudes toward the course were more positive for Version X2. The results for trainer effectiveness were mixed. In the highly controlled environment, the ratings for this indicator were higher for Version X2 for one trainer but not the other. In the less-controlled environment, the trainers for Version X2 were rated slightly higher in most categories for both modules. A comparison of trainer delivery revealed that the better trainers performed equally well delivering both versions, while some of the weaker trainers were rated significantly higher in their delivery of Version X2 for some categories. / Learner attitudes were more positive for Version X2 in the highly controlled setting. In the less-controlled environment, the ratings for Version X2 were mixed. These results indicate that the use of payoff data as a basis for revision decisions is likely to result in instruction that is better received by trainers and learners.
/ Source: Dissertation Abstracts International, Volume: 54-02, Section: A, page: 0416. / Major Professor: Robert A. Reiser. / Thesis (Ph.D.)--The Florida State University, 1993.
778

An investigation of the dimensionality of a minimum competency exam containing multiple-choice and student-produced-response items

Unknown Date (has links)
This study's purpose was to investigate the dimensionality of a test containing multiple-choice (MC) and student-produced-response (SPR) item formats. Parallel tests (SPR and MC) were developed to assess skills for Florida's 1994 High School Competency Test (HSCT). Each test contained nine items selected or prepared to parallel those in the contrasting test and to correspond to an HSCT skill. Both the SPR and MC tests were administered to 556 tenth graders enrolled in Florida high schools. / Five analyses were used to determine the dimensionality of the test. To explore the test's unidimensionality, confirmatory factor analyses were performed, followed by higher-order factor analyses and a modified parallel analysis. To explore potential content factors, principal component analyses of the total item set and of each section of the test (SPR and MC) were conducted, as well as a multidimensional scaling analysis. / The analyses indicate, to varying degrees, that the test is sufficiently unidimensional for measurement purposes. Results of the second-order analysis and the modified parallel analysis offer a qualified yes to whether this test could be considered unidimensional enough to be analyzed using IRT procedures. No format factors were present in the study. The question regarding similarity of content for the two item formats yielded unexpected process factors for both item formats; however, the findings were tentative. / Source: Dissertation Abstracts International, Volume: 54-03, Section: A, page: 0799. / Major Professor: Jacob G. Beard. / Thesis (Ph.D.)--The Florida State University, 1993.
779

A COMPARISON OF THE REGRESSION EQUATIONS AND VALIDITY COEFFICIENTS OF TRADITIONAL AND NONTRADITIONAL FULL-TIME DEGREE-SEEKING STUDENTS AT FIVE FLORIDA UNIVERSITIES (RETURNING STUDENTS, ADULT RETURNING)

Unknown Date (has links)
This study was designed to compare traditional and nontraditional full-time degree-seeking students' regression equations and validity coefficients across five Florida post-secondary institutions. The single multiple regression method with dummy variables was selected to compare the two groups' regression planes across institutions (FSU, UF, UCF, USF, and FAMU), within age groups, and between age groups within institutions. The multiple partial statistic was selected to test for the interaction effect between the two indicator variables (Age and Institutions) and the three main effects (HSGPA, SATV and SATQ). The across institutions and within age groups validity coefficients variability of the high school grade point average and the Scholastic Aptitude Test verbal and quantitative scores was investigated, using meta-analysis methodology. / A sample of 883 students was retrieved from the State University System (SUS) students' files. With the exception of age, all the students were selected to be equivalent on the following characteristics: full-time enrollment, first time in college, degree-seeking, and accepted under regular admission policies. This selection procedure limited the sample size of the nontraditional group of students and, therefore, generalizations regarding the results of this study should be made cautiously. / It was concluded that a common prediction system was not practical and that a separate prediction system should be developed for each of the two groups compared within the five postsecondary institutions included in this study. The findings showed possible systematic overprediction or underprediction of the nontraditional students' performance in college when using a traditional student-derived regression equation to predict nontraditional students' performance. 
It was also apparent that the validity coefficients of nontraditional students' high school grade point average and of traditional students' Scholastic Aptitude Test quantitative scores varied from institution to institution. There was no variation across institutions or within age groups in the Scholastic Aptitude Test verbal validity coefficients. As expected, high school grade point average was the better predictor of traditional students' performance in college, just as the Scholastic Aptitude Test verbal score was for the nontraditional students. The average validity coefficients of the nontraditional students were, in all but one instance, lower than those of the traditional group. It was recommended that differential validity and regression systems for traditional and nontraditional students be routinely studied. / Source: Dissertation Abstracts International, Volume: 47-05, Section: A, page: 1706. / Thesis (Ph.D.)--The Florida State University, 1986.
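The dummy-variable approach described in this abstract can be sketched as fitting a full regression with a group indicator and interaction terms, then comparing it to a common-equation model with an F test on the extra parameters. All data, coefficients, and variable names below are fabricated for illustration; this is not the study's SUS dataset or exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
hsgpa = rng.uniform(2.0, 4.0, n)           # high school GPA
satv = rng.normal(500, 80, n)              # SAT verbal score
nontrad = rng.integers(0, 2, n)            # 1 = nontraditional student (dummy)
# Simulate a group-specific slope so the two groups have different planes.
gpa = (0.5 * hsgpa + 0.002 * satv
       + 0.3 * nontrad * (satv - 500) / 100
       + rng.normal(0, 0.2, n))

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients and sum of squared errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

ones = np.ones(n)
X_common = np.column_stack([ones, hsgpa, satv])              # one plane for all
X_full = np.column_stack([ones, hsgpa, satv, nontrad,
                          nontrad * hsgpa, nontrad * satv])  # separate planes
_, sse_common = fit_ols(X_common, gpa)
_, sse_full = fit_ols(X_full, gpa)
# F statistic for the 3 extra parameters (intercept and slope shifts)
f_stat = ((sse_common - sse_full) / 3) / (sse_full / (n - 6))
```

A large `f_stat` says the common equation fits significantly worse than separate group equations, which is the condition under which a traditional-student-derived equation would systematically over- or underpredict nontraditional students' performance.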
780

Linguistic and cultural influences on differential item functioning for Hispanic examinees in a standardized secondary level achievement test

Unknown Date (has links)
The issue of differential item functioning (DIF) in standardized tests has generated increasing interest in the measurement and testing communities. An item is said to contain DIF if examinees of equal proficiency from different gender, ethnic, or other groups have an unequal probability of responding correctly to the item. / Although the majority of DIF research has focused on its identification through statistical procedures, recent published studies have addressed the arguably more important issue of the causes of DIF. To date, however, most studies of the causes of DIF have been concerned with post-secondary situations. / The present study identified sources of DIF within a widely used secondary school achievement battery. Responses on the Vocabulary and Reading Comprehension sections of the Stanford Achievement Test were obtained from 1580 White and 3223 Hispanic eighth graders in Dade County, Florida. A quantitative technique was used to detect items exhibiting DIF. Once these items were identified, a review panel of expert bilingual judges examined them in terms of linguistic and cultural factors associated with DIF between Hispanics and Whites. / Results suggest that, when comparing Hispanic and White students of the same ability, the use in test items of true cognate words frequently used in Spanish will favor the performance of Hispanics. In contrast, several conditions may favor Whites, including the use of non-true-cognate words infrequently used in English, linguistic complexity, idiomatic expressions derived from technical language, and poetry. Moreover, words and phrases with a special cultural meaning for one of the groups will favor members of that group (White or Hispanic), and the use of settings with which Hispanic students are likely to be less familiar will favor the performance of Whites. / Findings from the study confirm the need to stress certain areas in the instruction of Hispanic students.
Even though DIF is not necessarily indicative of item bias, its appearance in test items might be a sign of instructional deficiencies. Findings also should alert test developers to distinguish between construct-relevant DIF and DIF associated with test invalidity. / Source: Dissertation Abstracts International, Volume: 54-09, Section: A, page: 3411. / Major Professor: Albert C. Oosterhof. / Thesis (Ph.D.)--The Florida State University, 1993.
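This abstract does not name its quantitative DIF-detection technique, so the sketch below uses a common stand-in, the Mantel-Haenszel procedure: examinees are stratified by total score (a proxy for equal proficiency), and a common odds ratio compares item performance of the reference and focal groups within strata. The data are fabricated for illustration.

```python
from collections import defaultdict

def mantel_haenszel_odds_ratio(records):
    """records: iterable of (group, total_score, correct), group in {'ref', 'focal'}."""
    # Per-stratum 2x2 counts: [correct, incorrect] for each group.
    strata = defaultdict(lambda: {"ref": [0, 0], "focal": [0, 0]})
    for group, score, correct in records:
        strata[score][group][0 if correct else 1] += 1
    num = den = 0.0
    for cells in strata.values():
        a, b = cells["ref"]      # reference group: correct, incorrect
        c, d = cells["focal"]    # focal group: correct, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den else float("inf")

# Tiny example: within each score stratum the groups perform identically,
# so the MH odds ratio should equal 1 (no DIF signal).
data = []
for score in range(3):
    data += [("ref", score, 1)] * (score + 2) + [("ref", score, 0)] * 2
    data += [("focal", score, 1)] * (score + 2) + [("focal", score, 0)] * 2
mh = mantel_haenszel_odds_ratio(data)
```

An MH odds ratio near 1 indicates no DIF signal; values well above or below 1 flag the item for the kind of expert linguistic and cultural review the study describes.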
