61

DEVELOPMENT OF AN EVALUATION FEEDBACK PROCESS AND AN EVALUATION UTILIZATION ASSESSMENT INSTRUMENT

Unknown Date
Evaluation feedback that is not presented appropriately for its prospective users might not be used. This study determined the current and preferred characteristics of evaluation feedback, and the current and desired levels of utilization of that feedback. Subjects were the evaluator and the receivers of evaluation information for the Primary Education Program (PREP) at the Leon County Public Elementary Schools. / The research culminated in three products. The first was an evaluation feedback process for the PREP program, fitted to public school decision-makers' preferences for specific types of evaluation feedback. Second, an instrument was developed to assess utilization of evaluation information, which can also be used for other school programs. Finally, the Leon County Public Schools now have immediately applicable procedures for improving feedback for programs and for assessing utilization of that feedback. More broadly, the resulting instruments, process, and knowledge can be adapted for use in other school districts where utilization of evaluation information is less than optimal. / Source: Dissertation Abstracts International, Volume: 46-08, Section: A, page: 2275. / Thesis (Ph.D.)--The Florida State University, 1985.
62

Basic skills achievement patterns from kindergarten through tenth grade

Unknown Date
A descriptive and exploratory longitudinal study was conducted to investigate whether a gap existed between grade-level standards and the academic achievement of students with low school readiness, and to determine whether the gap widened as those students progressed through school. The cohort of interest was students who had taken the KITE readiness test and had attended schools in the Leon County school district from kindergarten through tenth grade during the school years 1974-75 through 1984-85. Students were divided into low, average, and high readiness groups on the basis of their readiness test scores. / Mean differences and variability in communication and mathematics achievement among readiness groups on the norm-referenced CTBS test series and the criterion-referenced SSAT-I test series were examined over time. Because maturation and learning play such a vital role in academic achievement, the achievement patterns of the cohort were examined in the context of the fan spread growth model, which assumes that as variability within each group increases over time, so does the mean gap between the groups of interest. / Analyses revealed that students in the low readiness groups did fall further from the set academic standard on the CTBS test series; however, the reverse was true for the criterion-referenced SSAT-I test series. Although little fan spread was found between the low and average readiness groups on the CTBS tests, a notably increasing fan spread was found between the average and high readiness groups. For the SSAT-I tests, fan spread between readiness groups was overall small and decreasing. When the data were analyzed by race and by sex, the largest differences were found between white students and black students, not between males and females. For the CTBS, black males scored consistently lower than other subgroups. For the SSAT-I, black females performed below other subgroups.
/ Source: Dissertation Abstracts International, Volume: 49-06, Section: A, page: 1437. / Major Professor: F. Craig Johnson. / Thesis (Ph.D.)--The Florida State University, 1988.
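The fan spread growth model invoked in this abstract can be illustrated with a small sketch. All numbers below are invented for illustration and are not taken from the study's data:

```python
# Hypothetical illustration of the fan spread growth model: group means
# diverge over time while within-group variability also grows.
def fan_spread(start_means, growth_rates, sd_growth, years):
    """Project each group's mean and within-group SD for each year."""
    trajectories = []
    for mean, rate in zip(start_means, growth_rates):
        means = [mean + rate * t for t in range(years)]
        sds = [1.0 + sd_growth * t for t in range(years)]
        trajectories.append((means, sds))
    return trajectories

# Low-, average-, and high-readiness groups (invented starting points and
# growth rates), kindergarten (t = 0) through tenth grade (t = 10).
groups = fan_spread(start_means=[40.0, 50.0, 60.0],
                    growth_rates=[4.0, 5.0, 6.0],
                    sd_growth=0.5, years=11)

# Under the model, the low-high mean gap widens over the eleven years,
# and so does the spread within each group.
gap_k = groups[2][0][0] - groups[0][0][0]     # gap at kindergarten
gap_10 = groups[2][0][10] - groups[0][0][10]  # gap at tenth grade
```

The study's CTBS finding matches this pattern between the average and high readiness groups; its SSAT-I finding (small, decreasing spread) is the case the model does not predict.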
63

Sampling effects of writing topics and discourse modes on generalizability of individual student and school writing performance on a standardized fourth-grade writing assessment

Unknown Date
This study investigated the generalizability of writing performance of individual students and schools within the context of a large-scale direct writing assessment. The study focused on the sampling effects of the two major components of writing tasks--writing topics and discourse modes. Generalizability studies were conducted to estimate these sampling effects using data from the 1994 Florida Writing Assessment for Fourth Grade. / Results at both the individual and school levels indicate that topics requiring the same discourse skills had little effect on writing performance. The study found significant discourse mode effects, which constrained the generalization of writing scores beyond the sampled discourse mode. The results suggest that different aims of discourse may place unequal cognitive demands on students; as a result, writing competency in one discourse domain may not generalize well to other aims of discourse. / The study also investigated the effects of school size on the generalizability of school writing scores. Writing scores of large schools showed a higher level of generalizability than those of smaller schools, indicating a positive relationship between generalizability and school size. / The study confirmed the need for a large number of tasks to obtain a reliable estimate of writing competency in both individual- and school-level assessments. The study, however, demonstrated that an assessment can provide a more reliable estimate of writing competency for schools than for individual students. Furthermore, school-level assessment based on a matrix sampling design proved to be a viable solution for overcoming the limited-sampling problem, thus improving the generalizability of school writing scores. / Source: Dissertation Abstracts International, Volume: 56-03, Section: A, page: 0901. / Major Professor: Albert Oosterhof. / Thesis (Ph.D.)--The Florida State University, 1995.
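The conclusion that many tasks are needed for a reliable estimate follows directly from generalizability theory. A minimal D-study sketch for a persons-by-tasks design shows how the generalizability coefficient rises as tasks are added; the variance components below are invented for illustration, not the study's estimates:

```python
# Generalizability coefficient for relative decisions in a persons x tasks
# design: E(rho^2) = var_p / (var_p + var_pt,e / n_tasks).
def g_coefficient(var_person, var_residual, n_tasks):
    return var_person / (var_person + var_residual / n_tasks)

# Hypothetical variance components in which task-sampling error dominates.
var_p, var_pt_e = 0.30, 0.70

# Project the coefficient for 1, 2, 4, and 8 writing tasks.
projection = {n: round(g_coefficient(var_p, var_pt_e, n), 2)
              for n in (1, 2, 4, 8)}
```

With these hypothetical components, a single topic yields a coefficient of only 0.30, while eight topics push it to 0.77, which is why single-prompt assessments generalize poorly at the individual level.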
64

Performance assessment: Measurement issues of generalizability, dependability of scoring, and relative information on student performance

Unknown Date
The purpose of this study was to investigate whether the limited number of observations that might be included in a performance assessment would adequately generalize to potential circumstances that would not be observed. Related studies were also conducted to determine how dependably scores are assigned to measures of students' performance and what different information is provided by a paper-and-pencil test versus a performance assessment. A performance assessment was developed in the context of an introductory graduate statistics course and administered to the graduate students along with a paper-and-pencil test. / A generalizability study was used to estimate the dependability of the performance assessment and to improve the design of the assessment. Dependability of scoring was analyzed through the application of classical test theory and generalizability theory. Correlational and exploratory factor analyses were conducted to determine the relative information provided by the two test formats. / This study found that raters do not introduce substantial error into the measurement of performance. Rather, the major source of error is the inconsistency of student performance across tasks, indicating that the number of tasks could be increased to achieve a reliable score for student performance. The correlation between overall scores assigned by two raters and the results of the G study suggest that raters are able to evaluate student performance consistently and that, eventually, the number of raters can be reduced to one and eliminated as a facet in the design of the generalizability study. A relatively high correlation was found between the two measures, and there was no evidence of a format factor associated with the use of performance assessment. The factor-analytic solution suggests a relationship between factor structure and item discrimination. / Source: Dissertation Abstracts International, Volume: 56-04, Section: A, page: 1328. / Major Professor: Albert C. Oosterhof.
/ Thesis (Ph.D.)--The Florida State University, 1995.
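The pattern this abstract reports -- consistent raters but inconsistent performance across tasks -- can be sketched with invented scores (3 students, 3 tasks, 2 raters; none of these numbers come from the study):

```python
import statistics

# Invented scores chosen so that the two raters agree closely while each
# student's performance swings across tasks. Keys are (student, task);
# values are the two raters' scores.
scores = {
    ("s1", "t1"): (7, 7), ("s1", "t2"): (4, 5), ("s1", "t3"): (9, 9),
    ("s2", "t1"): (5, 5), ("s2", "t2"): (8, 8), ("s2", "t3"): (3, 4),
    ("s3", "t1"): (6, 6), ("s3", "t2"): (9, 9), ("s3", "t3"): (5, 5),
}

# Mean absolute disagreement between the two raters: the rater facet.
rater_error = statistics.mean(abs(a - b) for a, b in scores.values())

# Mean within-student spread across tasks (using rater-averaged scores):
# the person-by-task facet, the dominant error source in the study.
def task_spread(student):
    means = [sum(scores[(student, t)]) / 2 for t in ("t1", "t2", "t3")]
    return statistics.stdev(means)

task_error = statistics.mean(task_spread(s) for s in ("s1", "s2", "s3"))
```

In data like these, `task_error` dwarfs `rater_error`, which is why the design benefits more from adding tasks than from adding raters.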
65

Changing the language of instruction for Mathematics and Science in Malaysia: the PPSMI policy and the washback effect of bilingual high-stakes secondary school exit exams

Tan, Hui May January 2010
No description available.
66

Best Practices for Student Success on the ACT

Boyer, Grant Coday 24 April 2013
Large achievement gaps have been found in ACT scores between high schools within the same state and in comparisons between states. In Missouri, four public high schools consistently scored four points higher than the Missouri average for the years 2007–2011. States such as Nebraska, Minnesota, and Iowa have shown consistently above-average scores compared with states having similar participation numbers. Schmoker (2006) believed that, due to the existing culture of schools and school leadership, learning from others' successes is often discouraged; therefore, this study was conducted in an attempt to discover the best practices used in high-achieving high schools and in states that obtain high student achievement on the ACT. Educational leaders within the top 5% of high schools in Missouri, based on a five-year (2007–2011) average of ACT scores, were surveyed to determine the successful teaching strategies and programs educators in these schools are implementing. Leaders from consistently successful states (having higher-than-average ACT scores with a high percentage of participation) took part in a survey to elicit further characteristics associated with high achievement. Furthermore, the trends and approaches that contribute to student success in states that require the ACT were examined through interview responses. While the study did not reveal any new best practices, the findings supported many best practices already in existence and, most importantly, showed the necessity of developing a learning culture that emphasizes success and achievement.
67

The effects of misclassified training data on the classification accuracy of supervised and unsupervised classification techniques

Holden, Jocelyn E. January 2009
Thesis (Ph.D.)--Indiana University, School of Education, 2009. / Title from PDF t.p. (viewed on Feb. 8, 2010). Source: Dissertation Abstracts International, Volume: 70-05, Section: A, page: 1636. Adviser: Ginette Delandshere.
68

The goal programming approach for test construction

Hsu, Yung-Chen, 1962- January 1993
A goal programming approach for selecting items from an item bank to construct a test, based on item response theory (IRT), was proposed. This approach can simultaneously handle multiple and conflicting goals, which is more realistic and practical than the linear programming approach, which deals with only a single goal. An example was presented to show the procedure for applying the goal programming approach to construct a test based on the IRT one-parameter logistic model. The results provide the test constructor with flexible ways to select items to meet different needs.
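As a rough illustration of the multiple-goal idea (not the dissertation's actual formulation), the sketch below greedily selects one-parameter logistic (Rasch) items so as to shrink the weighted shortfall from information targets at two ability points. The item difficulties, targets, and weights are all invented:

```python
import math

def info(theta, b):
    """Item information under the one-parameter logistic (Rasch) model."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

bank = [-1.5, -1.0, -0.5, 0.0, 0.0, 0.5, 1.0, 1.5]  # item difficulties
thetas = [-1.0, 1.0]   # ability points where information is targeted
targets = [0.6, 0.6]   # goal: test information of 0.6 at each point
weights = [1.0, 1.0]   # both goals weighted equally

def shortfall(selected):
    """Weighted sum of unmet information targets (the goal deviations)."""
    return sum(w * max(0.0, t - sum(info(th, bank[i]) for i in selected))
               for th, t, w in zip(thetas, targets, weights))

# Greedy heuristic: at each step add the item that most reduces the total
# deviation. A true goal program would minimize these deviations exactly
# with an optimizer rather than greedily.
selected = []
for _ in range(4):  # fixed test length of four items
    remaining = [i for i in range(len(bank)) if i not in selected]
    selected.append(min(remaining, key=lambda i: shortfall(selected + [i])))
```

Conflicting goals (e.g., high information at both ability points with a short test) are handled by the weights: lowering one goal's weight lets the selection favor the others, which is the flexibility the abstract describes.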
69

What's in a grade? A mixed methods investigation of teacher assessment of grammatical ability in L2 academic writing

Neumann, Heike January 2011
This study investigates how grammatical ability is assessed in L2 academic writing classrooms. In the assessment literature, grammatical ability is defined to include syntax and morphology (Purpura, 2004; Weigle, 2002) and lexical forms, cohesion, and information management on the subsentential, sentential, and suprasentential levels (Purpura, 2004). Writing teachers would, therefore, need to attend to morphosyntactic and other grammatical aspects in L2 texts that serve to organize information and create cohesion on the sentence, paragraph, and text levels. In a mixed methods triangulation design (Creswell & Plano Clark, 2007) using both quantitative and qualitative methods, this study examines the indicators of grammatical ability that writing teachers (n = 2) attend to when assessing their students' (n = 33) grammatical ability in academic essays in one high-intermediate and one advanced L2 writing course at an English-medium university in Canada. In addition, the study considers to what extent the students' learning is affected by the teachers' assessment criteria. In the first phase of this study, the students' essay exams and the teacher-assigned grammar grade were collected and analyzed quantitatively using accuracy and complexity measures as indicators of morphosyntactic ability. They were also examined qualitatively within a framework of systemic functional linguistics to assess the students' ability to manage information in their texts. In phase two, student questionnaires were administered, and student interviews were conducted to determine the students' knowledge of the teachers' assessment criteria for grammar. In phase three, the teachers were interviewed about their criteria and their priorities in the assessment of grammar. Finally, the results from all three phases and all four data sources were integrated to come to an overall interpretation of the findings.
The results indicate that writing teachers focus above all on grammatical accuracy when assessing their students' grammatical ability. Consequently, writing teachers seem to assess a reduced construct of grammatical ability in academic writing, compared to definitions in the L2 assessment literature. This emphasis has an impact on how students learn in these L2 writing classrooms. This dissertation concludes with a discussion of implications and makes recommendations for L2 writing assessment based on the findings of this study.
70

Measuring morphological awareness across languages

Quiroga Villalba, Jorge January 2013
This study was part of a larger study that investigated the effects of biliteracy instruction on 2nd-grade students' morphological awareness in English and French. In order to measure their morphological awareness, a test was designed in each of the two languages: the English and the French Morphological Awareness Test (MAT). The design of the current study aimed at answering two questions. The first concerned the components necessary to develop a viable instrument for measuring morphological awareness across languages. The second concerned the relationships among measures of vocabulary knowledge, phonological awareness, and morphological awareness in each language. To that end, both versions of the MAT, plus four other measures (two to assess phonological awareness in English and French, and two to determine receptive vocabulary knowledge in each language), were administered to 72 children who were English-dominant, French-dominant, or bilingual. The concept of lexical frequency was operationalized to design three levels of difficulty on the MAT. Examiners and coders were carefully trained to administer and score the MAT, which contained two sections, with five items in each section borrowed from earlier studies of morphological awareness. These items served to determine convergent validity, whereas the measures of phonological awareness and vocabulary were used to establish discriminant validity. The results of the statistical analysis indicate high levels of reliability and validity for both versions of the MAT.
