51

THE INVESTIGATION OF SELF-EVALUATION PROCEDURES FOR IDENTIFYING INSTRUCTIONAL NEEDS OF TEACHERS

Unknown Date
Source: Dissertation Abstracts International, Volume: 37-10, Section: A, page: 6428. / Thesis (Ph.D.)--The Florida State University, 1976.
52

EXPECTED PRODUCTIVITY CURVES FOR INSTRUCTION FOR A SYSTEM OF HUMAN RESOURCES ACCOUNTING IN HIGHER EDUCATION

Unknown Date
Source: Dissertation Abstracts International, Volume: 38-04, Section: A, page: 2069. / Thesis (Ph.D.)--The Florida State University, 1977.
53

THE EFFECT OF STRUCTURAL CHARACTERISTICS UPON ITEM DIFFICULTIES IN STANDARDIZED READING COMPREHENSION TESTS

Unknown Date
Source: Dissertation Abstracts International, Volume: 38-04, Section: A, page: 2070. / Thesis (Ph.D.)--The Florida State University, 1977.
54

The stability of item-parameter estimates across time: A comparison of item response models and selected estimation procedures

Unknown Date
This study examined the stability of item parameter estimates across time using (a) different item response models and (b) different estimation procedures. These stabilities were examined in the context of item banking. According to Item Response Theory, the item parameter estimates should not differ from administration to administration. Any differences in the item parameter estimates may be attributed, among other factors, to changes in the emphasis of school curricula over time, the use of an inappropriate model, or the error associated with the procedures used to estimate the item parameters. / The factors and their levels in this investigation were (a) model: one-, two-, and three-parameter models, with a common value for guessing; and (b) estimation procedure for item parameters: Joint Maximum Likelihood (JML) using the LOGIST computer program, and Marginal Maximum Likelihood (MML) using the BILOG computer program. / The test used in this study was the SSAT-II Mathematics test. Data for this study were obtained from the results of the March 1985 and March 1986 test administrations. This test was administered to approximately 100,000 10th-grade students in Florida who had to pass this test to graduate. The Mathematics test consisted of 75 items. Of these, 49 items were common to both the 1985 and 1986 tests. The analyses were performed on four representative samples, each of 1000 first-time takers. / Results showed that, regardless of the model used and estimation procedure employed, the average parameter estimate changes between the 1985 and 1986 test administrations were significantly larger than the parameter estimate changes between the two 1985 samples. However, the changes were not spread across most of the items: only two items demonstrated significant change beyond what was expected from sampling fluctuation. / The one-parameter model produced significantly smaller mean differences between ICCs than the two-parameter and the modified three-parameter models. This pattern of differences was similar for both the JML and the MML estimation procedures. The MML estimation procedure produced significantly smaller mean differences than the JML estimation procedure. / Source: Dissertation Abstracts International, Volume: 49-06, Section: A, page: 1436. / Major Professor: F. J. King. / Thesis (Ph.D.)--The Florida State University, 1988.
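For readers unfamiliar with the models compared in the abstract above, the following is a minimal Python sketch of the logistic item characteristic curve (ICC) used by the one-, two-, and three-parameter models, together with a simple mean-ICC-difference index of item-parameter drift. All parameter values, item names, and the drift index shown here are illustrative assumptions, not estimates from the SSAT-II data or the dissertation's own analyses.

```python
import numpy as np

def icc(theta, a=1.0, b=0.0, c=0.0):
    """Three-parameter logistic ICC with the usual D = 1.7 scaling constant.
    a: discrimination, b: difficulty, c: lower asymptote (guessing).
    The one-parameter model fixes a and c; the two-parameter model fixes c = 0."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

theta = np.linspace(-3, 3, 61)                # ability scale
p_1pl = icc(theta, a=1.0, b=0.5)              # one-parameter (Rasch-like)
p_2pl = icc(theta, a=1.4, b=0.5)              # two-parameter
p_3pl = icc(theta, a=1.4, b=0.5, c=0.2)       # modified three-parameter, common c

# One simple index of drift between two calibrations of the same item:
# the mean absolute difference between its ICCs over the ability range.
p_1985 = icc(theta, a=1.3, b=0.4, c=0.2)      # hypothetical 1985 estimates
p_1986 = icc(theta, a=1.2, b=0.6, c=0.2)      # hypothetical 1986 estimates
print("mean |ICC difference| =", round(float(np.mean(np.abs(p_1985 - p_1986))), 4))
```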
55

The differences among reliability estimates of randomly parallel tests and their effects on Tucker non-random group linear equating

Unknown Date
The purpose of the study was to investigate the differences among reliability estimates of randomly parallel tests and their effects on Tucker non-random group linear equating. / The study was conducted using a sample of 988 tenth graders. One hundred 20-item tests were randomly generated from the students' response files. These tests were called "current" forms. Each of these randomly parallel forms was equated to each of five reference forms. / It was found that differences between reliabilities of current and reference tests had an effect on the accuracy of Tucker equating. The equating error was systematic and predictable. Larger differences in the reliabilities of test forms tended to produce larger errors. / Given an arbitrary unweighted error of .50, or a weighted error of .15, on the standard T-scale, a range of acceptable differences in reliability estimates was proposed for Tucker equating. Differences in the reliability of the current and reference tests of less than .025 produced negligible equating errors. / Source: Dissertation Abstracts International, Volume: 52-03, Section: A, page: 0890. / Major Professor: Jacob G. Beard. / Thesis (Ph.D.)--The Florida State University, 1991.
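The Tucker non-random groups linear equating referred to above has a standard textbook form: synthetic-population means and variances are built from regressions of each form on a common anchor, and a linear transformation maps current-form scores to the reference scale. The sketch below is an illustrative reconstruction of that textbook method under its usual assumptions, not the dissertation's own procedure; the function and variable names are invented for this example.

```python
import numpy as np

def tucker_linear_equating(x, v1, y, v2, w1=0.5):
    """Tucker (non-random groups) linear equating of current form X to reference form Y.

    x, v1 : current-form totals and anchor scores for group 1
    y, v2 : reference-form totals and anchor scores for group 2
    w1    : weight given to group 1 in the synthetic population

    Returns a function mapping a current-form score onto the reference scale.
    """
    x, v1, y, v2 = map(np.asarray, (x, v1, y, v2))
    w2 = 1.0 - w1
    var = lambda z: np.var(z, ddof=1)
    g1 = np.cov(x, v1)[0, 1] / var(v1)     # slope of X on anchor V in group 1
    g2 = np.cov(y, v2)[0, 1] / var(v2)     # slope of Y on anchor V in group 2
    dmu, dvar = v1.mean() - v2.mean(), var(v1) - var(v2)

    # Synthetic-population moments for X and Y
    mu_x = x.mean() - w2 * g1 * dmu
    mu_y = y.mean() + w1 * g2 * dmu
    var_x = var(x) - w2 * g1**2 * dvar + w1 * w2 * (g1 * dmu) ** 2
    var_y = var(y) + w1 * g2**2 * dvar + w1 * w2 * (g2 * dmu) ** 2

    slope = np.sqrt(var_y / var_x)
    return lambda score: mu_y + slope * (np.asarray(score) - mu_x)
```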
56

Ridge regression: Application to educational data

Unknown Date
Ridge regression is a regression technique that was developed to remedy the problem of multicollinearity in regression analysis. The major problem with multicollinearity is that it causes high variances in the estimation of regression coefficients. The ridge model introduces some bias into the regression equation in order to reduce the variance of the estimators. The purposes of this study were to demonstrate the application of the ridge regression model to educational data and to compare the characteristics and performance of the ridge method and the least squares method. In this study, four types of ridge regression were compared with the least squares method: ridge trace, generalized, ordinary, and directed ridge. / The sample of this study consisted of 141 public schools in Dade County, Florida. The dependent variable was the students' average scores in mathematical computation and reading comprehension. Six variables representing teacher and student characteristics were employed as the predictors. The performance of the ridge and least squares methods was compared in terms of the confidence interval of an individual estimator and the predictive accuracy of the whole model. Since statistical inference for the ridge method has not been completely developed, the bootstrap technique, with a sample size of twenty, was used to calculate the confidence interval of each estimator. / The study resulted in a successful application of ridge regression to school-level data, in which it was found that (1) ridge regression yielded a smaller confidence interval for every estimated regression coefficient and (2) ridge regression produced higher predictive accuracy than ordinary least squares. / Since the results were based on just one particular set of data, it cannot be guaranteed that ridge regression will outperform the least squares method in all cases. / Source: Dissertation Abstracts International, Volume: 49-03, Section: A, page: 0487. / Major Professor: F. J. King. / Thesis (Ph.D.)--The Florida State University, 1988.
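The ridge estimator discussed above has a simple closed form, beta_hat(k) = (X'X + kI)^(-1) X'y, where k is the biasing constant. The sketch below shows this estimator on simulated data with induced multicollinearity; the sample size of 141 and the six predictors echo the study's design, but the data, coefficients, and choice of k values are invented for illustration and do not reproduce any of the four ridge variants named in the abstract.

```python
import numpy as np

def ridge_coefficients(X, y, k):
    """Ridge estimates beta_hat = (X'X + kI)^(-1) X'y on standardized predictors.
    k = 0 reproduces ordinary least squares; larger k shrinks the coefficients,
    trading a small bias for a reduction in variance under multicollinearity."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yc = y - y.mean()
    return np.linalg.solve(Xs.T @ Xs + k * np.eye(Xs.shape[1]), Xs.T @ yc)

# Simulated data: 141 cases and six predictors mirror the study's design,
# but the values and the induced collinearity are hypothetical.
rng = np.random.default_rng(1)
X = rng.normal(size=(141, 6))
X[:, 5] = X[:, 4] + 0.05 * rng.normal(size=141)      # near-duplicate predictor
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 2.0, 2.0]) + rng.normal(size=141)
for k in (0.0, 0.1, 1.0):
    print(k, np.round(ridge_coefficients(X, y, k), 2))
```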
57

Effects of local dependence in achievement tests on IRT ability estimation

Unknown Date
This study investigated the effects of commonly occurring violations of the assumption of local independence among test items. Item responses to college-level communications and mathematics tests were simulated using a multidimensional item response theory model. These data sets were then assigned different degrees of dependency, as defined by Ackerman's model, and the effects on the one- and three-parameter models' ability estimates were examined. The results suggest caution in interpreting the unidimensional ability estimate when extreme dependency is present with heterogeneous subtests. / Source: Dissertation Abstracts International, Volume: 49-06, Section: A, page: 1439. / Major Professor: Jacob Beard. / Thesis (Ph.D.)--The Florida State University, 1988.
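As context for the simulation described above, here is a hedged sketch of generating dichotomous responses from a two-dimensional compensatory MIRT model, with one cluster of items loading on a second dimension so that they behave as locally dependent under a unidimensional fit. Ackerman's specific dependency model and the study's actual item parameters are not reproduced here; every value shown is an assumption.

```python
import numpy as np

def simulate_mirt_responses(n_persons, a1, a2, d, rho=0.0, seed=0):
    """Simulate 0/1 responses from a two-dimensional compensatory MIRT model:
        P(u = 1 | theta1, theta2) = 1 / (1 + exp(-(a1*theta1 + a2*theta2 + d)))
    Items sharing a strong loading on theta2 act as locally dependent
    when a unidimensional model is later fitted to these data."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    theta = rng.multivariate_normal([0.0, 0.0], cov, size=n_persons)
    z = theta[:, [0]] * a1 + theta[:, [1]] * a2 + d      # n_persons x n_items
    p = 1.0 / (1.0 + np.exp(-z))
    return (rng.uniform(size=p.shape) < p).astype(int)

# Hypothetical 20-item test: the last five items share a second-dimension
# loading (e.g., a common passage), creating one locally dependent cluster.
a1 = np.full(20, 1.0)
a2 = np.concatenate([np.zeros(15), np.full(5, 1.2)])
responses = simulate_mirt_responses(2000, a1, a2, d=np.zeros(20))
print(responses.shape, responses.mean(axis=0)[:5])
```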
58

THE CLASSIFICATION OF STUDENTS WITH RESPECT TO ACHIEVEMENT, WITH IMPLICATIONS FOR STATE-WIDE ASSESSMENT

Unknown Date
Source: Dissertation Abstracts International, Volume: 38-05, Section: A, page: 2727. / Thesis (Ph.D.)--The Florida State University, 1977.
59

The effects of alternate testing strategies on student achievement

Unknown Date
The present study compared the effects of three classroom testing strategies on student achievement. The strategies varied with respect to both the detail of the feedback provided to students after each unit test and the availability of a retest. Within one strategy, students were informed only of their total test score and had no opportunity to take a retest. Within a second strategy, students were provided scores on each skill assessed by the test and allowed to take a retest one week later. Within the third strategy, students were provided detailed feedback concerning the nature of the problems they had experienced with each skill, in addition to the scores on each skill and the option of taking a retest. / The study was conducted in the context of an introductory graduate statistics course. Students were randomly assigned to one of the three testing strategies for the duration of the term. In contrast to previous research on mastery learning, the curriculum and delivery of instruction were held constant across treatment conditions. / The achievement of students in the respective groups was contrasted on two summative exams. The first exam measured the exact skills assessed by the unit tests and retests. The second exam measured a more generic set of skills and was designed to test students' ability to generalize their knowledge. No significant differences in achievement on either exam were observed between treatment conditions. The findings of this study suggest that when instructional time and objectives are held constant, simply providing students with detailed feedback on their test performance and the opportunity to take a retest is not sufficient to improve student achievement. / Source: Dissertation Abstracts International, Volume: 52-11, Section: A, page: 3898. / Major Professor: Albert C. Oosterhof. / Thesis (Ph.D.)--The Florida State University, 1991.
60

COMPUTER-BASED TESTING: A COMPARISON OF MODES OF ITEM PRESENTATION

Unknown Date
This study investigated the effects of two modes of computerized test item presentation on student performance, total testing time, and anxiety. The item type used was one that might be created by a teacher as part of a test after a unit of instruction. The test administration was designed to approximate a classroom application of computers to testing. / Sixty students enrolled in the Educational Psychology class at Florida State University were randomly assigned to either the paced or unpaced mode of item presentation. Both modes entailed presenting 30 test items, covering two instructional objectives, one at a time on a computer screen. A total of 15 minutes was allowed. In the paced mode, an item remained on the screen for 30 seconds and was then removed. Examinees could not retake items. In the unpaced mode, an item remained on the screen until the examinee removed it. Examinees could retake any item. Following the timed test, an untimed anxiety questionnaire was administered on the screen. / The statistical design of the study was a two-factor 2 (treatment) x 2 (computer experience) analysis of covariance. Covariance analysis was used to control for variance between students due to differences in age and general classroom achievement. The dependent variables were scores on the computer-administered test, total testing time, and scores on the anxiety questionnaires. / No significant differences in test scores were found for the paced and unpaced groups, nor for the computer-experienced and computer-inexperienced groups. A significant treatment effect on testing time was found. The paced group took significantly less time to complete the test, regardless of computer experience. No significant treatment or computer-experience effects on anxiety were found. It appears that both computer-experienced and computer-inexperienced students can take a paced test with no decrease in scores or increase in anxiety, but with a substantial decrease in testing time. / Source: Dissertation Abstracts International, Volume: 48-07, Section: A, page: 1748. / Thesis (Ph.D.)--The Florida State University, 1987.
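To illustrate the 2 (treatment) x 2 (computer experience) analysis of covariance described above, the following sketch uses the statsmodels formula interface on simulated stand-in data. The variable names (mode, experience, age, achievement, score) and all values are assumptions for demonstration, not the study's data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data; names and values are hypothetical.
rng = np.random.default_rng(2)
n = 60
df = pd.DataFrame({
    "mode": np.repeat(["paced", "unpaced"], n // 2),
    "experience": np.tile(["experienced", "inexperienced"], n // 2),
    "age": rng.normal(22, 3, n),
    "achievement": rng.normal(75, 10, n),   # stand-in for classroom achievement
})
df["score"] = 20 + 0.1 * df["achievement"] + rng.normal(0, 3, n)

# 2 (mode) x 2 (experience) ANCOVA with age and achievement as covariates.
model = smf.ols("score ~ C(mode) * C(experience) + age + achievement", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```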
