721

An experimental evaluation of Markov channel models

Zhou, Wenge 05 September 2012 (has links)
M.Ing. / The main contributions of this thesis can be summarized as follows. Firstly, we implemented a high-speed error gap recording system, which can run, with at most a slight adaptation, on any personal computer, performing gap recording for digital communication systems at data rates of up to 200 kbit/s. Secondly, we extended previous experimental investigations of the Fritchman channel model to three other channel models, namely the Gilbert, Gilbert-Elliott, and Aldridge-Ghanbari models, and implemented these models experimentally in a set of computer programs. Thirdly, we investigated the statistical modeling of two analog channels employed for digital transmission: an analog audio-cassette magnetic tape recorder, and an analog cordless telephone operating in the 46/49 MHz band in the CT1 system. No evidence could be found in the literature of modeling error distributions on these two channels.
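To make the two-state models named in this abstract concrete, below is a minimal Gilbert-Elliott error-sequence simulator; the state-transition and per-state error probabilities are illustrative placeholders, not values measured in the thesis.

```python
import random

def gilbert_elliott(n_bits, p_gb=0.001, p_bg=0.1, e_good=1e-5, e_bad=0.3, seed=0):
    """Simulate an error sequence from a two-state Gilbert-Elliott channel.

    p_gb: probability of moving good -> bad; p_bg: bad -> good;
    e_good / e_bad: bit-error probability within each state.
    Returns a list of 0/1 error indicators (1 = bit error).
    """
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        # Emit an error with the current state's error probability.
        err_p = e_bad if state_bad else e_good
        errors.append(1 if rng.random() < err_p else 0)
        # Then transition between states.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
    return errors

seq = gilbert_elliott(100_000)
print(sum(seq), "errors in", len(seq), "bits")
```

The plain Gilbert model is the special case with an error-free good state; fitting any of these models amounts to matching their predicted error-gap distribution to gap recordings like those collected in the thesis.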
722

Evaluation and comparison of air compressor performance

Antunes, Jose Manuel 29 September 2014 (has links)
M.Tech. (Mechanical and Industrial Engineering) / Please refer to full text to view abstract
723

Guidelines to improve the performance appraisal system for nurse educators in the nursing colleges in Botswana

Moalafhi, Carol Keabetswe 14 July 2015 (has links)
M.Cur. (Nursing Education) / Performance appraisal is a continuous process of identifying, evaluating and developing the work performance of nurse educators so that the goals and objectives of the college are achieved more effectively, while at the same time benefiting individual nurse educators in terms of reward and recognition of performance, professional development and career guidance. Performance appraisal entails a structured formal interaction between an appraiser and an appraisee, usually taking the form of a periodic (annual or semi-annual) interview in which the work performance of the appraisee is examined and discussed with a view to identifying weaknesses and strengths as well as opportunities for improvement and skills development. The challenge faced by nurse educators is a lack of knowledge in executing the performance appraisal. The purpose of the study was to describe guidelines to improve the performance appraisal system for nurse educators at all eight nursing colleges in Botswana. The research design was qualitative, exploratory and descriptive. Purposive sampling was used to recruit nurse educators to participate in individual semi-structured interviews, and a qualitative open-coding method was used for data analysis. The researcher ensured the trustworthiness of the study by using Lincoln and Guba's model of trustworthiness, which is based on four strategies: credibility, dependability, transferability and confirmability. Inadequate knowledge among nurse educators regarding performance appraisal emerged as the only main theme from the semi-structured interviews. Two sub-themes emerged from this theme: inadequate knowledge of appraisers and appraisees regarding the performance appraisal process, and inadequate knowledge of appraisers regarding the mentoring and coaching of appraisees during the performance appraisal period. The theme and sub-themes are conceptualised within the existing relevant literature, and guidelines to improve the performance appraisal system at the eight nursing colleges in Botswana are then described. Recommendations are made with regard to nursing education, nursing practice and nursing research. It is recommended that nurse educators be trained in performance appraisal, with emphasis on the performance appraisal process and the application of coaching and mentoring strategies throughout the performance appraisal period.
724

Die ontwerp van 'n meervoudige evalueringstelsel vir onderwysers aan die sekondêre skool [The design of a multiple evaluation system for teachers at the secondary school]

Grobler, Bernardus Rudolf 04 November 2014 (has links)
D.Ed. (Educational Management) / Please refer to full text to view abstract
725

Museum education: Creation, implementation, and evaluation of a web-based Elm Fork Natural Heritage Museum

Lundeen, Melissa 12 1900 (has links)
Evaluation of museum audiences, both in their physical and web-based spaces, is a necessary component of museum education. For smaller museums that lack the personnel or expertise to create a website and evaluate its online audience, a web-based learning tool may help them properly maintain an online site. A web-based Elm Fork Natural Heritage Museum (WBEFNHM) was created during the 2008 fall semester at the University of North Texas. The site included photographs and information from specimens housed within the physical Elm Fork Natural Heritage Museum. The site was available to three non-science majors' biology laboratory courses and three science majors' biology laboratory courses during the 2009 spring and fall semesters. Student use of the WBEFNHM was tracked, and no significant difference was found between the amounts of time science majors and non-majors spent on the site. This evaluation helps in understanding future use of an online EFNHM.
726

Small sample IRT item parameter estimates

Setiadi, Hari 01 January 1997 (has links)
Item response theory (IRT) has great potential for solving many measurement problems. The success of specific IRT applications can be obtained only when the fit between the model and the test data is satisfactory. But model fit is not the only concern. Many tests are administered to relatively small numbers of examinees, and if sample sizes are small, item parameter estimates will be of limited usefulness. There appear to be a number of ways that estimation might be improved. The purpose of this study was to investigate IRT parameter estimation using several promising small-sample procedures. Computer simulation was used to generate the data. Two item banks were created with items described by a three-parameter logistic model. Tests of length 30 and 60 items were simulated; examinee samples of 100, 200, and 500 were used in item calibration. Four promising models and associated estimation procedures were selected: (1) the one-parameter logistic model, (2) a modified one-parameter model in which a constant value for the "guessing parameter" was assumed, (3) a non-parametric three-parameter model (called "Testgraf"), and (4) a one-parameter Bayesian model (with a variety of priors on the item difficulty parameter). Several criteria were used in evaluating the estimates. The main results were that (1) the modified one-parameter model seemed to consistently lead to the best estimates of item difficulty and examinee ability compared to the Rasch model and the non-parametric three-parameter model and related estimation procedures (a finding observed across both test lengths and all three sample sizes, with both normal and rectangular distributions of ability), (2) the Bayesian estimation procedures with reasonable priors led to results comparable to the modified one-parameter model, and (3) Testgraf, for the smallest sample of 100, typically gave the poorest results. Future studies seem justified to (1) replicate the findings with more relevant evaluation criteria, (2) determine the source of the problem with Testgraf on small samples and short tests, and (3) further investigate the utility of Bayesian estimation procedures.
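For reference, the three-parameter logistic model used to generate the data gives the probability of a correct response as P(θ) = c + (1 − c) / (1 + e^(−Da(θ − b))). A minimal sketch, using the conventional scaling constant D = 1.7 and illustrative parameter values:

```python
import math

def p_3pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic model: probability of a correct response.

    theta: examinee ability; a: discrimination; b: difficulty;
    c: pseudo-guessing lower asymptote; D: scaling constant.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# Illustrative values only (not items from the study's banks).
print(p_3pl(theta=0.0, a=1.0, b=0.5, c=0.2))
```

The modified one-parameter model in the study corresponds to fixing a common slope and a constant guessing value c across items, so that only the difficulty b is estimated per item.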
727

Accuracy of parameter estimation on polytomous IRT models

Park, Chung 01 January 1997 (has links)
Procedures based on item response theory (IRT) are widely accepted for solving various measurement problems which cannot be solved using classical test theory (CTT) procedures. The desirable features of dichotomous IRT models over CTT are well known and have been documented by Hambleton, Swaminathan, and Rogers (1991). However, dichotomous IRT models are inappropriate for situations where items need to be scored in more than two categories. For example, most scoring rubrics for performance assessments require examinees' responses to be scored in ordered categories. In addition, polytomous IRT models are useful for assessing an examinee's partial knowledge or levels of mastery. However, the successful application of polytomous IRT models to practical situations depends on the availability of reasonable and well-behaved estimates of the parameters of the models. Therefore, in this study, the behavior of estimators of parameters in polytomous IRT models was examined. In the first study, factors that affected the accuracy, variance, and bias of the marginal maximum likelihood (MML) estimators in the generalized partial credit model (GPCM) were investigated. Overall, the results showed that the MML estimators of the parameters of the GPCM, as obtained through the computer program PARSCALE, performed well under various conditions. However, there was considerable bias in the estimates of the category parameters under all conditions investigated. The average bias did not decrease when sample size and test length increased, and it contributed to large RMSE in the estimation of category parameters. Further studies need to be conducted on the effect of this bias on the estimation of ability, the development of item banks, and adaptive testing based on polytomous IRT models. In the second study, the effectiveness of Bayesian procedures for estimating parameters in the GPCM was examined. The results showed that Bayesian procedures provided more accurate estimates of parameters with small data sets. Priors on the slope parameters, while having only a modest effect on the accuracy of estimation of the slope parameters themselves, had a very positive effect on the accuracy of estimation of the step difficulty parameters.
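For orientation, the GPCM studied here assigns the probability of scoring in each category of an item through cumulative step parameters. A minimal sketch with illustrative values (a plain re-statement of the model, not the PARSCALE implementation):

```python
import math

def gpcm_probs(theta, a, steps, D=1.7):
    """Generalized partial credit model: category probabilities.

    theta: ability; a: item slope; steps: step (category intersection)
    parameters b_1..b_m for an item scored 0..m.
    Returns a list of m+1 probabilities summing to 1.
    """
    # Cumulative sums of D*a*(theta - b_v); the empty sum for category 0 is 0.
    cum = [0.0]
    for b in steps:
        cum.append(cum[-1] + D * a * (theta - b))
    numer = [math.exp(z) for z in cum]
    total = sum(numer)
    return [n / total for n in numer]

# A 4-category item with illustrative step parameters.
print(gpcm_probs(theta=0.0, a=1.0, steps=[-1.0, 0.0, 1.0]))
```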
728

Linking multiple-choice and constructed-response items to a common proficiency scale

Bastari, B 01 January 2000 (has links)
Tests consisting of both multiple-choice and constructed-response items have gained in popularity in recent years, and many assessment programs now administer the two item formats in the same test. However, linking these two item formats on a common scale has not been thoroughly studied. Even though several methods for linking scales under item response theory (IRT) have been developed, most studies have addressed multiple-choice items only, a few have addressed constructed-response items, and none have addressed both item formats in the same assessment. The purpose of this study was to investigate the effects of several factors on the accuracy of linking item parameter estimates onto a common scale using the combination of the three-parameter logistic (3-PL) model for multiple-choice items with the graded response model (GRM) for constructed-response items. Working with an anchor-test design, the factors considered were: (1) test length, (2) proportion of items of each format in the test, (3) anchor test length, (4) sample size, (5) ability distributions, and (6) method of equating. The data for dichotomous and polytomous responses for unique and anchor items were simulated to vary as a function of these factors. The main findings were as follows. The constructed-response items had a large influence on parameter estimation for both item formats. Generally, the slope parameters were estimated with small bias but large variance. Threshold parameters were also estimated with small bias but large variance for constructed-response items, whereas the opposite results were obtained for multiple-choice items. The guessing parameters were recovered relatively well, and the coefficients of transformation were also relatively well estimated. Overall, the following conditions led to more effective results: (1) a long test, (2) a large proportion of multiple-choice items in the test, (3) a long anchor test, (4) a large sample size, (5) no ability differences between the groups used in linking the two tests, and (6) the method of concurrent calibration. At the same time, more research will be necessary to expand the conditions, such as the introduction of multidimensional data, under which the linking of item formats to a common scale is evaluated.
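For readers unfamiliar with scale linking: one simple separate-calibration approach places two calibrations on a common scale through a linear transformation θ* = Aθ + B estimated from anchor-item difficulty estimates. The mean/sigma sketch below is for orientation only and is not necessarily the equating method varied in this study (which also considered concurrent calibration); the anchor difficulties shown are made up.

```python
from statistics import mean, stdev

def mean_sigma_link(b_source, b_target):
    """Mean/sigma linking: estimate A, B of theta* = A*theta + B from
    difficulty estimates of the same anchor items on two scales."""
    A = stdev(b_target) / stdev(b_source)
    B = mean(b_target) - A * mean(b_source)
    return A, B

# Illustrative anchor-item difficulties from two separate calibrations.
A, B = mean_sigma_link([-0.5, 0.0, 0.6, 1.1], [-0.3, 0.2, 0.9, 1.4])
print(A, B)
```

Under this transformation, difficulties map as b* = Ab + B and slopes as a* = a/A; concurrent calibration avoids the transformation entirely by estimating all items in a single run.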
729

Measurements of student understanding on complex scientific reasoning problems

Izumi, Alisa Sau-Lin 01 January 2004 (has links)
While there has been much discussion of the cognitive processes underlying effective scientific teaching, less is known about how students respond to assessments targeting processes of scientific reasoning specific to biology content. This study used multiple-choice (m-c) and short-answer essay student responses to evaluate progress in higher-order reasoning skills. In a pilot investigation of student responses on a non-content-based test of scientific thinking, it was found that some students showed a pre-post gain on the m-c test version while showing no gain on a short-answer essay version of the same questions. This result led to a subsequent research project focused on differences between alternate versions of tests of scientific reasoning. Using m-c and written responses from biology tests targeting the skills of (1) reasoning with a model and (2) designing controlled experiments, test score frequencies, factor analysis, and regression models were analyzed to explore test format differences. Understanding these format differences is important for developing practical ways to identify student gains in scientific reasoning. The overall results suggested test format differences. Factor analysis revealed three interpretable factors: m-c format, genetics content, and model-based reasoning. Frequency distributions on the m-c and open-explanation portions of the hybrid items revealed mismatches: many students answered the m-c portion of an item correctly but gave inadequate explanations, while others answered the m-c portion incorrectly yet gave sufficient explanations. When the test scores were used to predict non-associated student measures (VSAT, MSAT, high school grade point average, or final course grade), they accounted for close to zero percent of the variance. Overall, these results point to the importance of using multiple methods of testing and of further research and development in the area of assessment of scientific reasoning.
730

Content validity of independently constructed curriculum-based examinations

Chakwera, Elias Watson Jani 01 January 2004 (has links)
This study investigated the content validity of two independently constructed tests based on the Malawi School Certificate History syllabus. The key question was: To what extent do independently constructed examinations equivalently sample items from the same content and cognitive domains? This question was meant to examine the assumption that tests based on the same syllabus produce results that can be interpreted in a similar manner in certification or promotion decisions, without regard to the examination the examinees took. In Malawi, such a study was important to provide evidence justifying the use of national examination results in placement and selection decisions. Based on Cronbach's (1971) proposal, two teams of three teachers were drawn from six purposively selected schools. Each team constructed a test using the Malawi School Certificate of Education (MSCE) History syllabus. The two tests were put together in a common mock examination, which was piloted before taking its final form. Two hundred examinees from the participating schools took the common mock examination. Paired scores from the two tests, and the same examinees' scores on MSCE History 1A, were used in an analysis testing the mean difference of dependent samples and comparing variances. Subject matter experts' ratings were used to evaluate the content and cognitive relevance of the items in the tests. The findings indicate that the MSCE syllabus was a well-defined operational universe of admissible observations, because the independently constructed tests equivalently tapped the same content. Their mean difference was not statistically different from zero, and the mean of the squared difference scores was less than the sum of the split-half error variances. It was therefore concluded that the two independently constructed tests were statistically equivalent. The two tests were also found to be statistically equivalent to the 2003 MSCE History 1A. However, the presence of stray items indicated a looseness in the syllabus that needs redress to improve content coverage. Inadequacy in the rating of cognitive levels was noted as a problem for further research. The need to improve examinations was advocated in view of their great influence on instruction and assessment decisions and practices.
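The dependent-samples comparison described above reduces to a t statistic on the paired score differences; a minimal sketch with made-up scores (not the study's data):

```python
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Dependent-samples t statistic for the mean of paired differences.

    A t near zero supports the claim that two independently constructed
    test forms yield interchangeable scores for the same examinees.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Illustrative scores for five examinees on the two test forms.
print(paired_t([55, 62, 47, 70, 58], [53, 64, 45, 71, 60]))
```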
