211

Methods for assessing student learning in the State of Arizona

Midyett, Stephen Jay January 2001
The effectiveness of a method that uses scaled scores and a correction for regression to the mean (RTM), designed to measure academic growth attributable to schools, was compared to that of several alternative methods, all incorporating simple (unadjusted) growth. Problems with scaled scores and the correction for RTM were discussed. Three alternative methods using normal curve equivalent (NCE), percentile rank (PR), and stanine scores were presented and compared to the scaled score method. A variation of the scaled score method without the correction for RTM was proposed to examine the effects of the correction. Two variations of the NCE and PR score methods were constructed with adjusted passing criteria to examine the effect of accounting for measurement error. Matched-student (1998-1999) Stanford 9 Achievement Test scores from the State of Arizona were used to compute a dichotomous one year's growth indicator (OYG) and a five-point within-state rank-ordered growth indicator (the Star Rating) for each school/grade unit using each of the proposed methods. Results showed that the methods using NCE or PR scores were more likely than the method using scaled scores to assign the same OYG decision to each school/grade unit. The correction for RTM resulted in school/grade units with low initial status having to (inappropriately) make more than one year's worth of growth to achieve a passing OYG decision. The results tended to confirm correlations between initial status and the simple growth indicators in the alternative methods, but in most cases the magnitudes of the correlations were not large enough to warrant dismissing simple growth. Recommendations from the study were: (1) Scaled scores and the correction for RTM should not be used in any of the methods; (2) Methods that account for error should be used to allow control over both the possibility of misidentifying failing schools and the proportion of schools identified as needing assistance; (3) The current minimum unit size criterion of eight students should remain, because increasing the number would result in too many units being excluded from the analyses.
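The contrast between simple (unadjusted) growth and growth adjusted for regression to the mean can be sketched in code. The snippet below is an illustrative reconstruction only, not the dissertation's actual procedure; the function name, the statewide statistics, and the expected-gain threshold are assumptions.

```python
"""Sketch: simple growth vs. RTM-adjusted growth for one school/grade unit.
All names, parameters, and the passing threshold are illustrative only."""
import numpy as np

def oyg_decisions(pre, post, state_pre_mean, state_post_mean,
                  state_pre_sd, state_post_sd, pre_post_r,
                  expected_gain=0.0):
    """Return simple-growth and RTM-adjusted one-year's-growth (OYG) flags."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)

    # Simple (unadjusted) growth: mean pre-to-post gain for the unit.
    simple_gain = (post - pre).mean()

    # RTM adjustment: compare each posttest to the score expected from initial
    # status alone, using the statewide pre-post regression line.
    expected_post = (state_post_mean
                     + pre_post_r * (state_post_sd / state_pre_sd)
                     * (pre - state_pre_mean))
    adjusted_gain = (post - expected_post).mean()

    return {"simple_gain": round(simple_gain, 1),
            "simple_oyg": simple_gain >= expected_gain,
            "rtm_adjusted_gain": round(adjusted_gain, 1),
            "rtm_oyg": adjusted_gain >= expected_gain}

# Example: a unit that starts below the state mean and gains 6 scaled-score
# points; the RTM adjustment demands extra growth from low-scoring units.
rng = np.random.default_rng(0)
pre = rng.normal(580, 20, 30)
post = pre + 6 + rng.normal(0, 5, 30)
print(oyg_decisions(pre, post, 600, 612, 40, 40, 0.8))
```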
212

Affective domain applications in the junior high schools in Turkey

Bicak, Bayram January 2003
Teachers' affective domain applications in the junior high schools in Turkey were explored in this dissertation. Teachers' attitudes toward, awareness levels of, importance placed on, and planning and classroom applications of affective objectives were investigated. One hundred thirty-one junior high school teachers participated in the study. They were selected randomly from 13 schools in central Ankara. They answered a 57-item questionnaire. Ten of the teachers were interviewed based on the questionnaire items used for the study. Differences among teachers were studied according to subjects taught, amount of training received, gender, and years of teaching experience. Group differences were evaluated using t tests and analyses of variance. Results showed that teachers' awareness, attitudes, applications, and planning were all at moderate levels. Physical education and art teachers had the lowest mean scores for planning processes. The implications of the findings are discussed.
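A rough sketch of the kind of group comparisons reported here (t tests for two-group contrasts, one-way ANOVA for multi-group contrasts). The scale scores and group sizes below are fabricated for illustration and are not the study's data.

```python
"""Sketch: t test and one-way ANOVA on simulated questionnaire scale scores."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative 5-point scale scores for an affective-planning subscale.
male = rng.normal(3.4, 0.6, 60)
female = rng.normal(3.5, 0.6, 71)
t, p_t = stats.ttest_ind(male, female)          # two-group comparison (gender)

math_teachers = rng.normal(3.5, 0.6, 45)
science_teachers = rng.normal(3.4, 0.6, 40)
pe_art_teachers = rng.normal(3.0, 0.6, 46)
f, p_f = stats.f_oneway(math_teachers, science_teachers, pe_art_teachers)

print(f"t = {t:.2f} (p = {p_t:.3f}); F = {f:.2f} (p = {p_f:.3f})")
```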
213

Out-of-level testing for special education students participating in large-scale achievement testing: A validity study

Brown, Laureen Kay January 2003
The purpose of this study was to examine the reliability and validity of out-of-level (OOL) testing for students with mild cognitive disabilities participating in large-scale accountability assessments. Federal law now requires maximum participation of students with disabilities in these assessments, and OOL testing is one method used to accomplish this mandate. However, the prevalence, reliability, and validity of this practice have not been established. This study involved the analysis of second through eighth grade students' OOL and grade-level (GL) Stanford 9 reading and math subtest data. Raw data were collected by the district studied as part of an annual state-mandated testing program. Participation rates and methods of participation for students with Specific Learning Disability (SLD) and Mild Mental Retardation (MIMR) were examined over a five-year period. Results indicated that the number of MIMR and SLD students participating in Stanford 9 testing increased by over 700% from 1998 to 2002. The use of OOL tests also increased substantially during that period. With regard to reliability, results indicated that KR-20 coefficients were comparable across regular education GL and special education OOL test groups. In addition, comparable percentages of students in GL and OOL groups scored within the test's reliable range. Special education students were not given tests that were too easy as a result of OOL testing options. Validity evaluation included comparisons of modified caution indices (MCI) and point-biserial correlations for matched GL and OOL groups, as well as differential item functioning (DIF) analyses. MCI and point-biserial analyses provided no evidence of differential validity for GL and OOL groups. Although DIF analyses identified more items as functioning differently across groups (GL vs. OOL) than would be expected by chance, no systematic patterns of bias resulting from the OOL test administration condition were identified. OOL testing was determined to be an appropriate method of achievement testing for students with SLD. True differences between OOL and GL groups, as well as differences in test administration other than the OOL versus GL condition, are discussed. Recommendations regarding OOL testing policy, stakeholder education, test development and reporting practices, and future research are included.
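The reliability comparison rests on the KR-20 coefficient; a minimal sketch of how KR-20 is computed from a dichotomous item-response matrix follows. The simulated responses stand in for the Stanford 9 data, which are not reproduced here.

```python
"""Sketch: KR-20 internal-consistency reliability for a 0/1 item matrix."""
import numpy as np

def kr20(items):
    """items: examinees (rows) x items (cols), scored 0/1."""
    items = np.asarray(items, float)
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion correct per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

# Simulate a 40-item subtest; in the study the coefficient would be computed
# separately for the GL and OOL groups and then compared.
rng = np.random.default_rng(2)
ability = rng.normal(0, 1, 500)
difficulty = rng.normal(0, 1, 40)
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((500, 40)) < p_correct).astype(int)
print(f"KR-20 = {kr20(responses):.3f}")
```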
214

Using data mining in educational research: A comparison of Bayesian network with multiple regression in prediction

Xu, Yonghong January 2003
Advances in technology have altered data collection and popularized large databases in areas including education. To turn the collected data into knowledge, effective analysis tools are required. Traditional statistical approaches have shown some limitations when analyzing large-scale data, especially sets with a large number of variables. This dissertation introduces to educational researchers a new data analysis approach called data mining, an analytic process at the intersection of statistics, databases, machine learning/artificial intelligence (AI), and computer science that is designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables. To examine the usefulness of data mining in educational research, one specific data mining technique--the Bayesian Belief Network (BBN), grounded in Bayesian probability--is used to construct an analysis model, in contrast to traditional statistical approaches, to answer a pseudo research question about faculty salary prediction in postsecondary institutions. Four prediction models--a multiple regression model with theoretical variable selection, a regression model with statistical variable extraction, a data mining BBN model with wrapper feature selection, and a combination model that used variables selected by the BBN in a multiple regression procedure--are developed to analyze the National Study of Postsecondary Faculty 1999 (NSOPF:99) data set provided by the National Center for Education Statistics (NCES). The algorithms, input variables, final models, outputs, and interpretations of the four prediction models are presented and discussed. The results indicate that, with a nonmetric approach, the BBN can effectively handle a large number of variables through a process of stochastic subset selection; uncover dependence relationships among variables; detect hidden patterns in the data set; minimize the sample size as a factor influencing the amount of computation in data modeling; reduce data dimensionality by automatically identifying the most pertinent variable from a group of different but highly correlated measures in the analysis; and select the critical variables related to a core construct in prediction problems. The BBN and other data mining techniques have drawbacks; nonetheless, they are useful tools with unique advantages for analyzing large-scale data in educational research.
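The dissertation's BBN machinery is not reproduced here, but the core idea (a wrapper-style variable selection feeding a regression, contrasted with a regression on all candidate predictors) can be sketched with generic tools. Everything below is an assumption-laden stand-in: scikit-learn's sequential selection replaces the BBN's stochastic subset selection, and the data are synthetic rather than NSOPF:99.

```python
"""Sketch: full multiple regression vs. wrapper-selected subset + regression."""
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic "salary" data: many candidate predictors, few truly informative.
X, y = make_regression(n_samples=2000, n_features=60, n_informative=8,
                       noise=25.0, random_state=0)

# Model 1: multiple regression on all candidate variables.
full_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()

# Model 2: wrapper feature selection keeps a small subset; the regression is
# then refit on the selected variables (analogous to the combination model).
selector = SequentialFeatureSelector(LinearRegression(),
                                     n_features_to_select=8, cv=5)
X_selected = selector.fit_transform(X, y)
subset_r2 = cross_val_score(LinearRegression(), X_selected, y, cv=5,
                            scoring="r2").mean()

print(f"all 60 predictors:          R^2 = {full_r2:.3f}")
print(f"8 wrapper-selected columns: R^2 = {subset_r2:.3f}")
```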
215

Defining and measuring academic standards for higher education: A formative study at the University of Sonora

Gonzalez-Montesinos, Manuel Jorge January 2004
Institutional efforts to organize the admissions process in several Mexican universities have led to the adoption of standardized instruments to measure applicants' initial academic qualifications for career programs. The University of Sonora, with four campuses throughout the state, initiated the administration of a college-level entrance exam in the fall of 1997. The Examen de Conocimientos y Habilidades Basicas (EXCHOBA), developed in 1991, has since been the instrument employed to aid the academic and administrative agencies in making admissions and career placement decisions. Drawing on current practice, this project develops a model for investigating the alignment of the high school curriculum with the entrance examination by extracting and clarifying the academic standards that derive from the official curriculum. Through a series of statistical analyses on data from exam administrations, a working model for defining the standards, along with the instrument's sub-tests, is proposed. The basis for a system is then suggested to assist high school and university agencies and administrators in interpreting the results, with a clear set of procedures for making curricular and instructional decisions that will help improve the current rates of success in the different career programs at the institution. In particular, the results obtained will lead to a proposal to strengthen the academic advising and guidance programs that the Universidad de Sonora is currently implementing to improve student retention and graduation rates in its career programs.
216

Understanding the oral examination process in professional certification examinations

Gerdeman, Anthony Michael, 1968- January 1998
The subjective nature of oral examinations often leads to reliability estimates that are lower than those of other types of examinations (e.g., written examinations). The potentially biasing individual attributes of examiners (e.g., experience) are of particular concern, since the oral examination process depends specifically upon the quality of their assessments. In addition, traditional reliability estimation procedures are not always possible for some oral exams due to the utilization of incomplete measurement designs (i.e., one examiner per candidate) resulting from the inherently high costs and complicated logistics associated with large-scale oral examinations. Consequently, the current study attempts to evaluate the quality of one such exam by developing alternative indicators of exam quality using a pre-existing data set. A series of examiner agreement variables were calculated for low, moderate, and high ability candidates and subsequently correlated with each other. A series of exploratory multiple regressions were also used to evaluate the potential impact of several examiner characteristics (experience, gender, specialty, variance of scale use, and fail rate) contained in the data set. Finally, a generalizability (G) study was conducted on a subset of the examination that utilizes a complete measurement design (i.e., two examiners evaluating the same candidate, with all examiners examining all candidates) for lower ability candidates. The G study was then followed by a decision (D) study to determine both the current level of dependability with two examiners and how much the dependability of the process would improve by adding more examiners. The results of the current study suggest that evaluating lower ability candidates is different from, and more difficult than, evaluating higher ability candidates. Furthermore, systematic sources of error related to examiners appear to be less of a concern than previously anticipated. Finally, the results of the G and D studies suggest that the current dependability of evaluating lower ability candidates with two examiners could be greatly improved by adding additional examiners to the process.
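A minimal sketch of a persons-by-raters G study and the follow-up D study, assuming a fully crossed design with simulated ratings (the certification-exam data are not reproduced):

```python
"""Sketch: G-study variance components and D-study dependability projections
for a fully crossed persons (p) x raters (r) design."""
import numpy as np

def g_and_d_study(scores, panel_sizes=(1, 2, 3, 4, 6)):
    """scores: examinees (rows) x raters (cols); returns E(rho^2) by panel size."""
    scores = np.asarray(scores, float)
    n_p, n_r = scores.shape
    grand = scores.mean()

    # Mean squares for the crossed design (one observation per cell).
    ms_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

    # Variance components from expected mean squares.
    var_pr = ms_pr                              # rater-by-examinee error
    var_p = max((ms_p - ms_pr) / n_r, 0.0)      # true examinee variance

    # D study: relative dependability for hypothetical panel sizes.
    return {n: round(var_p / (var_p + var_pr / n), 3) for n in panel_sizes}

rng = np.random.default_rng(3)
true_ability = rng.normal(70, 8, 40)                          # 40 candidates
ratings = true_ability[:, None] + rng.normal(0, 6, (40, 2))   # 2 raters each
print(g_and_d_study(ratings))
```

Increasing the projected panel size raises the dependability coefficient, which is the pattern the D study in the abstract points to.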
217

Variables and Venn diagrams

Rein, Judith Ann January 1997
Venn diagrams were invented by John Venn in 1880 as an aid in logical reasoning. Since then, the diagrams have been used as an aid in understanding and organization for widely diverse audiences (e.g., elementary school children, business people) and widely diverse content areas (e.g., self-improvement courses, statistics). In this dissertation, Venn diagrams are used to illustrate and explain variable relationships. There are three main foci: (a) correlation and interaction, (b) variables and Venn diagrams, and (c) reliability and Venn diagrams. Confusion between correlation and interaction is explained, and the multicollinearity problem is illustrated using a Venn diagram composed of three circles and a horseshoe-shaped figure. Venn diagrams are presented for these variables: moderator; concrete and hypothetical intervening; component; traditional, negative, and reciprocal suppressor; covariate; disturbance; and confound. Venn diagrams are also used to differentiate among within-subjects, between-subjects, and reliability designs. Last, a detailed example, which assumes basic knowledge of classical test theory and generalizability theory, is presented to help illustrate, using Venn diagrams, the role of error variance in performance assessments. Evaluation based on comments from 13 American Educational Research Association Division D listserv members and 7 non-members was positive, and interest in the topic was shown by over 100 visits to the website where a portion of the dissertation was posted.
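One way to attach numbers to the Venn-diagram picture is a two-predictor commonality analysis, where R-squared is split into each predictor's unique region and the overlapping region of the circles. The sketch below uses synthetic data and is only an illustration of the diagram logic, not material from the dissertation.

```python
"""Sketch: two-predictor commonality analysis as a numeric Venn diagram."""
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

rng = np.random.default_rng(4)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)     # correlated predictors overlap
y = 0.5 * x1 + 0.4 * x2 + rng.normal(size=n)

r2_1 = r2(x1.reshape(-1, 1), y)              # circle for x1 overlapping y
r2_2 = r2(x2.reshape(-1, 1), y)              # circle for x2 overlapping y
r2_12 = r2(np.column_stack([x1, x2]), y)     # union of both circles in y

unique_1 = r2_12 - r2_2                      # region covered only by x1
unique_2 = r2_12 - r2_1                      # region covered only by x2
common = r2_1 + r2_2 - r2_12                 # region shared by x1 and x2

print(f"unique(x1) = {unique_1:.3f}, unique(x2) = {unique_2:.3f}, "
      f"common = {common:.3f}")
```

A negative common component in this decomposition is the numeric signature of suppression, one of the variable relationships the dissertation's diagrams depict.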
218

The utility of curriculum-based measurement within a multitiered framework: Establishing cut scores as predictors of student performance on the Alaska standards-based assessment

Legg, David E. 03 May 2013
The purpose of this study was to explore the relationship between student performance on Reading Curriculum-based Measures (R-CBM) and student performance on Alaska's standards-based assessment (SBA) administered to Grade 3 through Grade 5 students in the Studied School District (SSD), as required by Alaska's accountability system. The 2 research questions were: (a) To what extent, if at all, is there a relationship between student performance on the R-CBM tools administered in Grades 3, 4, and 5 in the fall, winter, and spring and student performance on the Alaska SBA administered in the spring of the same school year in the SSD? (b) To what extent, if at all, can cut scores be derived for each of the 3 R-CBM testing windows in the fall, winter, and spring that predict success on the Alaska SBA administered in the spring of the same school year in the SSD? The SSD served approximately 9,500 students, with 14% of students eligible for special education services. The enrollment was 81% Caucasian, 10% Alaska Native, 3% Hispanic, 3% multiethnic, and 4% American Indian, Asian, Black, or Native Hawaiian/Pacific Islander combined. The sample consisted of 3rd (n = 472), 4th (n = 435), and 5th (n = 517) graders, comprising all students with an Alaska SBA score and an R-CBM score for each of the 3 administrations of the R-CBM in the 2009-2010 (FY10) and 2010-2011 (FY11) years. Pearson correlations between R-CBM scores and same-grade Alaska SBA scores were significant across 3rd, 4th, and 5th grades for the FY10 data, r = .689 to r = .728, p < .01. A test of the full model with R-CBM as predictor against a constant-only model was statistically reliable, p < .001. The R-CBM reliably distinguished between passing and failing the Alaska SBA for students in Grades 3 through 5. Criterion validity of the cut scores was ascertained by applying the cut scores to the FY11 data, which yielded adequate levels of sensitivity, from 49% to 88%, while specificity levels ranged from 89% to 97%.
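A rough sketch of the cut-score logic described above: fit a model predicting SBA proficiency from R-CBM scores in one year, convert it to a single cut score, and check sensitivity and specificity in the following year. The simulated data, the logistic model, and the 0.5-probability rule are illustrative assumptions, not the study's actual procedure.

```python
"""Sketch: deriving an R-CBM cut score and checking it on a later cohort."""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def cohort(n):
    """Simulate words-correct-per-minute scores and SBA pass/fail outcomes."""
    wcpm = rng.normal(110, 30, n)
    p_pass = 1 / (1 + np.exp(-(wcpm - 100) / 12))
    passed = (rng.random(n) < p_pass).astype(int)   # 1 = proficient on the SBA
    return wcpm.reshape(-1, 1), passed

X_fy10, y_fy10 = cohort(472)     # derivation year (e.g., grade 3)
X_fy11, y_fy11 = cohort(500)     # following year, for criterion validity

# Fit on FY10, then convert the model to a cut score: the R-CBM value at which
# the predicted probability of proficiency crosses 0.5.
model = LogisticRegression().fit(X_fy10, y_fy10)
cut = -model.intercept_[0] / model.coef_[0, 0]

# Apply the cut to FY11 and summarize classification accuracy.
predicted_pass = (X_fy11[:, 0] >= cut).astype(int)
sensitivity = predicted_pass[y_fy11 == 1].mean()
specificity = (1 - predicted_pass)[y_fy11 == 0].mean()
print(f"cut = {cut:.0f} wcpm, sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}")
```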
219

Categorical differences in statewide standardized testing scores of students with disabilities

Trexler, Ellen L. 22 May 2013
The No Child Left Behind Act requires that all students be proficient in reading and mathematics by 2014 and that students in subgroups make Adequate Yearly Progress. One of these subgroups is students with disabilities, who continue to score well below their general education peers. This quantitative study identified scoring differences between disability groups on the Florida Comprehensive Assessment Test (FCAT) over a 6-year period. The percentages of students who scored at the proficient level in reading, mathematics, and writing in the fourth grade, and in reading, mathematics, and science in the fifth grade, were used to identify differences among 12 disability groups. All students with disabilities are combined into one category for reporting purposes and for assigning school grades. Disaggregation of the special education categories revealed scoring differences between groups in all subjects and both grades. The speech impairment category had the highest number of students scoring at the proficient level in all subjects, while the intellectual disability category had the fewest. The categorical rank order was identical for reading in both grades and similar in the other subjects. Students with specific learning disabilities, who constitute approximately 50% of all students with disabilities in these grades, were in the lowest five categories for both grades in reading and in fourth grade mathematics, and in the lower 50% in fifth grade mathematics and science. Recommendations included the need for alternate measures of student achievement, specifically modified assessments, as well as consideration of teacher evaluations and the impact on the Florida Flexibility Waiver's achievement goals.
220

Evaluating the impact of action plans on trainee compliance with learning objectives

Aumann, Michael J. 29 June 2013
This mixed methods research study evaluated the use of technology-based action plans as a way to help improve compliance with the learning objectives of an online training event. It explored how the action planning strategy affected subjects in a treatment group and compared them to subjects in a control group who did not receive the action plan. The study revealed that the action planning process supported compliance with the learning objectives and provided insight into how the action planning process contributes to this compliance. As a result, this study recommends the use of technology-based action plans, as opposed to paper-based action plans, as a simple and effective strategy to support the application and evaluation of training, specifically for online live training events.
