1.
On-Campus and Off-Campus Students' Ratings of Instruction and Courses. Saeki, Noriko, 01 May 2003.
The associations of student ratings of instruction and courses (SRIC) with noninstructional variables (e.g., class size, expected grade) were examined in three instructional delivery groups--on-campus, off-campus face-to-face, and distance education courses. Factor analysis of SRIC from a 20-item form yielded two highly correlated factors, which differed somewhat across the groups ("Course" and "Instruction"; "Course/Instruction" and "Interaction Opportunities/Instructor Availability"; "Course/Instruction" and "Interaction Opportunities/Helpfulness"). The only educationally significant (r2 > .05) zero-order correlations were between SRIC total scores and expected grade, and were positive in all three groups (r2 = .07, .08, .06). In multiple regression analyses, 9%, 11%, and 15% of the variance in SRIC for the three groups was explained by the entire set of noninstructional variables. Unique indices were consistent with the finding that expected grade was the only noninstructional variable with an educationally significant relationship with SRIC.
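The educational-significance screen used above (a squared zero-order correlation exceeding .05) can be sketched as follows. The function name and data are illustrative only, not taken from the study:

```python
import numpy as np

def educationally_significant(x, y, threshold=0.05):
    """Return (r, r_squared, flag): flag is True when the squared
    zero-order (Pearson) correlation exceeds the r2 > .05 screen."""
    r = float(np.corrcoef(x, y)[0, 1])
    return r, r ** 2, r ** 2 > threshold

# Synthetic stand-in for expected grade vs. SRIC total score.
rng = np.random.default_rng(0)
grade = rng.integers(1, 5, size=200).astype(float)
sric = 0.3 * grade + rng.normal(0.0, 1.0, size=200)
r, r2, flag = educationally_significant(grade, sric)
```

A coefficient of r2 = .07 for expected grade, as reported above, would clear this threshold while still leaving 93% of the rating variance unexplained.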
In a separate study, SRIC and the instructor's social presence in host- and remote-site groups were investigated. Remote-site students rated course management lower, on average, than host-site students did, and educationally significant, positive relationships were found between social presence scores and the ratings on four SRIC categories. In addition, remote-site students at smaller sites tended to rate instruction and course satisfaction, as well as the instructor's social presence, higher than students at larger sites.
In an additional investigation, students' ratings of teacher immediacy and reports of teacher-student interaction in distance education courses were analyzed. Host-site students tended to rate teacher immediacy higher than remote-site students did, and the negative association of site size with nonverbal teacher immediacy scores was educationally significant for host sites. Host-site students also tended to report more interaction with their instructors than remote-site students did, and mean reported interaction with the instructor was associated positively with site size and ratings of teacher immediacy.
Based on the differing SRIC factorial structures for on-campus and off-campus students, the identification of distance-education-specific noninstructional variables, problems with obtaining SRIC from students in on-line courses, and evidence on the noninstructional-variable-related theory of teacher immediacy, suggestions were made for future research on student satisfaction and perceptions of teaching effectiveness in distance education.
2.
Student Self-Assessment and Student Ratings of Teacher Rapport in Secondary Student Course Ratings. Roe, John Wilford, 01 May 2010.
This study involved administering two rating forms (a student self-rating on commitment and a student rating of teacher rapport) to approximately 1,400 secondary students taught by 12 different teachers at two high school Latter-day Saint (LDS) released-time seminaries along the Wasatch Front in Utah. Seminaries and Institutes of Religion (S&I) function within the Church Educational System (CES) of the LDS Church, providing religious education for secondary students between the ages of 14 and 18. The purpose of this study was to explore the relationships of student, teacher, and course characteristics with student ratings of teacher rapport, and to explore a possible relationship between students' self-assessments of their own commitment to learning and their ratings of rapport with their teacher. Evidence suggests that teacher characteristics such as age and experience have little to no impact on student ratings of teacher rapport. Female students tended to rate their teacher more favorably on rapport than male students, although practical significance was minimal. Younger students reported greater interest in seminary and higher grade expectancy, and they tended to rate themselves higher on commitment. A statistically significant difference in teacher rapport scores was found between two groups based on the order of test administration: group 1--self-first (student self-rating before student rating of teacher rapport) reported higher levels of rapport than group 2--comparison (student rating of teacher rapport before student self-rating). That is, students tended to rate their teacher more favorably after completing a self-rating on commitment. Practical significance between study groups was minimal because the differences were small. Further research based on these findings is suggested to better understand the relationship between student self-evaluations and student ratings of their teacher.
3.
Student ratings of instruction and student motivation: is there a connection? Feit, Christopher R., January 1900.
Doctor of Philosophy / Department of Special Education, Counseling and Student Affairs / Doris W. Carroll / This study examined factors related to student ratings of instruction and student levels of motivation. Data came from archival records of 386,195 classes in the Individual Development and Educational Assessment (IDEA) Center student ratings system, in which instructors completed the Faculty Information Form (FIF) and students completed the Student Ratings Diagnostic Form (SRDF). Descriptive statistics, correlation studies, analysis of variance (ANOVA), and pairwise comparisons were used to test the research hypotheses. Despite significant differences in student ratings of instruction and student motivation by course type, discipline, and student type, the amount of unexplained variability in student ratings of instruction and student motivation remains very large. The findings provide higher education institutions with information about differences in student ratings of instruction by institution type, course level, discipline, and course type, as well as the impact of student motivation on student ratings of instruction.
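The one-way ANOVA used to test for group differences such as those by course type can be computed by hand as a ratio of between-group to within-group mean squares. This is a generic sketch; the group labels and values are hypothetical, not data from the study:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k groups:
    F = MS_between / MS_within."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical mean motivation ratings grouped by course type.
lecture = np.array([3.2, 3.5, 3.1, 3.4])
seminar = np.array([4.0, 4.2, 3.9, 4.1])
online = np.array([3.6, 3.3, 3.8, 3.5])
f = one_way_anova_f([lecture, seminar, online])
```

A large F indicates that differences among group means are big relative to variability within groups, which is what "significant differences by course type" reports above.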
4.
Effect of Rater Training and Scale Type on Leniency and Halo Error in Student Ratings of Faculty. Cook, Stuart S. (Stuart Sheldon), 05 1900.
The purpose of this study was to determine if leniency and halo error in student ratings could be reduced by training the student raters and by using a Behaviorally Anchored Rating Scale (BARS) rather than a Likert scale. Two hypotheses were proposed. First, the ratings collected from the trained raters would contain less halo and leniency error than those collected from the untrained raters. Second, within the group of trained raters the BARS would contain less halo and leniency error than the Likert instrument.
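Leniency and halo error are commonly operationalized as, respectively, mean elevation of ratings and reduced within-rater dispersion across rating dimensions. The sketch below uses that common operationalization as an assumption; it is not the study's actual index:

```python
import numpy as np

def leniency_index(ratings, scale_midpoint=3.0):
    """Mean elevation above the scale midpoint, per rater.
    Larger positive values suggest more leniency error."""
    return ratings.mean(axis=1) - scale_midpoint

def halo_index(ratings):
    """Within-rater standard deviation across rating dimensions.
    Smaller values suggest more halo (less differentiation)."""
    return ratings.std(axis=1, ddof=1)

# Hypothetical 1-5 Likert ratings: rows = raters, columns = dimensions.
ratings = np.array([[5, 5, 5, 4],    # lenient, undifferentiated rater
                    [2, 4, 1, 3]])   # more differentiated rater
len_idx = leniency_index(ratings)
halo_idx = halo_index(ratings)
```

Under this sketch, rater training or a BARS instrument would be judged effective if it lowered the leniency index and raised the dispersion (i.e., reduced halo).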
5.
Taking Stock of Student Evaluation of Teaching Systems: Relevance, Use, Improvement. Detroz, Pascal, 06 July 2010.
Student evaluation of teaching is used throughout the world. This work takes stock of the quality of this instrument.
6.
An Examination of Sources of Instructional Feedback and the Connection with Self-Determination Theory and Job Satisfaction. Birkholz, Paige M., 16 July 2008.
This study sought detailed information on seven sources of instructional feedback. Instructors' utilization and perceived value of those sources were examined, along with fulfillment of psychological needs and present job satisfaction. Instructors from Western Kentucky University (WKU; N = 126) were solicited as participants. An online survey included five different measures.
The first, a Sources of Feedback Questionnaire, was created to examine various sources of instructional feedback utilized by participants (institutional student ratings, consultation with faculty, soliciting feedback from students, self-assessment, self-observation, peer/administrator observation, and team teaching). The second, adapted from the Basic Needs Satisfaction questionnaire (Deci et al., 2001), was based on the proposal that instructors whose basic psychological needs are satisfied will show greater job satisfaction. The third was a measure of present job satisfaction (Larkin, 1990; Oshagbemi, 1995; Oshagbemi, 1999). The fourth was a measure of competence valuation (Elliot et al., 2000). The final measure was a basic questionnaire created to obtain demographic information for each participant. Of the seven sources of feedback studied, self-assessment (i.e., reflection) was found to be the most utilized source, whereas self-observation (i.e., videotaping) was found to be the least utilized. Soliciting feedback from students was found to be the most helpful source for improving an instructor's effectiveness and the most useful for improving teaching; institutional student ratings were found to be the least helpful. Job satisfaction was significantly correlated with the three basic psychological needs, with two other items from the basic needs questionnaire (enjoyment and effort), and with competence valuation and the utilization of institutional student ratings. In terms of fulfillment of the basic psychological needs, relatedness was the only need found to be significantly correlated with utilization of feedback sources.
7.
Teacher Evaluation as a Function of Leadership Style: A Multiple-Correlational Approach. Swanson, Ronald G., 05 1900.
One of the most persistent issues in contemporary organizations has been how to evaluate individual performance. Basically, the problem is who should evaluate whom, and against what productivity criterion. Educational institutions have been the organizations most concerned with this dilemma in recent years; as recently as September 1973, teachers went on strike over accountability procedures. This study was conducted to identify which mode of teacher evaluation was most efficient, based on a fairly objective performance criterion, and to establish a basis for viewing teaching style as leadership style. In existing research, superior ratings were the most used evaluation measure, student ratings were a rapidly growing mode of evaluation, self-ratings were considered biased, and peer ratings were used very little. Hence, who should do the evaluating remained an unsolved problem. All four evaluation modes were employed in the present study for comparison.
8.
Psychometric Properties of Postsecondary Students' Course Evaluations. Drysdale, Michael J., 01 December 2010.
Several experts in the area of postsecondary student evaluations of courses have concluded that they are stable, reliable measures that also support valid inferences regarding teacher effectiveness. Often these experts have offered these conclusions without supporting evidence. Surprisingly, a thorough review of the literature revealed very few reported test-retest reliability studies of course evaluations, and the results of those studies are contradictory. In the area of validity, the conclusions offered by scholars who conducted meta-analyses of multisection course studies are inconsistent. This leads to the following two research questions:
1. What is the test-retest reliability over a 3-week period of the course evaluation currently employed at Utah State University?
2. Can results of the course evaluation employed at Utah State University be used to make valid inferences about a teacher's effectiveness?
A two-part study was conducted to answer these questions. First, a test-retest reliability study was conducted with students from courses at Utah State University, with a 3-week lapse between administrations of the course evaluations. Second, a multisection course validity study was conducted using existing student ratings data and final examination scores for 100 sections of MATH 1010 over a 5-year period. Correlational analyses were conducted on the resulting data from both parts. Test-retest reliability coefficients ranged from 0.64 to 0.94. In the validity part, correlation coefficients ranged from -0.39 to 0.71, with mean coefficients of 0.14 for final examination score by instructor rating and 0.11 for final examination score by course rating. Results from both parts suggest that the course evaluation used at USU is not reliable and that its results do not support valid inferences regarding teacher effectiveness.
9.
Counseling Graduate Students’ Preference for Qualities Pertaining to Teaching Effectiveness. Kreider, Valerie A.L., 30 April 2009.
No description available.
10.
Student Ratings of Instruction: Examining the Role of Academic Field, Course Level, and Class Size. Laughlin, Anne Margaret, 11 April 2014.
This dissertation investigated the relationship between course characteristics and student ratings of instruction at a large research-intensive university. Specifically, it examined the extent to which academic field, course level, and class size were associated with variation in mean class ratings. Past research consistently identifies differences between student ratings in different academic fields, but offers no unifying conceptual framework for the definition or categorization of academic fields. Therefore, two different approaches to categorizing classes into academic fields were compared: one based on the institution's own academic college system and one based on Holland's (1997) theory of academic environments.
Because the data violated assumptions of normality and homogeneity of variance, traditional ANOVA procedures were followed by post-hoc analyses using bootstrapping to more accurately estimate standard errors and confidence intervals. Bootstrapping was also used to determine the statistical significance of a difference between the effect sizes of academic college and Holland environment, a situation for which traditional statistical tests have not been developed.
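The study's exact bootstrap procedure is not given; a generic percentile bootstrap, which resamples the data with replacement and takes empirical quantiles of the recomputed statistic, is one standard way to get standard errors and confidence intervals without normality assumptions. The data here are synthetic:

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary
    statistic: resample with replacement, recompute the statistic,
    and take empirical quantiles of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boots = np.array([stat(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Hypothetical class-mean ratings; 95% CI for the overall mean.
ratings = np.random.default_rng(1).normal(4.0, 0.5, size=120)
lo, hi = bootstrap_ci(ratings, np.mean)
```

The same machinery extends to the comparison described above: bootstrap the difference between two effect sizes and check whether the resulting interval excludes zero.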
Findings replicate the general pattern of academic field differences found in prior research on student ratings and offer several unique contributions. They confirm the value of institution-specific approaches to defining academic fields and also indicate that Holland's theory of academic environments may be a useful conceptual framework for making sense of academic field differences in student ratings. Building on past studies that reported differences in mean ratings across academic fields, this study describes differences in the variance of ratings across academic fields. Finally, this study shows that class size and course level may impact student ratings differently - in terms of interaction effects and magnitude of effects - depending on the academic field of the course. / Ph. D.