31

Seleção de modelos multiníveis para dados de avaliação educacional / Selection of multilevel models for educational evaluation data

Fabiano Rodrigues Coelho 11 August 2017 (has links)
When a dataset has a hierarchical structure, a possible approach is multilevel regression modeling, justified by the significant share of the data variability that can be explained at the macro levels. In this work, a selection procedure for multilevel regression models applied to educational data is developed. The analysis is divided into two parts: variable selection and model selection. The latter is subdivided into two cases: classical modeling and Bayesian modeling. Criteria such as the Lasso, AIC, BIC, and WAIC, among others, are used to identify the factors that influence the mathematics performance of ninth-grade students in the elementary schools of the State of São Paulo. The behavior of each variable-selection and model-selection criterion is also investigated. Under the frequentist approach, BIC proved the most efficient model-selection criterion, whereas under the Bayesian approach, WAIC gave the best results. Using the Lasso for variable selection in the classical approach reduced the number of predictors in the model by 34%. Finally, the mathematics performance of ninth-grade students in the State of São Paulo was found to be influenced by the following covariates: the mother's level of education, frequency of book reading, time spent on recreation on school days, liking mathematics, the school's overall mathematics performance, the student's performance in Portuguese, the school's administrative dependence, sex, the father's level of education, grade retentions, and age-grade distortion.
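The Lasso step described in this abstract can be pictured with a short sketch (not the author's code): fit an L1-penalised regression on standardised predictors and count how many coefficients are shrunk to zero. The file and column names below are hypothetical stand-ins for the São Paulo ninth-grade data, and the sketch ignores the multilevel (school-level) structure that the thesis models and compares with BIC and WAIC.

```python
# Minimal Lasso variable-selection sketch; data file and columns are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("saresp_grade9.csv")                # hypothetical file name
y = df["math_score"]                                 # hypothetical response column
X = df.drop(columns=["math_score", "school_id"])     # hypothetical predictor columns

X_std = StandardScaler().fit_transform(X)            # Lasso needs comparable scales
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)  # penalty chosen by cross-validation

kept = np.flatnonzero(lasso.coef_)                   # predictors with non-zero coefficients
print(f"kept {kept.size} of {X.shape[1]} predictors "
      f"({1 - kept.size / X.shape[1]:.0%} dropped)")
print([X.columns[i] for i in kept])
```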
32

Investigating Differences in Formative Critiquing between Instructors and Students in Graphic Design

Liwei Zhang (6635930) 15 May 2019 (has links)
Critique is an essential skill that professional designers use to communicate the success and failure of a design with others. For graphic design educators, including critique in their pedagogical approaches enables students to improve both their design capability and their critique skills. Adaptive Comparative Judgment (ACJ) is an innovative approach to assessment in which students and instructors compare two designs and choose the better of the two. The purpose of this study was to investigate the differences between instructors' and students' critiquing practices. The data were collected through think-aloud protocol methods while both groups critiqued the same design projects.

The results indicate that students took longer to finish the same number of critiques as instructors. Students spent more time describing their personal feelings, evaluating each individual design, and looking for the right phrases to express their thoughts on a design precisely. Instructors, with more teaching experience, completed the critiques more quickly and justified their decisions more succinctly, with efficient use of terminology and a reliance on their instincts.
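For readers unfamiliar with ACJ, the sketch below shows, under stated assumptions, how a set of "choose the better of two" judgments can be turned into a rank order with a simple Bradley-Terry fit. The design IDs and judgments are hypothetical; real ACJ systems pair items adaptively and estimate parameters more robustly than this toy loop.

```python
# Toy Bradley-Terry ranking from pairwise judgments (hypothetical data).
from collections import defaultdict

# Each tuple records one judgment: (winner, loser)
judgments = [("design_A", "design_B"), ("design_C", "design_A"),
             ("design_C", "design_B"), ("design_A", "design_B")]

designs = sorted({d for pair in judgments for d in pair})
wins = defaultdict(int)       # total wins per design
pairs = defaultdict(int)      # number of comparisons per unordered pair
for winner, loser in judgments:
    wins[winner] += 1
    pairs[frozenset((winner, loser))] += 1

# Bradley-Terry strengths via a few MM updates (Hunter, 2004)
strength = {d: 1.0 for d in designs}
for _ in range(50):
    new = {}
    for i in designs:
        denom = sum(pairs[frozenset((i, j))] / (strength[i] + strength[j])
                    for j in designs if j != i)
        new[i] = wins[i] / denom if denom else strength[i]
    total = sum(new.values())
    strength = {d: s / total for d, s in new.items()}

for d in sorted(designs, key=strength.get, reverse=True):
    print(f"{d}: {strength[d]:.3f}")
```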
33

Cultures of writing: The state of transfer at state comprehensive universities

Derek R Sherman (10947219) 04 August 2021 (has links)
The Elon Research Seminar, Critical Transitions: Writing and the Question of Transfer, was a coalition of rhetoric and composition scholars' attempt at codifying writing-transfer knowledge for teaching and research purposes. Although the seminar was an important leap in transfer research, many "behind the scenes" decisions about writing transfer, often those not involving the writing program, go unnoticed, yet play a pivotal role in how writing programs encourage and reproduce writing transfer in the classroom. This dissertation study, inspired by a Fall 2018 pilot study on writing across the curriculum programs and their role in writing transfer, illustrates how an institution's context systems (e.g., macrosystem, mesosystem, microsystem) affect writing programs' processes, that is, curriculum components, assessment, and administrative structure and budget, and vice versa. Using Bronfenbrenner and Morris's (2006) bioecological model, I show how writing programs and their context systems interact to reproduce writing-transfer practices. Through ten interviews with writing program administrators at state comprehensive universities, I delineate specific actions that each writing program could take to encourage writing transfer, and I develop a list of the roles and responsibilities that a university's context systems play in advocating for writing-transfer practices. The results of the study show that research beyond the writing classroom and students is necessary to understand how writing-transfer opportunities arise in university cultures of writing.
34

New Forms of Assessment in the South African Curriculum Assessment Guidelines: What Powers do Teachers Hold?

Mwakapenda, Willy 07 May 2012 (has links)
This article opens up a discussion of the power that teachers have in the mathematics curriculum at the Further Education and Training level. It relates to the general question: who holds the power in school mathematics education in South Africa? To what extent are teachers given an opportunity to exercise their power in mathematics assessment? If teachers are given power, what does that power allow them to do, and under what conditions? The case of mathematics is presented here to illustrate these complex questions of teacher power in the new forms of assessment in the curriculum.
35

Evaluating utility of the National Survey of Student Engagement subscores for institutional assessment in higher education

Winkler, Christa Elisa 01 October 2020 (has links)
No description available.
36

1500 Students and Only a Single Cluster? A Multimethod Clustering Analysis of Assessment Data from a Large, Structured Engineering Course

Taylor Williams (13956285) 17 October 2022 (has links)
Clustering, a prevalent class of machine learning (ML) algorithms used in data mining and pattern finding, has increasingly helped engineering education researchers and educators see and understand assessment patterns at scale. However, a challenge remains to make ML-enabled educational inferences that are useful and reliable for research or instruction, especially if those inferences influence pedagogical decisions or student outcomes. ML offers an opportunity to better personalize learners' experiences using those inferences, even within large engineering classrooms. However, neglecting to verify the trustworthiness of ML-derived inferences can have wide-ranging negative impacts on the lives of learners.

This study investigated what student clusters exist within the standard operational data of a large first-year engineering course (>1500 students). The course focuses on computational thinking skills for engineering design. The clustering data set included approximately 500,000 assessment data points recorded with a consistent five-scale criterion-based grading framework. Two clustering techniques, N-TARP profiling and K-means clustering, were applied to the criterion-based assessment data to identify student cluster sets. N-TARP profiling is an expansion of the N-TARP binary clustering method and is well suited to this course's assessment data because of the large and potentially high-dimensional nature of the data set. K-means clustering is one of the oldest and most widely used clustering methods in educational research, making it a good candidate for comparison. After clusters were found, their interpretability and trustworthiness were assessed. The following research questions structured the study: RQ1: What student clusters do N-TARP profiling and K-means clustering identify when applied to structured assessment data from a large engineering course? RQ2: What are the characteristics of an average student in each cluster, and how well does the average student in each cluster represent the students of that cluster? RQ3: What are the strengths and limitations of using N-TARP and K-means clustering techniques with large, highly structured engineering course assessment data?

Although both K-means clustering and N-TARP profiling identified potential student clusters, neither method's clusters were verifiable or replicable. Such dubious results suggest that a better interpretation is that all student performance data from this course exist in a single homogeneous cluster. The study further demonstrated the utility and precision of N-TARP's warning (its W value) that the clustering results within this educational data set were not trustworthy. Providing such a warning is rare among the thousands of available clustering methods; most (including K-means) will return clusters regardless. When a clustering algorithm identifies false clusters that lack meaningful separation or differences, incorrect or harmful educational inferences can result.
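As a rough illustration of the verification issue this abstract raises, the sketch below (not the study's code) runs K-means over a placeholder student-by-criterion score matrix and reports a silhouette score for each candidate number of clusters; near-zero silhouettes at every k are the kind of signal that points toward a single homogeneous cluster. N-TARP profiling is omitted because no standard public implementation can be assumed here.

```python
# K-means plus a basic cluster-validity check on placeholder assessment data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 40))   # placeholder for ~1500 students' criterion scores

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    print(f"k={k}: silhouette={score:.3f}")
# Near-zero silhouette at every k is consistent with the study's conclusion that
# the data form a single homogeneous cluster rather than distinct student groups.
```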
37

Capturing L2 Oral Proficiency with CAF Measures as Predictors of the ACTFL OPI Rating

Mayu Miyamoto (6634307) 14 May 2019 (has links)
Despite an emphasis on oral communication in most foreign language classrooms, the resource-intensive nature (i.e., time and manpower) of speaking tests hinders regular oral assessment. A possible solution is the development of a (semi-)automated scoring system. When used in conjunction with human raters, the consistency of computers can complement human raters' comprehensive judgments and increase scoring efficiency (e.g., Enright & Quinlan, 2010). In search of objective and quantifiable variables that are strongly correlated with overall oral proficiency, a number of studies have reported that some utterance fluency variables (e.g., speech rate and mean length of run) may be strong predictors of L2 learners' speaking ability (e.g., Ginther et al., 2010; Hirotani et al., 2017). However, these findings are difficult to generalize because of small sample sizes, narrow ranges of proficiency levels, and/or a lack of data from languages other than English. The current study analyzed spontaneous speech samples collected from 170 learners of Japanese across a wide range of proficiency levels determined by a well-established speaking test, the American Council on the Teaching of Foreign Languages' (ACTFL) Oral Proficiency Interview (OPI). Prior to analysis, 48 Complexity, Accuracy, Fluency (CAF) measures (with a focus on fluency variables) were calculated from the speech samples. First, the study examined the relationships among the CAF measures and learners' oral proficiency as assessed by the ACTFL OPI. Then, using an empirically based approach, the feasibility of using a composite measure to predict L2 oral proficiency was investigated. The results revealed that Speech Speed and Complexity variables correlated strongly with the OPI levels, and moderately strong correlations were found for variables in the following categories: Speech Quantity, Pause, Pause Location (i.e., silent pause ratio within AS-unit), Dysfluency (i.e., repeat ratio), and Accuracy. A series of multiple regression analyses then revealed that a combination of five CAF measures (effective articulation rate, silent pause ratio, repeat ratio, syntactic complexity, and error-free AS-unit ratio) predicts 72.3% of the variance in the OPI levels. This regression model includes variables corresponding to Skehan's (2009) three categories of fluency (speed, breakdown, and repair) as well as variables representing CAF, supporting the literature (e.g., Larsen-Freeman, 1978; Skehan, 1996).
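The final regression model named in the abstract can be sketched as follows. This is not the thesis code: the CSV and column names are hypothetical stand-ins for the 170-speaker data set, and ordinary least squares is used here as a simplification even though OPI levels are ordinal.

```python
# Regression of OPI level on the five CAF measures named in the abstract;
# file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("japanese_opi_caf.csv")   # hypothetical file
model = smf.ols(
    "opi_level ~ effective_articulation_rate + silent_pause_ratio"
    " + repeat_ratio + syntactic_complexity + error_free_as_unit_ratio",
    data=df,
).fit()
print(f"R^2 = {model.rsquared:.3f}")        # the thesis reports 72.3% explained variance
print(model.summary())
```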
38

A systems analysis of selection for tertiary education: Queensland as a case study

Maxwell, Graham Samuel Unknown Date (has links)
No description available.
