1

The Implementation of a Confidence-based Assessment Tool Within an Aviation Training Program

Novacek, Paul F. 08 1900
Traditional use of the multiple-choice question rewards a student for guessing. This technique encourages rote memorization of questions to pass a lengthy exam and does not promote comprehensive understanding or subject correlation, which raises the question: do we really want question memorizers operating the machinery of our industrialized society? In an effort to identify guessing during an exam within a safety-critical aviation pilot training course, a qualitative research study was undertaken that introduced a confidence-based element to the end-of-ground-school exam, followed by flight simulator sessions. The research goals were twofold: to clearly identify correct guesses, and to provide an evidence-based snapshot of aircraft systems knowledge to be used as a formative study aid for the remainder of the course. Pilot and instructor interviews were conducted to gather perceptions and opinions about the effectiveness of the confidence-based assessment tool. The overall positive interview comments confirmed that the pilots and flight instructors used the confidence-based assessments as intended, identifying weak knowledge areas and planning their remaining study time around them. The study found that, if properly trained and administered, especially through a computer-based medium, a robust confidence-based assessment tool would be minimally burdensome while offering worthwhile benefits.
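The listing does not describe how the tool scored answers; purely as an illustration of the confidence-based idea, the sketch below shows one assumed way a confidence-weighted multiple-choice score can reward calibrated answers and flag items answered correctly but with low confidence. The weights, penalties, and the guess-flagging rule are assumptions, not details taken from the study.

```python
# Illustrative sketch (not the study's actual tool): a confidence-weighted
# scoring rule for multiple-choice answers. Weights, penalties, and the
# "likely guess" rule are assumptions chosen for clarity.

def score_item(correct: bool, confidence: str) -> float:
    """Score one answer given a self-reported confidence level."""
    weights = {"high": 1.0, "medium": 0.75, "low": 0.5}    # assumed credit
    penalty = {"high": -0.5, "medium": -0.25, "low": 0.0}  # assumed penalties
    return weights[confidence] if correct else penalty[confidence]

def flag_likely_guesses(responses):
    """Return indices of answers that were correct but given with low confidence."""
    return [i for i, (correct, conf) in enumerate(responses)
            if correct and conf == "low"]

# Example: three answers as (correct?, confidence) pairs
responses = [(True, "high"), (True, "low"), (False, "high")]
total = sum(score_item(c, conf) for c, conf in responses)
print(total, flag_likely_guesses(responses))  # 1.0 [1]
```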
2

The Impact of Teachers' Physical Attractiveness and Sense of Humor on Learning Attention and Efficiency

Tsai, Chih-Yung 11 September 2012
Attractive people are more likely to be considered as having better abilities, personalities, and interpersonal relationships. Does the physical attractiveness of a teacher affect students' first impression of that teacher, and further affect learning concentration and achievement? Second, humor is a type of philosophy and wisdom. People with a good sense of humor are more approachable and have more advantages in interpersonal relationships. Is a humorous teacher capable of creating a happy learning environment that enhances the learning concentration and achievement of students? Junior teachers spend less time teaching and are typically viewed less favorably than senior teachers regarding knowledge structure, teaching strategy, and their understanding of students' learning difficulties. Are senior teachers with many years of experience capable of improving the learning concentration and achievement of students because of their greater teaching experience? In this study, we attempt to understand the influence of teacher characteristics on the learning concentration and achievement of students. We therefore adopted a quasi-experimental method: teachers with various levels of physical attractiveness, sense of humor, and years of teaching experience were selected to teach students, and the learning concentration and achievement of the students were analyzed and compared. The results were as follows. First, for learning concentration, the physical attractiveness of a teacher is negatively correlated with learning concentration; the sense of humor of a teacher is positively correlated with learning concentration; years of teaching experience is positively correlated with learning concentration; and teachers' physical attractiveness, sense of humor, and years of teaching experience have an interactive effect on learning concentration. Second, for learning achievement, the physical attractiveness of a teacher is not significantly correlated with learning performance; the years of teaching experience of a teacher is not significantly correlated with learning performance; the sense of humor of a teacher is positively correlated with learning achievement; and the interactions among teachers' physical attractiveness, sense of humor, and years of teaching experience did not show a significant correlation with learning achievement. Based on these results, we suggest that the myth about physical attractiveness should be dispelled and that educational training regarding teachers' sense of humor should be strengthened. We recommend that future research include in-depth investigations of the differences between Asians and Caucasians regarding physical attractiveness and teacher charisma.
3

Students' familiarity with the narrator in multimedia learning material

Ben-Dror, Yaffa January 2014
This study examines how students' familiarity with the narrator of video tutorials, in a blended learning setting, influences both the perceived and the actual effectiveness of the learning materials, measured in terms of students' learning efficiency. The course in question was traditional in format, with online learning carried out through Narrated Video Screen Captures (NVSCs). The study also focused on the interaction of student-narrator gender similarity, and of students' individual differences (conscientiousness and test anxiety), with voice familiarity. It thus sought to fill a gap in knowledge regarding the influence of familiarity with the narrator in multimedia learning material on the efficiency of learning within a blended learning context. The research paradigm was deductive, employing a mixed-methods, case-study design with quasi-experiments. To compare the relational efficiency of the different instructional conditions, a calculative approach was used that combined measurement of mental effort with task performance. In addition to the mental effort questionnaires and task performance measures, students completed an assessment questionnaire for the NVSCs, and semi-structured interviews and a follow-up questionnaire were used to collect corroborative data. Findings showed a significant influence of voice familiarity on most of the learning efficiency indices and on the perceived effectiveness of NVSCs. Gender similarity was significant only with an unfamiliar voice, and there was no significant interaction between conscientiousness or test anxiety and voice familiarity. It was therefore concluded that when students have a personal relationship with the class teacher, exposure to multimedia learning materials with an unfamiliar narrator has an adverse influence on their learning efficiency. These findings add to the established voice-related principles of the Cognitive Theory of Multimedia Learning and Social Agency Theory, contributing to knowledge in the area of multimedia instructional design.
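The abstract does not state the formula behind the calculative approach; a widely used way to combine standardized mental-effort and task-performance scores into a single efficiency index is E = (Z_performance - Z_effort) / sqrt(2) (Paas and van Merrienboer, 1993). The sketch below computes that index on made-up data and is an assumption about, not a quotation of, the study's method.

```python
# Illustrative sketch only: a common learning-efficiency index combining
# standardized task performance and mental effort. Data are hypothetical.
import math
import statistics

def z_scores(values):
    """Standardize a list of scores to zero mean and unit standard deviation."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

performance = [62, 75, 88, 70, 93]   # hypothetical task-performance scores
effort = [7, 6, 4, 6, 3]             # hypothetical mental-effort ratings (1-9)

# E = (z_performance - z_effort) / sqrt(2), per learner
efficiency = [(zp - ze) / math.sqrt(2)
              for zp, ze in zip(z_scores(performance), z_scores(effort))]
print([round(e, 2) for e in efficiency])
```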
4

Three Essays in Business Cycles

Karimzada, Muhebullah January 2023
In chapter one of the thesis, we incorporate shocks to the efficiency with which firms learn from production activity and accumulate knowledge into an otherwise standard real DSGE model with imperfect competition. Using real aggregate data and Bayesian inference techniques, we find that learning efficiency shocks are an important source of observed variation in the growth rate of aggregate output, investment, consumption and especially hours worked in post-war US data. The estimated shock processes suggest that much less exogenous variation in preferences and total factor productivity is needed by our model to account for the joint dynamics of consumption and hours. This occurs because learning efficiency shocks induce shifts in labour demand uncorrelated with current TFP, a role usually played by preference shocks, which shift labour supply. At the same time, knowledge capital acts as an endogenous source of productivity variation in the model. Measures of model fit prefer the specification with learning efficiency shocks, and the results are robust to the addition of many observables and shocks. In chapter 2, I estimate a learning-by-doing model with learning efficiency shocks using Bayesian estimation techniques and real aggregate data from the Euro Area. I find that learning efficiency shocks explain a large fraction of the fluctuations in the growth rate of real aggregate variables such as consumption, output, investment and employment. This is the first paper to estimate a learning-by-doing model with learning efficiency shocks for the Euro Area and to analyse its business cycles. In chapter 3, we study the impact of the COVID-19 pandemic on the Canadian housing market. Like almost every other country in the world, Canada was hit hard by the COVID-19 pandemic. The residential real estate market, which makes a significant contribution to the Canadian economy, however behaved far differently in the wake of the COVID-19 downturn. Unlike in previous recessions, the housing market recovered much faster, and house prices increased steadily from 2020:QII. Since the pandemic started, working from home (WFH) has become more prevalent. How important is WFH in producing the large swings in house prices observed in the data? To address this question, we estimate an augmented New Keynesian model with collateralized household debt and a remote-working condition. We argue that the remote-working condition improves the performance of the model, particularly in explaining house price dynamics over the last two years. / Thesis / Doctor of Philosophy (PhD)
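The thesis's exact model is not given in this listing; purely to illustrate the mechanism described above, a stylized learning-by-doing block with a learning efficiency shock might look as follows, where H_t is knowledge capital, Y_t is output, and mu_t is the shock. The functional forms and symbols are assumptions, not the thesis's specification.

```latex
% Stylized illustration (not the thesis's specification): knowledge capital
% accumulates from production activity, and the efficiency of that
% accumulation is hit by an exogenous, persistent shock \mu_t.
\begin{aligned}
  H_{t+1} &= (1-\delta_H)\,H_t + \mu_t\,Y_t^{\gamma}, \\
  \ln \mu_t &= \rho_\mu \ln \mu_{t-1} + \varepsilon^{\mu}_t,
  \qquad \varepsilon^{\mu}_t \sim N\!\left(0,\sigma_\mu^2\right).
\end{aligned}
```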
5

Achieving More with Less: Learning Generalizable Neural Networks With Less Labeled Data and Computational Overheads

Bu, Jie 15 March 2023
Recent advancements in deep learning have demonstrated its incredible ability to learn generalizable patterns and relationships automatically from data in a number of mainstream applications. However, the generalization power of deep learning methods largely comes at the cost of working with very large datasets and using highly compute-intensive models. Many applications cannot afford the costs needed to ensure the generalizability of deep learning models. For instance, obtaining labeled data can be costly in scientific applications, and using large models may not be feasible in resource-constrained environments involving portable devices. This dissertation aims to improve efficiency in machine learning by exploring different ways to learn generalizable neural networks that require less labeled data and fewer computational resources. We demonstrate that using physics supervision in scientific problems can reduce the need for labeled data, thereby improving data efficiency without compromising model generalizability. Additionally, we investigate the potential of transfer learning powered by transformers in scientific applications as a promising direction for further improving data efficiency. On the computational efficiency side, we present two efforts for increasing the parameter efficiency of neural networks through novel architectures and structured network pruning. / Doctor of Philosophy / Deep learning is a powerful technique that can help us solve complex problems, but it often requires a lot of data and resources. This research aims to make deep learning more efficient so that it can be applied in more situations. We propose ways to make deep learning models require less data and less computing power. For example, we leverage physics rules as additional information for training neural networks to learn from less labeled data, and we use a technique called transfer learning to leverage knowledge from data drawn from other distributions. Transfer learning may allow us to further reduce the need for labeled data in scientific applications. We also look at ways to make deep learning models use fewer computational resources by reducing their sizes through novel architectures or by pruning out redundant structures.
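The dissertation's specific formulations are not reproduced in this listing; as a generic, assumed illustration of physics supervision, the sketch below adds a physics-residual penalty to an ordinary data loss so that unlabeled points can still constrain the model. The model, residual, and weighting are placeholders.

```python
# Generic physics-supervision sketch (an assumption for illustration, not the
# dissertation's method): combine a supervised loss on the few labeled points
# with a physics-residual loss evaluated on unlabeled points.
import numpy as np

def data_loss(model, x_labeled, y_labeled):
    """Mean squared error on the (small) labeled set."""
    return float(np.mean((model(x_labeled) - y_labeled) ** 2))

def physics_loss(model, x_unlabeled, residual_fn):
    """Penalty for violating a known governing relation on unlabeled inputs."""
    return float(np.mean(residual_fn(model, x_unlabeled) ** 2))

def total_loss(model, x_labeled, y_labeled, x_unlabeled, residual_fn, lam=1.0):
    # lam trades off fitting labels against satisfying the physics constraint
    return data_loss(model, x_labeled, y_labeled) + lam * physics_loss(
        model, x_unlabeled, residual_fn)

# Toy example: the assumed "physics" says the output should equal 2*x.
model = lambda x: 2.0 * x + 0.1            # hypothetical model
residual = lambda m, x: m(x) - 2.0 * x     # residual of the governing relation
x_lab, y_lab = np.array([1.0, 2.0]), np.array([2.0, 4.0])
x_unl = np.linspace(0.0, 5.0, 50)
print(total_loss(model, x_lab, y_lab, x_unl, residual))
```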
6

Analysis of the actions of users of a mathematics training-simulator information system for schoolchildren : master's thesis

Ponomarenko, M. I. January 2022
The relevance of this work stems from the fact that learning any particular skill requires constant evaluation so that an individual learning trajectory can be adjusted over time. The scientific novelty lies in the development and practical application of a methodology for analyzing the actions of users of an educational information system that takes individual learning curves into account. The practical significance is that this methodology, based on the analysis of learning curves, can be applied to the study of any skill.
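The listing does not describe the analysis procedure; as an assumed illustration of how individual learning curves are commonly summarized, the sketch below fits a power-law curve to one user's per-attempt completion times. The data and functional form are illustrative only.

```python
# Illustrative sketch (an assumption, not the thesis's methodology): fit a
# power-law learning curve T(n) = a * n**(-b) to one user's per-attempt times
# by linear regression in log-log space. Data are made up.
import numpy as np

attempts = np.arange(1, 11)  # attempt number n
times = np.array([52, 41, 36, 30, 29, 26, 24, 23, 22, 21], dtype=float)  # seconds

# log T = log a - b * log n  ->  ordinary least squares on the logs
slope, intercept = np.polyfit(np.log(attempts), np.log(times), deg=1)
a, b = np.exp(intercept), -slope
print(f"fitted curve: T(n) ~= {a:.1f} * n^(-{b:.2f})")  # b acts as the learning rate
```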
7

Optimizing Accuracy-Efficiency Tradeoffs in Emerging Neural Workloads

Amrit Nagarajan (17593524) 11 December 2023
Deep Neural Networks (DNNs) are constantly evolving, enabling the power of deep learning to be applied to an ever-growing range of applications, such as Natural Language Processing (NLP), recommendation systems, graph processing, etc. However, these emerging neural workloads present large computational demands for both training and inference. In this dissertation, we propose optimizations that take advantage of the unique characteristics of different emerging workloads to simultaneously improve accuracy and computational efficiency.

First, we consider Language Models (LMs) used in NLP. We observe that the design process of LMs (pre-train a foundation model, and subsequently fine-tune it for different downstream tasks) leads to models that are highly over-parameterized for the downstream tasks. We propose AxFormer, a systematic framework that applies accuracy-driven approximations to create accurate and efficient LMs for a given downstream task. AxFormer eliminates task-irrelevant knowledge and helps the model focus only on the relevant parts of the input.

Second, we find that during fine-tuning of LMs, the presence of variable-length input sequences necessitates the use of padding tokens when batching sequences, leading to ineffectual computations. It is also well known that LMs over-fit to the small task-specific training datasets used during fine-tuning, despite the use of known regularization techniques. Based on these insights, we present TokenDrop + BucketSampler, a framework that synergistically combines a new regularizer, which drops a random subset of insignificant words in each sequence in every epoch, with a length-aware batching method, to simultaneously reduce padding and address over-fitting (see the sketch after this abstract).

Next, we address the computational challenges of Transformers used for processing inputs of several important modalities, such as text, images, audio and video. We present Input Compression with Positional Consistency (ICPC), a new data augmentation method that applies varying levels of compression to each training sample in every epoch, thereby simultaneously reducing over-fitting and improving training efficiency. ICPC also enables efficient variable-effort inference, where easy samples can be inferred at high compression levels, and vice versa.

Finally, we focus on optimizing Graph Neural Networks (GNNs), which are commonly used for learning on non-Euclidean data. Few-shot learning with GNNs is an important challenge, since real-world graph data is often sparsely labeled. Self-training, wherein the GNN is trained in stages by augmenting the training data with a subset of the unlabeled data and their pseudo-labels, has emerged as a promising approach. However, self-training significantly increases the computational demands of training. We propose FASTRAIN-GNN, a framework for efficient and accurate self-training of GNNs with few labeled nodes. FASTRAIN-GNN optimizes the GNN architecture, training data, training parameters, and graph topology during self-training.

At inference time, we find that ensemble GNNs are significantly more accurate and robust than single-model GNNs, but they suffer from high latency and storage requirements. To address this challenge, we propose GNN Ensembles through Error Node Isolation (GEENI). The key concept in GEENI is to identify nodes that are likely to be incorrectly classified (error nodes) and suppress their outgoing messages, leading to simultaneous accuracy and efficiency improvements.
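The TokenDrop + BucketSampler implementation itself is not part of this listing; the sketch below is a minimal illustration, under assumed names and parameters, of the length-aware batching idea: sort sequences by length and batch neighbors together so that each batch needs little padding.

```python
# Minimal illustration of length-aware batching (inspired by the BucketSampler
# idea described above, not the dissertation's implementation): batch sequences
# of similar length together so little padding is needed.
import random

def bucketed_batches(lengths, batch_size, shuffle=True, seed=0):
    """Yield lists of sequence indices grouped so each batch has similar lengths."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])  # sort by length
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    if shuffle:  # shuffle the order of batches, not the contents within a batch
        random.Random(seed).shuffle(batches)
    return batches

# Example: token counts of 8 hypothetical sequences
lengths = [12, 87, 15, 90, 14, 85, 11, 88]
for batch in bucketed_batches(lengths, batch_size=4):
    pad_to = max(lengths[i] for i in batch)
    wasted = sum(pad_to - lengths[i] for i in batch)
    print(batch, "pad to", pad_to, "wasted tokens:", wasted)
```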
8

How much abstraction does learning efficiency tolerate? / A discussion of the communication of a model-based education message at physical education colleges in Switzerland, in the tension between reduction and complexity

Disler, Pius 09 December 2005
No description available.
