  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

IDENTIFICATION AND EXAMINATION OF KEY COMPONENTS OF ACTIVE LEARNING

Kelly, Darrell Scott January 2016 (has links)
No description available.
332

Active learning of interatomic potentials to investigate thermodynamic and elastic properties of Ti0.5Al0.5N at elevated temperature

Bock, Florian January 2021 (has links)
With the immense increase in computational power available to the materials science community in recent years, a range of new discoveries has become possible. Accurate investigations of large-scale atomic systems, however, still come with an extremely high computational demand. While the recent development of Graphics Processing Unit (GPU) accelerated supercomputing might offer a solution to some extent, most well-known electronic structure codes have yet to be fully ported to utilize this new power. With soaring demand for new and better materials from both science and industry, a more efficient approach to the investigation of material properties is needed. The use of Machine Learning (ML) to obtain Interatomic Potentials (IP) that far outperform classical potentials has grown greatly in recent years. With successful implementations of ML methods utilizing neural networks or Gaussian basis functions, the accuracy of ab-initio methods can be achieved at a computational cost close to that of simulations with empirical potentials. Most ML approaches, however, require high-accuracy data sets to be trained sufficiently. If no such data is available for the system of interest, the immense cost of creating a viable data set from scratch can quickly negate the benefit of using ML. In this diploma project, the elastic and thermodynamic properties of the Ti0.5Al0.5N random alloy at elevated temperature are therefore investigated using an Active Learning (AL) approach with the Machine Learning Interatomic Potentials (MLIP) package. The obtained material properties are found to be in good agreement with results from computationally demanding ab-initio studies of Ti0.5Al0.5N, at a mere fraction of the cost. The AL approach requires no high-accuracy data sets or prior knowledge about the system, as the model is initially trained on low-accuracy data that is removed from the training set (TS) at a later stage. This allows for an iterative process of improving and expanding the data set used to train the IP, without the need for large amounts of data.
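The iterative loop the abstract describes, namely train on cheap data, flag the configurations the current potential cannot be trusted on, label those with an expensive calculation, and retrain, can be sketched in a few lines. The sketch below is a toy illustration and not the MLIP package's actual workflow: the "DFT" oracle, the random-feature regressor, and the distance-based novelty score are all hypothetical stand-ins (MLIP itself uses a D-optimality-based extrapolation grade rather than a distance score).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the thesis, labels come from DFT and the model is
# a moment tensor potential trained with MLIP. Here a toy analytic "energy"
# and ridge regression on random cosine features play those roles.
def oracle_energy(X):
    return np.sin(X).sum(axis=1)                 # pretend ab-initio energies

W = rng.normal(size=(3, 64))                     # fixed random feature weights
def featurize(X):
    return np.cos(X @ W)

def train(X, y, lam=1e-6):
    Phi = featurize(X)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def predict(w, X):
    return featurize(X) @ w

# Candidate "configurations" (here random; in practice sampled by MD runs).
pool = rng.uniform(-np.pi, np.pi, size=(500, 3))
train_X = pool[:5].copy()                        # small low-accuracy seed set
train_y = oracle_energy(train_X)

for step in range(20):
    w = train(train_X, train_y)
    # Novelty score: distance to the nearest training configuration, a crude
    # stand-in for MLIP's extrapolation grade.
    d = np.linalg.norm(pool[:, None] - train_X[None], axis=2).min(axis=1)
    pick = int(np.argmax(d))                     # most out-of-distribution point
    train_X = np.vstack([train_X, pool[pick]])
    train_y = np.append(train_y, oracle_energy(pool[pick][None]))

test_X = rng.uniform(-np.pi, np.pi, size=(200, 3))
err = np.abs(predict(train(train_X, train_y), test_X) - oracle_energy(test_X))
print(f"mean abs error after 20 active-learning steps: {err.mean():.3f}")
```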
333

Exploring playful annotations in interactive textbooks: Engaging the teacher and the learner in an active learning process

Nicolas, Noémie January 2016 (has links)
This thesis explores the potential of playful annotations in interactive textbooks to engage the teacher and the learner in an active learning process. This research focus emerged from a field study consisting of semi-structured interviews, surveys, and discussions with teachers and students at a pilot school using an interactive textbook platform from Gleerups, a Swedish publisher that distributes a wide range of educational textbooks across Sweden. The thesis topic was chosen in order to find and suggest ways to approach the learning and reading phase in an active way while also focusing on the teacher-learner relationship. The design contributions include proposals for improvements, in the form of scenarios and sketches, drawing on field research and qualitative studies. The work is grounded in an analysis of related examples and cross-disciplinary literature on education and learning theories. Finally, a prototype encompassing the main features raised by the research is presented. The thesis ends with outcomes and reflections on the findings, as well as discussions with the stakeholders and teachers who initiated the research.
334

Precision Aggregated Local Models

Edwards, Adam Michael 28 January 2021 (has links)
Large-scale Gaussian process (GP) regression is infeasible for larger data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local-experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows that PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme which greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements. / Doctor of Philosophy / Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian Processes (GPs) are a popular method of non-parametric regression, used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible for large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable to large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
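The core aggregation idea, fitting several cheap local GPs and blending their predictions with weights proportional to each expert's precision (inverse predictive variance), can be illustrated compactly. The following minimal numpy sketch makes simplifying assumptions (fixed evenly spaced expert centers, a unit-variance RBF kernel, 1-D data) and is not the dissertation's PALM implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=0.1):
    return np.exp(-((A[:, None] - B[None]) ** 2).sum(-1) / (2 * ls**2))

def local_gp(Xn, yn, x, noise=1e-4):
    """Posterior mean and variance of a GP fit only to n nearby points."""
    K = rbf(Xn, Xn) + noise * np.eye(len(Xn))
    k = rbf(Xn, x[None]).ravel()
    mu = k @ np.linalg.solve(K, yn)
    var = 1.0 + noise - k @ np.linalg.solve(K, k)
    return mu, max(var, 1e-12)

# Toy data: N pairs from a noisy 1-D function.
N, n, n_experts = 2000, 50, 8
X = rng.uniform(0, 1, size=(N, 1))
y = np.sin(12 * X[:, 0]) + 0.05 * rng.normal(size=N)
centers = np.linspace(0, 1, n_experts)[:, None]  # fixed expert locations

def palm_predict(x):
    mus, precisions = [], []
    for c in centers:
        idx = np.argsort(((X - c) ** 2).sum(1))[:n]  # expert's local design
        mu, var = local_gp(X[idx], y[idx], x)
        mus.append(mu)
        precisions.append(1.0 / var)                 # precision = 1 / variance
    w = np.array(precisions) / sum(precisions)       # precision-weighted blend
    return float(w @ np.array(mus))

x_star = np.array([0.37])
print(palm_predict(x_star), np.sin(12 * 0.37))       # prediction vs. truth
```

Each expert inverts only an n-by-n covariance matrix, so the cost is cubic in n rather than in N, which is where the advertised savings come from.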
335

Evaluating Active Interventions to Reduce Student Procrastination

Martin, Joshua Deckert 21 June 2015 (has links)
Procrastination is a pervasive problem in education. In computer science, procrastination and the lack of the time management skills needed to complete programming projects are viewed as primary causes of student attrition. The most effective techniques known to reduce procrastination are resource-intensive and do not scale well to large classrooms. In this thesis, we examine three course interventions designed both to reduce procrastination and to be scalable to large classrooms. Reflective writing assignments require students to reflect on their time management choices and how these choices impact their classroom performance. Schedule sheets force students to plan out their work on an assignment. E-mail alerts inform students of their current progress as they work on their projects, and provide ideas for improving their work behavior if their progress is found to be unsatisfactory. We implemented these interventions in a junior-level course on data structures. The study was conducted over two semesters, and 330 students agreed to participate. Data collected from these students formed the basis of our analysis of the interventions. We found a statistically significant relationship between the time a project was completed and the quality of that work, with late work being of lower quality. We also found that the e-mail alert intervention had a statistically significant effect on reducing the number of late submissions. This result occurred even though students responded negatively to the treatment. / Master of Science
336

Learning with Constraint-Based Weak Supervision

Arachie, Chidubem Gibson 28 April 2022 (has links)
The recent adoption of machine learning models in many businesses has underscored the need for quality training data. Typically, training supervised machine learning systems involves using large amounts of human-annotated data. Labeling data is expensive and can be a limiting factor in using machine learning models. To enable continued integration of machine learning systems in businesses, and easy access by users, researchers have proposed several alternatives to supervised learning. Weak supervision is one such alternative. Weak supervision, or weakly supervised learning, involves using noisy labels (weak signals of the data) from multiple sources to train machine learning systems. A weak supervision model aggregates multiple noisy label sources, called weak signals, to produce probabilistic labels for the data. The main allure of weak supervision is that it provides a cheap yet effective substitute for supervised learning, without the need for labeled data. The key challenge in training weakly supervised machine learning models is that the weak supervision leaves ambiguity about the possible true labelings of the data. In this dissertation, we aim to address this challenge by developing new weak supervision methods. Our work focuses on learning with constraint-based weak supervision algorithms. First, we propose an adversarial labeling approach for weak supervision, in which an adversary chooses labels for the data and a model learns by minimising the error made by the adversarial model. Second, we propose a simple constraint-based approach that minimises a quadratic objective function in order to solve for the labels of the data. Next, we explain the notion of data consistency for weak supervision and propose a data-consistent method for weakly supervised learning. This approach combines weak supervision labels with features of the training data to make the learned labels consistent with the data. Lastly, we use this data-consistent approach to propose a general approach for improving the performance of weak supervision models, combining weak supervision with active learning to produce a model that outperforms each individual approach using only a handful of labeled data. For each algorithm we propose, we report extensive empirical validation on standard text and image classification datasets. We compare each approach against baseline and state-of-the-art methods and show that in most cases we match or outperform them. We report significant gains of our methods on both binary and multi-class classification tasks. / Doctor of Philosophy / Machine learning models learn to make predictions from data. In supervised learning, a machine learning model is fed data and corresponding labels so that the model can learn to predict labels for new, unseen test data. Curating large, fully supervised datasets is expensive and time-consuming, since it involves subject matter experts providing labels for each individual data example. The cost of collecting labels has become one of the major roadblocks for training machine learning models. An alternative to supervised training of machine learning models is weak supervision. Weak supervision, or weakly supervised learning, trains with cheap and easy-to-define signals that noisily label the data. We refer to these signals as weak signals.
A weak supervision model combines various weak signals to produce training labels for the data. The key challenge in weak supervision is how to combine the different weak signals while navigating misleading correlations in their errors. In this dissertation, we propose several algorithms for weakly supervised learning. We classify our methods as constraint-based weak supervision, since weak supervision is provided as constraints to our algorithms. We use experiments on different text and image classification datasets to show that our methods are effective and outperform competing methods. Lastly, we propose a general framework for improving the performance of weak supervision models by incorporating a few labeled data points. With this method we are able to close the gap to supervised learning without the need to label all the data examples.
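One way to picture the constraint-based formulation: treat the soft labels as free variables, keep them close to the weak signals' average vote via a quadratic objective, and enforce, through Lagrange multipliers, that each signal's expected error stays within an assumed bound. The projected-gradient sketch below illustrates that idea on synthetic data; it is not the dissertation's algorithm, and the signals and error bounds are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary task with three noisy "weak signals" (label votes in {0, 1}).
n = 300
true_y = (rng.uniform(size=n) > 0.5).astype(float)

def weak_signal(flip_rate):
    flips = rng.uniform(size=n) < flip_rate
    return np.abs(true_y - flips)              # flips a fraction of the labels

Q = np.stack([weak_signal(f) for f in (0.15, 0.25, 0.35)])  # (signals, n)
bounds = np.array([0.20, 0.30, 0.40])          # assumed error-rate bounds

# Soft labels y in [0,1]^n: stay close to the signals' average vote
# (quadratic objective) while each signal's expected error respects its
# bound (constraints handled by dual ascent on multipliers lam).
y = np.full(n, 0.5)
lam = np.zeros(len(Q))
for _ in range(500):
    err = (Q * (1 - y) + (1 - Q) * y).mean(axis=1)   # expected error per signal
    grad = (y - Q.mean(axis=0)) + ((1 - 2 * Q) * lam[:, None]).sum(axis=0) / n
    y = np.clip(y - 0.1 * grad, 0.0, 1.0)            # projected gradient step
    lam = np.maximum(lam + 0.1 * (err - bounds), 0.0)

print(f"accuracy of learned labels: {((y > 0.5) == true_y).mean():.2f}")
```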
337

Embracing or resisting evidence-based instruction: Exploring the lasting effect of a sudden pivot to online learning on higher education STEM faculty

Babcock, Jessica, 0009-0008-0758-8309 05 1900 (has links)
There is a significant body of literature showing improved student outcomes in higher education STEM courses when evidence-based instructional practices (EBIPs) are used. Despite this, traditional, lecture-style instruction remains the primary means of instruction in these courses. However, with the sudden shift to online teaching as a result of the COVID-19 pandemic, faculty were participating in training programs with greater frequency, and thus learning more about the use of EBIPs than ever before. Through the lens of Kurt Lewin's theory of organizational change, with its three stages of unfreezing, change, and refreezing, this explanatory mixed-methods study used a survey and interviews to explore whether this shift to online teaching and the resulting increase in training participation did, in fact, result in changes in instructional practices, implementation, and perceptions of EBIPs, and whether any changes were sustained upon the return to in-person instruction. The survey tool used in this study was a subset of the Teaching Practices Inventory, developed by the Carl Wieman Science Education Initiative at the University of British Columbia. This generated a modified "extent of use of research-based teaching practices" (METP) score, as well as METP sub-scores in five subcategories of the survey. These results, along with data from demographic questions and questions about teaching responsibilities and training participation, informed the selection of twelve participants for semi-structured interviews. One-way ANOVA testing showed a statistically significant increase in METP scores (p < .001) from pre-COVID to post-COVID. Statistical significance was also found in the subcategories of In-Class Features (p = .003) and Collaboration (p = .005). Two-way ANOVA testing explored statistical significance across demographic subcategories, and found it for gender, tenure status, and various categories relating to participation in training and professional development. Interview data supported the quantitative analysis and offered further insight and context for the changes that were made and sustained, including changes in the use of educational technology tools, the introduction of authentic learning experiences, the streamlining of content, and the intentional alignment of activities and assessments with course goals. Additional analysis showed that faculty relied on virtual collaboration to develop community with other instructors, and came to appreciate the importance of student feedback in informing their instruction and of fostering a classroom community. Most significantly, the ability to see first-hand the effect of the pandemic on students, and to have a window into their personal lives, caused faculty to make sweeping changes in their beliefs about the affective domains of learning, emphasizing the need for empathy, flexibility, and equity-mindedness in their classrooms. This study showed that faculty became convinced of the need for change, consistent with Lewin's unfreezing stage, not solely through training and professional development, but largely through the realizations about the individuality of students that faculty experienced during the pandemic. This occurred alongside an increase in virtual collaboration and the influence of changes that peers had made and suggested upon the return to in-person instruction.
The recognition of the need to center students in learning, combined with these outside influences, resulted in increased use of EBIPs upon the return to in-person instruction, thereby creating the desired change. Lastly, these practices have been maintained as of two years after the return to in-person instruction, indicating refreezing, and further data showed that faculty continue to adapt their practices to create more inclusive and student-centered learning environments. / Policy, Organizational and Leadership Studies
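For readers unfamiliar with the study's headline test, a one-way ANOVA compares group means via the ratio of between-group to within-group variance. A minimal sketch on synthetic scores (the study's actual METP data are not reproduced here):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

# Synthetic stand-ins for pre- and post-COVID METP scores.
pre_metp = rng.normal(loc=12.0, scale=4.0, size=60)
post_metp = rng.normal(loc=15.5, scale=4.0, size=60)

f_stat, p_value = f_oneway(pre_metp, post_metp)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# With two groups, a one-way ANOVA is equivalent to an independent-samples
# t-test: F equals the square of the t statistic.
```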
338

Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks

Lin, Xiao 05 October 2017 (has links)
Learning and reasoning with common sense is a challenging problem in Artificial Intelligence (AI). Humans have the remarkable ability to interpret images and text from different perspectives in multiple modalities, and to use large amounts of commonsense knowledge while performing visual or textual tasks. Inspired by that ability, we approach commonsense learning as leveraging perspectives from multiple modalities for images and text in the context of vision and language tasks. Given a target task (e.g., textual reasoning, matching images with captions), our system first represents input images and text in multiple modalities (e.g., vision, text, abstract scenes and facts). Those modalities provide different perspectives to interpret the input images and text. Then, based on those perspectives, the system performs reasoning to make a joint prediction for the target task. Surprisingly, we show that interpreting textual assertions and scene descriptions in the modality of abstract scenes improves performance on various textual reasoning tasks, and interpreting images in the modality of Visual Question Answering improves performance on caption retrieval, which is a visual reasoning task. With grounding, imagination and question-answering approaches to interpret images and text in different modalities, we show that learning commonsense knowledge from multiple modalities effectively improves the performance of downstream vision and language tasks, improves the interpretability of the model, and makes more efficient use of training data. Complementary to the model aspect, we also study the data aspect of commonsense learning in vision and language. We study active learning for Visual Question Answering (VQA), where a model iteratively grows its knowledge through querying informative questions about images for answers. Drawing analogies from human learning, we explore cramming (entropy), curiosity-driven (expected model change), and goal-driven (expected error reduction) active learning approaches, and propose a new goal-driven scoring function for deep VQA models under the Bayesian Neural Network framework. Once trained with a large initial training set, a deep VQA model is able to efficiently query informative question-image pairs for answers to improve itself through active learning, saving human effort on commonsense annotations. / Ph. D. / Designing systems that learn and reason with common sense is a challenging problem in Artificial Intelligence (AI). Humans have the remarkable ability to interpret images and text from different perspectives in multiple modalities, and to use large amounts of commonsense knowledge while performing visual or textual tasks. Inspired by that ability, we approach commonsense learning as leveraging perspectives from multiple modalities for images and text in the context of vision and language tasks. Given a target task, our system first represents the input information (e.g., images and text) in multiple modalities (e.g., vision, text, abstract scenes and facts). Those modalities provide different perspectives to interpret the input information. Based on those perspectives, the system performs reasoning to make a joint prediction to solve the target task.
Perhaps surprisingly, we show that imagining (generating) abstract scenes behind input textual scene descriptions improves performance on various textual reasoning tasks such as answering fill-in-the-blank and paraphrasing questions, and answering questions about images improves performance on retrieving image captions. Through the use of perspectives from multiple modalities, our system also makes use of training data more efficiently and has a reasoning process that is easy to understand. Complementary to the system design aspect, we also study the data aspect of commonsense learning in vision and language. We study active learning for Visual Question Answering (VQA). VQA is the task of answering open-ended natural language questions about images. In active learning for VQA, a model iteratively grows its knowledge through querying informative questions about images for answers. Inspired by human learning, we explore cramming (entropy), curiosity-driven (expected model change), and goal-driven (expected error reduction) active learning approaches, and propose a new goal-driven query selection function. We show that once initialized with a large training set, a VQA model is able to efficiently query informative question-image pairs for answers to improve itself through active learning, saving human effort on commonsense annotations.
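Of the three query-scoring strategies named in the abstract, the entropy ("cramming") criterion is the simplest to illustrate, since it needs only the model's predictive distribution; expected model change and expected error reduction additionally require gradients or retraining. A toy sketch with made-up answer distributions:

```python
import numpy as np

rng = np.random.default_rng(4)

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=-1)

# Hypothetical pool of unlabeled question-image pairs: each row holds the
# model's predictive distribution over 10 candidate answers.
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Entropy ("cramming") scoring: query the pairs whose predicted answer
# distribution is most uncertain.
scores = entropy(probs)
query_idx = np.argsort(scores)[::-1][:32]   # the 32 most informative pairs
print(query_idx[:5], scores[query_idx[:5]])
```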
339

Virtual Clicker - A Tool for Classroom Interaction and Assessment

Glore, Nolan David 10 January 2012 (has links)
Actively engaging students in the classroom and promoting their interaction, both amongst themselves and with the instructor, is an important aspect of student learning. Research has demonstrated that student learning improves when instructors use pedagogical techniques that promote active learning. Equally important is instructor feedback from activities such as in-class assessments. Studies have shown that when instructor feedback is given at the time a new topic is introduced, student performance improves. The focus of this thesis is the creation of a software program, Virtual Clicker, which addresses the need for active engagement, in-class feedback, and classroom interaction, even in large classrooms. When properly used, it allows for multi-directional feedback: teacher to student, student to teacher, and student to student. It also supports the use of digital ink for Tablet PCs in this interaction environment. / Master of Science
340

Sensor-Enabled Accelerated Engineering of Soft Materials

Liu, Yang 24 May 2024 (has links)
Many grand societal challenges are rooted in the need for new materials, such as those related to energy, health, and the environment. However, the traditional way of discovering new materials is largely trial and error. This time-consuming and expensive approach cannot meet the rapidly growing demand for new materials. To meet this challenge, the United States government launched the Materials Genome Initiative (MGI) in 2011, which aims to accelerate the pace and reduce the cost of discovering new materials. The success of MGI requires a materials innovation infrastructure of data tools, computation tools, and experiment tools. The last decade has witnessed significant progress toward MGI goals, especially for hard materials. However, relatively less attention has been paid to soft materials. One important reason is the lack of experimental tools, especially characterization tools for high-throughput experimentation. This dissertation aims to enrich the toolbox by applying new sensor tools to the high-throughput characterization of hydrogels. Piezoelectric-excited millimeter-sized cantilever (PEMC) sensors were used in this dissertation to characterize hydrogels. Their capability to investigate hydrogels was first demonstrated by monitoring the synthesis and stimuli-response of composite hydrogels. The PEMC sensors enabled in-situ study of how the manufacturing process, i.e., bulk vs. layer-by-layer, affects the structure and properties of hydrogels. Afterwards, the PEMC sensors were integrated with robots to develop a method of high-throughput experimentation: various hydrogels were formulated in a well-plate format and characterized by the sensor tools in an automated manner. High-throughput characterization, especially multi-property characterization, enabled optimizing formulations to achieve tradeoffs between different properties. Finally, the sensor-based high-throughput experimentation was combined with active learning for accelerated material discovery: a collaborative learning approach was used to guide the high-throughput formulation and characterization of hydrogels, demonstrating rapid discovery of mechanically optimized composite glycogels. Through this dissertation, we hope to provide a new tool for high-throughput characterization of soft materials to accelerate the discovery and optimization of materials. / Doctor of Philosophy / Many grand societal challenges, including those associated with energy and healthcare, are driven by the need for new materials. However, the traditional way of discovering new materials is based on trial and error using low-throughput computational and experimental methods. It often takes years, even decades, to discover and commercialize a new material; the lithium-ion battery is a good example. Such time-consuming and expensive methods cannot meet the fast-growing requirements of modern material discovery. With the development of computer science and automation, the idea of using data, artificial intelligence, and robots for accelerated materials discovery has attracted more and more attention. Significant progress has been made in metals and inorganic non-metallic materials (e.g., semiconductors) over the past decade under the guidance of machine learning and with the assistance of automated robots. However, relatively less progress has been made in materials with complex structures and dynamic properties, such as hydrogels.
Hydrogels have wide applications in our daily lives, such as drug delivery and biomedical devices. One significant barrier to the accelerated discovery and engineering of hydrogels is the lack of tools that can rapidly characterize a material's properties. In this dissertation, a sensor-based approach was created to characterize the mechanical properties and stimuli-response of soft materials using low sample volumes. The sensor was integrated with a robot to test materials in high-throughput formats in a rapid, automated measurement workflow. In combination with machine learning, this high-throughput characterization method was demonstrated to accelerate the engineering and optimization of several hydrogels. Through this dissertation, we hope to provide new tools and methods for the rapid engineering of soft materials.
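The closed loop described here, formulate, measure with a sensor, update a model, and choose the next formulation, is structurally a Bayesian-optimization-style active learning loop. Below is a minimal sketch under invented assumptions (a two-variable toy "formulation" space and an analytic stand-in for the sensor measurement); it is not the dissertation's collaborative-learning implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Analytic stand-in for a sensor measurement of one hydrogel property (for
# example stiffness) as a function of two formulation variables in [0, 1].
def measure(x):
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2) + 0.01 * rng.normal()

def rbf(A, B, ls=0.25):
    return np.exp(-((A[:, None] - B[None]) ** 2).sum(-1) / (2 * ls**2))

candidates = rng.uniform(size=(400, 2))   # candidate formulations (well plate)
X = candidates[:4].copy()                 # initial robot-prepared samples
y = np.array([measure(x) for x in X])

for _ in range(12):
    # GP surrogate fit to the measurements gathered so far.
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    ks = rbf(candidates, X)
    mu = ks @ np.linalg.solve(K, y)
    var = 1.0 - (ks * np.linalg.solve(K, ks.T).T).sum(axis=1)
    # Acquisition: upper confidence bound trades off exploitation/exploration.
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
    nxt = candidates[np.argmax(ucb)]
    X = np.vstack([X, nxt])
    y = np.append(y, measure(nxt))

print(f"best formulation: {X[np.argmax(y)]}, property = {y.max():.4f}")
```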
