1 |
Literacy development in the pre-school years
Miller, Linda Kathleen January 2000 (has links)
No description available.
|
2 |
Teacher formative assessment : influences and practice case study research at the year one level
Elliott, Susan M. January 1999 (has links)
This case study research investigated the formative assessment practices of four Year One teachers in one local education authority, and the influences which have shaped their skills. School-level contextual factors such as the role of colleagues, the head teacher, and experience in the classroom were investigated through interview and questionnaire. External influences on teacher practice, most specifically the influence of the National Curriculum and its assessment requirements, were also examined. The findings identified classroom experience and colleagues as key sources of influence on practice. The study reviewed the current understanding of formative assessment from a social-constructivist perspective on learning. Research has illustrated specific elements of formative assessment practice, including the development of learning goals, communicating criteria, feedback, and the role of discourse. In this research, questioning emerged as a vital formative assessment skill. Three key features underpinned the practice of the teachers who demonstrated the widest range of strategies: they were reflective about their own daily practice; they demonstrated a problem-solving approach to teaching and learning; and they had established a collegial relationship of shared power in which pupil and teacher thinking processes and ideas could be expressed and exchanged. Theory has pointed to formative assessment as a teacher practice embedded in planning, teaching and assessing. Case study data were analysed to describe the practices of the teachers and to understand the ways in which formative assessment strategies might be linked together. A model of integrated practice, useful for teacher development and further research, is developed from the analysis.
|
3 |
Perceptions and possibilities : a school community's imaginings for a future 'curriculum for excellence'
Drew, Valerie January 2013 (has links)
This thesis reports research undertaken to explore a school community’s imaginings for secondary education for future generations. The research was designed to trouble the seemingly straightforward constructs of imagination and creativity, not merely to trace or audit their inclusion in the secondary curriculum, but rather to invite a secondary school community to put these constructs to work in exploring their imaginings and desires for good education 25-30 years ahead. The objectives used to structure the research involved: tracing the discourses of imagination and creativity in education curriculum policy; exploring a school community’s experiences and perceptions of secondary education; examining a school community’s imaginings for future secondary education; and exploring a school community’s desires for a future ‘curriculum for excellence’. The research was carried out during the development phase of Curriculum for Excellence (Scottish Executive 2004a) in Scotland which is explicit in its desire to provide opportunities for school communities to be/come imaginative and creative. This is not a new aspiration as imagination and creativity are familiar and enduring constructs in education. At a policy level the resurgence of interest in (imagination and) creativity is closely aligned to a desire for economic sustainability. The focus of my study is to explore how the concepts of imagination and creativity might become an impetus for the school community to think differently about good education for future generations. The study took place in a large comprehensive school community in a rural town in Scotland. Groups of participants, including pupils, parents, early-career teachers, mid-career teachers and school managers were drawn from across the school community. The method of data collection was adapted from Open Space Technology (Owen 2008) to provide an unstructured forum for participants to discuss their experiences and imaginings. 
A theoretical framework offering a way of thinking differently about the data was devised from readings of concepts drawn from Deleuze (1995) and Deleuze and Guattari (2004), and was used to analyse the school community’s perceptions, imaginings and desires. The findings suggest that whilst the new curriculum seems to open up a space for imagination and creativity, the school community’s imaginings tend to be orientated to past experiences and/or closely aligned to the policy imaginary, which appears to close down openings and opportunities for becoming. However, there was a discernible desire in the school community for ‘good’ education in a fair and equitable system, one less narrowly focused on economic imperatives than that of the policy. I argue that there is a need for a new way of thinking about future education within current structures and systems, which I have conceptualised as an ‘edu-imaginary interruption’. The thesis concludes with some reflections on the potential of such interruptions to impact on research and professional practice.
|
4 |
Online Unsupervised Domain Adaptation / Online-övervakad domänanpassning
Panagiotakopoulos, Theodoros January 2022 (has links)
Deep learning models have seen great application in demanding tasks such as machine translation and autonomous driving. However, building such models has proved challenging, both from a computational perspective and due to the requirement for a plethora of annotated data. Moreover, when challenged with new situations or data distributions (a target domain), those models may perform inadequately. Examples include transitioning from one city to another, different weather situations, or changes in sunlight. Unsupervised domain adaptation (UDA) exploits unlabelled data, which is easy to obtain, to adapt models to new conditions or data distributions. Inspired by the fact that environmental changes happen gradually, we focus on online UDA. Instead of directly adjusting a model to a demanding condition, we constantly perform minor adaptations to every slight change in the data, creating a soft transition from the current domain to the target one. To perform gradual adaptation, we applied state-of-the-art semantic segmentation approaches to increasing rain intensities (25, 50, 75, 100, and 200 mm of rain). We demonstrate that deep learning models adapt substantially better to hard domains when exploiting intermediate ones. Moreover, we introduce a model-switching mechanism that allows adjusting back to the source domain, after adaptation, without dropping performance.
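The gradual-adaptation recipe this abstract describes — small online updates per incoming batch, intermediate domains between source and target, and a snapshot for switching back to the source — can be sketched with a toy model. This is a minimal illustration under stated assumptions, not the thesis's actual method: the "model" here is just a running normalization statistic (in the spirit of BatchNorm-statistics adaptation), and all names, intensities, and hyperparameters are hypothetical.

```python
import numpy as np

def make_domain(intensity, n=200, rng=None):
    # Toy "domain": the feature distribution shifts with rain intensity.
    rng = rng or np.random.default_rng(0)
    return rng.normal(loc=intensity / 100.0, scale=1.0, size=n)

class RunningStatsModel:
    """Minimal stand-in for a network whose normalization statistics
    are adapted online (in the spirit of BatchNorm-statistics adaptation)."""
    def __init__(self, momentum=0.2):
        self.mean = 0.0              # statistic learned on the source domain
        self.momentum = momentum
        self.source_snapshot = None

    def snapshot_source(self):
        # Model-switching mechanism: remember the source-domain statistic.
        self.source_snapshot = self.mean

    def adapt(self, batch):
        # One small online update per incoming batch of target data.
        self.mean = (1 - self.momentum) * self.mean + self.momentum * batch.mean()

    def switch_to_source(self):
        # Jump back to the source domain without retraining.
        self.mean = self.source_snapshot

rng = np.random.default_rng(42)
model = RunningStatsModel()
model.snapshot_source()

# Adapt gradually through intermediate intensities instead of jumping to 200 mm.
for intensity in [25, 50, 75, 100, 200]:
    for _ in range(20):
        model.adapt(make_domain(intensity, rng=rng))

adapted_mean = model.mean      # tracks the hardest domain's statistic (close to 2.0)
model.switch_to_source()       # model.mean is back at the source value (0.0)
```

The point of the sketch is the shape of the loop: many small updates over a soft sequence of domains, with a cheap snapshot making the switch back to the source domain free.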
|
5 |
Enhancing Neural Network Accuracy on Long-Tailed Datasets through Curriculum Learning and Data Sorting / Maskininlärning, Neuralt Nätverk, CORAL-ramverk, Long-Tailed Data, Imbalance Metrics, Teacher-Student modeler, Curriculum Learning, Träningsscheman
Barreira, Daniel January 2023 (has links)
In this paper, a study is conducted to investigate the use of curriculum learning as an approach to address accuracy issues in a neural network caused by training on a long-tailed dataset. The thesis problem is presented by a Swedish e-commerce company. Currently, they are using a neural network that they have modified using the CORAL framework; this adaptation means that instead of a classic binary regression model, it is an ordinal regression model. The data used for training the model has a long-tail distribution, which leads to inaccuracies when predicting a price distribution for items in the tail end of the data. The current method applied to remedy this problem is re-balancing in the form of down-sampling and up-sampling. A linear training scheme is introduced, increasing in 10% increments while applying curriculum learning. As a method for sorting the data in an appropriate way, inspiration is drawn from knowledge distillation, specifically the teacher-student model approach. The teacher models are trained as specialists on three different subsets, and these models are then used as a basis for sorting the data before training the student model. During the training of the student model, the curriculum learning approach is used. The results show that for the imbalance ratio, Kullback-Leibler divergence, class balance, and the Gini coefficient, the data is clearly less long-tailed after being divided into subsets. With the correct settings before training, there is also an improvement in the training speed of the student model compared to the base model. The accuracy of the student model and the base model is comparable: the base model has a slight advantage when predicting items in the head part of the data, while the student model shows improvements for items that lie between the head and the tail.
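The mechanics described above — score samples by a teacher's per-sample loss, sort them easiest-first, then train the student on subsets that grow in linear 10% increments — can be sketched as follows. This is a hedged illustration on toy data: the `teacher` callable is a hypothetical stand-in, and the thesis's CORAL-based ordinal regression models are not reproduced here.

```python
import numpy as np

def difficulty_scores(teacher, X, y):
    # Per-sample cross-entropy under the teacher acts as a difficulty proxy.
    # 'teacher' is any callable returning P(y=1 | x).
    p = np.clip(teacher(X), 1e-7, 1 - 1e-7)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def curriculum_batches(X, y, scores, step=0.10):
    """Yield easy-to-hard training subsets, growing in 10% increments."""
    order = np.argsort(scores)               # easiest (lowest loss) first
    n = len(X)
    for frac in np.arange(step, 1.0 + 1e-9, step):
        k = max(1, int(round(frac * n)))
        yield X[order[:k]], y[order[:k]]

# Toy data and a hypothetical "teacher" specialist model.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(float)
teacher = lambda X: np.where(X[:, 0] > 0, 0.9, 0.2)  # stand-in, not a trained net

scores = difficulty_scores(teacher, X, y)
sizes = [len(xb) for xb, _ in curriculum_batches(X, y, scores)]
print(sizes)  # the linear 10% schedule: 10, 20, ..., 100 samples
```

In the thesis's setting the student would run one or more training passes per yielded subset, so early passes see only the samples the teachers found easy.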
|
6 |
Reinforcement Learning for Control of a Multi-Input, Multi-Output Model of the Human Arm
Crowder, Douglas Cale 01 September 2021 (has links)
No description available.
|
7 |
Curriculum Learning with Deep Convolutional Neural Networks
Avramova, Vanya January 2015 (has links)
Curriculum learning is a machine learning technique inspired by the way humans acquire knowledge and skills: by mastering simple concepts first, and progressing through information with increasing difficulty to grasp more complex topics. Curriculum learning, and its derivatives Self-Paced Learning (SPL) and Self-Paced Learning with Diversity (SPLD), have previously been applied within various machine learning contexts: Support Vector Machines (SVMs), perceptrons, and multi-layer neural networks, where they have been shown to improve both training speed and model accuracy. This project ventured to apply the techniques within the previously unexplored context of deep learning, by investigating how they affect the performance of a deep convolutional neural network (ConvNet) trained on a large labeled image dataset. The curriculum was formed by presenting the training samples to the network in order of increasing difficulty, measured by each sample's loss value under the network's objective function. The project evaluated SPL and SPLD, and proposed two new curriculum learning sub-variants, p-SPL and p-SPLD, which allow for a smooth progression of sample inclusion during training. The project also explored the "inversed" versions of the SPL, SPLD, p-SPL and p-SPLD techniques, where the samples were selected for the curriculum in order of decreasing difficulty. The experiments demonstrated that all learning variants perform fairly similarly, within a ≈1% average test accuracy margin, based on five trained models per variant. Surprisingly, models trained with the inversed versions of the algorithms performed slightly better than the standard curriculum training variants. The SPLD-Inversed, SPL-Inversed and SPLD networks also registered marginally higher accuracy results than the network trained with the usual random sample presentation.
The results suggest that while sample ordering does affect the training process, the optimal order in which samples are presented may vary based on the data set and algorithm used. The project also investigated whether some samples were more beneficial for the training process than others. Based on sample difficulty, subsets of samples were removed from the training data set. The models trained on the remaining samples were compared to a default model trained on all samples. On the data set used, removing the "easiest" 10% of samples had no effect on the achieved test accuracy compared to the default model, and removing the "easiest" 40% of samples reduced model accuracy by only ≈1% (compared to ≈6% loss when 40% of the "most difficult" samples were removed, and ≈3% loss when 40% of samples were randomly removed). Taking away the "easiest" samples first (up to a certain percentage of the data set) affected the learning process less negatively than removing random samples, while removing the "most difficult" samples first had the most detrimental effect. The results suggest that the networks derived most learning value from the "difficult" samples, and that a large subset of the "easiest" samples can be excluded from training with minimal impact on the attained model accuracy. Moreover, it is possible to identify these samples early during training, which can greatly reduce the training time for these models.
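The self-paced selection rule at the core of SPL — admit a sample only when its current loss falls below a pace parameter that grows during training, so the admitted fraction of the data expands from easy to hard — can be sketched as follows. The per-sample losses here are synthetic stand-ins, not outputs of a ConvNet, and the pace schedule is a hypothetical choice.

```python
import numpy as np

def spl_weights(losses, lam):
    # SPL hard weighting: sample i is included iff loss_i < lambda.
    # This is the closed-form minimizer of sum_i v_i*loss_i - lam*sum_i v_i
    # over binary weights v.
    return (losses < lam).astype(float)

def spl_schedule(losses, lam0=0.5, growth=1.3, epochs=5):
    """Fraction of samples admitted per epoch as the pace parameter grows."""
    lam, fractions = lam0, []
    for _ in range(epochs):
        v = spl_weights(losses, lam)
        fractions.append(v.mean())   # share of the data the model trains on
        lam *= growth                # admit harder samples over time
    return fractions

rng = np.random.default_rng(1)
losses = rng.exponential(scale=1.0, size=1000)  # stand-in per-sample losses
fracs = spl_schedule(losses)
print([round(f, 2) for f in fracs])  # inclusion grows monotonically
```

Reversing the comparison in `spl_weights` (include iff `losses > lam`, with a shrinking `lam`) would give the "inversed" hardest-first variants the project evaluated.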
|
8 |
Learning and time : on using memory and curricula for language understanding
Gulcehre, Caglar 05 1900 (has links)
No description available.
|