1 |
A Learning Model for Discrete Mathematics
Wallace, Christopher 01 December 2008 (has links)
In this paper we introduce a new model which we apply to Discrete Mathematics but which could be applied to other courses as well. The model uses homework, lectures and quizzes. The key design factor is the quizzes, which are given daily. We also discuss how lectures and homework question sessions can be shortened slightly to allow for twenty-five-minute quizzes without sacrificing content. The model assumes a course that meets twice a week for ninety-minute lectures with no recitations; it could also be applied to a single three-hour lecture.
|
2 |
Blended Synchronous Learning Models in Web-based Learning Environment
Lin, Chun-Cheng 27 August 2007 (has links)
Due to the advancement of web-based technologies, using an LMS to support both asynchronous and synchronous learning has become more and more popular. Another new trend is to combine the physical classroom and the cyber classroom into a mixed learning environment, which is why blended learning has become an important research topic in the e-learning domain. According to the literature survey, blended learning can create a flexible learning environment and improve learning effects. Blended learning can also reduce costs, increase benefits, and extend outcomes. However, most teachers are not familiar with this kind of blended synchronous learning environment; they have no idea how to conduct teaching and learning activities in it. The aims of this study are to explore the proper setup of a blended learning environment and to propose some important blended learning models for teachers.
We used a case study approach for our research. Two successful online courses were chosen as the study cases: "E-learning Theory and Practice" and "Computer Networks and the Internet," both instructed by Dr. Nian-Shing Chen at National Sun Yat-Sen University. Observation and interviews were used to gather the study data, and the gathered data was analyzed with qualitative methods. The contributions of this study are a setup guideline for the blended synchronous classroom and five proposed blended synchronous learning modes. These results could provide valuable references for administrators setting up appropriate blended learning environments and for instructors designing better blended learning courses.
|
3 |
Informal learning in the Web 2.0 environment : how Chinese students who are learning English use Web 2.0 tools for informal learning
Li, Yiran, active 2013 13 December 2013 (has links)
The purpose of this master’s report was to investigate how Chinese students who were learning English used Web 2.0 tools for informal learning, and to construct a model of informal learning in the Web 2.0 environment. I conducted a pilot study with 32 Chinese students who were learning English and tried to understand how they used Web 2.0 tools as informal learning tools to improve their English. Furthermore, I discussed the main challenges of informal learning in a Web 2.0 environment from the learners’ perspective and from a technical perspective. I then proposed a model of informal learning in a Web 2.0 environment which may improve learning in an informal learning environment and provide learners a possible learning method. It is hoped that this model will help students better master methods of informal learning in the Web 2.0 environment and lay a good foundation for lifelong learning.
|
4 |
A study of learning models for analyzing prisoners' dilemma game data / 囚犯困境資料分析之學習模型研究
賴宜祥, Lai, Yi Hsiang Unknown Date (has links)
How people choose strategies in a finitely repeated prisoners' dilemma game is of interest in game theory; learning theories of games predict which strategies players will choose. The objective of this study is to find a proper learning model for prisoners' dilemma game data collected at National Chengchi University. The game data consist of three experiments with different game settings and matching rules, with undergraduate students of the university as participants. Four learning models are considered: the Reinforcement learning model, the Belief learning model, the Experience-Weighted Attraction learning model, and a proposed extension of the Reinforcement learning model. The data analysis was divided into two parts, training (in-sample) and testing (out-of-sample), and results were compared across experiments and models.
Although it has one more parameter, the proposed learning model performs slightly better than the original reinforcement learning model in both training and testing prediction. The fitted models' predictions all outperform guessing the decisions with equal chance.
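The basic reinforcement learning model compared above can be illustrated with a minimal sketch of an Erev-Roth-style attraction update (the decay parameter, initial attractions, and payoff values here are illustrative assumptions, not those estimated in the thesis):

```python
def choice_probs(attractions):
    """Proportional choice rule: probability of an action is its share of total attraction."""
    total = sum(attractions.values())
    return {a: v / total for a, v in attractions.items()}

def reinforcement_update(attractions, action, payoff, phi=0.9):
    """Erev-Roth style update: decay all attractions by phi, then add the
    realized payoff to the attraction of the chosen action."""
    new = {a: phi * v for a, v in attractions.items()}
    new[action] += payoff
    return new

# Prisoner's dilemma payoffs for the row player: (my action, opponent's action) -> payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

attractions = {"C": 1.0, "D": 1.0}  # equal initial attractions
# Suppose the player defected against a cooperator and earned 5:
attractions = reinforcement_update(attractions, "D", PAYOFF[("D", "C")])
probs = choice_probs(attractions)  # defection now more likely on the next round
```

A belief learning model would instead track the opponent's action frequencies, and EWA nests both updates; the extended model in the thesis adds one further parameter to this reinforcement rule.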
|
5 |
Může modelová kombinace řídit prognózu volatility? / Can Model Combination Improve Volatility Forecasting?
Tyuleubekov, Sabyrzhan January 2019
Nowadays there is a wide range of forecasting methods, and forecasters face several challenges when selecting an optimal method for volatility forecasting. In order to make use of this wide selection of forecasts, this thesis tests multiple forecast combination methods. Notwithstanding the plethora of forecast combination literature, the combination of traditional methods with machine learning methods is relatively rare. We implement the following combination techniques: (1) simple mean forecast combination, (2) OLS combination, (3) ARIMA on the OLS combined fit, (4) NNAR on the OLS combined fit and (5) KNN regression on the OLS combined fit. To the best of our knowledge, the latter two combination techniques have not yet been researched in the academic literature. Additionally, this thesis should help a forecaster with three sources of choice complication: (1) the choice of volatility proxy, (2) the choice of forecast accuracy measure and (3) the choice of training sample length. We found that squared-return and absolute-return volatility proxies are much less efficient than the Parkinson and Garman-Klass volatility proxies. Likewise, we show that the forecast accuracy measure (RMSE, MAE or MAPE) influences the ranking of optimal forecasts. Finally, we found that though forecast quality does not depend on training sample length, we see that forecast...
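The first two combination schemes and the Parkinson proxy mentioned in the abstract can be sketched as follows (a minimal illustration on toy data; the function names and the data are assumptions, not the thesis's code or datasets):

```python
import numpy as np

def parkinson_proxy(high, low):
    """Parkinson range-based variance proxy: (ln(H/L))^2 / (4 ln 2) per period."""
    return (np.log(np.asarray(high) / np.asarray(low)) ** 2) / (4.0 * np.log(2.0))

def mean_combination(forecasts):
    """(1) Simple mean combination: average the individual forecasts per period."""
    return np.mean(forecasts, axis=0)

def ols_combination_weights(forecasts, realized):
    """(2) OLS combination: regress realized volatility on the individual
    forecasts (with an intercept) over a training window; the fitted
    coefficients are then used as combination weights out of sample."""
    X = np.column_stack([np.ones(len(realized)), *forecasts])
    beta, *_ = np.linalg.lstsq(X, realized, rcond=None)
    return beta  # intercept followed by one weight per forecaster

# Toy example: two volatility forecasters over four periods
f1 = np.array([1.0, 1.2, 0.9, 1.1])
f2 = np.array([0.8, 1.0, 1.1, 0.9])
realized = np.array([0.9, 1.1, 1.0, 1.0])

combined = mean_combination([f1, f2])
beta = ols_combination_weights([f1, f2], realized)
```

Schemes (3)-(5) in the thesis then model the residual structure of the OLS combined fit with ARIMA, NNAR, or KNN regression respectively.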
|
6 |
Deep Learning of Model Correction and Discontinuity Detection
Zhou, Zixu 26 August 2022 (has links)
No description available.
|
7 |
The Realm of Self-Regulated Learning (SRL): An Examination of SRL in an Elementary Classroom Setting and its Relevancy to Trends in our Current Curricula
Lutfi, Duaa 01 December 2013 (has links)
Teaching and instructing students is a necessity, but creating ways to challenge them is a priority. This thesis focuses on Barry Zimmerman and Timothy Cleary’s Self-Regulation Empowerment Program (SREP). This model uses a problem-solving approach to establish Self-Regulated Learning (SRL) strategies in students’ learning. Stemming from interdisciplinary questions such as “what will help students be successful in and outside the classroom?” and “how do teachers challenge students without stifling their creativity?”, this study aims to explore the realm of Self-Regulated Learning (SRL). The study further examines whether SRL strategies and practices foster learning and are prevalent in current trends and curricula such as Marzano and the Common Core. After thorough analysis of student observations and coding of the data, the findings concluded that SRL strategies fostered student learning. The students studied were more readily motivated to regulate their learning and to attempt challenging tasks. Moreover, these findings indicated an increase in student success and metacognitive knowledge, as the students were provided with more opportunities to engage in self-talk, self-reflection, strategic planning, and goal setting. The results suggest the flexibility of the SREP model and its applicability to current instructional practices. Implications and recommendations for further research into the SRL model across other disciplines are also presented and discussed.
|
8 |
Autonomous Overtaking with Learning Model Predictive Control / Autonom Omkörning med Learning Model Predictive Control
Bengtsson, Ivar January 2020 (has links)
We review recent research into trajectory planning for autonomous overtaking to understand the existing challenges. Then the recently developed framework Learning Model Predictive Control (LMPC) is presented as a suitable method to iteratively improve an overtaking manoeuvre each time it is performed. We present recent extensions to the LMPC framework that make it applicable to overtaking, as well as two alternative modelling approaches intended to reduce the computational complexity of the optimization problems solved by the controller. All proposed frameworks were built from scratch in Python 3 and simulated for evaluation purposes. The optimization problems are modelled and solved using gurobipy, the Python API of Gurobi 9.0. The results show that LMPC can be applied successfully to the overtaking problem, with performance improving at each iteration. However, the first proposed alternative modelling approach does not reduce computation times as intended; the second one does, but performs worse in other respects.
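The iterative-improvement mechanism of LMPC rests on two pieces of bookkeeping: a sampled safe set of states visited in previous iterations, and the realized cost-to-go stored with each of them, which serve as the terminal constraint and terminal cost of the next iteration's MPC problem. A rough schematic of that bookkeeping (not the thesis implementation; the Gurobi-based optimization step is omitted entirely):

```python
def cost_to_go(stage_costs):
    """Realized cost-to-go at each step: sum of stage costs from that step onward."""
    out, total = [], 0.0
    for c in reversed(stage_costs):
        total += c
        out.append(total)
    return list(reversed(out))

class SampledSafeSet:
    """States visited in previous iterations, each paired with its cost-to-go.
    In LMPC the MPC horizon's terminal state is constrained to lie in this set,
    and the terminal cost is the stored cost-to-go, which guarantees that each
    iteration performs no worse than the last."""

    def __init__(self):
        self.points = []  # list of (state, cost_to_go) pairs

    def add_iteration(self, states, stage_costs):
        self.points.extend(zip(states, cost_to_go(stage_costs)))

    def terminal_cost(self, state):
        """Best stored cost-to-go for this state; infinite if never visited."""
        costs = [q for s, q in self.points if s == state]
        return min(costs) if costs else float("inf")

# Iteration 0: any feasible (e.g. hand-designed) trajectory seeds the safe set
ss = SampledSafeSet()
ss.add_iteration(states=["s0", "s1", "s2", "goal"],
                 stage_costs=[3.0, 2.0, 1.0, 0.0])
```

Each subsequent overtaking manoeuvre would solve its finite-horizon problem against `ss`, then append its own trajectory and costs, tightening the terminal cost over iterations.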
|
9 |
Computational model-based functional magnetic resonance imaging of reinforcement learning in humans
Erdeniz, Burak January 2013 (has links)
The aim of this thesis is to determine the changes in the BOLD signal of the human brain during various stages of reinforcement learning. To accomplish that goal, two probabilistic reinforcement-learning tasks were developed and assessed with healthy participants using functional magnetic resonance imaging (fMRI). For both experiments the brain imaging data were analysed using a combination of univariate and model-based techniques. In Experiment 1 there were three types of stimulus-response pairs, each predicting a reward, a neutral outcome or a monetary loss with a certain probability. Experiment 1 tested the following research questions: Where does activity occur in the brain when expecting and receiving a monetary reward or punishment? Does avoiding a loss outcome activate similar brain regions as gain outcomes and, vice versa, does avoiding a reward outcome activate similar brain regions as loss outcomes? Where in the brain are prediction errors and predictions for rewards and losses calculated? What are the neural correlates of reward and loss predictions during early and late phases of learning? The results of Experiment 1 showed that the expectation of rewards and losses activates overlapping brain areas, mainly in the anterior cingulate cortex and basal ganglia, but that reward and loss outcomes activate separate brain regions: loss outcomes mainly activate the insula and amygdala, whereas reward outcomes activate the bilateral medial frontal gyrus. The model-based analysis also revealed early versus late learning-related changes: the predicted value in early trials is coded in the ventromedial orbitofrontal cortex, but later in learning the activation for the predicted value was found in the putamen. The second experiment was designed to find the differences in processing novel versus familiar reward-predictive stimuli.
The results revealed that the dorsolateral prefrontal cortex and several regions in the parietal cortex showed greater activation for novel stimuli than for familiar stimuli. As an extension to the fourth research question of Experiment 1, reward predicted values of the conditional stimuli and prediction errors of the unconditional stimuli were also assessed in Experiment 2. The results revealed that during learning there is significant prediction-error activation mainly in the ventral striatum, extending to various cortical regions, whereas for familiar stimuli no prediction-error activity was observed. Moreover, predicted values for novel stimuli mainly activate the ventromedial orbitofrontal cortex and precuneus, whereas the predicted value of familiar stimuli activates the putamen. The results of Experiment 2 for the predicted values, considered together with the early versus late predicted values in Experiment 1, suggest that during learning of CS-US pairs activation in the brain shifts from ventromedial orbitofrontal structures to sensorimotor parts of the striatum.
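In model-based fMRI analyses of this kind, the predicted-value and prediction-error regressors are typically generated trial by trial from a simple Rescorla-Wagner-style update before being convolved with a haemodynamic response function. A minimal sketch (the learning rate and outcome sequence are illustrative assumptions):

```python
def rescorla_wagner(outcomes, alpha=0.2, v0=0.0):
    """Return per-trial predicted values and prediction errors.
    delta_t = r_t - V_t ;  V_{t+1} = V_t + alpha * delta_t."""
    values, errors = [], []
    v = v0
    for r in outcomes:
        values.append(v)       # predicted value before the outcome arrives
        delta = r - v          # prediction error on this trial
        errors.append(delta)
        v = v + alpha * delta  # update the value toward the outcome
    return values, errors

# A stimulus rewarded (1) on every trial: prediction errors shrink as learning
# proceeds, which is why early and late trials dissociate in the fMRI analysis.
values, errors = rescorla_wagner([1, 1, 1, 1, 1])
```

The shrinking error series mirrors the thesis's finding that prediction-error activity is present during learning of novel stimuli but absent for familiar, fully learned ones.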
|
10 |
Enhancing the Verification-Driven Learning Model for Data Structures with Visualization
Kondeti, Yashwanth Reddy 04 August 2011 (has links)
The thesis aims at teaching various data structure algorithms using the Visualization Learning tool. The main objective of the work is to provide a learning opportunity for novice computer science students to gain broader exposure to data structure programming. The visualization learning tool is based on the Verification-Driven Learning model developed for software engineering, and serves as a platform for demonstrating visualizations of various data structure algorithms. All the visualizations are designed to emphasize the important operational features of the various data structures. The learning tool encourages students to learn data structures through designed Learning Cases. The Learning Cases have been carefully designed to systematically implant bugs in a properly functioning visualization; students are assigned the task of analyzing the code and identifying the bugs through quizzing. This provides students with a challenging hands-on learning experience that complements their textbook knowledge, and serves as a significant foundation for pursuing future courses in data structures.
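A Learning Case of the kind described might look like the following hypothetical example (not taken from the tool itself): a correct singly linked list front-insertion, with a comment marking the spot where a bug would deliberately be planted for students to diagnose from the visualization:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def insert_front(head, value):
    """Correct front insertion. A Learning Case might plant a bug here,
    e.g. returning `head` instead of `node`, so the new element silently
    disappears; students then locate the fault and answer a quiz on it."""
    node = Node(value)
    node.next = head
    return node  # <-- a planted bug would return `head` here instead

def to_list(head):
    """Walk the list so the visualization (or a student) can inspect it."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in [3, 2, 1]:
    head = insert_front(head, v)
```

With the planted bug, `to_list(head)` would stay empty, an operational symptom the visualization makes immediately visible.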
|