1 |
Generische E-Learning-Plattform für interaktive Lehrsimulationen zum Einsatz in Selbststudium und Präsenzlehre online und offline [Generic e-learning platform for interactive teaching simulations for use in self-study and classroom teaching, online and offline] / Dieckmann, Andreas. January 2003 (has links) (PDF)
Bielefeld, Universität, Diss., 2004.
|
2 |
Methoden des Blended Learning : Überblick und Softwareevaluation [Methods of blended learning: overview and software evaluation] / Obrist, Markus. January 2007 (has links)
Fachhochsch. Solothurn Nordwestschweiz (FH), Diplomarbeit--Solothurn, 2006. / This diploma thesis was commissioned by the association Interieursuisse.
|
3 |
A comparative study of the historical development of andragogy and the formation of its scientific foundation in Germany and the United States of America, 1833-1999 / Wilson, Clive Antonio. January 2003 (has links)
Thesis (Ed. D.)--Graduate School of Education, Oral Roberts University, 2003. / Includes abstract and vita. Includes bibliographical references (leaves 178-200).
|
4 |
Discovering hierarchy in reinforcement learning / Hengst, Bernhard, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2003 (has links)
This thesis addresses the open problem of automatically discovering hierarchical structure in reinforcement learning. Current algorithms for reinforcement learning fail to scale as problems become more complex. Many complex environments empirically exhibit hierarchy and can be modeled as interrelated subsystems, each in turn with hierarchic structure. Subsystems are often repetitive in time and space, meaning that they reoccur as components of different tasks or occur multiple times in different circumstances in the environment. A learning agent may sometimes scale to larger problems if it successfully exploits this repetition. Evidence suggests that a bottom-up approach that repetitively finds building blocks at one level of abstraction and uses them as background knowledge at the next level of abstraction makes learning in many complex environments tractable. An algorithm, called HEXQ, is described that automatically decomposes and solves a multi-dimensional Markov decision problem (MDP) by constructing a multi-level hierarchy of interlinked subtasks, without being given the model beforehand. The effectiveness and efficiency of the HEXQ decomposition depend largely on the choice of representation in terms of the variables, their temporal relationships, and whether the problem exhibits a type of constrained stochasticity. The algorithm is first developed for stochastic shortest path problems and then extended to infinite horizon problems. The operation of the algorithm is demonstrated using a number of examples, including a taxi domain, various navigation tasks, the Towers of Hanoi, and a larger sporting problem. The main contributions of the thesis are the automation of (1) decomposition, (2) sub-goal identification, and (3) discovery of hierarchical structure for MDPs with states described by a number of variables or features. It points the way to further scaling opportunities that encompass approximations, partial observability, selective perception, relational representations and planning. The longer-term research aim is to train, rather than program, intelligent agents.
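For orientation, HEXQ's first stage orders the state variables of a factored MDP by how often they change under exploration; the fastest-changing variable seeds the lowest level of the subtask hierarchy. The sketch below illustrates only that ordering step, written from the abstract and the published description rather than from the thesis code; the `env` interface (`reset()`, `step()` returning tuple-valued states, and an `actions` list) is a hypothetical stand-in.

```python
import random
from collections import defaultdict

def order_variables_by_change_frequency(env, n_steps=10_000):
    """Rank state variables by how often they change under a random walk.

    Illustrates HEXQ's first stage: the fastest-changing variable defines
    the lowest level of the hierarchy, the slowest the top. The env
    interface here is an assumption, not Hengst's implementation.
    """
    changes = defaultdict(int)
    state = env.reset()
    for _ in range(n_steps):
        next_state = env.step(random.choice(env.actions))
        for i, (old, new) in enumerate(zip(state, next_state)):
            if old != new:
                changes[i] += 1
        state = next_state
    # Fastest-changing variables first; variables that never changed
    # during exploration fall to the top (slowest) end of the ordering.
    return sorted(range(len(state)), key=lambda i: changes[i], reverse=True)
```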
|
5 |
Calibrating recurrent sliding window classifiers for sequential supervised learning / Joshi, Saket Subhash. 03 October 2003
Sequential supervised learning problems involve assigning a class label to
each item in a sequence. Examples include part-of-speech tagging and text-to-speech
mapping. A very general-purpose strategy for solving such problems is
to construct a recurrent sliding window (RSW) classifier, which maps some window
of the input sequence plus some number of previously-predicted items into
a prediction for the next item in the sequence. This paper describes a general-purpose
implementation of RSW classifiers and discusses the highly practical
issue of how to choose the size of the input window and the number of previous
predictions to incorporate. Experiments on two real-world domains show that
the optimal choices vary from one learning algorithm to another. They also
depend on the evaluation criterion (number of correctly-predicted items versus
number of correctly-predicted whole sequences). We conclude that window
sizes must be chosen by cross-validation. The results have implications for the
choice of window sizes for other models including hidden Markov models and
conditional random fields. / Graduation date: 2004
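To make the construction concrete, here is a minimal Python sketch of an RSW classifier as the abstract describes it: the feature vector for item t concatenates a window of w input vectors with the p most recent predicted labels. The class name, the zero/−1 padding, the teacher-forced training scheme, and the logistic-regression base learner are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class RecurrentSlidingWindow:
    """Sketch of a recurrent sliding window (RSW) classifier: predict each
    item from a window of w inputs plus the p previously predicted labels.
    Training below feeds in the true previous labels (teacher forcing),
    one of several calibration choices the thesis examines."""

    def __init__(self, w=3, p=2):
        self.w, self.p = w, p
        self.clf = LogisticRegression(max_iter=1000)

    def _features(self, xs, labels, t):
        d = xs.shape[1]
        window = [xs[i] if i >= 0 else np.zeros(d)
                  for i in range(t - self.w + 1, t + 1)]   # input window
        prev = [float(labels[i]) if i >= 0 else -1.0
                for i in range(t - self.p, t)]             # past predictions
        return np.concatenate(window + [np.array(prev)])

    def fit(self, sequences):
        # sequences: iterable of (xs, labels) pairs; xs is a (T, d) array.
        X, y = [], []
        for xs, labels in sequences:
            for t in range(len(labels)):
                X.append(self._features(xs, labels, t))
                y.append(labels[t])
        self.clf.fit(np.array(X), np.array(y))
        return self

    def predict(self, xs):
        preds = []
        for t in range(xs.shape[0]):                       # feed predictions back in
            f = self._features(xs, preds, t).reshape(1, -1)
            preds.append(int(self.clf.predict(f)[0]))
        return preds
```

In line with the abstract's conclusion, w and p would then be selected by cross-validation under whichever criterion matters: per-item or whole-sequence accuracy.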
|
6 |
A study of model-based average reward reinforcement learning / Ok, DoKyeong. 09 May 1996
Reinforcement Learning (RL) is the study of learning agents that improve
their performance from rewards and punishments. Most reinforcement learning
methods optimize the discounted total reward received by an agent, while, in many
domains, the natural criterion is to optimize the average reward per time step. In this
thesis, we introduce a model-based average reward reinforcement learning method
called "H-learning" and show that it performs better than other average reward and
discounted RL methods in the domain of scheduling a simulated Automatic Guided
Vehicle (AGV).
We also introduce a version of H-learning which automatically explores the
unexplored parts of the state space, while always choosing an apparently best action
with respect to the current value function. We show that this "Auto-exploratory H-Learning"
performs much better than the original H-learning under many previously
studied exploration strategies.
To scale H-learning to large state spaces, we extend it to learn action models
and reward functions in the form of Bayesian networks, and approximate its value
function using local linear regression. We show that both of these extensions are very
effective in significantly reducing the space requirement of H-learning, and in making
it converge much faster in the AGV scheduling task. Further, Auto-exploratory H-learning
synergistically combines with Bayesian network model learning and value
function approximation by local linear regression, yielding a highly effective average
reward RL algorithm.
We believe that the algorithms presented here have the potential to scale to
large applications in the context of average reward optimization. / Graduation date: 1996
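As a rough illustration of the method's shape, the tabular sketch below follows the published H-learning rule: the agent learns transition probabilities and expected rewards from counts, backs up h(s) against the current average-reward estimate rho, and adjusts rho only on greedy steps. It is a minimal sketch under those assumptions; the Bayesian-network models, local linear regression, and auto-exploratory variant from the abstract are all omitted.

```python
from collections import defaultdict

class HLearning:
    """Tabular sketch of model-based average-reward RL, H-learning style.

    h(s) plays the role of the bias/value function and rho the average
    reward per step. An illustration from the published description,
    not the thesis code."""

    def __init__(self, actions):
        self.actions = actions
        self.h = defaultdict(float)                        # value function h(s)
        self.rho = 0.0                                     # average-reward estimate
        self.alpha = 1.0                                   # learning rate for rho
        self.n_sa = defaultdict(int)                       # visit counts N(s,a)
        self.r_sum = defaultdict(float)                    # summed rewards R(s,a)
        self.trans = defaultdict(lambda: defaultdict(int)) # counts N(s,a,s')

    def q(self, s, a):
        """Model-based value: r(s,a) + sum over s' of p(s'|s,a) * h(s')."""
        n = self.n_sa[(s, a)]
        if n == 0:
            return 0.0
        r = self.r_sum[(s, a)] / n
        return r + sum(c / n * self.h[s2]
                       for s2, c in self.trans[(s, a)].items())

    def update(self, s, a, r, s2, greedy):
        # Learn the model from the observed transition (s, a, r, s2).
        self.n_sa[(s, a)] += 1
        self.r_sum[(s, a)] += r
        self.trans[(s, a)][s2] += 1
        # The average reward is adjusted only on greedy steps.
        if greedy:
            self.rho += self.alpha * (r + self.h[s2] - self.h[s] - self.rho)
            self.alpha *= 0.999
        # Average-reward Bellman backup for the visited state.
        self.h[s] = max(self.q(s, b) for b in self.actions) - self.rho
```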
|
7 |
Pädagogische Anforderungen an das Lernhandeln im E-Learning : Dimensionen von Selbstlernkompetenz [Pedagogical demands on learning activity in e-learning: dimensions of self-directed learning competence] / Heidenreich, Susanne. January 2009 (has links)
Also published as: Dresden, Techn. Universität, Diss., 2009.
|
8 |
Expeditionary learning / Susag, Angie. January 2009 (has links)
Thesis (MA)--University of Montana, 2009. / Contents viewed on December 11, 2009. Title from author-supplied metadata. Includes bibliographical references.
|
9 |
There's more to it than instructional design: the role of individual learner characteristics for hypermedia learning / Opfermann, Maria. January 2008 (has links)
Also published as: Tübingen, Univ., Diss.
|
10 |
Lernstrategien und E-Learning : eine empirische Untersuchung [Learning strategies and e-learning: an empirical study] / Mankel, Mirco. January 2008 (has links)
Also published as: Wuppertal, Universität, Diss., 2008.
|