1 
Generische E-Learning-Plattform für interaktive Lehrsimulationen zum Einsatz in Selbststudium und Präsenzlehre, online und offline [Generic e-learning platform for interactive teaching simulations for use in self-study and classroom teaching, online and offline] / Dieckmann, Andreas. January 2003 (has links) (PDF)
Bielefeld, Universität, Diss., 2004.

2 
Methoden des Blended Learning : Überblick und Softwareevaluation [Methods of blended learning: overview and software evaluation] / Obrist, Markus. January 2007 (has links)
Fachhochsch. Solothurn Nordwestschweiz (FH), Diplomarbeit, Solothurn, 2006. / This diploma thesis was prepared on behalf of the Interieursuisse association.

3 
A comparative study of the historical development of andragogy and the formation of its scientific foundation in Germany and the United States of America, 1833-1999 / Wilson, Clive Antonio. January 2003 (has links)
Thesis (Ed. D.), Graduate School of Education, Oral Roberts University, 2003. / Includes abstract and vita. Includes bibliographical references (leaves 178-200).

4 
Discovering hierarchy in reinforcement learning / Hengst, Bernhard, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2003 (has links)
This thesis addresses the open problem of automatically discovering hierarchical structure in reinforcement learning. Current algorithms for reinforcement learning fail to scale as problems become more complex. Many complex environments empirically exhibit hierarchy and can be modeled as interrelated subsystems, each in turn with hierarchic structure. Subsystems are often repetitive in time and space, meaning that they reoccur as components of different tasks or occur multiple times in different circumstances in the environment. A learning agent may sometimes scale to larger problems if it successfully exploits this repetition. Evidence suggests that a bottom-up approach that repetitively finds building blocks at one level of abstraction and uses them as background knowledge at the next level of abstraction makes learning in many complex environments tractable. An algorithm, called HEXQ, is described that automatically decomposes and solves a multidimensional Markov decision problem (MDP) by constructing a multilevel hierarchy of interlinked subtasks without being given the model beforehand. The effectiveness and efficiency of the HEXQ decomposition depends largely on the choice of representation in terms of the variables, their temporal relationship and whether the problem exhibits a type of constrained stochasticity. The algorithm is first developed for stochastic shortest path problems and then extended to infinite horizon problems. The operation of the algorithm is demonstrated using a number of examples including a taxi domain, various navigation tasks, the Towers of Hanoi and a larger sporting problem. The main contributions of the thesis are the automation of (1) decomposition, (2) subgoal identification, and (3) discovery of hierarchical structure for MDPs with states described by a number of variables or features.
It points the way to further scaling opportunities that encompass approximations, partial observability, selective perception, relational representations and planning. The longer-term research aim is to train, rather than program, intelligent agents.
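A key preliminary step in decompositions of this kind is ordering the state variables by how frequently their values change along sampled trajectories, with faster-changing variables assigned to lower levels of the hierarchy. The sketch below illustrates that idea only; it is not the thesis's HEXQ implementation, and all names are hypothetical.

```python
# Illustrative sketch: rank state variables by change frequency over
# sampled trajectories (fastest-changing first). Faster-changing variables
# are candidates for the lowest level of a subtask hierarchy.

def variable_change_frequencies(trajectories):
    """Per state variable, the fraction of transitions on which its value changes."""
    num_vars = len(trajectories[0][0])
    changes = [0] * num_vars
    steps = 0
    for traj in trajectories:
        for prev, curr in zip(traj, traj[1:]):
            steps += 1
            for i in range(num_vars):
                if prev[i] != curr[i]:
                    changes[i] += 1
    return [c / steps for c in changes]

def variable_order(trajectories):
    """Variable indices sorted fastest-changing first (lowest level first)."""
    freqs = variable_change_frequencies(trajectories)
    return sorted(range(len(freqs)), key=lambda i: -freqs[i])
```

In a taxi-like domain, for example, the taxi's position changes almost every step while the passenger's location changes rarely, so the position variable would be ordered first.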

5 
A study of model-based average reward reinforcement learning / Ok, Do-Kyeong. 09 May 1996 (has links)
Reinforcement Learning (RL) is the study of learning agents that improve their performance from rewards and punishments. Most reinforcement learning methods optimize the discounted total reward received by an agent, while, in many domains, the natural criterion is to optimize the average reward per time step. In this thesis, we introduce a model-based average reward reinforcement learning method called "H-learning" and show that it performs better than other average reward and discounted RL methods in the domain of scheduling a simulated Automatic Guided Vehicle (AGV).
We also introduce a version of H-learning which automatically explores the unexplored parts of the state space, while always choosing an apparently best action with respect to the current value function. We show that this "Auto-exploratory H-learning" performs much better than the original H-learning under many previously studied exploration strategies.
To scale H-learning to large state spaces, we extend it to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We show that both of these extensions are very effective in significantly reducing the space requirement of H-learning, and in making it converge much faster in the AGV scheduling task. Further, Auto-exploratory H-learning synergistically combines with Bayesian network model learning and value function approximation by local linear regression, yielding a highly effective average reward RL algorithm.
We believe that the algorithms presented here have the potential to scale to large applications in the context of average reward optimization. / Graduation date: 1996
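The average reward criterion described in this abstract replaces discounting with a gain term rho that is subtracted from each immediate reward. As a rough illustration of what model-based average-reward methods build on (not the thesis's H-learning algorithm itself), a synchronous Bellman backup under a known model can be sketched as follows; all names are illustrative.

```python
# Illustrative sketch of one synchronous average-reward Bellman backup:
#   h(s) <- max_a [ R(s, a) - rho + sum_s' P(s' | s, a) * h(s') ]
# assuming the transition model P and reward function R are known.
# P maps (state, action) to a dict of {next_state: probability};
# R maps (state, action) to the expected immediate reward.

def average_reward_backup(h, rho, P, R, actions, states):
    """Return updated relative values h after one full sweep over all states."""
    new_h = {}
    for s in states:
        new_h[s] = max(
            R[(s, a)] - rho + sum(p * h[s2] for s2, p in P[(s, a)].items())
            for a in actions
        )
    return new_h
```

Repeating this sweep (while also estimating rho) drives h toward the relative values of the average-reward optimality equation; model-based methods in the H-learning family additionally learn P and R from experience.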

6 
Calibrating recurrent sliding window classifiers for sequential supervised learning / Joshi, Saket Subhash. 03 October 2003 (has links)
Sequential supervised learning problems involve assigning a class label to each item in a sequence. Examples include part-of-speech tagging and text-to-speech mapping. A very general-purpose strategy for solving such problems is to construct a recurrent sliding window (RSW) classifier, which maps some window of the input sequence plus some number of previously predicted items into a prediction for the next item in the sequence. This paper describes a general-purpose implementation of RSW classifiers and discusses the highly practical issue of how to choose the size of the input window and the number of previous predictions to incorporate. Experiments on two real-world domains show that the optimal choices vary from one learning algorithm to another. They also depend on the evaluation criterion (number of correctly predicted items versus number of correctly predicted whole sequences). We conclude that window sizes must be chosen by cross-validation. The results have implications for the choice of window sizes for other models, including hidden Markov models and conditional random fields. / Graduation date: 2004
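The RSW construction described in the abstract (a window of inputs plus the most recent predictions) can be illustrated with a small feature-builder sketch. The padding token and window sizes here are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch of recurrent sliding window (RSW) feature construction:
# features for position t are the inputs in a window of radius w_in around t
# plus the w_prev most recent predictions. Out-of-range positions use PAD.

PAD = "<pad>"

def rsw_features(inputs, predictions, t, w_in=1, w_prev=2):
    """Feature vector for position t: inputs[t-w_in .. t+w_in] + last w_prev predictions."""
    window = [
        inputs[i] if 0 <= i < len(inputs) else PAD
        for i in range(t - w_in, t + w_in + 1)
    ]
    prev = [
        predictions[i] if i >= 0 else PAD
        for i in range(t - w_prev, t)
    ]
    return window + prev
```

At prediction time the sequence is processed left to right, each predicted label being appended to `predictions` before the next feature vector is built; this is what makes the window "recurrent". The paper's point is that w_in and w_prev should be tuned by cross-validation rather than fixed a priori.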

7 
Evaluating the engaged institution : the conceptualizations and discourses of engagement / Steel, Victoria A.; Placier, Peggy. January 2009 (has links)
Title from PDF of title page (University of Missouri-Columbia, viewed on March 1, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. Peggy Placier. Vita. Includes bibliographical references.

8 
Pädagogische Anforderungen an das Lernhandeln im E-Learning : Dimensionen von Selbstlernkompetenz [Pedagogical requirements for learning activity in e-learning: dimensions of self-directed learning competence] / Heidenreich, Susanne. January 2009 (has links)
Zugl.: Dresden, Techn. Universität, Diss., 2009.

9 
Expeditionary learning / Susag, Angie. January 2009 (has links)
Thesis (MA), University of Montana, 2009. / Contents viewed on December 11, 2009. Title from author-supplied metadata. Includes bibliographical references.

10 
An analysis of right- and left-brain thinkers and certain styles of learning / Bielefeldt, Steven D. January 2006 (has links) (PDF)
Thesis, Plan B (M.S.), University of Wisconsin-Stout, 2006. / Includes bibliographical references.
