  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Conditional discrimination and stimulus equivalence : effects of suppressing derived symmetrical responses on the emergence of transitivity /

Jones, Aaron A.; Glenn, Sigrid S., January 2007 (has links)
Thesis (M.S.)--University of North Texas, May, 2007. / Title from title page display. Includes bibliographical references.
102

Learning considered within a cultural context : Confucian and Socratic approaches /

Tweed, Roger Gordon. January 2000 (has links) (PDF)
Thesis (Ph.D.)--The University of British Columbia, 2000. / Adviser: Darrin R. Lehman. Includes bibliographical references.
103

Calibrating recurrent sliding window classifiers for sequential supervised learning

Joshi, Saket Subhash 03 October 2003 (has links)
Sequential supervised learning problems involve assigning a class label to each item in a sequence. Examples include part-of-speech tagging and text-to-speech mapping. A very general-purpose strategy for solving such problems is to construct a recurrent sliding window (RSW) classifier, which maps some window of the input sequence plus some number of previously-predicted items into a prediction for the next item in the sequence. This paper describes a general purpose implementation of RSW classifiers and discusses the highly practical issue of how to choose the size of the input window and the number of previous predictions to incorporate. Experiments on two real-world domains show that the optimal choices vary from one learning algorithm to another. They also depend on the evaluation criterion (number of correctly-predicted items versus number of correctly-predicted whole sequences). We conclude that window sizes must be chosen by cross-validation. The results have implications for the choice of window sizes for other models including hidden Markov models and conditional random fields. / Graduation date: 2004
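The recurrent sliding window idea described in this abstract can be illustrated with a toy sketch. The 1-nearest-neighbour base learner, the helper names, and the tiny part-of-speech data below are all invented for illustration; the thesis's actual implementation and learning algorithms are not shown here.

```python
# Sketch of a recurrent sliding window (RSW) classifier: each feature row is a
# window of `in_w` recent inputs plus `prev_p` previously-predicted labels.

def make_windows(xs, ys, in_w, prev_p, pad=None):
    """Training rows, teacher-forced: previous labels come from the truth."""
    rows = []
    for t in range(len(xs)):
        window = [xs[t - k] if t - k >= 0 else pad for k in range(in_w)]
        prev = [ys[t - k] if t - k >= 0 else pad for k in range(1, prev_p + 1)]
        rows.append(tuple(window + prev))
    return rows

def fit_1nn(rows, ys):
    return list(zip(rows, ys))            # toy base learner: memorise pairs

def predict_1nn(model, row):
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))  # Hamming distance
    return min(model, key=lambda pair: dist(pair[0], row))[1]

def rsw_predict(model, xs, in_w, prev_p, pad=None):
    """At test time, feed the classifier's own predictions back in."""
    preds = []
    for t in range(len(xs)):
        window = [xs[t - k] if t - k >= 0 else pad for k in range(in_w)]
        prev = [preds[t - k] if t - k >= 0 else pad for k in range(1, prev_p + 1)]
        preds.append(predict_1nn(model, tuple(window + prev)))
    return preds

# Toy tagging task (invented data)
xs = ["the", "cat", "sat", "the", "dog", "sat"]
ys = ["DET", "NOUN", "VERB", "DET", "NOUN", "VERB"]
model = fit_1nn(make_windows(xs, ys, in_w=2, prev_p=1), ys)
print(rsw_predict(model, xs, in_w=2, prev_p=1))
# → ['DET', 'NOUN', 'VERB', 'DET', 'NOUN', 'VERB']
```

The parameters `in_w` and `prev_p` are exactly the two choices the abstract says must be tuned by cross-validation.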
104

A study of model-based average reward reinforcement learning

Ok, DoKyeong 09 May 1996 (has links)
Reinforcement Learning (RL) is the study of learning agents that improve their performance from rewards and punishments. Most reinforcement learning methods optimize the discounted total reward received by an agent, while, in many domains, the natural criterion is to optimize the average reward per time step. In this thesis, we introduce a model-based average reward reinforcement learning method called "H-learning" and show that it performs better than other average reward and discounted RL methods in the domain of scheduling a simulated Automatic Guided Vehicle (AGV). We also introduce a version of H-learning which automatically explores the unexplored parts of the state space, while always choosing an apparently best action with respect to the current value function. We show that this "Auto-exploratory H-Learning" performs much better than the original H-learning under many previously studied exploration strategies. To scale H-learning to large state spaces, we extend it to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We show that both of these extensions are very effective in significantly reducing the space requirement of H-learning, and in making it converge much faster in the AGV scheduling task. Further, Auto-exploratory H-learning synergistically combines with Bayesian network model learning and value function approximation by local linear regression, yielding a highly effective average reward RL algorithm. We believe that the algorithms presented here have the potential to scale to large applications in the context of average reward optimization. / Graduation date: 1996
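The average-reward criterion the abstract contrasts with discounted RL can be sketched with relative value iteration on a known model: it solves for a gain rho (average reward per step) and a relative value function h. This is the underlying recursion, not the thesis's online H-learning update, and the two-state MDP below is invented for illustration.

```python
# Tiny MDP: in state 0, action 0 stays and earns reward 1; action 1 switches
# states for reward 0; state 1 earns nothing until it switches back.
P = {(0, 0): {0: 1.0}, (0, 1): {1: 1.0},
     (1, 0): {1: 1.0}, (1, 1): {0: 1.0}}   # P[s, a] -> {s': prob}
R = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}
states, actions = [0, 1], [0, 1]

rho = 0.0                       # estimate of the average reward per step
h = {s: 0.0 for s in states}    # relative (bias) values

for _ in range(100):
    # Bellman backup for the average-reward criterion: r + E[h(s')] - rho
    T = {s: max(R[s, a] + sum(p * h[s2] for s2, p in P[s, a].items())
                for a in actions)
         for s in states}
    rho = T[0]                  # anchor the gain at reference state 0
    h = {s: T[s] - rho for s in states}

print(rho, h)   # → 1.0 {0: 0.0, 1: -1.0}
```

The optimal policy stays in state 0, so the gain converges to 1.0, and h[1] = -1.0 reflects the one-step cost of returning from state 1.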
105

Pedagogical requirements for learning activity in e-learning : dimensions of self-directed learning competence /

Heidenreich, Susanne. January 2009 (has links)
Also published as: Dissertation, Technische Universität Dresden, 2009.
106

Perceived attributes to the development of a positive self-concept from the experiences of adolescents with learning disabilities /

Bernacchio, Charles P., January 2003 (has links) (PDF)
Thesis (Doctor of Education) in Individualized Ph. D. Program--University of Maine, 2003. / Includes vita. Includes bibliographical references (leaves 216-221).
107

Fluid intelligence and use of cognitive learning strategies /

Barton, John Wesley, January 1999 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1999. / Vita. Includes bibliographical references (leaves 145-168). Available also in a digital version from Dissertation Abstracts.
108

Expeditionary learning

Susag, Angie. January 2009 (has links)
Thesis (MA)--University of Montana, 2009. / Contents viewed on December 11, 2009. Title from author supplied metadata. Includes bibliographical references.
109

Transformational learning : a deep description of an emancipatory experience /

Retherford, April L. January 1900 (has links)
Thesis (Ed. D.)--Oregon State University, 2001. / Typescript (photocopy). Includes bibliographical references (leaves 212-228). Also available online.
110

e-Learning effectiveness in interconnected corporate learning environments

Yaari, Omri 09 March 2013 (has links)
Approaches to workplace learning are continuously evolving to support business objectives, but learning and development practitioners are not delivering on their mandate of developing relevant competencies that serve strategic objectives. Globally, the proportion of e-Learning to instructor-led training is growing, and investment in e-Learning is steadily increasing. Executives expect to see better alignment of e-Learning initiatives and a proven return on investment. To earn their place in the executive boardroom, learning and development practitioners need to understand and align their programmes with the context of the business environment in order to positively influence business performance.

This research set out to investigate the relationship between the corporate learning environment and e-Learning programme effectiveness using a self-administered questionnaire. The survey was completed by 50 corporate learning and development practitioners. It explored e-Learning programme effectiveness and the configuration of learning environments in relation to a corporate learning environment interconnectedness model proposed in this research. Descriptive statistics, correlation analysis and regression modelling were used to determine the relationship between the environment and e-Learning programme effectiveness. The strongest environmental predictors, as well as the current perception of e-Learning programme effectiveness within these environments, were also identified.

The corporate learning environment was found to be significantly correlated with e-Learning programme effectiveness, specifically in driving the higher-order benefits of behaviour change and return on investment. The two strongest predictors of e-Learning programme effectiveness in the corporate learning environment were the definition of clear learning outcomes and the provision of opportunities for collaboration in the context of learning. The proposed model of corporate learning environment interconnectedness was also validated and found to be reliable. / Dissertation (MBA)--University of Pretoria, 2012. / Gordon Institute of Business Science (GIBS) / unrestricted
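The correlation and regression analysis this abstract describes can be sketched in a few lines. The survey scores below are invented, and the two helper functions are generic textbook formulas, not the study's actual statistical procedure.

```python
# Pearson correlation and simple OLS regression between a learning-environment
# score and an e-Learning effectiveness score (hypothetical survey data).
from statistics import mean

env = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5, 4.8, 3.4]   # environment score per respondent
eff = [2.9, 4.0, 2.6, 4.4, 3.7, 2.8, 4.7, 3.3]   # effectiveness score per respondent

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ols(xs, ys):
    """Slope and intercept of the least-squares line y = a*x + b."""
    mx, my = mean(xs), mean(ys)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

r = pearson(env, eff)
slope, intercept = ols(env, eff)
print(f"r = {r:.3f}, slope = {slope:.3f}")
```

A strong positive r and slope on data like this is the shape of result the study reports; the study itself additionally validated its interconnectedness model for reliability.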
