  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Escape dynamics in learning models /

Williams, Noah. January 2001 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Economics, June 2001. / Includes bibliographical references. Also available on the Internet.
2

WHICH : a stochastic best-first search learner /

Milton, Zachery A. January 1900 (has links)
Thesis (M.S.)--West Virginia University, 2008. / Title from document title page. Document formatted into pages; contains x, 127 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 123-127).
3

Probably Approximately Correct (PAC) exploration in reinforcement learning

Strehl, Alexander L. January 2007 (has links)
Thesis (Ph. D.)--Rutgers University, 2007. / "Graduate Program in Computer Science." Includes bibliographical references (p. 133-136).
4

Concept coverage and its application to two learning tasks

Almuallim, Hussein Saleh 14 April 1992 (has links)
The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size for given accuracy and confidence parameters. This thesis begins by asking whether good learning algorithms can be designed by maximizing their coverage. There are three questions raised by this approach: (i) For given sample size and other learning parameters, what is the largest possible coverage that any algorithm can achieve? (ii) Can we design a learning algorithm that attains this optimal coverage? (iii) What is the coverage of existing learning algorithms? This thesis contributes to answering each of these questions. First, we generalize the upper bound on coverage given in [Dietterich 89]. Next, we present two learning algorithms and determine their coverage analytically. The coverage of the first algorithm, Multi-Balls, is shown to be quite close to the upper bound. The coverage of the second algorithm, Large-Ball, turns out to be even better than Multi-Balls in many situations. Third, we considerably improve upon Dietterich's limited experiments for estimating the coverage of existing learning algorithms. We find that the coverage of Large-Ball exceeds the coverage of ID3 [Quinlan 86] and FRINGE [Pagallo and Haussler 90] by more than an order of magnitude in most cases. Nevertheless, further analysis of Large-Ball shows that this algorithm is not likely to be of any practical help. Although this algorithm learns many concepts, these do not seem to be very interesting concepts. These results lead us to the conclusion that coverage maximization alone does not appear to yield practically useful learning algorithms. The results motivate considering a biased coverage measure, under which different concepts are assigned different weights or importance based on given background assumptions.
As an example of the new setting, we consider learning situations where many of the features present in the domain are irrelevant to the concept being learned. These situations are often encountered in practice. For this problem, we define and study the MIN-FEATURES bias, in which hypotheses definable using fewer features are preferred. We prove a tight bound on the number of examples needed for learning. Our results show that, if the MIN-FEATURES bias is implemented, then the presence of many irrelevant features does not make the learning problem substantially harder in terms of the number of examples needed. The thesis also introduces and evaluates a number of algorithms that implement or approximate the MIN-FEATURES bias. / Graduation date: 1993
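The MIN-FEATURES bias described in the abstract above can be sketched as a brute-force search for a smallest feature subset on which no two training examples collide with different labels. This is an illustrative reconstruction under that reading, not the thesis's own algorithm, and the example data is invented:

```python
from itertools import combinations

def consistent(examples, subset):
    """Check that no two examples agree on every feature in `subset`
    but carry different labels."""
    seen = {}
    for features, label in examples:
        key = tuple(features[i] for i in subset)
        if seen.setdefault(key, label) != label:
            return False
    return True

def min_features(examples):
    """Return a smallest feature subset consistent with the examples,
    trying subsets in order of increasing size (MIN-FEATURES preference)."""
    n = len(examples[0][0])
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if consistent(examples, subset):
                return subset
    return tuple(range(n))

# Hypothetical data: label = x0 AND x2; features x1 and x3 are irrelevant.
examples = [
    ((1, 0, 1, 0), 1),
    ((1, 1, 1, 1), 1),
    ((0, 0, 1, 0), 0),
    ((1, 0, 0, 1), 0),
    ((0, 1, 0, 0), 0),
]
print(min_features(examples))  # -> (0, 2)
```

The search ignores the two irrelevant features and recovers only the two that determine the label, which is the sense in which the bias keeps many irrelevant features from inflating the sample complexity. The exhaustive subset search is exponential in the number of features, which is why the thesis also considers algorithms that approximate the bias.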
5

Stochastic algorithms for learning with incomplete data : an application to Bayesian networks /

Myers, James William. January 1999 (has links) (PDF)
Thesis (Ph.D.)--George Mason University, 1999. / Includes bibliographical references (leaves [180]-189).
6

A contextualized virtual learning model for South African institutions of higher learning.

Segooa, Mmatshuene Anna. January 2016 (has links)
M. Tech. Business Information Systems / A Virtual Learning Environment (VLE) enables a teaching and learning pedagogy that allows students to study without geographical barriers or time constraints. VLE promotes innovation in institutions of higher learning and encourages lecturers and students to move away from face-to-face teaching toward virtual learning pedagogy. The ability to learn anywhere, at any time (which is what VLE is all about) was found to be the most appealing. Furthermore, VLE enables institutions of higher learning to enrol large numbers of students without having to worry about the size of the classroom. Global learning, as well as collaboration between learners and lecturers, is encouraged and supported through VLEs. Although institutions of higher learning spend huge amounts of money on technologies such as VLE, most VLEs are still not contextualized to cater for the needs of students in developing countries such as South Africa. This leads to the VLEs in developing countries not being effectively utilised. This study aims to design a contextualized VL model that suits South African institutions of higher learning. The study identifies the factors necessary for contextualizing a VLE to fit the student's perspective in a developing country (in this case, South Africa).
