1 |
The processing of words on a negative priming paradigm /Godel, Margaret D. January 1988 (has links)
No description available.
|
2 |
The inferential basis of perceptual performance /Leboe, Jason P. Milliken, Bruce. January 2002 (has links)
Thesis (Ph.D.)--McMaster University, 2002. / Advisor: Bruce Milliken. Includes bibliographical references (leaves 95-107). Also available via World Wide Web.
|
3 |
Chronic low-level lead exposure affects the monoaminergic system in the mouse superior olivary complex /Fortune, Tyler John. January 2007 (has links) (PDF)
Thesis (M.S.)--University of Montana, 2007. / Title from author-supplied metadata. Description based on contents viewed on August 7, 2009. Includes bibliographical references.
|
4 |
Disappearing effects of transitional probability on visual word recognition during reading /Eiter, Brianna M. January 2005 (has links)
Thesis (Ph.D.)--State University of New York at Binghamton, Department of Psychology, 2005. / Includes bibliographical references.
|
5 |
The development of Florida length based vehicle classification scheme using support vector machines /Mauga, Timur. Mussa, Renatus. January 2006 (has links)
Thesis (M.S.)--Florida State University, 2006. / Advisor: Renatus Mussa, Florida State University, College of Engineering, Dept. of Civil and Environmental Engineering. Title and description from dissertation home page (viewed Sept. 19, 2006). Document formatted into pages; contains xi, 202 pages. Includes bibliographical references.
|
6 |
Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds /Crook, Paul A. January 2007 (has links)
In applying reinforcement learning to agents acting in the real world, we are often faced with tasks that are non-Markovian in nature. Much work has been done using state estimation algorithms to try to uncover Markovian models of tasks, in order to allow the learning of optimal solutions using reinforcement learning. Unfortunately, these algorithms, which attempt to simultaneously learn a Markov model of the world and how to act, have proved very brittle. Our focus differs. In considering embodied, embedded and situated agents, we have a preference for simple learning algorithms which reliably learn satisficing policies. The learning algorithms we consider do not try to uncover the underlying Markovian states; instead, they aim to learn successful deterministic reactive policies, such that agents' actions are based directly upon the observations provided by their sensors. Existing results have shown that such reactive policies can be arbitrarily worse than a policy with access to the underlying Markov process, and that in some cases no satisficing reactive policy can exist.

Our first contribution is to show that providing agents with alternative actions and viewpoints on the task, through the addition of active perception, can provide a practical solution in such circumstances. We demonstrate empirically that: (i) adding arbitrary active perception actions to agents which can only learn deterministic reactive policies can allow the learning of satisficing policies where none were originally possible; (ii) active perception actions allow the learning of better satisficing policies than those that existed previously; and (iii) our approach converges more reliably to satisficing solutions than existing state estimation algorithms such as U-Tree and the Lion Algorithm.

Our other contributions focus on issues which affect the reliability with which deterministic reactive satisficing policies can be learnt in non-Markovian environments. We show that greedy action selection may be a necessary condition for the existence of stable deterministic reactive policies on partially observable Markov decision processes (POMDPs). We also set out the concept of Consistent Exploration: the idea of estimating state-action values by acting as though the policy has been changed to incorporate the action being explored. We demonstrate that this concept can be used to develop better algorithms for learning reactive policies to POMDPs by presenting a new reinforcement learning algorithm, the Consistent Exploration Q(λ) algorithm (CEQ(λ)). We demonstrate on a significant number of problems that CEQ(λ) is more reliable at learning satisficing solutions than SARSA(λ), the algorithm currently regarded as the best for learning deterministic reactive policies.
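The Consistent Exploration idea from this abstract can be illustrated with a minimal Python sketch. This is not the thesis's implementation: it uses a one-step tabular update rather than the full CEQ(λ) eligibility-trace machinery, and all names here (CEAgent, act, update) are invented for illustration. The key move is in act: an exploratory action rewrites the deterministic reactive policy itself, so subsequent value estimates treat the exploration as the policy's own choice rather than a one-off deviation.

import random
from collections import defaultdict

class CEAgent:
    """Hypothetical sketch of Consistent Exploration with a one-step
    tabular update (the thesis's CEQ(lambda) additionally uses
    eligibility traces)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(observation, action)] value table
        self.policy = {}              # deterministic reactive policy: obs -> action
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, obs):
        if obs not in self.policy:
            self.policy[obs] = random.choice(self.actions)
        if random.random() < self.epsilon:
            # Consistent Exploration: commit the policy itself to the
            # explored action, instead of deviating from it for one step.
            self.policy[obs] = random.choice(self.actions)
        return self.policy[obs]

    def update(self, obs, action, reward, next_obs):
        # Bootstrap from the action the (possibly just-changed) policy
        # would actually take next, keeping the value estimates
        # consistent with the behaviour being followed.
        next_action = self.policy.get(next_obs, random.choice(self.actions))
        target = reward + self.gamma * self.q[(next_obs, next_action)]
        self.q[(obs, action)] += self.alpha * (target - self.q[(obs, action)])
        # Greedy improvement over the updated estimates (the abstract
        # notes greedy selection may be necessary for stable policies);
        # the full algorithm manages exploration commitments more carefully.
        self.policy[obs] = max(self.actions, key=lambda a: self.q[(obs, a)])

In a typical loop, act and update are called once per environment step. Note that the table keys on raw observations rather than hidden states, matching the reactive, memoryless setting the abstract describes.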
|
7 |
A study of the relationship between self-concept of mild grade mentally retarded and their family acceptance /Leung, Chi-hung. January 1993 (has links)
Thesis (M. Ed.)--University of Hong Kong, 1993. / Includes bibliographical references (leaves 101-110).
|
8 |
Social status and friendship patterns among students with learning difficulties /Law, Man-shing. January 1995 (has links)
Thesis (M. Ed.)--University of Hong Kong, 1995. / Includes bibliographical references (leaves 61-77).
|