41 
Calibrating recurrent sliding window classifiers for sequential supervised learning / Joshi, Saket Subhash. 03 October 2003
Sequential supervised learning problems involve assigning a class label to
each item in a sequence. Examples include part-of-speech tagging and text-to-speech
mapping. A very general-purpose strategy for solving such problems is
to construct a recurrent sliding window (RSW) classifier, which maps some window
of the input sequence plus some number of previously-predicted items into
a prediction for the next item in the sequence. This paper describes a general-purpose
implementation of RSW classifiers and discusses the highly practical
issue of how to choose the size of the input window and the number of previous
predictions to incorporate. Experiments on two real-world domains show that
the optimal choices vary from one learning algorithm to another. They also
depend on the evaluation criterion (number of correctly-predicted items versus
number of correctly-predicted whole sequences). We conclude that window
sizes must be chosen by cross-validation. The results have implications for the
choice of window sizes for other models, including hidden Markov models and
conditional random fields. / Graduation date: 2004
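Not part of the original abstract: a minimal sketch of the recurrent sliding window idea, assuming a toy table-lookup base learner (the thesis pairs RSW with real learning algorithms). The knobs `in_win` and `prev_preds` are exactly the two window sizes the abstract says must be chosen by cross-validation.

```python
from collections import Counter, defaultdict

def make_features(xs, labels, t, in_win, prev_preds, pad="_"):
    """Window of inputs centered at position t, padded at the edges,
    plus the last prev_preds labels (true labels at training time,
    the model's own predictions at test time)."""
    half = in_win // 2
    window = [xs[i] if 0 <= i < len(xs) else pad
              for i in range(t - half, t + half + 1)]
    prev = [labels[i] if i >= 0 else pad
            for i in range(t - prev_preds, t)]
    return tuple(window + prev)

class RSWClassifier:
    """Recurrent sliding window over a trivial base learner: a table
    mapping each feature tuple to its most frequent training label."""

    def __init__(self, in_win=3, prev_preds=1):
        self.in_win, self.prev_preds = in_win, prev_preds
        self.table = defaultdict(Counter)
        self.default = None

    def fit(self, sequences):
        all_labels = Counter()
        for xs, ys in sequences:
            for t in range(len(xs)):
                f = make_features(xs, ys, t, self.in_win, self.prev_preds)
                self.table[f][ys[t]] += 1
                all_labels[ys[t]] += 1
        self.default = all_labels.most_common(1)[0][0]
        return self

    def predict(self, xs):
        preds = []
        for t in range(len(xs)):
            # Recurrence: earlier predictions feed the next feature vector.
            f = make_features(xs, preds, t, self.in_win, self.prev_preds)
            c = self.table.get(f)
            preds.append(c.most_common(1)[0][0] if c else self.default)
        return preds
```

Enlarging `in_win` or `prev_preds` changes what the classifier conditions on, which is the trade-off the abstract's experiments cross-validate.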

42 
A study of model-based average reward reinforcement learning / Ok, DoKyeong. 09 May 1996
Reinforcement Learning (RL) is the study of learning agents that improve
their performance from rewards and punishments. Most reinforcement learning
methods optimize the discounted total reward received by an agent, while, in many
domains, the natural criterion is to optimize the average reward per time step. In this
thesis, we introduce a model-based average reward reinforcement learning method
called "H-learning" and show that it performs better than other average reward and
discounted RL methods in the domain of scheduling a simulated Automatic Guided
Vehicle (AGV).
We also introduce a version of H-learning which automatically explores the
unexplored parts of the state space, while always choosing an apparently best action
with respect to the current value function. We show that this "Auto-exploratory H-learning"
performs much better than the original H-learning under many previously
studied exploration strategies.
To scale H-learning to large state spaces, we extend it to learn action models
and reward functions in the form of Bayesian networks, and approximate its value
function using local linear regression. We show that both of these extensions are very
effective in significantly reducing the space requirement of Hlearning, and in making
it converge much faster in the AGV scheduling task. Further, Auto-exploratory H-learning
synergistically combines with Bayesian network model learning and value
function approximation by local linear regression, yielding a highly effective average
reward RL algorithm.
We believe that the algorithms presented here have the potential to scale to
large applications in the context of average reward optimization. / Graduation date: 1996
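Not in the original abstract: a toy, hedged sketch of the model-based average-reward loop that H-learning builds on. The count-based tabular model, the fixed epsilon-greedy behavior, and the simple running average for `rho` are illustrative assumptions; the thesis's annealing, exploration strategies, Bayesian-network models, and regression-based value approximation are all omitted.

```python
import random
from collections import defaultdict

def h_learning(step, states, actions, n_steps=5000, seed=0):
    """Tabular model-based average-reward RL in the spirit of H-learning:
    estimate transition and reward models from counts, keep relative state
    values h and an average-reward estimate rho.
    `step(s, a)` is the environment; it returns (next_state, reward)."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> s' -> visits
    r_sum = defaultdict(float)                      # (s, a) -> summed reward
    n_sa = defaultdict(int)                         # (s, a) -> visits
    h = {s: 0.0 for s in states}
    rho, rho_n = 0.0, 0

    def q(s, a):
        # Model-based action value: estimated reward + expected h of successor.
        if n_sa[(s, a)] == 0:
            return 0.0
        r_hat = r_sum[(s, a)] / n_sa[(s, a)]
        exp_h = sum(n * h[s2] for s2, n in counts[(s, a)].items()) / n_sa[(s, a)]
        return r_hat + exp_h

    s = states[0]
    for _ in range(n_steps):
        greedy = max(actions, key=lambda a: q(s, a))
        a = greedy if rng.random() < 0.8 else rng.choice(actions)  # crude exploration
        s2, r = step(s, a)
        n_sa[(s, a)] += 1
        counts[(s, a)][s2] += 1
        r_sum[(s, a)] += r
        if a == greedy:
            # rho averages r + h(s') - h(s) over greedy steps only.
            rho_n += 1
            rho += (r + h[s2] - h[s] - rho) / rho_n
        h[s] = max(q(s, a2) for a2 in actions) - rho
        s = s2
    policy = {si: max(actions, key=lambda a: q(si, a)) for si in states}
    return h, rho, policy
```

On a 2-state toy MDP where only one action in state 0 earns reward 1 per two-step cycle, the greedy policy should learn to take it and `rho` should approach the optimal gain of 0.5.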

43 
Learning World Models in Environments with Manifest Causal Structure / Bergman, Ruth. 05 May 1995
This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain for learning: environments with manifest causal structure. In such environments the agent has an abundance of perceptions of its environment. Specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure, and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low-level rule-learning algorithm that converges on a good set of specific rules, (2) a concept-learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition, this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate. However, each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented and an upper bound on the expected error rate of the expert is derived.
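The thesis's expert-finding algorithm and its bound are not reproduced here; what follows is a generic, hedged sketch of the testing idea it formalizes, assuming i.i.d. mistakes and a conservative Hoeffding-style sample size. The `error_rates` list stands in for a stream of experts whose accuracy the algorithm can only observe through sampled trials.

```python
import math
import random

def find_good_expert(error_rates, epsilon=0.2, delta=0.05, seed=0):
    """Scan a sequence of experts, each with an unknown true error rate,
    testing each on n independent trials.  n is chosen (conservatively,
    via Hoeffding's inequality) so the empirical error is within
    epsilon/2 of the truth with probability at least 1 - delta.
    Return the index of the first expert whose empirical error is at
    most epsilon/2, so its true error is likely below epsilon."""
    rng = random.Random(seed)
    n = math.ceil(2 * math.log(2 / delta) / (epsilon / 2) ** 2)
    for i, p in enumerate(error_rates):
        # Simulate n trials; a mistake occurs with the expert's true rate p.
        mistakes = sum(rng.random() < p for _ in range(n))
        if mistakes / n <= epsilon / 2:
            return i
    return None  # no acceptable expert found in the sequence
```

The sequential structure matters: bad experts are rejected and discarded after a bounded number of trials, so the total cost is the per-expert test size times the number of experts scanned before a good one appears.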

44 
Nonlinear Latent Factor Models for Revealing Structure in High-dimensional Data / Memisevic, Roland. 28 July 2008
Real-world data is not random: the variability in the datasets that arise in computer vision,
signal processing and other areas is often highly constrained and governed by a number of
degrees of freedom that is much smaller than the superficial dimensionality of the data.
Unsupervised learning methods can be used to automatically discover the “true”, underlying
structure in such datasets and are therefore a central component in many systems that deal
with high-dimensional data.
In this thesis we develop several new approaches to modeling the low-dimensional structure
in data. We introduce a new nonparametric framework for latent variable modelling that, in
contrast to previous methods, generalizes learned embeddings beyond the training data and its
latent representatives. We show that the computational complexity for learning and applying
the model is much smaller than that of existing methods, and we illustrate its applicability
on several problems.
We also show how we can introduce supervision signals into latent variable models using
conditioning. Supervision signals make it possible to attach “meaning” to the axes of a latent
representation and to untangle the factors that contribute to the variability in the data. We
develop a model that uses conditional latent variables to extract rich distributed representations
of image transformations, and we describe a new model for learning transformation
features in structured supervised learning problems.
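The thesis's nonparametric framework is not reproduced here; as a loose illustration of the out-of-sample problem it addresses (generalizing a learned embedding beyond the training data and its latent representatives), here is a Nadaraya-Watson kernel map from new inputs to previously learned latent coordinates. The Gaussian kernel and the `sigma` bandwidth are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def out_of_sample_map(X_train, Z_train, X_new, sigma=1.0):
    """Extend a learned embedding to unseen points: each new point is
    mapped to a kernel-weighted average of the training embeddings,
    with weights given by a Gaussian kernel in input space."""
    # Squared distances between every new point and every training point.
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)  # normalize weights per new point
    return W @ Z_train                 # weighted average of latent codes
```

A new point close to one training cluster inherits (approximately) that cluster's latent coordinates, which is the sense in which the embedding generalizes beyond the training set.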

45 
Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in Co-registered 18F-FDG PET/CT Images / Markel, Daniel. 14 December 2011
Variability between oncologists in defining the tumor during radiation therapy planning
can be as high as 700% by volume. Robust, automated definition of tumor boundaries
has the ability to significantly improve treatment accuracy and efficiency. However, the information provided in computed tomography (CT) is not sensitive enough to differences between tumor and healthy tissue, and positron emission tomography (PET) is hampered by blurriness and low resolution. The textural characteristics of thoracic tissue were investigated and compared with those of tumors found within 21 patient PET and CT images in order to enhance the differences and the boundary between cancerous and healthy tissue. A pattern recognition approach was used to learn the textural characteristics of each tissue type from these samples and to classify voxels as either normal or abnormal.
The approach was compared to a number of alternative methods and found to have the
highest overlap with that of an oncologist's tumor definition.
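The thesis's specific 3-D texture features and classifier are not reproduced here; this is a toy 2-D sketch of the pattern-recognition idea, assuming simple local statistics (mean and variance) as texture features and a nearest-centroid rule in place of the actual method.

```python
import numpy as np

def local_texture_features(img, r=1):
    """Per-pixel local mean and variance over a (2r+1) x (2r+1) window;
    a 2-D toy stand-in for 3-D texture features on PET/CT volumes."""
    H, W = img.shape
    feats = np.zeros((H, W, 2))
    for i in range(H):
        for j in range(W):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            feats[i, j] = patch.mean(), patch.var()
    return feats

def nearest_centroid_segment(img, tumor_mask, r=1):
    """Learn per-class centroids in texture-feature space from one
    labelled image, then label every pixel by its nearer centroid."""
    f = local_texture_features(img, r).reshape(-1, 2)
    y = tumor_mask.reshape(-1)
    c_tumor = f[y == 1].mean(axis=0)
    c_normal = f[y == 0].mean(axis=0)
    d_tumor = ((f - c_tumor) ** 2).sum(axis=1)
    d_normal = ((f - c_normal) ** 2).sum(axis=1)
    return (d_tumor < d_normal).reshape(img.shape)
```

The point of working in feature space rather than raw intensity is the one the abstract makes: texture statistics can separate tissues whose raw CT or PET values overlap.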

48 
Exact learning of tree patterns / Amoth, Thomas R. January 1900
Thesis (Ph. D.)--Oregon State University, 2002. / Typescript (photocopy). Includes bibliographical references (leaves 168-171). Also available on the World Wide Web.

49 
TIGER: an unsupervised machine learning tactical inference generator / Sidran, David Ezra; Segre, Alberto Maria. January 2009
Thesis supervisor: Alberto Maria Segre. Includes bibliographical references (p. 145-151).

50 
Representing and learning routine activities / Hexmoor, Henry H. January 1900
Thesis (Ph. D.)--State University of New York at Buffalo, 1995. / "December 1995." Includes bibliographical references (p. 127-142). Also available in print.
