41

Calibrating recurrent sliding window classifiers for sequential supervised learning

Joshi, Saket Subhash 03 October 2003
Sequential supervised learning problems involve assigning a class label to each item in a sequence. Examples include part-of-speech tagging and text-to-speech mapping. A very general-purpose strategy for solving such problems is to construct a recurrent sliding window (RSW) classifier, which maps some window of the input sequence plus some number of previously-predicted items into a prediction for the next item in the sequence. This paper describes a general-purpose implementation of RSW classifiers and discusses the highly practical issue of how to choose the size of the input window and the number of previous predictions to incorporate. Experiments on two real-world domains show that the optimal choices vary from one learning algorithm to another. They also depend on the evaluation criterion (number of correctly-predicted items versus number of correctly-predicted whole sequences). We conclude that window sizes must be chosen by cross-validation. The results have implications for the choice of window sizes for other models, including hidden Markov models and conditional random fields. / Graduation date: 2004
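
The RSW construction is simple enough to sketch. The following Python fragment is a minimal illustration, assuming a scikit-learn-style base classifier with fit/predict and integer labels; the function names and defaults are ours, not the thesis's.

```python
import numpy as np

def make_features(seq, t, preds, in_win, n_prev):
    """Concatenate the input window centered at t with the n_prev prior labels."""
    half = in_win // 2
    window = []
    for i in range(t - half, t + half + 1):
        # Zero-pad past the ends of the sequence.
        x = seq[i] if 0 <= i < len(seq) else np.zeros_like(np.asarray(seq[0], dtype=float))
        window.append(np.asarray(x, dtype=float))
    # Use -1 as the "no previous prediction yet" sentinel.
    prev = [preds[t - j] if t - j >= 0 else -1 for j in range(1, n_prev + 1)]
    return np.concatenate(window + [np.array(prev, dtype=float)])

def fit_rsw(base_clf, seqs, labels, in_win=3, n_prev=2):
    # Train with the true previous labels filled in (teacher forcing).
    X, y = [], []
    for seq, lab in zip(seqs, labels):
        for t in range(len(seq)):
            X.append(make_features(seq, t, lab, in_win, n_prev))
            y.append(lab[t])
    base_clf.fit(np.array(X), np.array(y))
    return base_clf

def predict_rsw(clf, seq, in_win=3, n_prev=2):
    # At test time, feed the classifier's own predictions back in, left to right.
    preds = []
    for t in range(len(seq)):
        x = make_features(seq, t, preds, in_win, n_prev)
        preds.append(int(clf.predict(x.reshape(1, -1))[0]))
    return preds
```

Training uses the true previous labels, while prediction feeds back the classifier's own outputs; per the abstract, in_win and n_prev should be tuned by cross-validation.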
42

A study of model-based average reward reinforcement learning

Ok, DoKyeong 09 May 1996
Reinforcement Learning (RL) is the study of learning agents that improve their performance from rewards and punishments. Most reinforcement learning methods optimize the discounted total reward received by an agent, while, in many domains, the natural criterion is to optimize the average reward per time step. In this thesis, we introduce a model-based average reward reinforcement learning method called "H-learning" and show that it performs better than other average reward and discounted RL methods in the domain of scheduling a simulated Automatic Guided Vehicle (AGV). We also introduce a version of H-learning which automatically explores the unexplored parts of the state space, while always choosing an apparently best action with respect to the current value function. We show that this "Auto-exploratory H-Learning" performs much better than the original H-learning under many previously studied exploration strategies. To scale H-learning to large state spaces, we extend it to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We show that both of these extensions are very effective in significantly reducing the space requirement of H-learning, and in making it converge much faster in the AGV scheduling task. Further, Auto-exploratory H-learning synergistically combines with Bayesian network model learning and value function approximation by local linear regression, yielding a highly effective average reward RL algorithm. We believe that the algorithms presented here have the potential to scale to large applications in the context of average reward optimization. / Graduation date: 1996
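
For context, average-reward methods like H-learning estimate a gain rho and a bias (relative value) function h rather than a discounted value. Below is a hedged sketch of the underlying tabular update, assuming a known model; H-learning itself learns the model online, and all names here are illustrative.

```python
# Illustrative average-reward Bellman sweep with a known tabular model.
# P[s][a] maps next states to probabilities; R[s][a] is the expected
# immediate reward. This only shows the update such methods estimate.

def h_sweep(h, rho, P, R):
    """One synchronous sweep of h(s) <- max_a [R(s,a) + sum_s' P(s'|s,a) h(s')] - rho."""
    return {
        s: max(
            R[s][a] + sum(p * h[s2] for s2, p in P[s][a].items())
            for a in P[s]
        ) - rho
        for s in P
    }

def rho_estimate(rho, r, h_s, h_s_next, alpha=0.05):
    # On greedy transitions, rho ~ r + h(s') - h(s); keep a running average.
    # (One common form of the gain update; details vary across algorithms.)
    return (1 - alpha) * rho + alpha * (r + h_s_next - h_s)
```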
43

Learning World Models in Environments with Manifest Causal Structure

Bergman, Ruth 05 May 1995
This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain --- environments with manifest causal structure --- for learning. In such environments the agent has an abundance of perceptions of its environment. Specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure, and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low-level rule-learning algorithm that converges on a good set of specific rules, (2) a concept-learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition, this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate. However, each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented and an upper bound on the expected error rate of the expert is derived.
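
The concept-learning step (2) can be made concrete. The sketch below groups percepts that agree on every observation; it is our reading of "completely correlated perceptions", not the thesis's actual algorithm.

```python
def correlated_groups(observations):
    """Group percept indices that agree on every observation (illustrative)."""
    n = len(observations[0])
    groups, used = [], set()
    for i in range(n):
        if i in used:
            continue
        group = [i]
        for j in range(i + 1, n):
            if j not in used and all(o[i] == o[j] for o in observations):
                group.append(j)
                used.add(j)
        used.add(i)
        groups.append(group)
    return groups

# Percepts 0 and 2 always agree, so they collapse into one concept.
obs = [(1, 0, 1), (0, 1, 0), (1, 1, 1)]
print(correlated_groups(obs))  # [[0, 2], [1]]
```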
44

Non-linear Latent Factor Models for Revealing Structure in High-dimensional Data

Memisevic, Roland 28 July 2008
Real-world data is not random: the variability in the data-sets that arise in computer vision, signal processing and other areas is often highly constrained and governed by a number of degrees of freedom that is much smaller than the superficial dimensionality of the data. Unsupervised learning methods can be used to automatically discover the “true”, underlying structure in such data-sets and are therefore a central component in many systems that deal with high-dimensional data. In this thesis we develop several new approaches to modeling the low-dimensional structure in data. We introduce a new non-parametric framework for latent variable modelling that, in contrast to previous methods, generalizes learned embeddings beyond the training data and its latent representatives. We show that the computational complexity for learning and applying the model is much smaller than that of existing methods, and we illustrate its applicability on several problems. We also show how we can introduce supervision signals into latent variable models using conditioning. Supervision signals make it possible to attach “meaning” to the axes of a latent representation and to untangle the factors that contribute to the variability in the data. We develop a model that uses conditional latent variables to extract rich distributed representations of image transformations, and we describe a new model for learning transformation features in structured supervised learning problems.
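
To make "generalizes learned embeddings beyond the training data" concrete: the simplest non-parametric route is to interpolate the latent coordinates of nearby training points. Below is a hedged sketch using Nadaraya-Watson kernel regression; it is an illustration of the idea, not the thesis's model.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """RBF kernel between row vectors of a (m, d) and b (n, d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-d**2 / (2 * sigma**2))

def out_of_sample_embedding(X_train, Z_train, X_new, sigma=1.0):
    """Map new points into an existing latent embedding Z_train by
    kernel-weighted averaging of the training points' latent coordinates."""
    K = rbf(X_new, X_train, sigma)
    K /= K.sum(axis=1, keepdims=True)  # normalized Nadaraya-Watson weights
    return K @ Z_train
```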
45

Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in Co-registered 18-FDG PET/CT Images

Markel, Daniel 14 December 2011
Variability between oncologists in defining the tumor during radiation therapy planning can be as high as 700% by volume. Robust, automated definition of tumor boundaries has the ability to significantly improve treatment accuracy and efficiency. However, the information provided in computed tomography (CT) is not sensitive enough to differences between tumor and healthy tissue, and positron emission tomography (PET) is hampered by blurriness and low resolution. The textural characteristics of thoracic tissue were investigated and compared with those of tumors found within 21 patient PET and CT images in order to enhance the differences and the boundary between cancerous and healthy tissue. A pattern recognition approach was used from these samples to learn the textural characteristics of each and classify voxels as being either normal or abnormal. The approach was compared to a number of alternative methods and found to have the highest overlap with an oncologist's tumor definition.
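
The pipeline the abstract describes, per-voxel texture features from co-registered volumes followed by a learned normal/abnormal classifier, can be sketched as follows. The local mean/variance features are a deliberately simple stand-in for the thesis's 3D texture features, and clf is any pre-trained scikit-learn-style classifier.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_texture_features(vol, size=5):
    """Per-voxel 3D texture features: local mean and variance over a cube.
    (A simple stand-in for the richer texture descriptors in the thesis.)"""
    v = vol.astype(float)
    mean = uniform_filter(v, size=size)
    mean_sq = uniform_filter(v**2, size=size)
    var = np.maximum(mean_sq - mean**2, 0.0)
    return np.stack([mean, var], axis=-1)

def classify_voxels(pet, ct, clf):
    # Concatenate per-voxel features from the co-registered PET and CT
    # volumes, then label every voxel normal/abnormal with the classifier.
    feats = np.concatenate(
        [local_texture_features(pet), local_texture_features(ct)], axis=-1)
    flat = feats.reshape(-1, feats.shape[-1])
    return clf.predict(flat).reshape(pet.shape)
```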
48

Exact learning of tree patterns

Amoth, Thomas R. 2002
Thesis (Ph. D.)--Oregon State University, 2002. / Typescript (photocopy). Includes bibliographical references (leaves 168-171). Also available on the World Wide Web.
49

TIGER: an unsupervised machine learning tactical inference generator

Sidran, David Ezra January 2009
Thesis supervisor: Alberto Maria Segre. Includes bibliographic references (p. 145-151).
50

Representing and learning routine activities

Hexmoor, Henry H. December 1995
Thesis (Ph. D.)--State University of New York at Buffalo, 1995. / "December 1995." Includes bibliographical references (p. 127-142). Also available in print.
