31 | Inductive machine learning with bias. Lam, Mau-kai (林謀楷). January 1994. Published or final version; Computer Science; Master of Philosophy.

32 | Machine learning methods for computational biology. Li, Limin (李丽敏). January 2010. Published or final version; Mathematics; Doctor of Philosophy.

33 | Cross-domain subspace learning. Si, Si (斯思). January 2010. Published or final version; Computer Science; Master of Philosophy.

34 | Learning to co-operate in multi-agent systems. Kostiadis, Kostas. January 2003. No description available.

35 | Modelling of learning in design. Sim, Siang Kok. January 2000. No description available.

36 | Learning by experimentation. Cao, Feng. January 1990. No description available.

37 | Towards inducing a simulation model description. Abdurahiman, Vakulathil. January 1994. No description available.

38 | Utilising incomplete domain knowledge in an information theoretic guided inductive knowledge discovery algorithm. Mallen, Jason. January 1995. No description available.

39 | Cognitive maps in Learning Classifier Systems. Ball, N. R. January 1991. No description available.
40 | Non-linear Latent Factor Models for Revealing Structure in High-dimensional Data. Memisevic, Roland. 28 July 2008.

Real-world data is not random: the variability in the datasets that arise in computer vision, signal processing, and other areas is often highly constrained, governed by a number of degrees of freedom much smaller than the superficial dimensionality of the data. Unsupervised learning methods can automatically discover the "true" underlying structure in such datasets, and they are therefore a central component of many systems that deal with high-dimensional data.
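The idea that high-dimensional data can be governed by far fewer degrees of freedom can be illustrated with a minimal sketch (a generic example, not taken from the thesis): points driven by a single latent factor, embedded in 50 dimensions, whose low intrinsic dimensionality is revealed by a plain SVD/PCA.

```python
import numpy as np

# Hypothetical illustration: data that looks 50-dimensional but is
# governed by a single underlying degree of freedom t.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=200)          # the one true latent factor
basis = rng.standard_normal((2, 50))     # fixed random linear map into R^50
X = np.column_stack([t, t**2]) @ basis   # 200 points in R^50, on a curve

# PCA via SVD of the centered data exposes the low intrinsic dimension:
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = (s**2) / (s**2).sum()
print(explained[:3])  # nearly all variance lies in the first two components
```

Here the curve spanned by `[t, t**2]` lies in a two-dimensional linear subspace of the 50-dimensional ambient space, so two principal components account for essentially all of the variance; non-linear latent factor models of the kind the thesis develops aim at structure that a linear method like this cannot capture.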
In this thesis we develop several new approaches to modelling the low-dimensional structure in data. We introduce a new non-parametric framework for latent variable modelling that, in contrast to previous methods, generalizes learned embeddings beyond the training data and its latent representatives. We show that the computational complexity of learning and applying the model is much smaller than that of existing methods, and we illustrate its applicability on several problems.
We also show how supervision signals can be introduced into latent variable models through conditioning. Supervision signals make it possible to attach "meaning" to the axes of a latent representation and to untangle the factors that contribute to the variability in the data. We develop a model that uses conditional latent variables to extract rich distributed representations of image transformations, and we describe a new model for learning transformation features in structured supervised learning problems.
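The essence of conditioning can be shown with a deliberately simplified stand-in (this is not the thesis's conditional latent variable model, which uses multiplicative hidden units): instead of modelling outputs y directly, model y given an input x, so that the fitted parameters encode the transformation relating the pair rather than the data itself.

```python
import numpy as np

# Hypothetical sketch: pairs (x, y) related by an unknown rotation R.
rng = np.random.default_rng(1)
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # ground-truth transformation

X = rng.standard_normal((500, 2))  # "input" points
Y = X @ R.T                        # "output" points: rotated inputs

# Least-squares fit of a linear map with y ~ W x: a conditional model
# whose parameters capture the transformation between x and y.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(W.T, R, atol=1e-8))  # the fit recovers the rotation
```

A linear least-squares fit suffices here only because the transformation is itself linear and fixed across all pairs; the conditional models described above instead infer a distributed latent code per pair, which lets them represent a whole family of transformations.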