  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Inductive machine learning with bias

林謀楷, Lam, Mau-kai. January 1994 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
32

Machine learning methods for computational biology

Li, Limin, 李丽敏 January 2010 (has links)
published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy
33

Cross-domain subspace learning

Si, Si, 斯思 January 2010 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
34

Learning to co-operate in multi-agent systems

Kostiadis, Kostas January 2003 (has links)
No description available.
35

Modelling of learning in design

Sim, Siang Kok January 2000 (has links)
No description available.
36

Learning by experimentation

Cao, Feng January 1990 (has links)
No description available.
37

Towards inducing a simulation model description

Abdurahiman, Vakulathil January 1994 (has links)
No description available.
38

Utilising incomplete domain knowledge in an information theoretic guided inductive knowledge discovery algorithm

Mallen, Jason January 1995 (has links)
No description available.
39

Cognitive maps in Learning Classifier Systems

Ball, N. R. January 1991 (has links)
No description available.
40

Non-linear Latent Factor Models for Revealing Structure in High-dimensional Data

Memisevic, Roland 28 July 2008 (has links)
Real-world data is not random: the variability in the data sets that arise in computer vision, signal processing, and other areas is often highly constrained, governed by a number of degrees of freedom that is much smaller than the superficial dimensionality of the data. Unsupervised learning methods can be used to automatically discover the "true", underlying structure in such data sets and are therefore a central component in many systems that deal with high-dimensional data. In this thesis we develop several new approaches to modelling the low-dimensional structure in data. We introduce a new non-parametric framework for latent variable modelling that, in contrast to previous methods, generalizes learned embeddings beyond the training data and its latent representatives. We show that the computational complexity of learning and applying the model is much smaller than that of existing methods, and we illustrate its applicability on several problems. We also show how supervision signals can be introduced into latent variable models through conditioning. Supervision signals make it possible to attach "meaning" to the axes of a latent representation and to untangle the factors that contribute to the variability in the data. We develop a model that uses conditional latent variables to extract rich distributed representations of image transformations, and we describe a new model for learning transformation features in structured supervised learning problems.
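The central premise of this abstract — that apparently high-dimensional data is often governed by far fewer degrees of freedom — can be illustrated with a minimal sketch. This is a generic linear latent factor example using PCA, not the non-parametric method the thesis itself develops: 50-dimensional observations are generated from a 2-dimensional latent factor, and the spectrum of the data reveals the low intrinsic dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 500, 50, 2             # samples, ambient dim, true latent dim
Z = rng.normal(size=(n, k))      # hidden low-dimensional factors
W = rng.normal(size=(k, d))      # linear map into the ambient space
X = Z @ W + 0.01 * rng.normal(size=(n, d))  # observations + small noise

# PCA via the SVD of the centred data matrix
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = (s ** 2) / (s ** 2).sum()

# Nearly all variance is captured by the first k principal components,
# even though the data nominally lives in d = 50 dimensions.
print(explained[:k].sum())
```

Non-linear methods of the kind the thesis studies address the case where the map from latent factors to observations is not a single linear `W` but a curved manifold, which PCA cannot recover.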
