
Learning Distributed Representations for Statistical Language Modelling and Collaborative Filtering

With the increasing availability of large datasets, machine learning techniques
are becoming an attractive alternative to expert-designed approaches for solving complex problems in domains where data is abundant.
In this thesis we introduce several models for large sparse discrete datasets. Our approach, based on probabilistic models that use distributed representations to alleviate the effects of data sparsity, is applied to statistical language modelling and collaborative filtering.

We introduce three probabilistic language models that represent words using learned
real-valued vectors. Two of the models are based on the Restricted Boltzmann Machine (RBM) architecture, while the third
is a simple deterministic model. We show that the deterministic model outperforms the widely used n-gram models and learns sensible word representations.
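
The abstract does not spell out the model's exact form; as a hedged illustration of the general idea, the sketch below shows a log-bilinear-style predictor in which a context-dependent predicted vector is compared against every word's learned vector and the scores are normalized with a softmax. All dimensions, names, and initializations here are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

# Hedged sketch of a log-bilinear-style language model: the next-word
# distribution is a softmax over dot products between a predicted feature
# vector (a linear function of the context word vectors) and each
# vocabulary word's representation. Sizes are illustrative.
V, D, N = 10000, 100, 2                        # vocab size, embedding dim, context length
rng = np.random.default_rng(0)
R = rng.normal(scale=0.01, size=(V, D))        # learned word representation matrix
C = [rng.normal(scale=0.01, size=(D, D)) for _ in range(N)]  # per-position weights

def next_word_distribution(context_ids):
    """Return P(w | context) as a length-V vector."""
    r_hat = sum(C[i] @ R[w] for i, w in enumerate(context_ids))  # predicted vector
    scores = R @ r_hat              # similarity of prediction to every word vector
    scores -= scores.max()          # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()
```

Note that the normalization step touches all V words, which is the cost the hierarchical version described next is designed to avoid.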

To reduce the time complexity of training and making predictions with the deterministic model,
we introduce a hierarchical version of the model that can be exponentially faster.
The speedup is achieved by organizing the vocabulary into a tree over words and
exploiting this structure. We propose a simple feature-based
algorithm for automatic construction of trees over words from data and show that the
resulting models can outperform non-hierarchical neural models as well as the
best n-gram models.
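
A hedged sketch of the tree-based idea, under the common assumption that each word is a leaf of a binary tree and each internal node carries a learned vector: a word's probability is the product of binary left/right decisions along its root-to-leaf path, so evaluating it costs O(log V) operations for a balanced tree instead of an O(V) softmax. The function and variable names below are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def word_probability(r_hat, path):
    """r_hat: predicted feature vector for the current context.
    path: (node_vector, go_left) pairs from the root to the word's leaf."""
    p = 1.0
    for q, go_left in path:
        p_left = sigmoid(r_hat @ q)          # probability of branching left here
        p *= p_left if go_left else 1.0 - p_left
    return p
```

Because each internal node's decision probability sums to one over its two children, the leaf probabilities form a valid distribution without any explicit normalization over the vocabulary.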

We then turn our attention to collaborative filtering
and show how RBM models can be used to model the distribution of sparse
high-dimensional user rating vectors efficiently, presenting inference
and learning algorithms that scale linearly in the number of observed ratings.
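
A hedged sketch of why inference can scale linearly in the number of observed ratings, assuming an RBM whose visible units are instantiated only for the items a user actually rated: the hidden-unit activations are a sum of one weight slice per observed rating. The weight layout W[item, rating, hidden] and all names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_activations(rated_items, rating_values, W, b_hidden):
    """rated_items: indices of the items this user rated.
    rating_values: 0-based rating value for each of those items."""
    total = b_hidden.copy()
    for item, r in zip(rated_items, rating_values):  # one term per observed rating
        total += W[item, r]                          # slice of shape (num_hidden,)
    return sigmoid(total)                            # P(h_j = 1 | observed ratings)
```

Learning retains the same per-rating cost, since only the visible units corresponding to observed ratings participate in the updates.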
We also introduce the Probabilistic Matrix Factorization (PMF) model, which is based
on a probabilistic formulation of the low-rank matrix approximation problem
for partially observed matrices. The two models are then extended to
allow conditioning on the identities of the rated items, whether or not the
actual rating values are known. Our results on the Netflix Prize dataset show
that both RBM and PMF models outperform online SVD models.
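
As a hedged sketch of the PMF idea: under a Gaussian likelihood with Gaussian priors on the latent vectors, MAP estimation reduces to minimizing a regularized squared error over the observed entries only, which can be done with stochastic gradient descent. The hyperparameters and names below are illustrative, not the settings used in the thesis.

```python
import numpy as np

def pmf_sgd(ratings, num_users, num_items, d=30, lr=0.005, lam=0.02, epochs=10):
    """ratings: list of (user, item, value) triples for the observed entries."""
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((num_users, d))    # user latent vectors
    V = 0.1 * rng.standard_normal((num_items, d))    # item latent vectors
    for _ in range(epochs):
        for i, j, r in ratings:
            u_i = U[i].copy()                        # save before updating
            err = r - u_i @ V[j]                     # prediction error on entry (i, j)
            U[i] += lr * (err * V[j] - lam * u_i)    # gradient step with
            V[j] += lr * (err * u_i - lam * V[j])    # quadratic regularization
    return U, V
```

Each SGD update touches one observed rating, so a training epoch also scales linearly in the number of observed entries.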

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OTU.1807/24832
Date: 31 August 2010
Creators: Mnih, Andriy
Contributors: Hinton, Geoffrey
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: en_ca
Detected Language: English
Type: Thesis
