1 |
Learning Distributed Representations for Statistical Language Modelling and Collaborative Filtering. Mnih, Andriy. 31 August 2010.
With the increasing availability of large datasets, machine learning techniques are becoming an attractive alternative to expert-designed approaches for solving complex problems in domains where data is abundant.
In this thesis we introduce several models for large, sparse, discrete datasets. Our approach, which is based on probabilistic models that use distributed representations to alleviate the effects of data sparsity, is applied to statistical language modelling and collaborative filtering.
We introduce three probabilistic language models that represent words using learned
real-valued vectors. Two of the models are based on the Restricted Boltzmann Machine (RBM) architecture, while the third is a simple deterministic model. We show that the deterministic model outperforms widely used n-gram models and learns sensible word representations.
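As an illustration of the general idea (the abstract does not spell out the model's exact form, so the names, dimensions, and combination scheme below are assumptions for a minimal sketch), a next-word distribution can be computed from learned real-valued word vectors roughly as follows: the context word vectors are combined into a predicted vector, which is scored against every word's vector and normalised with a softmax.

import numpy as np

# Minimal sketch (not the thesis's exact model): predict the next word from
# learned real-valued word vectors. All sizes and names are illustrative.
rng = np.random.default_rng(0)
vocab_size, dim, context_size = 10_000, 100, 3

R = rng.normal(scale=0.01, size=(vocab_size, dim))         # one vector per word
C = rng.normal(scale=0.01, size=(context_size, dim, dim))  # per-position combination matrices
b = np.zeros(vocab_size)                                   # per-word biases

def next_word_distribution(context_word_ids):
    """Return P(next word | context) for a fixed-length context."""
    # predicted representation of the next word: a linear function of the context vectors
    r_hat = sum(R[w] @ C[i] for i, w in enumerate(context_word_ids))
    scores = R @ r_hat + b        # similarity of every word's vector to the prediction
    scores -= scores.max()        # numerical stability
    p = np.exp(scores)
    return p / p.sum()

probs = next_word_distribution([12, 7, 42])   # arbitrary example word ids
print(probs.shape, probs.sum())               # (10000,) 1.0

Note that the final softmax touches every word in the vocabulary, which is exactly the cost the hierarchical version described next avoids.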
To reduce the time complexity of training and making predictions with the deterministic model, we introduce a hierarchical version of the model that can be exponentially faster.
The speedup is achieved by structuring the vocabulary as a tree over words and exploiting that structure when making predictions. We propose a simple feature-based algorithm for automatically constructing such trees from data and show that the resulting models can outperform both non-hierarchical neural models and the best n-gram models.
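The following toy sketch shows how a tree over words can make prediction exponentially faster: each word is a leaf of a binary tree, and its probability is a product of roughly log2(V) binary decisions along the root-to-leaf path instead of one softmax over all V words. The tiny hand-built tree and random parameters below are assumptions for illustration only; they are not the feature-based tree-construction algorithm proposed in the thesis.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 4-word vocabulary: word -> root-to-leaf path as (internal_node_id, branch_bit) pairs
paths = {
    "the": [(0, 0), (1, 0)],
    "cat": [(0, 0), (1, 1)],
    "sat": [(0, 1), (2, 0)],
    "mat": [(0, 1), (2, 1)],
}

rng = np.random.default_rng(0)
dim = 8
node_vectors = rng.normal(scale=0.1, size=(3, dim))  # one parameter vector per internal node

def word_probability(word, context_vector):
    """P(word | context) as a product of binary decisions down the tree."""
    p = 1.0
    for node, bit in paths[word]:
        p_right = sigmoid(node_vectors[node] @ context_vector)
        p *= p_right if bit == 1 else (1.0 - p_right)
    return p

context = rng.normal(size=dim)   # stands in for the predicted next-word representation
print(sum(word_probability(w, context) for w in paths))   # the four probabilities sum to 1.0

Because each prediction only visits the nodes on one root-to-leaf path, the per-word cost drops from O(V) to O(log V) for a balanced tree.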
We then turn our attention to collaborative filtering and show how RBM models can be used to efficiently model the distribution of sparse, high-dimensional user rating vectors, presenting inference and learning algorithms that scale linearly in the number of observed ratings.
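To make the linear-scaling point concrete, here is a heavily simplified sketch of one contrastive-divergence (CD-1) step for a single user. Binary ratings and all names are assumptions made purely for brevity (the models in the thesis handle actual rating values and structure not reproduced here); the point is that only the weight rows of the items the user actually rated are ever read or updated.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step_for_user(ratings, item_ids, W, b_vis, b_hid, rng, lr=0.05):
    """One CD-1 update using only this user's observed (binary) ratings.

    ratings : (m,) 0/1 vector of the user's m observed ratings
    item_ids: (m,) indices of the rated items
    W       : (n_items, n_hidden) weights; only the rows in item_ids are touched
    """
    W_u = W[item_ids]                                    # (m, n_hidden), m = number of observed ratings
    # positive phase: hidden probabilities given the observed ratings
    h_pos = sigmoid(ratings @ W_u + b_hid)
    h_sample = (rng.random(h_pos.shape) < h_pos).astype(float)
    # negative phase: reconstruct only the observed visible units
    v_neg = sigmoid(h_sample @ W_u.T + b_vis[item_ids])
    h_neg = sigmoid(v_neg @ W_u + b_hid)
    # CD-1 gradients, confined to the observed items' parameters
    W[item_ids] += lr * (np.outer(ratings, h_pos) - np.outer(v_neg, h_neg))
    b_vis[item_ids] += lr * (ratings - v_neg)
    b_hid += lr * (h_pos - h_neg)

rng = np.random.default_rng(0)
n_items, n_hidden = 1000, 30
W = rng.normal(scale=0.01, size=(n_items, n_hidden))
b_vis, b_hid = np.zeros(n_items), np.zeros(n_hidden)
cd1_step_for_user(np.array([1.0, 0.0, 1.0]), np.array([3, 17, 256]), W, b_vis, b_hid, rng)

The per-user work above involves only arrays whose leading dimension is the number of observed ratings (plus fixed-size hidden-unit terms), which is what keeps inference and learning linear in that quantity.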
We also introduce the Probabilistic Matrix Factorization (PMF) model, which is based on the probabilistic formulation of the low-rank matrix approximation problem for partially observed matrices. The two models are then extended to allow conditioning on the identities of the rated items, whether or not the actual rating values are known. Our results on the Netflix Prize dataset show that both the RBM and PMF models outperform online SVD models.
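As a rough illustration of the PMF idea (hyperparameters, sizes, and the toy data below are assumptions, not values from the thesis), MAP estimation under Gaussian priors reduces to minimising L2-regularised squared error over the observed entries only, which can be done with simple stochastic gradient steps:

import numpy as np

# Minimal sketch of low-rank factorization over observed ratings only:
# each rating r_ui is modelled as the dot product of a user vector and an
# item vector; Gaussian priors on the vectors become L2 penalties.
rng = np.random.default_rng(0)
n_users, n_items, rank = 500, 300, 10
U = 0.1 * rng.standard_normal((n_users, rank))   # user feature vectors
V = 0.1 * rng.standard_normal((n_items, rank))   # item feature vectors

# toy observed ratings as (user, item, rating) triples
observations = [(0, 5, 4.0), (0, 7, 2.0), (3, 5, 5.0), (42, 100, 3.0)]

lr, reg = 0.02, 0.05
for epoch in range(200):
    for u, i, r in observations:
        err = r - U[u] @ V[i]                    # error on one observed rating
        U[u] += lr * (err * V[i] - reg * U[u])   # gradient step on the user vector
        V[i] += lr * (err * U[u] - reg * V[i])   # gradient step on the item vector

print(round(float(U[0] @ V[5]), 2))   # prediction for (user 0, item 5), close to the observed 4.0

Missing entries of the rating matrix never enter the objective, so the cost of training also scales with the number of observed ratings rather than with the full matrix size.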
|
2 |
What Machines Understand about Personality Words after Reading the News. Moyer, Eric David. 15 December 2014.
No description available.
|
3 |
Connectionist modelling in cognitive science: an exposition and appraisal. Janeke, Hendrik Christiaan. 28 February 2003.
This thesis explores the use of artificial neural networks for modelling cognitive processes. It presents an
exposition of the neural network paradigm, and evaluates its viability in relation to the classical, symbolic
approach in cognitive science. Classical researchers have approached the description of cognition by
concentrating mainly on an abstract, algorithmic level of description in which the information processing
properties of cognitive processes are emphasised. The approach is founded on seminal ideas about
computation and algorithmic description emanating, among other sources, from Alan Turing's work
in mathematical logic. In contrast to the classical conception of cognition, neural network approaches are
based on a form of neurocomputation in which the parallel distributed processing mechanisms of the brain
are highlighted. Although neural networks are generally accepted to be more neurally plausible than their
classical counterparts, some classical researchers have argued that these networks are best viewed as
implementation models, and that they are therefore not of much relevance to cognitive researchers because
information processing models of cognition can be developed independently of considerations about
implementation in physical systems.
In the thesis I argue that the descriptions of cognitive phenomena deriving from neural network modelling
cannot simply be reduced to classical, symbolic theories. The distributed representational mechanisms
underlying some neural network models have interesting properties such as similarity-based representation,
content-based retrieval, and coarse coding which do not have straightforward equivalents in classical
systems. Moreover, by placing emphasis on how cognitive processes are carried out by brain-like
mechanisms, neural network research has not only yielded a new metaphor for conceptualising cognition,
but also a new methodology for studying cognitive phenomena. Neural network simulations can be lesioned
to study the effect of such damage on the behaviour of the system, and these systems can be used to study
the adaptive mechanisms underlying learning processes. For these reasons, neural network modelling is best
viewed as a significant theoretical orientation in the cognitive sciences, instead of just an implementational
endeavour. / Psychology / D. Litt. et Phil. (Psychology)
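The lesioning methodology mentioned above can be illustrated with a toy example (everything here, including the random weights, is an illustrative stand-in rather than anything from the thesis): silencing a fraction of hidden units in a network with a distributed internal representation tends to degrade its output gradually rather than abruptly.

import numpy as np

# Toy illustration of "lesioning" a network: zero out a fraction of hidden
# units and measure how far the output drifts from the intact network.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 20, 100, 5
W1 = rng.normal(scale=0.3, size=(n_in, n_hidden))   # random stand-in weights
W2 = rng.normal(scale=0.3, size=(n_hidden, n_out))

def forward(x, lesion_fraction=0.0):
    h = np.tanh(x @ W1)                     # hidden (distributed) representation
    if lesion_fraction > 0:
        damaged = rng.random(n_hidden) < lesion_fraction
        h = np.where(damaged, 0.0, h)       # "lesion": silence the damaged units
    return np.tanh(h @ W2)

x = rng.normal(size=n_in)
intact = forward(x)
for frac in (0.1, 0.3, 0.5):
    drift = np.linalg.norm(forward(x, frac) - intact)
    print(f"lesion {int(frac * 100)}% of hidden units -> output drift {drift:.3f}")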
|