
Learning Word Representations with Projective Geometry

Recent work has demonstrated the impressive efficacy of computing representations in hyperbolic space rather than in Euclidean space, especially for multi-relational data and for data containing latent hierarchical structures. In this work, we seek to understand why this is the case. We reflect on the intrinsic properties of hyperbolic geometry and then zero in on one of them as a possible explanation for the performance improvements: projection. To validate this hypothesis, we propose our projected cone model, $\mathcal{PC}$, designed to capture the effects of projection while not exhibiting the other distinguishing properties of hyperbolic geometry. The model is defined as the stereographic projection of a cone into the unit disk, analogous to the construction of the Beltrami-Poincaré model of hyperbolic geometry by stereographic projection of one sheet of a two-sheeted hyperboloid into the unit disk. We derive all of the properties needed to conduct machine learning experiments with the model: the mapping formulae between the cone and the unit disk, its Riemannian metric, and the distance formula between two points. We investigate the learning capacity of our model and generalize it to higher dimensions so that we can perform representation learning there as well. Because generalizing models to higher dimensions can be difficult, we also introduce a baseline model for comparison: a product space model, $\mathcal{PCP}$, built up from our rigorously developed two-dimensional version of the $\mathcal{PC}$ model. We run experiments and compare our results with those obtained by others using the Beltrami-Poincaré model. Our model performs almost as well as the Beltrami-Poincaré model, far outperforming representation learning in Euclidean space. We thus conclude that projection is indeed key to explaining the success that hyperbolic geometry brings to representation learning.
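For orientation, the Beltrami-Poincaré construction that the abstract refers to can be sketched in its standard two-dimensional form. These are the well-known hyperboloid-model formulae, not the thesis's own $\mathcal{PC}$ formulae, which replace the hyperboloid sheet with a cone:

\[
\mathbb{H}^2 = \{(x_0, x_1, x_2) \in \mathbb{R}^3 : x_0^2 - x_1^2 - x_2^2 = 1,\; x_0 > 0\},
\qquad
\pi(x_0, x_1, x_2) = \frac{(x_1, x_2)}{1 + x_0},
\]

which maps the upper sheet onto the open unit disk. The induced Riemannian metric and distance on the disk are

\[
ds^2 = \frac{4\,(dy_1^2 + dy_2^2)}{(1 - \lVert y \rVert^2)^2},
\qquad
d(u, v) = \operatorname{arcosh}\!\left(1 + \frac{2\,\lVert u - v \rVert^2}{(1 - \lVert u \rVert^2)(1 - \lVert v \rVert^2)}\right).
\]

For the product space baseline $\mathcal{PCP}$, a standard choice of distance on a Riemannian product of two-dimensional $\mathcal{PC}$ factors would be $d\big((x_1, x_2), (y_1, y_2)\big) = \sqrt{d_{\mathcal{PC}}(x_1, y_1)^2 + d_{\mathcal{PC}}(x_2, y_2)^2}$; the number of factors shown here is illustrative, as the abstract does not specify it.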

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/45910
Date: 01 February 2024
Creators: Baker, Patrick
Contributors: Inkpen, Diana; Mao, Yongyi
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d’Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
Rights: Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/
