231 | The processing of derived and inflected words during reading. Niswander, Elizabeth. 2001.
No description available.

232 | Fast priming in reading: a new eye movement paradigm. Sereno, Sara Crescentia. 1991.
No description available.

233 | The nature of the information stored in the perceptual learning of letter strings. Schindler, Robert M. 1974.
No description available.

234 | Learning Word Representations with Projective Geometry. Baker, Patrick. 2024.
Recent work has demonstrated the impressive efficacy of computing representations in hyperbolic space rather than in Euclidean space, especially for multi-relational data and for data containing latent hierarchical structure. In this work, we seek to understand why this is the case. We reflect on the intrinsic properties of hyperbolic geometry and zero in on one of them, projection, as a possible explanation for the performance improvements.

To validate this hypothesis, we propose the projected cone model, $\mathcal{PC}$, which is designed to capture the effects of projection while not exhibiting the other distinguishing properties of hyperbolic geometry. The model is defined as the stereographic projection of a cone into a unit disk, analogous to the construction of the Beltrami-Poincaré model of hyperbolic geometry by stereographic projection of one sheet of a two-sheet hyperboloid into the unit disk. We derive all of the properties needed to conduct machine learning experiments with the model: the mapping formulae between the cone and the unit disk, its Riemannian metric, and the distance formula between two points in the $\mathcal{PC}$ model. We also investigate the model's learning capacity.

Finally, we generalize the model to higher dimensions so that representation learning can be performed in higher dimensions with the $\mathcal{PC}$ model. Because generalizing models into higher dimensions can be difficult, we also introduce a baseline model for comparison: a product space model, $\mathcal{PCP}$, built up from our rigorously developed two-dimensional version of the $\mathcal{PC}$ model. We run experiments and compare our results with those obtained by others using the Beltrami-Poincaré model. Our model performs almost as well as the Beltrami-Poincaré model and far outperforms representation learning in Euclidean space. We thus conclude that projection is indeed key to explaining the success that hyperbolic geometry brings to representation learning.
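[Note: the $\mathcal{PC}$ model's own mapping and distance formulae are not reproduced in this record. For orientation, the Beltrami-Poincaré construction that the abstract invokes is the standard one: the upper sheet of the two-sheet hyperboloid, projected stereographically from the point $(-1, 0, 0)$ onto the plane $x_0 = 0$, lands in the open unit disk,
$$\pi(x_0, x_1, x_2) = \left( \frac{x_1}{1 + x_0}, \; \frac{x_2}{1 + x_0} \right), \qquad x_0^2 - x_1^2 - x_2^2 = 1, \quad x_0 > 0,$$
and the induced distance between disk points $u, v$ with $\lVert u \rVert, \lVert v \rVert < 1$ is
$$d(u, v) = \operatorname{arcosh}\!\left( 1 + \frac{2 \lVert u - v \rVert^2}{\left(1 - \lVert u \rVert^2\right)\left(1 - \lVert v \rVert^2\right)} \right).$$
Per the abstract, the $\mathcal{PC}$ model repeats this recipe with a cone in place of the hyperboloid sheet, which is how it isolates the projection step from hyperbolic geometry's other distinguishing properties.]
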
235 | Word recognition as a function of retinal locus. Mishkin, Mortimer. 1949.
No description available.

236 | The effectiveness of a word box instructional approach on word identification and spelling performance for a sample of students with learning disabilities. Joseph, Laurice Marie. 1997.
No description available.

237 | Word recognition processes: evidence from laboratory-induced slips of the tongue. Camden, Carl Thurman. 1980.
No description available.

238 | A graphemic and phonemic analysis of primary level words. Nichols, Gilbert William. 1973.
No description available.

239 | The relationship between knowledge of word recognition generalizations and reading achievement. Rosso, Barbara Rak. 1971.
No description available.

240 | Visual word recognition as a function of meaning and graphic familiarity. Greenberg, Seth N. 1972.
No description available.