
Representing Linguistic Knowledge with Probabilistic Models

The use of language is one of the defining features of human cognition. Focusing on two key features of language, productivity and robustness, I examine how basic questions regarding linguistic representation can be approached with the help of probabilistic generative language models (PGLMs). These statistical models, which capture aspects of linguistic structure as distributions over events, can serve both as the product of language learning and as prior knowledge in real-time language processing. In the first two chapters, I show how PGLMs can be used to make inferences about the nature of people's linguistic representations. In Chapter 1, I look at the representations of language learners, tracing the earliest evidence for a noun category in large developmental corpora. In Chapter 2, I evaluate broad-coverage language models that reflect contrasting assumptions about the information sources and abstractions used for in-context spoken word recognition, testing their ability to capture people's behavior in a large online game of "Telephone." In Chapter 3, I show how these models can be used to examine the properties of lexicons: I use a measure derived from a probabilistic generative model of word structure to provide a novel interpretation of a longstanding linguistic universal, motivating it in terms of cognitive pressures that arise from communication. I conclude by considering the prospects for a unified, expectations-oriented account of language processing and first language learning.

Identifier: oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:10931065
Date: 21 November 2018
Creator: Meylan, Stephan Charles
Publisher: University of California, Berkeley
Source Sets: ProQuest.com
Language: English
Type: thesis
