1 |
Feature constraint grammars / Götz, Thilo. January 2000 (has links) (PDF)
Tübingen, University, Diss., 1999.
|
2 |
Deverbale Komposita an der Morphologie-Syntax-Semantik-Schnittstelle: ein HPSG-Ansatz / Reinhard, Sabine. Unknown Date (has links) (PDF)
Universität, Diss., 2001--Tübingen.
|
3 |
The generation of phrase-structure representations from principles / LeBlanc, David C. January 1990 (has links)
Implementations of grammatical theory have traditionally been based upon Context-Free Grammar (CFG) formalisms, which all but ignore questions of learnability. Even implementations based upon theories of Generative Grammar (GG), a paradigm supposedly motivated by learnability, rarely address such questions. In this thesis we examine a GG theory which has been formulated primarily to address questions of learnability and present an implementation based upon this theory. The theory argues from Chomsky's definition of epistemological priority that principles which match elements and structures from prelinguistic systems with elements and structures in linguistic systems are preferable to those which are defined purely linguistically or non-linguistically. A procedure for constructing phrase-structure representations from prelinguistic relations using principles of node percolation (rather than the traditional X-bar theory of GG theories or the phrase-structure rules of CFG theories) is presented, and this procedure is integrated into a left-to-right, primarily bottom-up parsing mechanism. Specifically, we present a parsing mechanism which derives phrase-structure representations of sentences from Case- and θ-relations using a small number of Percolation Principles. These Percolation Principles simply determine the categorial features of the node dominating any two adjacent nodes in a representational tree, doing away with explicit phrase-structure rules altogether. The parsing mechanism also instantiates appropriate empty categories, using a filler-driven paradigm for leftward argument and non-argument movement. Procedures modelling learnability are not implemented in this work, but the applicability of the presented model to a computational model of language is discussed. / Science, Faculty of / Computer Science, Department of / Graduate
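The idea of replacing explicit phrase-structure rules with percolation can be sketched in a few lines. The following is an illustrative toy, not LeBlanc's actual formulation: the node names, the `assigner` flag, and the particular principle (the head projects its categorial features upward) are all assumptions made for the example.

```python
def percolate(left, right):
    """Return the feature bundle of the node dominating two adjacent nodes.

    Each node is a dict of categorial features, e.g. {"cat": "V", "bar": 0}.
    Here the head (the node marked as a Case-/theta-assigner) projects its
    categorial features upward, standing in for a Percolation Principle.
    """
    head = left if left.get("assigner") else right
    return {"cat": head["cat"], "bar": head.get("bar", 0) + 1}

def parse(words):
    """Left-to-right, primarily bottom-up: shift a node, then percolate with
    the top of the stack whenever two adjacent nodes can combine."""
    stack = []
    for node in words:
        stack.append(node)
        while len(stack) >= 2:
            right = stack.pop()
            left = stack.pop()
            stack.append(percolate(left, right))
    return stack[0]

tree = parse([
    {"cat": "N", "bar": 0},                    # e.g. "dogs"
    {"cat": "V", "bar": 0, "assigner": True},  # e.g. "bark"
])
print(tree)  # {'cat': 'V', 'bar': 1} -- the verb projects the dominant node
```

No rule of the form VP → NP V is consulted anywhere; the category of the mother node falls out of the percolation step alone, which is the point of the approach described above.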
|
4 |
Incremental constraint-based parsing : an efficient approach for head-final languages / Güngördü, Zelal January 1997 (has links)
In this dissertation, I provide a left-to-right incremental parsing approach for Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag (1987, 1994)). HPSG is a lexicalized, constraint-based theory of grammar which has also been widely exploited in computational linguistics in recent years. Head-final languages are known to pose problems for the incrementality of head-driven parsing models, proposed for parsing with constraint-based grammar formalisms, in both psycholinguistics and computational linguistics. Therefore, I further focus my attention on processing a head-final language, specifically Turkish, to highlight any challenges that may arise in the case of such a language. The dissertation makes two principal contributions, the first part mainly providing the theoretical treatment required for the computational approach presented in the second part. The first part of the dissertation is concerned with the analysis of certain phenomena in Turkish grammar within the framework of HPSG. The phenomena explored in this part include word order variation and relativization in Turkish. Turkish is a head-final language that exhibits a considerable degree of word order freedom, with both local and long-distance scrambling. I focus on the syntactic aspects of this freedom in simple and complex Turkish sentences, detailing the assumptions I make both to deal with the variation in word order and to capture certain restrictions on that variation within the HPSG framework. The second phenomenon, relativization in Turkish, has drawn considerable attention in the literature, all accounts so far being within the tradition of transformational grammar. Here I propose a purely lexical account of the phenomenon within the framework of HPSG, which I claim is empirically more adequate than previous accounts, as well as being computationally more attractive.
The motivation behind the work presented in the second part of the dissertation mainly stems from psycholinguistic considerations. Experimental evidence (e.g. Marslen-Wilson (1973)) has shown that human language processing is highly incremental, meaning that humans construct a word-by-word partial representation of an utterance as they hear each word. Here I explore the computational effectiveness of an incremental processing mechanism for HPSG grammars. I argue that any such processing mechanism has to employ some sort of nonmonotonicity in order to guarantee both completeness and termination, and propose a way of doing so without violating the soundness of the overall approach. I present a parsing approach for HPSG grammars that parses a string of words from left to right, attaching every word of the input to a global structure as soon as it is encountered, thereby dynamically changing the structure as the parse progresses. I further focus on certain issues that arise in incremental processing of a “free” word order, head-final language like Turkish. First, I investigate how the parser can benefit from the case values in Turkish in foreseeing the existence of an embedded phrase or clause before encountering its head, thereby improving the incrementality of structuring. Second, I propose a strategy for the incremental recovery of filler-gap relations in certain kinds of unbounded dependency constructions in Turkish, which further enables one to capture a number of (strong) preferences that humans exhibit in processing certain examples with potentially ambiguous long-distance dependency relations.
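The case-driven prediction strategy described above can be illustrated with a deliberately simplified sketch. This is our own construction, not Güngördü's implementation: the mini-lexicon, the case values, and the string representation of parse steps are all assumptions made for the example.

```python
# Hypothetical mini-lexicon for a Turkish-like, head-final fragment.
LEXICON = {
    "Ali":    {"cat": "NP", "case": "nom"},
    "kitabı": {"cat": "NP", "case": "acc"},             # "book-ACC"
    "okudu":  {"cat": "V", "subcat": ["nom", "acc"]},   # "read-PAST"
}

def parse_incrementally(words):
    """Attach every word to a growing structure as soon as it is read.

    Case-marked NPs are held as *predicted* arguments of a head that has
    not yet been seen; when the head verb finally arrives (head-final
    order), it discharges the pending arguments it subcategorizes for.
    """
    pending = []     # case-marked phrases awaiting their head
    structure = []   # partial representation, grown word by word
    for w in words:
        entry = LEXICON[w]
        if entry["cat"] == "NP":
            # Prediction step: the case value signals a phrase/clause
            # whose head is still to come.
            pending.append((w, entry["case"]))
            structure.append(f"predict arg:{w}[{entry['case']}]")
        else:
            licensed = [n for n, c in pending if c in entry["subcat"]]
            structure.append(f"head:{w} licenses {licensed}")
            pending = [(n, c) for n, c in pending if c not in entry["subcat"]]
    return structure

for step in parse_incrementally(["Ali", "kitabı", "okudu"]):
    print(step)
```

Each input word changes the structure immediately, rather than waiting for the head, which is the incrementality property at issue; a real HPSG parser would of course manipulate typed feature structures, not strings.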
|
5 |
Selection for clausal complements and tense features / Sato, Hiromi, January 2003 (has links)
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (leaves 231-238).
|
6 |
A theory of lexical functors : light heads in the lexicon and the syntax / Suzuki, Takeru 11 1900 (has links)
This thesis advances a specific model of l-syntax, based on Hale and Keyser (1993, 1994) and Déchaine (1996) as a point of departure, and also proposes a general theory of the relation between the lexicon and the syntax. One of the essential proposals that I make is the Functionalization Principle, which permits a lexical head to project a functional projection if and only if the meaning of the head is represented by l-syntactic structure without any extra semantic features. I refer to this type of head as a light head. The Functionalization Principle leads us to a principled account of various lexical and functional uses of lexical items such as the passive morpheme -en and have. Examples that support my analysis range from adjectival and verbal passives (e.g. Mary is very pleased and The glass was broken by Bill), to constructions of alienable and inalienable possession (e.g. John has five bucks and John has blue eyes), to causative/experiential constructions (e.g. John had his students walk out of class), and to perfect constructions (e.g. Lucie has advised the prime minister). Furthermore, the analysis of possessive have is extended to possessive nominals (e.g. John's cat and John's eyes). I also examine the implications of the theories of l-syntax and l-functors for Case. I propose that l-syntactic structure partly determines inherent Case whereas the l-functor checks what I call l-functor Case through the Spec-head relation. Furthermore, I show that these analyses of inherent Case and l-functors account for essential properties of possessive D (the genitive marker -'s), some Hindi marked-subject constructions and Japanese experiential transitive constructions. / Arts, Faculty of / Linguistics, Department of / Graduate
|
8 |
The Mora-constituent interface model / Sampath Kumar, Srinivas 18 January 2016 (has links)
Phonological phenomena related to the syllable are often analysed either in terms of the constituents defined in the Onset-Rhyme Model, or in terms of moras after the Moraic Theory. Even as arguments supporting one of these theoretical models over the other continue to be unfurled, the Moraic Theory has gained significant currency in recent years. Situated in the foregoing theoretical climate, this dissertation argues that a full-fledged model of the syllable must incorporate the insights accruing from both constituents and moras. The result is the Mora-Constituency Interface model (MCI). Syllable-internal structure as envisioned in MCI manifests in a Constituency Dimension as well as a Moraic Dimension. The dimensions interface with each other through segment-melody complexes, whose melodic content is associated with the Constituency Dimension and whose segmental (i.e. X-slot) component belongs to the Moraic Dimension. The Constituency Dimension and the Moraic Dimension are thus both necessary even to represent the atomic distinction between segments and melodies in a typical syllable. In terms of its architecture, the Constituency Dimension in MCI is formally identical to the Onset-Rhyme Model and encompasses the Onset, the Nucleus and the Coda, with which melodies are associated. The Nucleus and Coda together constitute the Rhyme. In the Moraic Dimension, moras are assigned to segments on universal, language-specific or contextual grounds. From a functional perspective, the Moraic Dimension is where the metrical relevance of segment-melody complexes is encoded (as moras), while feature-based information pertaining to them is structured in the Constituency Dimension. The independent functional justification for both dimensions in MCI predicts that segment-melody complexes, though typically split across the dimensions as segments and melodies, may also be associated entirely with the Constituency Dimension or with the Moraic Dimension of a syllable.
The former possibility finds empirical expression in extrametrical consonants, and the latter in moraic ambisyllabic consonants. Analogously, a syllable itself may have either just the Constituency Dimension (e.g. extrametrical syllables) or just the Moraic Dimension (e.g. catalectic syllables). The prosodic object called the syllable is thus a composite formal entity tailored from the constituent-syllable (C-s) and the moraic-syllable (M-s). While MCI is thus essentially a model of syllable-internal structure, it also exerts some influence on prosodic structure beyond the syllable. For example, within MCI, feet can be directly constructed from moras, even in languages whose metrical systems are traditionally thought of as being insensitive to mora count. The upshot is that a fully moraic universal foot inventory is possible under MCI. That MCI has implications for the organisation of elements within (segment-melody complexes) and outside (feet) the syllable suggests that the model has the potential to be a general theory of prosodic structure. The model is also on solid cross-linguistic ground, as evidenced by the support it receives from different languages. Those languages include but are not restricted to Kwakwala, Chugach Yupik, Hixkaryana, Paumari, Leti, Pattani Malay, Cantonese, Tamil and English. Keywords: Syllables, constituents, moras, segments, melodies.
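The division of labour between the two dimensions can be made concrete with a toy example. The sketch below is our own illustration (not the dissertation's formalism) of a standard ingredient the abstract relies on: nucleus vowels always project a mora, while coda weight is a language-specific parameter.

```python
VOWELS = set("aeiou")

def syllabify(syllable, coda_is_moraic=True):
    """Split a simple CVC-shaped string into constituents and count moras.

    Constituency Dimension: onset + rhyme (nucleus + coda), over melodies.
    Moraic Dimension: each nucleus vowel projects a mora; whether a coda
    consonant does (weight-by-position) is a per-language parameter.
    """
    onset, i = "", 0
    while i < len(syllable) and syllable[i] not in VOWELS:
        onset += syllable[i]
        i += 1
    nucleus = ""
    while i < len(syllable) and syllable[i] in VOWELS:
        nucleus += syllable[i]
        i += 1
    coda = syllable[i:]
    moras = len(nucleus) + (len(coda) if coda_is_moraic else 0)
    return {"onset": onset, "nucleus": nucleus, "coda": coda, "moras": moras}

print(syllabify("pat"))                        # coda counts: 2 moras (heavy)
print(syllabify("pat", coda_is_moraic=False))  # coda ignored: 1 mora (light)
print(syllabify("paa"))                        # long vowel: 2 moras
```

Note how the same constituent parse ("p" / "a" / "t") yields different mora counts under the two parameter settings: constituency and moraic weight vary independently, which is the separation MCI encodes as two dimensions.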
|
9 |
Effects of a word's status as a predictable phrasal head on lexical decision and eye movements / Staub, Adrian. 01 January 2006 (has links) (PDF)
No description available.
|
10 |
Uma análise do verbo poder do português brasileiro à luz da HPSG e do léxico gerativo / Marruche, Vanessa de Sales 29 August 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This study presents a syntactic and semantic analysis of the verb poder in Brazilian Portuguese. To achieve this goal, we started with a literature review comprising works dedicated to the study of auxiliarity and modality, in order to determine what these issues imply and what is usually considered when classifying the verb under investigation as an auxiliary and/or modal verb. As foundations of this study, we used two theories, namely, HPSG (Head-Driven Phrase Structure Grammar), a model of surface-oriented generative grammar which consists of a phonological, a syntactic and a semantic component, and GL (The Generative Lexicon), a lexicalist model of semantic interpretation of natural language which is proposed to deal with problems such as compositionality, semantic creativity and logical polysemy. Because these models, as originally proposed, are unable to handle the verb poder of Brazilian Portuguese, it was necessary to use the GL to make some modifications to HPSG, in order to semantically enrich this model of grammar so that it can cope with the logical polysemy of the verb poder, its behavior as a raising and a control verb, and the saturation of its internal argument, as well as to identify when it is an auxiliary verb. The analysis showed that: (a) poder has four meanings inherent to it, namely, CAPACITY, ABILITY, POSSIBILITY and PERMISSION; (b) to saturate the internal argument of poder, the phrase that saturates that argument must be of type [proposition] and the head of that phrase must be of type [event]; if those types are not identical, type coercion is applied in order to recover the type requested by the verb; (c) poder is a raising verb when it means POSSIBILITY, in which case it selects no external argument.
That is, it accepts as its subject whatever the subject of its VP-complement is; (d) poder is a control verb when it means CAPACITY, ABILITY and/or PERMISSION, in which case it requires that the saturator of its internal argument be of type [entity] when poder means CAPACITY, or of type [animal] when it means ABILITY and/or PERMISSION; (e) poder is an auxiliary verb only when it is a raising verb, because only in this situation does it impose no selectional restrictions on the external argument; and (f) poder is considered a modal verb because it can express an epistemic notion (possibility) and at least three non-epistemic notions of modality (capacity, ability and permission).
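The interplay of type checking, coercion, and the raising/control split described in (b)-(d) can be sketched schematically. The following is a toy reconstruction under assumed representations (plain dicts and a hand-made type hierarchy, not the thesis's HPSG feature structures):

```python
# Toy type hierarchy: child -> parent, standing in for the GL type lattice.
TYPE_PARENT = {
    "event": "proposition",
    "human": "animal",
    "animal": "entity",
}

def is_a(t, target):
    """True if type t is target or a subtype of it in the toy hierarchy."""
    while t is not None:
        if t == target:
            return True
        t = TYPE_PARENT.get(t)
    return False

def coerce(phrase, requested):
    """Return the phrase at the requested type, applying type coercion
    (wrapping the original) when the types do not already match."""
    if is_a(phrase["type"], requested):
        return phrase
    return {"type": requested, "coerced_from": phrase}

def combine_poder(sense, subject, complement):
    """Combine 'poder' with its arguments.

    POSSIBILITY: raising, so no restriction is placed on the subject.
    CAPACITY/ABILITY/PERMISSION: control, so the subject's type is checked.
    The complement is always resolved to type [proposition], via coercion
    if needed.
    """
    restrictions = {"CAPACITY": "entity", "ABILITY": "animal",
                    "PERMISSION": "animal"}
    if sense != "POSSIBILITY":
        assert is_a(subject["type"], restrictions[sense]), "subject type clash"
    return {"pred": "poder", "sense": sense,
            "arg": coerce(complement, "proposition")}

vp = {"type": "event", "phon": "correr"}    # "(to) run", headed by an event
joao = {"type": "human", "phon": "João"}
print(combine_poder("ABILITY", joao, vp))
```

In the ABILITY case the subject check succeeds because [human] is a subtype of [animal], and the [event]-headed complement already qualifies as a [proposition], so no coercion is triggered; a complement of some other type would be wrapped by `coerce` instead.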
|