  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The Mora-constituent interface model

Sampath Kumar, Srinivas 18 January 2016 (has links)
Phonological phenomena related to the syllable are often analysed either in terms of the constituents defined in the Onset-Rhyme Model; or in terms of moras after the Moraic Theory. Even as arguments supporting one of these theoretical models over the other continue to be unfurled, the Moraic Theory has gained significant currency in recent years. Situated in the foregoing theoretical climate, this dissertation argues that a full-fledged model of the syllable must incorporate the insights accruing from both constituents and moras. The result is the Mora-Constituency Interface model (MCI). Syllable-internal structure as envisioned in MCI manifests in a Constituency Dimension as well as a Moraic Dimension. The dimensions interface with each other through segment-melody complexes, whose melodic content is associated with the Constituency Dimension and whose segmental (i.e. X-slot) component belongs to the Moraic Dimension. The Constituency Dimension and the Moraic Dimension are both thus necessary even to represent the atomic distinction between segments and melodies in a typical syllable. In terms of its architecture, the Constituency Dimension in MCI is formally identical to the Onset-Rhyme Model and encompasses the Onset, the Nucleus and the Coda, with which melodies are associated. The Nucleus and Coda together constitute the Rhyme. In the Moraic Dimension, moras are assigned to segments on universal, language-specific or contextual grounds. From a functional perspective, the Moraic Dimension is where the metrical relevance of segment-melody complexes is encoded (as moras), while feature-based information pertaining to them is structured in the Constituency Dimension. The independent functional justification for both the dimensions in MCI predicts that segment-melody complexes, though typically split across the dimensions as segments and melodies, may also be associated entirely with the Constituency Dimension or with the Moraic Dimension of a syllable. 
The former possibility finds empirical expression in extrametrical consonants, and the latter in moraic ambisyllabic consonants. Analogously, a syllable itself may have either just the Constituency Dimension (e.g. extrametrical syllables) or just the Moraic Dimension (e.g. catalectic syllables). The prosodic object called the syllable is thus a composite formal entity tailored from the constituent-syllable (C-s) and the moraic-syllable (M-s). While MCI is thus essentially a model of syllable-internal structure, it also exerts some influence on prosodic structure beyond the syllable. For example, within MCI, feet can be directly constructed from moras, even in languages whose metrical systems are traditionally thought of as being insensitive to mora count. The upshot is that a fully moraic universal foot inventory is possible under MCI. That MCI has implications for the organisation of elements within (segment-melody complexes) and outside (feet) the syllable suggests that the model has the potential to be a general theory of prosodic structure. The model is also on solid cross-linguistic ground, as evidenced by the support it receives from different languages. Those languages include but are not restricted to Kwakwala, Chugach Yupik, Hixkaryana, Paumari, Leti, Pattani Malay, Cantonese, Tamil and English. Keywords: Syllables, constituents, moras, segments, melodies.
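The abstract notes that moras are assigned to segments on universal, language-specific or contextual grounds. The standard weight conventions behind such assignments can be sketched as follows; this is an illustrative mora-counting routine under common moraic-theory assumptions (a short vowel carries one mora, a long vowel two, and a coda consonant adds a mora only in weight-by-position languages), not the MCI formalism itself.

```python
# Illustrative mora count for a syllable, given its nucleus and coda.
# Assumptions (not from the dissertation): long vowels are written as
# doubled characters, and coda weight is a per-language parameter.

def mora_count(nucleus: str, coda: str, weight_by_position: bool = True) -> int:
    """Return the number of moras for one syllable."""
    moras = 2 if len(nucleus) > 1 else 1   # long vowel (e.g. 'aa') -> 2 moras
    if weight_by_position:
        moras += len(coda)                 # each coda consonant adds a mora
    return moras

print(mora_count("a", ""))                              # light CV syllable -> 1
print(mora_count("aa", ""))                             # CVV -> 2
print(mora_count("a", "n"))                             # CVC, weight by position -> 2
print(mora_count("a", "n", weight_by_position=False))   # CVC elsewhere -> 1
```

The `weight_by_position` flag stands in for the language-specific grounds the abstract mentions; contextual assignment would require more machinery than this sketch carries.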
12

Effects of a word's status as a predictable phrasal head on lexical decision and eye movements.

Staub, Adrian. 01 January 2006 (has links) (PDF)
No description available.
13

(In)flexibility of Constituency in Japanese in Multi-Modal Categorial Grammar with Structured Phonology

Kubota, Yusuke 23 August 2010 (has links)
No description available.
14

Uma análise do verbo poder do português brasileiro à luz da HPSG e do léxico gerativo

Marruche, Vanessa de Sales 29 August 2012 (has links)
This study presents a syntactic and semantic analysis of the verb poder in Brazilian Portuguese. It begins with a literature review covering work on auxiliarity and modality, in order to determine what these notions imply and what is usually taken into account when classifying the verb under investigation as an auxiliary and/or modal verb. The study rests on two theories: HPSG (Head-Driven Phrase Structure Grammar), a surface-oriented model of generative grammar comprising a phonological, a syntactic and a semantic component, and GL (the Generative Lexicon), a lexicalist model of natural-language semantic interpretation designed to handle problems such as compositionality, semantic creativity and logical polysemy. Because neither model, as originally proposed, can handle the Brazilian Portuguese verb poder, GL was used to modify HPSG, semantically enriching that model of grammar so that it can cope with the logical polysemy of poder, its behaviour as a raising and a control verb, and the saturation of its internal argument, and so that it can identify when poder is an auxiliary verb. The analysis showed that: (a) poder has four inherent meanings, namely CAPACITY, ABILITY, POSSIBILITY and PERMISSION; (b) to saturate the internal argument of poder, the candidate phrase must be of type [proposition] and the head of that phrase of type [event]; when these types do not match, type coercion applies to recover the type the verb requires; (c) poder is a raising verb when it means POSSIBILITY, in which case it selects no external argument, accepting as its subject whatever the subject of its VP-complement is; (d) poder is a control verb when it means CAPACITY, ABILITY and/or PERMISSION, and then requires that the saturator of its internal argument be of type [entity] when poder means CAPACITY, or of type [animal] when it means ABILITY and/or PERMISSION; (e) poder is an auxiliary verb only when it is a raising verb, because only then does it impose no selectional restrictions on the external argument; and (f) poder counts as a modal verb because it can express one epistemic notion (possibility) and at least three non-epistemic notions of modality (capacity, ability and permission).
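The type-coercion step described in (b) can be sketched in code. This is a hypothetical illustration of GL-style type coercion, not the dissertation's formalism: the names (`Phrase`, `coerce`) and the shift table are invented for the example.

```python
# Hypothetical sketch of GL-style type coercion: if a phrase does not bear
# the type a verb requests, walk a table of licensed type shifts until the
# requested type is reached (or fail).

from dataclasses import dataclass

# Invented coercion table: which semantic types may be reinterpreted as which.
TYPE_SHIFTS = {
    "entity": "event",        # e.g. 'the book' read as 'reading the book'
    "event": "proposition",
}

@dataclass
class Phrase:
    semantic_type: str

def coerce(phrase: Phrase, requested: str) -> Phrase:
    """Return a phrase of the requested type, applying coercion if needed."""
    t = phrase.semantic_type
    while t != requested:
        if t not in TYPE_SHIFTS:
            raise TypeError(f"cannot coerce {phrase.semantic_type!r} to {requested!r}")
        t = TYPE_SHIFTS[t]
    return Phrase(t)

print(coerce(Phrase("proposition"), "proposition").semantic_type)  # no coercion
print(coerce(Phrase("entity"), "proposition").semantic_type)       # two shifts
```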
15

LAKI VERBAL MORPHOSYNTAX

Moradi, Sedigheh 01 January 2015 (has links)
Most Western Iranian languages, despite their broad differences, show a common trait in the verbal agreement of past transitive verbs. Dabir-Moghaddam (2013) and Haig (2008) discuss it as a grammaticalized split agreement encoding S, A, and P, sensitive to tense and transitivity, with split-ergative constructions for past transitive verbs. Laki shows vestiges of the same kind of verb-agreement ergativity (Comrie 1978), using a mixture of affixes and clitics for subject and object marking. In this thesis, I investigate how the different classes of verbs show agreement, using four distinct property classes. Considering the special case of the {3 sg} and using Hopper and Traugott's (2003) cline of grammaticality, I argue that although Laki has already lost the main part of its ergative constructions, the {3 sg} marking is yet another sign that the language is in the process of complete de-ergativization and that its hybrid alignment system is moving toward morphosyntactic unity. As a formal representation of the Laki data, the final part of the thesis provides a morphosyntactic HPSG analysis of the agreement patterns in Laki, using the grammar of cliticized verb-forms (Miller and Sag 1997).
16

Transition-Based Natural Language Parsing with Dependency and Constituency Representations

Hall, Johan January 2008 (has links)
Hall, Johan, 2008. Transition-Based Natural Language Parsing with Dependency and Constituency Representations, Acta Wexionensia No 152/2008. ISSN: 1404-4307, ISBN: 978-91-7636-625-7. Written in English.

This thesis investigates different aspects of transition-based syntactic parsing of natural language text, where syntactic parsing is viewed as the process of mapping sentences in unrestricted text to their syntactic representations. The parsing approach is data-driven, which means that it relies on machine learning from annotated linguistic corpora. It is also dependency-based, which means that the parsing process builds a dependency graph for each sentence, consisting of lexical nodes linked by binary relations called dependencies. However, the output of the parsing process is not restricted to dependency-based representations: the thesis presents a new method for encoding phrase structure representations as dependency representations that enables an inverse transformation without loss of information. The thesis is based on five papers, of which three explore different ways of using machine learning to guide a transition-based dependency parser and two investigate the method for dependency-based phrase structure parsing.

The first paper presents the first large-scale empirical study of parsing a natural language (in this case Swedish) with labeled dependency representations using a transition-based deterministic parsing algorithm, where the dependency graph for each sentence is constructed by a sequence of transitions and memory-based learning (MBL) is used to predict the transition sequence. The second paper further investigates how machine learning can be used to guide a transition-based dependency parser. Its empirical study compares two machine learning methods with five feature models for three languages (Chinese, English and Swedish), and shows that support vector machines (SVM) with lexicalized feature models are better suited than MBL for guiding a transition-based dependency parser. The third paper summarizes the experience of optimizing and tuning MaltParser, an implementation of transition-based parsing, for a wide range of languages; MaltParser has been applied to over twenty languages and was one of the top-performing systems in the CoNLL shared tasks of 2006 and 2007. The fourth paper is a first investigation of dependency-based phrase structure parsing, with competitive results for parsing German. The fifth paper presents an improved encoding method for transforming phrase structure representations into dependency graphs and back, making it possible to parse continuous and discontinuous phrase structures extended with grammatical functions.
17

Sentence Compression by Removing Recursive Structure from Parse Tree

Matsubara, Shigeki, Kato, Yoshihide, Egawa, Seiji 04 December 2008 (has links)
PRICAI 2008: Trends in Artificial Intelligence 10th Pacific Rim International Conference on Artificial Intelligence, Hanoi, Vietnam, December 15-19, 2008. Proceedings
18

Extraction and coordination in phrase structure grammar and categorial grammar

Morrill, Glyn Verden January 1989 (has links)
A large proportion of computationally-oriented theories of grammar operate within the confines of monostratality (i.e. there is only one level of syntactic analysis), compositionality (i.e. the meaning of an expression is determined by the meanings of its syntactic parts, plus their manner of combination), and adjacency (i.e. the only operation on terminal strings is concatenation). This thesis looks at two major approaches falling within these bounds: that based on phrase structure grammar (e.g. Gazdar), and that based on categorial grammar (e.g. Steedman). The theories are examined with reference to extraction and coordination constructions; crucially a range of 'compound' extraction and coordination phenomena are brought to bear. It is argued that the early phrase structure grammar metarules can characterise operations generating compound phenomena, but in so doing require a categorial-like category system. It is also argued that while categorial grammar contains an adequate category apparatus, Steedman's primitives such as composition do not extend to cover the full range of data. A theory is therefore presented integrating the approaches of Gazdar and Steedman. The central issue as regards processing is derivational equivalence: the grammars under consideration typically generate many semantically equivalent derivations of an expression. This problem is addressed by showing how to axiomatise derivational equivalence, and a parser is presented which employs the axiomatisation to avoid following equivalent paths.
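The combinators at issue in the abstract, application and Steedman-style composition, can be given a toy encoding. The notation below is an assumption of this sketch (a rightward-looking category X/Y as the tuple `('/', X, Y)`), not the thesis's own formalism.

```python
# Toy encoding of two categorial-grammar combinators.
# A category X/Y (seeking a Y to its right to yield X) is ('/', X, Y).

def fapply(x, y):
    """Forward application: X/Y combined with Y yields X (else None)."""
    if isinstance(x, tuple) and x[0] == "/" and x[2] == y:
        return x[1]
    return None

def fcompose(x, y):
    """Forward composition (Steedman's B): X/Y with Y/Z yields X/Z (else None)."""
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and x[0] == "/" and y[0] == "/" and x[2] == y[1]):
        return ("/", x[1], y[2])
    return None

det = ("/", "NP", "N")                   # determiner: seeks a noun, yields NP
print(fapply(det, "N"))                  # NP
print(fcompose(("/", "S", "NP"), det))   # ('/', 'S', 'N')
```

Composition is what lets an incomplete functor combine with another incomplete functor, and it is precisely this kind of combinator whose coverage of compound extraction and coordination data the thesis contests.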
19

Dependent nexus: subordinate predication structures in English and the Scandinavian languages

Svenonius, Peter Arne. January 1900 (has links)
Thesis (Ph. D.)--University of California, Santa Cruz, 1994. / Typescript. Includes bibliographical references (leaves 263-288).
20

Generalized ID/LP grammar: a formalism for parsing linearization-based HPSG grammars

Daniels, Michael W. January 2005 (has links)
Thesis (Ph.D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains xiii, 173 p.; also includes graphics. Includes bibliographical references (p. 160-171). Available online via OhioLINK's ETD Center
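The ID/LP idea named in the title separates immediate dominance (which daughters a rule licenses, as an unordered collection) from linear precedence (how sisters may be ordered). A minimal sketch of that separation, with an invented toy grammar rather than anything from the dissertation:

```python
# Toy ID/LP grammar: ID rules give daughters as unordered sets; separate
# LP constraints restrict the order of sisters.

from itertools import permutations

ID_RULES = {"S": [{"NP", "VP"}]}     # S -> NP, VP (daughters unordered)
LP_CONSTRAINTS = [("NP", "VP")]      # an NP must precede a sister VP

def licensed_orders(mother):
    """Enumerate daughter orders permitted by the ID rules and LP constraints."""
    orders = []
    for daughters in ID_RULES.get(mother, []):
        for order in permutations(sorted(daughters)):
            if all(order.index(a) < order.index(b)
                   for a, b in LP_CONSTRAINTS
                   if a in order and b in order):
                orders.append(order)
    return orders

print(licensed_orders("S"))   # [('NP', 'VP')]
```

Relaxing or removing LP constraints lets the same ID rule license freer word orders, which is the mechanism linearization-based HPSG exploits.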
