91

Překlad povídky J. D. Salingera Dokonalý den pro banánové rybičky. Lingvostylistická analýza / J. D. Salinger's "A Perfect Day for Bananafish" Translation. Linguo-stylistic Analysis

Válková, Veronika January 2019 (has links)
This thesis analyses several translations of J. D. Salinger's short story A Perfect Day for Bananafish: namely two translations and two edited versions of the latter. A contrastive view is used to identify the main differences in meaning between the original and the translated texts, and several perspectives are adopted to recognise these differences. On the syntactic level, the contrast between the texts lies in the treatment of the author's style, predominantly in the narrative perspective; because the author does not rely heavily on nominal structures, his style was not lost in the translation process. Comparing the translated texts, the older versions show a tendency to adhere to the syntactic surface structure of the original. Another level of analysis explored the temporal relations of the texts: the original relies mostly on structures expressing sequences and simultaneity of events, and only to a small degree on the perfective aspect, and these temporal relations are also communicated successfully in the translated texts. The thesis discusses not only the narrator, since a great portion of the short story consists of dialogue. Using selected excerpts of the story, the analysis aimed to find to what degree...
92

Sentence complexity in children with autism and specific language impairment

McConnell, Sarah Ann 01 May 2010 (has links)
Children with high-functioning autism, children with specific language impairment, children with autism and language impairment, and controls produced sentences after a prompt to form a sentence using a specific word. The sentences were analyzed for syntactic complexity. Children with language impairment, regardless of autism diagnosis, produced less complex sentences than their age peers. However, children with autism and language impairment exhibited a broader range of ability than children with language impairment alone. Children with high-functioning autism without concomitant structural language impairment created sentences of similar complexity to those of their age peers. Word variables also influenced sentence complexity, with word meaning (abstract vs. concrete) having the most robust effect and word frequency having a negligible effect. Implications of this study for double-deficit and syntactic bootstrapping models are discussed.
93

An examination and comparison of some syntactic areas of the oral language behavior of mildly intellectually handicapped children and normal children

Jones, Robin Glyn January 1980 (has links)
Some syntactic aspects of the oral language of 20 mildly intellectually handicapped, 20 normal seven year old and 20 normal ten year old children were examined in order to determine the comparative development of the mildly intellectually handicapped children and some of the difficulties they might experience. The language was classified into 24 categories for various types of analysis. These types included traditional counts and an examination of the types of subordination as well as of non-conventional usage. In addition, Developmental Sentence Scoring (Lee, 1974) was used to assess the maturity of personal pronoun and main and secondary verb usage. The sentence repetition technique was employed as a means of assessing competence in a variety of later-developing structures. Questions were designed to assess ability in other specific syntactic areas. Analysis of variance was used to compare group scores and determine if any significant differences occurred. Several significant differences did occur. The findings provided strong evidence that the language of mildly intellectually handicapped children is more like that of children of the same chronological age than it is like that of children of the same mental age and that it is less mature than the former. These handicapped children experience considerable delay in the development of pronouns and verbs and have a high incidence of non-conventional usage. This study also provided evidence of the continuing language development of normal primary age children. Some methods of sampling and analysing oral language were found to be of particular value. Of these, the sentence repetition technique seems promising both as a research tool and as a classroom instrument for assessing individual children's language competence. The importance of this and similar research lies in its implications for educational programming.
94

The Syntactic Origin of Old English Sentence Adverbials

Sundmalm, Sara Maria January 2009 (has links)
<p> </p><p>Languages rely on grammatical rules, by which even such variable constituents as adverbials are affected. However, due to the many different positions in Old English sentences taken up by adverbials, it is easy to wrongfully assume that there is an absence of grammatical rules regarding adverbials in Old English. Hence, it may be possible to detect patterns of behaviour among Old English adverbs if their different position and movement within various clauses is studied systematically. This paper has been focused on examining two conjunct adverbs, and two disjunct adverbs, functioning as sentence adverbials in prose, in order to contribute information of where they are base-generated within the syntactic structure of Old English clauses, and thus hopefully contribute to a better understanding of the grammatical system of Old English. 120 sentences of prose containing sentence adverbials have been examined according to the Government and Binding Theory, as introduced in <em>Stæfcræft: An Introduction to Old English Syntax</em>, in order to establish where the different textual constituents of Old English are base-generated.</p>
95

Conceptual Basis of the Lexicon in Machine Translation

Dorr, Bonnie J. 01 August 1989 (has links)
This report describes the organization and content of lexical information required for the task of machine translation. In particular, the lexical-conceptual basis for UNITRAN, an implemented machine translation system, will be described. UNITRAN uses an underlying form called lexical conceptual structure to perform lexical selection and syntactic realization. Lexical word entries have two levels of description: the first is an underlying lexical-semantic representation that is derived from hierarchically organized primitives, and the second is a mapping from this representation to a corresponding syntactic structure. The interaction of these two levels will be discussed and the lexical selection and syntactic realization processes will be described.
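To make the two-level organization concrete, the following is a minimal sketch of a lexical entry pairing a lexical conceptual structure built from primitives with a mapping from its variables to syntactic positions. The primitive names (GO, TO, AT), field layout, and the decomposition of "enter" are illustrative assumptions, not UNITRAN's actual format.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class LCS:
    """A node in a lexical conceptual structure: a primitive plus its arguments."""
    primitive: str                       # e.g. "GO", "TO", "AT" (illustrative names)
    arguments: List[Any] = field(default_factory=list)  # variables or nested LCS nodes

@dataclass
class LexicalEntry:
    """Two-level entry: an LCS plus a mapping from its variables to syntax."""
    word: str
    lcs: LCS                             # level 1: lexical-semantic representation
    syntax_map: Dict[str, str] = field(default_factory=dict)  # level 2: variable -> syntactic slot

# "enter" decomposed roughly as GO(x, TO(AT(y))), with x realized as the
# subject and y as the object of the verb.
enter = LexicalEntry(
    word="enter",
    lcs=LCS("GO", ["x", LCS("TO", [LCS("AT", ["y"])])]),
    syntax_map={"x": "subject", "y": "object"},
)
print(enter.lcs.primitive, enter.syntax_map)
```

Lexical selection would match the source-language LCS against target-language entries of this shape, and syntactic realization would read positions off the syntax map.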
96

The particles lé and lá in the grammar of Konkomba

Schwarz, Anne January 2007 (has links)
The paper investigates focus marking devices in the scarcely documented North-Ghanaian Gur language Konkomba. The two particles lé and lá occur under specific focus conditions and are therefore regarded as focus markers in the sparse literature. Comparing the distribution and obligatoriness of both alleged focus markers however, I show that one of the particles, lé, is better analyzed as a connective particle, i.e. as a syntactic rather than as a genuine pragmatic marker, and that comparable syntactic focus marking strategies for sentence-initial constituents are also known from related languages.
97

Transition-Based Natural Language Parsing with Dependency and Constituency Representations

Hall, Johan January 2008 (has links)
Hall, Johan, 2008. Transition-Based Natural Language Parsing with Dependency and Constituency Representations, Acta Wexionensia No 152/2008. ISSN: 1404-4307, ISBN: 978-91-7636-625-7. Written in English. This thesis investigates different aspects of transition-based syntactic parsing of natural language text, where we view syntactic parsing as the process of mapping sentences in unrestricted text to their syntactic representations. Our parsing approach is data-driven, which means that it relies on machine learning from annotated linguistic corpora. Our parsing approach is also dependency-based, which means that the parsing process builds a dependency graph for each sentence consisting of lexical nodes linked by binary relations called dependencies. However, the output of the parsing process is not restricted to dependency-based representations, and the thesis presents a new method for encoding phrase structure representations as dependency representations that enables an inverse transformation without loss of information. The thesis is based on five papers, where three papers explore different ways of using machine learning to guide a transition-based dependency parser and two papers investigate the method for dependency-based phrase structure parsing. The first paper presents our first large-scale empirical study of parsing a natural language (in this case Swedish) with labeled dependency representations using a transition-based deterministic parsing algorithm, where the dependency graph for each sentence is constructed by a sequence of transitions and memory-based learning (MBL) is used to predict the transition sequence. The second paper further investigates how machine learning can be used for guiding a transition-based dependency parser. The empirical study compares two machine learning methods with five feature models for three languages (Chinese, English and Swedish), and the study shows that support vector machines (SVM) with lexicalized feature models are better suited than MBL for guiding a transition-based dependency parser. The third paper summarizes our experience of optimizing and tuning MaltParser, our implementation of transition-based parsing, for a wide range of languages. MaltParser has been applied to over twenty languages and was one of the top-performing systems in the CoNLL shared tasks of 2006 and 2007. The fourth paper is our first investigation of dependency-based phrase structure parsing, with competitive results for parsing German. The fifth paper presents an improved encoding method for transforming phrase structure representations into dependency graphs and back, which makes it possible to parse continuous and discontinuous phrase structures extended with grammatical functions.
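To make the parsing model concrete, here is a minimal sketch of an arc-standard transition-based parsing loop of the general kind the abstract describes. The guide function stands in for the learned classifier (MBL or SVM in the thesis); the toy guide below is a placeholder of mine, not MaltParser's actual feature model or transition system.

```python
def parse(words, guide):
    """Arc-standard parsing loop: return a list of (head, dependent) arcs."""
    stack = [0]                              # 0 is an artificial root node
    buffer = list(range(1, len(words) + 1))  # word positions 1..n
    arcs = []
    while buffer or len(stack) > 1:
        action = guide(stack, buffer, arcs)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC" and len(stack) > 2:
            dependent = stack.pop(-2)        # second-topmost becomes a dependent of the top
            arcs.append((stack[-1], dependent))
        elif action == "RIGHT-ARC" and len(stack) > 1:
            dependent = stack.pop()          # topmost becomes a dependent of the new top
            arcs.append((stack[-1], dependent))
        else:
            break                            # guard against an ill-formed prediction
    return arcs

# Toy guide: shift every word, then attach each word to the one before it.
def toy_guide(stack, buffer, arcs):
    return "SHIFT" if buffer else "RIGHT-ARC"

print(parse(["Economic", "news", "had", "little", "effect"], toy_guide))
```

In the thesis the guide is a classifier trained over rich feature models of the stack, buffer, and partially built graph; the toy guide above merely produces a chain of arcs.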
98

Correcting Syntactic Annotation Errors Using a Synchronous Tree Substitution Grammar

MATSUBARA, Shigeki, KATO, Yoshihide 01 September 2010 (has links)
No description available.
99

Syntactic Complexities of Nine Subclasses of Regular Languages

Li, Baiyu January 2012 (has links)
The syntactic complexity of a regular language is the cardinality of its syntactic semigroup. The syntactic complexity of a subclass of the class of regular languages is the maximal syntactic complexity of languages in that class, taken as a function of the state complexity n of these languages. We study the syntactic complexity of suffix-, bifix-, and factor-free regular languages, star-free languages including three subclasses, and R- and J-trivial regular languages. We found upper bounds on the syntactic complexities of these classes of languages. For R- and J-trivial regular languages, the upper bounds are n! and ⌊e(n-1)!⌋, respectively, and they are tight for n >= 1. Let C^n_k be the binomial coefficient "n choose k". For monotonic languages, the tight upper bound is C^{2n-1}_n. We also found tight upper bounds for partially monotonic and nearly monotonic languages. For the other classes of languages, we found tight upper bounds for languages with small state complexities, and we exhibited languages with maximal known syntactic complexities. We conjecture these lower bounds to be tight upper bounds for these languages. We also observed that, for some subclasses C of regular languages, the upper bound on state complexity of the reversal operation on languages in C can be met by languages in C with maximal syntactic complexity. For R- and J-trivial regular languages, we also determined tight upper bounds on the state complexity of the reversal operation.
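As an illustration of the central quantity (a sketch of mine, not the thesis's code): the syntactic complexity of a regular language can be computed from a minimal DFA by closing the letter-induced transformations of the state set under composition. The 3-state example DFA at the end is hypothetical.

```python
def syntactic_complexity(n_states, transitions):
    """Size of the transition semigroup generated by a DFA's letter actions.

    transitions maps each alphabet symbol to a tuple t of length n_states,
    where t[q] is the state reached from state q on that symbol. If the DFA
    is minimal, this semigroup is the syntactic semigroup of its language,
    so the returned value is the language's syntactic complexity.
    """
    generators = [tuple(transitions[a]) for a in transitions]
    semigroup = set(generators)
    frontier = list(semigroup)
    while frontier:
        fresh = []
        for t in frontier:
            for g in generators:
                composed = tuple(g[t[q]] for q in range(n_states))  # apply t, then g
                if composed not in semigroup:
                    semigroup.add(composed)
                    fresh.append(composed)
        frontier = fresh
    return len(semigroup)

# Hypothetical 3-state DFA over {a, b}: 'a' cycles the states and 'b' merges
# states 0 and 1 (a permutation plus a merging map, a common shape for
# witness constructions in this area).
print(syntactic_complexity(3, {"a": (1, 2, 0), "b": (0, 0, 2)}))
```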
100

Morphological priming in Spanish-English bilingual children with and without language impairment

Gutierrez, Keila, 1988- 25 June 2012 (has links)
The purpose of this study was to gain insight into the number of language models (i.e., dose frequency) that Spanish-English bilingual children with and without specific language impairment (SLI) require in order to consistently produce challenging target grammatical forms for 6 morphemes, 3 in English and 3 in Spanish, via a structural priming task. Participants included two 2nd grade children with SLI, five typically developing kindergarten children, and three typically developing 2nd grade peers. Participants were administered 10 control and 10 experimental cloze phrase computer tasks for each morpheme. In the control condition participants finished cloze phrase sentences targeting the morpheme, while in the experimental condition participants heard a model of the target morpheme and were then required to finish the cloze phrase. Results replicated structural priming effects for all groups in each language. Results also indicated that Spanish morphological forms produced more robust priming effects than English forms, possibly due to linguistic differences. Clinical and research implications are discussed.
