141

Computer assisted lemmatisation of a Cornish text corpus for lexicographical purposes

Mills, Jon January 2002 (has links)
This project sets out to discover and develop techniques for the lemmatisation of a historical corpus of the Cornish language in order that a lemmatised dictionary macrostructure can be generated from the corpus. The system should be capable of uniquely identifying every lexical item that is attested in the corpus. A survey of published and unpublished Cornish dictionaries, glossaries and lexicographical notes was carried out. A corpus was compiled incorporating specially prepared new critical editions. An investigation into the history of Cornish lemmatisation was undertaken. A systemic description of Cornish inflection was written. Three methods of corpus lemmatisation were trialled. Findings were as follows. Lexicographical history shapes current Cornish lexicographical practice. Lexicon-based tokenisation has advantages over character-based tokenisation. System networks provide the means to generate base forms from attested word types. Grammatical difference is the most reliable way of disambiguating homographs. A lemma that contains three fields, the canonical form, the part-of-speech and a semantic field label, provides a unique code for every lexeme attested in the corpus. Programs which involve human interaction during the lemmatisation process allow bootstrapping of the lemmatisation database. Computerised morphological processing may be used to at least partially create the lemmatisation database. Disambiguation of at least some of the most common homographs may be automated by the use of computer programs.
142

The computational analysis of morphosyntactic categories in Urdu

Hardie, Andrew January 2004 (has links)
Urdu is a language of the Indo-Aryan family, widely spoken in India and Pakistan, and an important minority language in Europe, North America, and elsewhere. This thesis describes the development of a computer-based system for part-of-speech tagging of Urdu texts, consisting of a tagset, a set of tagging guidelines for manual tagging or post-editing, and the tagger itself. The tagset is defined in accordance with a set of design principles, derived from a survey of good practice in the field of tagset design, including compliance with the EAGLES guidelines on morphosyntactic annotation. These are shown to be extensible to languages, such as Urdu, that are closely related to those languages for which the guidelines were originally devised. The description of Urdu grammar given by Schmidt (1999) is used as a model of the language for the purpose of tagset design. Manual tagging is undertaken using this tagset, by which process a set of tagging guidelines is created, and a set of manually tagged texts to serve as training data is obtained. A rule-based methodology is used here to perform tagging in Urdu; the justification for this choice is discussed. A suite of programs which function together within the Unitag architecture is described. This system includes a tokeniser, an analyser (Urdutag) based on lexical look-up and word-form analysis, and a disambiguator (Unirule) which removes contextually inappropriate tags using a set of 274 rules. While the system's final performance is not particularly impressive, this is largely due to a paucity of training data leading to a small lexicon, rather than any substantial flaw in the system.
143

Problems connected with the notion of implicature

Koutoupis-Kitis, Elizabeth January 1982 (has links)
As the title suggests, the primary concern of this study is with problems arising from a very widely used notion in the recent literature of linguistics and philosophy, the notion of implicature. As this concept was introduced and developed by the philosopher H. P. Grice, the main part of the study will understandably be centred around his work. Grice distinguished between two main types of implicature, the conventional and the conversational. In the first part we will be concerned with, what Grice called, conventional implicature, and in particular with the linguistic items generating it, as described in his work. Thus the aim of this part of the study will be to investigate the nature of conventional implicata, and to ask whether they can be justifiably claimed to be nonconsequential for truth-evaluation and invariable, as Grice argues. Grice's account in this respect will be found to be partly implausible, as regards his treatment of 'therefore', and partly inadequate, as it fails to take into account the wide ranging function of 'but' - his paradigm of conventional implicature - but treats its variable meaning aspects as invariable, conventional implicature. In view of the intriguing linguistic behaviour of 'but', the main contributions to this topic in the literature will be reviewed. In the second part of the study our primary aim will be to consider in detail linguistic phenomena that come under the rubric of conversational implicature in the literature - with an emphasis on Grice's examples - with a view to detecting common characteristics that can be taken as the parameters along which these phenomena can be defined as a homogeneous class. It will be concluded that they cannot. More stringent criteria will be proposed for membership in a narrowly defined class of conversational implicature. 
Two classes of background knowledge and assumptions will be described and shown to bear significantly on language production and understanding and, in particular, on the production and understanding of linguistic facts that have been called conversational implicatures. It will be concluded that the term 'conversational implicature' has been misused and abused. The view taken here will be that background knowledge schemes must be taken into account and represented in a language theory, though the difficulties facing such an enterprise are well understood and acknowledged. However, the overall conclusion will be that Grice's proposal effects a cut-and-dried demarcation between a neat but narrowly defined truth-functional semantics, on the one hand, and an unexplicated pragmatics, on the other, that would nevertheless include the most intriguing aspects of language use. This view of language is not very revealing and is, hence, uninteresting and unappealing.
144

The interpreter as intercultural mediator

Makarová, Viera January 1998 (has links)
This thesis looks at the role of the Slovak-English interpreter working in the consecutive mode in the business environment, especially with regard to rendering cultural references from source texts, whether these are (British) English or Slovak. Since culture in this thesis is taken in the broad sense of the whole way of life, cultural references can also be wide-ranging. The strategy an interpreter will opt for when interpreting cultural references depends on the circumstances under which he or she operates. Interpreting puts constraints on interpreters which make their activity distinct from the translation of written texts, where, in cases of unknown cultural references, translators can resort to the use of notes. Interpreters are engaged in mediating communication between (two) clients who do not share the same language and who come from differing cultural backgrounds. Due to differences between the (British) English and the Slovak cultures - in their material, spiritual and behavioural aspects - as well as to a historically rooted lack of knowledge of cultural references among the clients of English-Slovak interpreting, some intercultural mediation is needed. Its particular forms are the outcome of weighing the circumstances under which the English-Slovak consecutive interpreter works. Moreover, business interpreting contains challenges in the form of the vocabulary of business, a relatively new area for Slovak interpreters. An interpreter, under all the above-mentioned constraints, has to fulfil his or her role: to establish and maintain communication between the two parties. Therefore some of the strategies used will try to prevent miscommunication, while others will try to deal with miscommunication once it has occurred.
145

Statistical language learning

Onnis, Luca January 2003 (has links)
Theoretical arguments based on the "poverty of the stimulus" have denied a priori the possibility that abstract linguistic representations can be learned inductively from exposure to the environment, given that the linguistic input available to the child is both underdetermined and degenerate. I reassess such learnability arguments by exploring a) the type and amount of statistical information implicitly available in the input in the form of distributional and phonological cues; b) psychologically plausible inductive mechanisms for constraining the search space; c) the nature of linguistic representations, algebraic or statistical. To do so I use three methodologies: experimental procedures, linguistic analyses based on large corpora of naturally occurring speech and text, and computational models implemented in computer simulations. In Chapters 1, 2, and 5, I argue that long-distance structural dependencies - traditionally hard to explain with simple distributional analyses based on n-gram statistics - can indeed be learned associatively provided the amount of intervening material is highly variable or invariant (the Variability effect). In Chapter 3, I show that simple associative mechanisms instantiated in Simple Recurrent Networks can replicate the experimental findings under the same conditions of variability. Chapter 4 presents successes and limits of such results across perceptual modalities (visual vs. auditory) and perceptual presentation (temporal vs. sequential), as well as the impact of long and short training procedures. In Chapter 5, I show that generalisation to abstract categories from stimuli framed in non-adjacent dependencies is also modulated by the Variability effect. 
In Chapter 6, I show that the putative separation of algebraic and statistical styles of computation based on successful speech segmentation versus unsuccessful generalisation experiments (as published in a recent Science paper) is premature and is the effect of a preference for phonological properties of the input. In Chapter 7, computer simulations of learning irregular constructions suggest that it is possible to learn from positive evidence alone, despite Gold's celebrated arguments on the unlearnability of natural languages. Evolutionary simulations in Chapter 8 show that irregularities in natural languages can emerge from full regularity and remain stable across generations of simulated agents. In Chapter 9 I conclude that the brain may be endowed with a powerful statistical device for detecting structure, generalising, segmenting speech, and recovering from overgeneralisations. The experimental and computational evidence gathered here suggests that statistical language learning is more powerful than heretofore acknowledged in the current literature.
146

A semantic contribution to verbal short-term memory : a test of operational definitions of 'semantic similarity' and input versus output processes

Hunt, Frances Jane January 2007 (has links)
Baddeley and Hitch (1974; Baddeley, 1986, 2000) propose that coding in verbal short-term memory is phonological and that semantic codes are employed in long-term memory. Semantic coding in short-term memory has been investigated to a far lesser degree than phonological coding, and the findings have been inconsistent. Some theorists propose that semantic coding is possible (e.g. Nairne, 1990) while others suggest that semantic factors act during recall (e.g. Saint-Aubin & Poirier, 1999a). The following body of work investigates whether semantic coding is possible in short-term memory and examines what constitutes ‘semantic similarity’. Chapter 2 reports two visually presented serial recall experiments comparing semantically similar and dissimilar lists. This revealed that context greatly influences the recall of homophones. Chapter 3 illustrated that category members and synonyms enhanced item recall. However, categories had little impact on order retention, whereas synonyms had a detrimental effect. Chapter 4 employed a matching-span task which is purported to differentiate between input and output processes. It was found that synonyms had a detrimental effect on recall, indicative of the effect being related to input processes. Chapter 5 employed mixed lists using backward and forward recall. It was found that the important factor was that the semantically similar items should be encountered first in order to maximise their impact. This supported the contention of the importance of input factors. Chapter 6 compared phonologically and semantically similar items using an open and a closed word pool. It was found that semantic and phonological similarity have comparable effects when an open word pool and a free recall scoring method are employed. Overall, the results were consistent with the idea that phonological and semantic codes can be employed in short-term recall.
147

Aspects of theme and their role in workplace texts

Forey, Gail January 2002 (has links)
The study adopts a systemic functional perspective and focuses on an analysis of Theme in three workplace text types: memos, letters and reports. The aim of the study is to investigate the function performed by Theme in these texts. The study diverges from Halliday’s identification of Theme and argues that the Subject is an obligatory part of Theme. In examining the function Theme performs, specific features such as the relationship between Theme and genre and between Theme and interpersonal meaning are explored. The study investigates the linguistic realisations in the texts which help understand the way in which the choice of Theme is related to, and perhaps constrained by, the genre. In addition, the linguistic resources used by the writer to construe interpersonal meanings through their choice of Theme are explored. The study investigates Theme from two distinct positions. Firstly, a lexico-grammatical analysis of thematic choices in the texts is undertaken. Secondly, the study draws upon informant interpretations and considers the way in which certain thematic choices construe different meanings for different types of reader. The methodology adopted is twofold: an analysis of Theme in a corpus of authentic workplace texts comprising 30 memos, 22 letters and 10 reports; and an analysis of informant interpretations drawn from focus group interviews with 12 business people and 15 EFL teachers. In both sets of data, Theme is scrutinised with respect to textual, interpersonal, topical and marked themes and the meanings construed through such choices. The findings show that Theme plays an important role in organising the text, as well as in realising ideational and interpersonal meaning. In particular, the findings demonstrate that marked Theme - or, in the term adopted in the present study, ‘extended Theme’ - performs a crucial role in representing the workplace as a depersonalised, material world.
148

Quality and efficiency factors in translation

Al-Bustan, Suad Ahmed January 1993 (has links)
It is the objective of this research to carry out the following: (a) Establish working definitions and terminology of translation, quality and efficiency. (b) Propose a means of evaluating quality and efficiency in translation. (c) Review the process used in the private sector as revealed by investigation. (d) Consider whether the method in (b) is suitable in light of (c) and, if not, what changes must be made. One of the by-products of this study will be to illustrate the practical benefits of quality control to interested parties (e.g. translators and interpreters in corporations, institutions or freelance) and to propose a model for the assessment and evaluation of translation agency work. This study will be performed by conducting a survey analysis to evaluate the quality and efficiency of translation agencies in the private sector. This will be carried out in two countries, Kuwait and the United Kingdom, where translation plays a vital role in everyday life. The survey will be conducted using 21 translation agencies in Kuwait out of a total of 42. As for those in the United Kingdom, the samples will be taken from all regions of the United Kingdom based on statistical random selection. The sample size will be roughly 20-25% of a total of 1009. The results of this survey will thus enable the researcher to review the current practice in translation and to evaluate the issues affecting its quality and efficiency. In conclusion, any changes required in the self-assessment of a translation agency will be suggested.
149

Thinking outside the box : processing instruction and individual differences in working memory capacity

Peter, Stephanie Andrea January 2016 (has links)
Processing Instruction is a pedagogic intervention that manipulates the L2 input learners are exposed to in the classroom. Proponents of this intervention claim that it poses a minimal strain on learners’ processing resources. While there has been extensive research on the benefits of Processing Instruction in general and the role of individual differences in particular, no conclusive evidence has been found regarding the role of individual differences in Working Memory Capacity. To explore the question whether Processing Instruction is equally beneficial for learners at different points of the Working Memory Capacity spectrum, a case study on the effects of computer-delivered Processing Instruction has been conducted. German switch prepositions were the target feature and students’ instructional gains were evaluated through sentence- and discourse-level tasks in a pre- and post-test design. Additionally, students’ on-task performance was recorded during instruction. The Working Memory Capacity scores were supplemented with questionnaire data on potential mediating variables such as motivation, anxiety, personality, and aptitude. The analysis of individual learner profiles addressed yet another gap in the literature: Robinson’s (2001) work, Snow’s (1989) aptitude-treatment interaction concept, and Dörnyei & Skehan’s (2003) perspective on individual differences all demand a look at the bigger picture. Yet much of the Second Language Acquisition research to date has operationalised Working Memory according to Baddeley & Hitch’s (1974) model, using quasi-experimental research designs – which usually fail to capture the complex and dynamic nature of Working Memory. This study addressed this gap with attention to the operationalisation of Working Memory, the analysis of task demands as well as perceived difficulty, and a focus on the interplay of several learner variables. 
Results seem to support the importance of Working Memory for Second Language Acquisition, at least in the short run. However, they also show a clear impact of participant-treatment interactions which might not have become evident in a group-comparison study.
150

Investigating variability in the acquisition of English functional categories by L1 speakers of Latakian Syrian Arabic and L1 speakers of Mandarin Chinese

Melhem, Woroud January 2016 (has links)
A widely studied L2 behaviour in the SLA literature is the inconsistency in the production of functional morphology by advanced and end-state L2 learners. The level of inconsistency seems to vary among L2 learners: for instance, SD, a Turkish end-state learner of English (White 2003a), was highly accurate in the production of English inflectional morphology compared with Patty, also an end-state learner of English, whose L1 is Chinese (Lardiere 2007). The literature is divided on whether to consider the absence of overt morphology in L2 performance to be a reflection of underlying syntax, thus indicating the absence of corresponding syntactic features, or whether it is an indication of missing surface inflection only. A proponent of the first account is Hawkins (2009), who claims that a deficit in the L2 syntax - exemplified by the inability of L2 learners to acquire, beyond the critical period, uninterpretable features not instantiated in the L1 grammar - causes the inconsistent suppliance of functional morphology in the interlanguage. On the other hand, Lardiere (2008) and Goad et al. (2003) describe types of post-syntactic problems causing variability: difficulty in mapping between different components of the grammar, and L1 transfer of prosodic structures, respectively. To test the claims of the above hypotheses, this study provides comparative data from two groups of L2 learners who differ with respect to the L1: Latakian Syrian Arabic or Mandarin Chinese. These two languages differ from each other in terms of which functional features are overtly represented in the morphosyntax, but are similar in the manner in which functional material is prosodified in relation to stems. Results based on the data collected do not lend support to claims of L1 prosodic transfer; they are rather compatible with an account that combines claims from both the Representational Deficit Hypothesis and the Feature Re-assembly Hypothesis.
