1

The use of monolingual English and bilingual Arabic-English dictionaries in Kuwait : an experimental investigation into the dictionaries used and reference skills deployed by university students of arts and science

Al-Ajmi, Hashan January 1992 (has links)
This is an empirical investigation into the use of dictionaries by students of English and Science at Kuwait University, with a particular focus on bilingual dictionaries of Arabic and English. In the introductory chapter we discuss the increasingly important role of vocabulary in EFL methodology and the corresponding emphasis on improving existing dictionaries and teaching students how to make effective use of them. In chapter two we focus on bilingual dictionaries and review their status in EFL methodology. The structural features of this type of dictionary are then discussed with special reference to the problems of translation equivalents, sense discriminations, and intended dictionary function. Chapter three is a critical examination of two bilingual dictionaries used in Kuwait. AL-MAWRID (English-Arabic) and DICTIONARY OF MODERN WRITTEN ARABIC (Arabic-English) are examined in terms of their users and uses, introductory matter, translation equivalents, sense discriminations, illustrative examples, collocations and idioms, grammatical information, and pronunciation. In the fourth chapter we review previous studies of dictionary users and uses and focus on the findings that bear relevance to our investigation. Chapter five is a description of the research method followed in our investigation, i.e. a questionnaire and two translation tests. In chapter six we present and analyse the findings on specific aspects of dictionary use addressed in the questionnaire. Chapter seven is an analysis of translation errors in relation to the type(s) of dictionary used in the L1-L2 and L2-L1 translation tests. The final chapter summarises the research findings and presents some suggestions with regard to the improvement of existing bilingual dictionaries of English and Arabic and the training of dictionary users.
2

Meaning lists in lexicostatistical studies : evaluation, application, ramifications

Slaska, Natalia January 2006 (has links)
No description available.
3

Lexical richness and accommodation in oral English examinations with Chinese examiners

Zhang, J. January 2014 (has links)
Lexical assessment and lexical accommodation in oral examinations are new research dimensions with both theoretical and empirical value; however, they remain largely neglected. The present research aims to investigate: first, whether, and if so how, measures of lexical richness can differentiate between candidates at three different grades of the Graded Examinations in Spoken English of Other Languages (GESE), and whether those measures can differentiate good performers from poor performers at the same grade of GESE; second, whether, and if so to what extent, Chinese examiners accommodate to the candidates at the lexical level. 180 samples from GESE Grades 2, 5 and 7 were collected. All the data were transcribed into Codes for Human Analysis of Transcripts (CHAT) format for the Child Language Data Exchange System (CHILDES) (MacWhinney 2000) for analysis. First, the lexical measures of Token, Type, Guiraud, Guiraud Advanced (AG) and D were obtained for both candidates and examiners, and analyses were conducted to investigate the relationships among them. Secondly, qualitative data were collected from interviews with GESE examiners to interpret the quantitative results. The quantitative results indicate that: 1) all the lexical measures can differentiate candidates of Grade 2 from Grade 5, and can likewise differentiate candidates of Grade 2 from Grade 7; however, there is no significant difference between Grade 5 and Grade 7 candidates' lexical variables. 2) In Grades 2 and 5, all the candidates' lexical variables can distinguish between the qualified and poor performers of the same grade; only Type, D and AG can differentiate between the qualified and poor candidates in Grade 7. 3) All the GESE score variables are correlated with each other, which indicates a halo effect; the only GESE score variable that correlates with all candidate lexical variables in the pooled data is Focus. 4) The examiner variables cannot differentiate between qualified performers and poor performers in the same grade. 5) The only lexical variable that reflects the examiner's lexical accommodation to the candidate is AG. The qualitative analyses indicate that the GESE examiners display particular characteristics in vocabulary assessment, and the interview data also explain some of the quantitative results. It was found that the local Chinese GESE examiners may rely on meaningful and relevant input, and on the general communicative ability of the candidate, as reliable overall rating strategies; factors that affected the performance of the Grade 7 candidates are also discussed. The findings not only shed light on the constructs of vocabulary knowledge and lexical richness and on the lexical accommodation Chinese examiners make towards candidates, but also provide insight into the design and improvement of examination procedures and the training of Chinese oral examiners.
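The Token, Type, Guiraud and Guiraud Advanced (AG) measures named in this abstract have standard definitions in the lexical richness literature; the Python sketch below shows how they might be computed from a transcribed sample. The token list and basic-vocabulary list are hypothetical placeholders, and D is omitted because it is estimated through a sampling and curve-fitting procedure rather than a single formula.

    import math

    def lexical_richness(tokens, basic_vocab):
        # Token and Type counts
        tokens = [t.lower() for t in tokens]
        n_tokens = len(tokens)
        n_types = len(set(tokens))
        # Guiraud's index: Types / sqrt(Tokens)
        guiraud = n_types / math.sqrt(n_tokens)
        # Guiraud Advanced (AG): advanced Types (those outside a basic
        # word list) / sqrt(Tokens)
        advanced = {t for t in set(tokens) if t not in basic_vocab}
        guiraud_advanced = len(advanced) / math.sqrt(n_tokens)
        return {"Token": n_tokens, "Type": n_types,
                "Guiraud": guiraud, "AG": guiraud_advanced}

    # Toy illustration with a made-up utterance and a made-up basic-word list
    sample = "the cat sat on the mat and the cat slept".split()
    basic = {"the", "a", "and", "on", "cat", "sat"}
    print(lexical_richness(sample, basic))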
4

Lexicographical explorations of neologisms in the digital age : tracking new words online and comparing wiktionary entries with 'traditional' dictionary representations

Creese, S. January 2017 (has links)
This thesis explores neologisms in two distinct but related contexts: dictionaries and newspapers. Both present neologisms to the world, the former through information and elucidation of meaning, the latter through exemplification of real-world use and behaviour. The thesis first explores the representation of new words in a range of different dictionary types and formats, comparing entries from the collaborative dictionary Wiktionary with those in expert-produced dictionaries, both those categorised here as ‘corpus-based’ and those termed ‘corpus-informed’. The former represent the most current of the expert-produced dictionary models, drawing on corpora for almost all of the data they include in an entry, while the latter draw on a mixture of old-style citations and Reading Programmes for much of their data, although this is supplemented with corpus information in some areas. The purpose of this part of the study was to compare degrees of comprehensiveness between the expert and collaborative dictionaries, as demonstrated by the level and quality of detail included in new-word entries and by the dictionaries’ responsiveness to new words. This is done by comparing the number and quality of components that appear in a dictionary entry, from the standardised elements found in all of the dictionary types, such as the ‘headword’ at the top of the entry, to non-standardised elements such as the Discussion Forums found almost exclusively in Wiktionary. Wiktionary is found to provide more detailed entries on new words than the expert dictionaries, and to be generally more flexible, responding more quickly and effectively to neologisms. This is due in no small part to the fact that the entire site updates every time an entry or discussion is saved, something which happens for expert-produced online dictionaries once a quarter at best. The thesis further explores the way in which the same neologisms are used in four UK national newspapers across the course of their neologic life-cycle. In order to do this, a new methodology is devised for the collection of web-based data for context-rich, genre-specific corpus studies. This produced highly detailed, contextualised data showing that certain newspapers are more likely to use less well-established neologisms (the Independent), while others have a stronger overall record of neologism usage across the 14 years of the study (The Guardian). As well as generating findings on the use and behaviour of neologisms in these newspapers, the manual methodology devised here is compared with a similar automated system, to assess which approach is more appropriate for use in this kind of context-rich database/corpus. The ability to date each article in the study accurately, using information which only the manual methods could access, coupled with the more targeted approach they offer by excluding unwanted texts from the outset, made the manual methodology the more appropriate approach.
5

Mastering BBC Voices : control and early deployment of a large lexical dataset

Thompson, Ann Georgina January 2012 (has links)
This thesis documents the acquisition, ordering and deployment of lexical material collected for the BBC Voices project, which was conducted during 2004–2005. It seeks to present a record of the way in which extensive raw data, generated through an interactive website, were first organised in order to create a coherent and usable database and then applied to initial lexical studies. The work is constructed in two parts. The first part describes the way in which the BBC Voices lexical data were liberated from the encoded format in which they had been collected from respondents, subsequently systematised and finally transferred to a viable database for analysis. Theoretical issues pertaining to the use of the lexical items are identified and discussed in Part 1 and applied in Part 2. The second part of this thesis takes as its focus two studies, using samples of the data in different contexts in order to illustrate their value, accessibility and relevance to linguistic research. The first study is an application to metaphor use in the UK, and the other is geographically based, assessing issues of language stability. The two parts together constitute a synthesis of the formulation and application of a large lexical database. The creation of an accessible lexical resource of this magnitude is of immense value to lexicologists and dialectologists worldwide.
6

Predicting IELTS ratings using vocabulary measures

Demetriou, Theodosia January 2016 (has links)
This thesis addresses the relationship between vocabulary measures and IELTS ratings. The research questions focus on the relationship between measures of lexical richness and teacher ratings. The specific question the thesis seeks to address is: which measures of lexical richness are best for predicting the ratings? This question has been considered central to vocabulary measurement research over the last few decades, particularly in relation to IELTS, one of the most popular exams in the world. Therefore, if a model can predict IELTS scores using vocabulary measures, it could be used as a predictive tool by teachers and researchers worldwide. The research was carried out through two studies, Study 1 and Study 2, and the resulting model was then tested in a third, smaller study. Study 1 was a small pilot study which looked at both oral and written data; Study 2 focused on written data only. Measures of both lexical diversity and sophistication were chosen for both studies. Both studies followed similar methodologies, with the addition of an extra variable in the second study. For the first study, data were collected from 42 IELTS learners, whereas for the second study an existing corpus was used. The measures investigated in both studies were: Tokens, TTR, D, Guiraud, Types, Guiraud Advanced and P_Lex. The first four are measures of lexical diversity, the other three measures of lexical sophistication; all of them, however, are measures of breadth of vocabulary. For the second study, a measure of formulaic language was added. This is an aspect of depth of vocabulary, included to check whether results would improve with this addition. Formulaic sequences were counted in each essay using Martinez and Schmitt's (2012) PHRASE List of the 505 most frequent non-transparent multiword expressions in English. The main findings show that all the measures correlate with the ratings, but Tokens has the highest correlation of all lexical diversity measures, and Types has the highest correlation of all lexical sophistication measures. TTR, Guiraud and P_Lex can explain 52.8% of the variability in the Lexical ratings. In addition, holistic ratings can be predicted by the same two lexical diversity measures (TTR and Guiraud) but with a different measure of lexical sophistication, Guiraud Advanced. The model consisting of these three measures can explain 49.2% of the variability in the holistic ratings. The formulaic count did not seem to improve the model's predictive validity, but further analysis from a qualitative angle seemed to explain this behaviour. In Study 3, the holistic ratings model was tested using a small sample of real IELTS data, and the examiners' comments were used for a more qualitative analysis. This revealed that the model underestimated the scores, since the range of ratings in the IELTS data was wider than the range in the Study 2 data on which the model was based; this proved to be a major hindrance to the study. However, the qualitative analysis confirmed the argument that vocabulary accounts for a high percentage of variance in ratings, and provided insights into other aspects that may influence raters and could be added to the model in future research. The issues and limitations of the study and the current findings contribute to the field by stimulating further research into producing a predictive tool that could inform students of their predicted rating before they decide to take the IELTS exam. This could have potential financial benefits for students.
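To illustrate the kind of predictive model this thesis describes (this is not its actual model, data, or coefficients), a multiple regression of ratings on TTR, Guiraud and P_Lex could be fitted along the following lines; the essay values, band scores, and the use of scikit-learn are assumptions made for the sketch.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical predictors: one row per essay, columns = TTR, Guiraud, P_Lex
    X = np.array([
        [0.45, 6.2, 1.1],
        [0.52, 7.0, 1.6],
        [0.40, 5.8, 0.9],
        [0.58, 7.9, 2.0],
        [0.49, 6.6, 1.3],
    ])
    # Hypothetical lexical-resource ratings awarded to the same five essays
    y = np.array([5.5, 6.5, 5.0, 7.0, 6.0])

    model = LinearRegression().fit(X, y)
    print("Variance explained (R^2):", model.score(X, y))
    print("Predicted rating for a new essay:",
          model.predict(np.array([[0.50, 6.8, 1.4]]))[0])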
7

Welsh lexical planning and the use of lexis in institutional settings

Robert, Elen January 2013 (has links)
This thesis considers what I call lexical planning initiatives for Welsh – formal attempts to codify and standardise Welsh words. Welsh has been subjected to lexical planning – and purification – attempts for a number of centuries, with lexicographers seeking to coin and standardise Welsh-equivalent words for concepts that have initially emerged through contact with English (and other prestigious languages). Lexical planners have redoubled their efforts in the last fifty years, but especially since 1993, largely as a result of the language revitalisation movement. Lexical planning efforts can be envisaged as attempts to influence the acquisition and use of any lexical resources, but they often focus on specific subject matters, especially from modern or emergent domains or disciplines. Such initiatives are often referred to as terminology planning/standardisation. My research considers the implementation of these planning initiatives, focusing on spoken language data at two research sites: the broadcast media and an office-based workplace. Taking a two-pronged approach to analysis, I ask whether, how and why Welsh speakers use planned lexis. First, I consider the extent to which the lexical content is in keeping with the stipulations of lexical planners in their codification texts. This approach is chiefly quantitative, drawing broadly on corpus linguistics and variationist sociolinguistics. Secondly, taking a more context- and practice-focused, as well as critical, approach, I undertake an interaction analysis of the in situ use of lexical resources. From this perspective, we gain a picture of the underlying, sometimes conflicting, ideologies and discourse priorities that motivate lexical choice. This approach considers lexical planning initiatives not as implemented top-down, but as embedded in their social milieu. Finally, I consider the implications of my research for the broader revitalisation effort, asking to what extent lexical planning initiatives, as they are currently imagined and conducted, complement other language planning endeavours, and whether and how they might be reconsidered.
8

Semantic neighbourhood density effects in word identification during normal reading : evidence from eye movements

al Farsi, Badriya January 2014 (has links)
Eye movement studies suggesting that word meaning can influence lexical processing (e.g., lexical ambiguity and semantic plausibility studies) have relied on contextual information. Therefore, these studies provide only limited insight into whether the semantic characteristics of a fixated word can be accessed before unique identification of that word is complete. The present thesis investigated the effect of a word's semantic characteristics on its lexical processing during normal reading. In particular, four experiments were carried out to examine the effects of semantic neighbourhood density (SND, defined as the mean distance between a given word and all of its co-occurrence neighbours falling within a specific threshold in semantic space; Shaoul & Westbury, 2010a) in normal reading. The findings indicated that the SND characteristics of the fixated word influenced the lexical processing of the fixated word itself and of the subsequent words, as evidenced by early reading-time measures associated with lexical processing. These results suggest that a word's semantic representation can be activated, and can influence lexical processing, before unique word identification is complete during normal reading. The findings were discussed in terms of Stolz & Besner's (1996) embellished interactive-activation model (McClelland & Rumelhart, 1981) and models of eye movement control during reading.
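Shaoul and Westbury define SND over co-occurrence vectors in a high-dimensional semantic space; the sketch below is a simplified reading of that idea (the mean cosine distance from a word to the neighbours falling within a threshold), using randomly generated vectors and an arbitrary threshold as stand-ins rather than the published implementation.

    import numpy as np

    def semantic_neighbourhood_density(target_vec, other_vecs, threshold=0.4):
        # threshold is an arbitrary illustrative cut-off, not the published value
        def cosine_distance(a, b):
            return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

        distances = [cosine_distance(target_vec, v) for v in other_vecs]
        # Neighbours are the words whose distance falls within the threshold;
        # SND is approximated here as the mean distance to those neighbours.
        neighbours = [d for d in distances if d <= threshold]
        return float(np.mean(neighbours)) if neighbours else None

    # Toy example: random vectors standing in for a real co-occurrence space
    rng = np.random.default_rng(0)
    target = rng.random(50)
    space = [rng.random(50) for _ in range(200)]
    print(semantic_neighbourhood_density(target, space))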
9

Algorithms for lexicographical analysis and methods for automatic thesaurus construction [original title in Greek: Αλγόριθμοι λεξικογραφικής ανάλυσης και μέθοδοι αυτόματης κατασκευής θησαυρού]

Σαλαπάτας, Θεόδωρος 31 August 2010 (has links)
No description available.
10

An investigation into lemmatization in Southern Sotho

Makgabutlane, Kelebohile Hilda 01 1900 (has links)
Lemmatization refers to the process whereby a lexicographer assigns a specific place in a dictionary to a word which he regards as the most basic form amongst other related forms. The fact that in Bantu languages formative elements can be added to one another in an often seemingly interminable series until quite long words are produced evokes curiosity as far as lemmatization is concerned. Given the productive nature of Southern Sotho, it is interesting to observe how lexicographers go about handling the morphological complexities they are normally faced with in the process of arranging lexical items. This study has shown that some difficulties are encountered in adhering to the traditional method of alphabetization. It does not aim to propose solutions, but it does point out some considerations which should be borne in mind in the process of lemmatization. / African Languages / M.A. (African Languages)
