About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

A lexical cartography of twentieth century Australia

Arthur, Jillian Mary January 1999 (has links)
This thesis looks at the relation between the English language and the Australian place. I have studied the vocabulary that English speakers in Australia used during the twentieth century for this geographical place and its environment, and how this vocabulary both constructs multiple and sometimes contesting 'Australias' and positions the settler in particular relations to this place. Although English had occupied Australia for over a century by the time this study begins, the analysis exposes the tensions, the gaps and the unease present in the use of a European language in the Australian place.
72

Advanced Intranet Search Engine

Narayan, Nitesh January 2009 (has links)
Information retrieval has been a pervasive part of human society since its existence. With the advent of the internet and the World Wide Web it became an extensive area of research and a major focus, which led to the development of various search engines to locate the desired information, mostly for globally connected computer networks, viz. the internet. But there is another major part of computer networking, viz. the intranet, which has not seen much advancement in information retrieval approaches, in spite of being a major source of information within a large number of organizations. The most common technique for intranet-based search engines is still merely database-centric. Thus, in practice, intranets are unable to avail themselves of the sophisticated techniques that have been developed for internet-based search engines without exposing their data to commercial search engines. In this Master's thesis we propose a state-of-the-art architecture for an advanced intranet search engine that is capable of dealing with the continuously growing size of an intranet's knowledge base. This search engine employs lexical processing of documents, where documents are indexed and searched based on standalone terms or keywords, along with semantic processing of the documents, where the context of the words and the relationships among them are given more importance. Combining lexical and semantic processing of the documents gives an effective approach to handling navigational queries along with research queries, in contrast to modern search engines, which use either lexical processing or semantic processing (or one as the major approach). We give equal importance to both approaches in our design, taking the best of both worlds. This work also takes into account various widely acclaimed concepts such as inference rules, ontologies and active feedback from the user community, to continuously enhance and improve the quality of search results, along with the possibility to infer and deduce new knowledge from the existing knowledge, while preparing for the advent of the semantic web.
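The equal-weight combination of lexical (keyword) and semantic scoring that the abstract describes might be pictured with the minimal sketch below. This is an illustrative assumption, not code from the thesis: the function names, scoring formulas, weights, and the `related` term map are all placeholders standing in for the proposed architecture's index and ontology components.

```python
# Hypothetical sketch of an equal-weight lexical + semantic document ranker.
# All names and formulas are illustrative; the thesis does not publish code.
from collections import Counter
from math import log


def lexical_score(query_terms, doc_terms):
    """Keyword matching: reward documents containing the query terms themselves."""
    counts = Counter(doc_terms)
    return sum(log(1 + counts[t]) for t in query_terms)


def semantic_score(query_terms, doc_terms, related):
    """Semantic matching: reward documents containing terms that an
    ontology-like map (assumed given) relates to the query terms."""
    doc_set = set(doc_terms)
    return sum(1.0 for t in query_terms for r in related.get(t, ()) if r in doc_set)


def rank(query_terms, docs, related, w_lex=0.5, w_sem=0.5):
    """Equal weights mirror the abstract's claim that lexical and semantic
    processing are given equal importance in the design."""
    scored = [
        (w_lex * lexical_score(query_terms, terms)
         + w_sem * semantic_score(query_terms, terms, related), name)
        for name, terms in docs.items()
    ]
    return sorted(scored, reverse=True)


docs = {"doc1": ["intranet", "search", "index"], "doc2": ["semantic", "web", "ontology"]}
related = {"search": ["index", "query"], "semantics": ["ontology"]}
print(rank(["search", "semantics"], docs, related))
```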
74

Selectional preferences of semantically primitive verbs in English : the periphrastic causatives and verbs of becoming

Childers, Zachary Witter 12 December 2013 (has links)
Analyses of English verb meaning often rely on quasi-aspectual operators embedded in event structures to explain shared properties across classes. These operators scope over temporally basic meaning elements that make up the idiosyncratic semantic core of complex verbs. While the inventory of operators – or semantic primes – differs from proposal to proposal, they are generally presented as a closed class that includes at least CAUSE and BECOME, and their presence and location in event structures account for several alternation and ambiguity phenomena. In this study, I investigate a number of verbs whose decompositions would include only operator(s) and event structure frames under most current decompositional lexical theories; in particular, the periphrastic causatives (cause, make, etc.) and the verbs of becoming (become, get, etc.). I account for differences in the selectional behavior of these verbs by positing incorporated meaning components beyond the purely aspectual or event-structural. Based in part on regularities among corpus collocations, I propose additional meaning distinctions among these verbs along the parameters of causal patient complicity, sentiment, and register.
75

The semantic representation of concrete and abstract words

De Mornay Davies, Paul January 1997 (has links)
This thesis examines the various approaches which have been taken to investigate the concrete/abstract word distinction, both in normal subjects and in patients who, as a result of brain damage, have an impairment of lexical semantic representations.

The nature of the definition task as a tool for assessing the semantic representations of concrete and abstract terms was examined. It was found that definitions for abstract words differed from those of concrete words only in style, not in semantic content. The metalinguistic demands of the definition task therefore make it inappropriate for assessing the semantic representations of concrete and abstract terms in patients with any form of language impairment.

The performance of four patients with semantic impairments was examined using a variety of tasks designed to assess concrete and abstract word comprehension. While some of the data can be accommodated within the framework of several theories, no single theory can adequately account for the patterns of performance in all four patients. An alternative model of semantic memory is therefore proposed in which concreteness and frequency interact at the semantic level.

Jones' Ease of Predication Hypothesis, which states that the difference between concrete and abstract terms can be explained in terms of disproportionate numbers of underlying semantic features (or "predicates"), was also investigated. It was found that the ease of predication variable does not accurately reflect either predicate or feature distributions, and is simply another index of concreteness. As such, the validity of this concept as the basis of theories of semantic representation should be questioned. Models based on the assumption of a "richer" semantic representation for concrete words (e.g. Plaut & Shallice, 1993) are therefore undermined by these data.

The possibility that concrete and abstract concepts can be accessed from their most salient predicates and/or features was examined in a series of semantic priming experiments. It was concluded that it is not possible to prime either concrete or abstract concepts from their constituent parts. Significant facilitation occurred only for items in which the prime and target were synonymous and therefore map onto concepts which share almost identical semantic representations.

In summary, it is apparent that no current theory of semantic representation can adequately account for the range of findings with regard to the concrete/abstract word distinction. The most plausible account is some form of distributed connectionist model. However, such models are based on unsubstantiated assumptions about the nature of abstract word representations in the semantic network. Alternative proposals are therefore discussed.
76

Lexical segmentation in normal and neurologically impaired speech comprehension

Lloyd, Andrew J. January 1998 (has links)
No description available.
77

Complement functions in Cantonese: a lexical-functional grammar approach

李逸薇, Lee, Yat-mei. January 2002 (has links)
Published or final version / Linguistics / Master of Philosophy
78

Out of this word : the effect of parafoveal orthographic information on central word processing

Dare, Natasha January 2010 (has links)
The aim of this thesis is to investigate the effect of parafoveal information on central word processing. This topic impacts on two controversial areas of research: the allocation of attention during reading, and letter processing during word recognition.

Researchers into the role of attention during reading are split into two camps, with some believing that attention is allocated serially to consecutive words and others that it is spread across multiple words in parallel. This debate has been informed by the results of recent experiments that test a key prediction of the parallel processing theory, that parafoveal and foveal processing occur concurrently. However, there is a gap in the literature for tightly-controlled experiments to further test this prediction.

In contrast, the study of the processing that letters undergo during word recognition has a long history, with many researchers concluding that letter identity is processed only conjointly with letter ‘slot’ position within a word, known as ‘slot-based’ coding. However, recent innovative studies have demonstrated that more word priming is produced from prime letter strings containing letter transpositions than from primes containing letter substitutions, although this work has not been extended to parafoveal letter prime presentations.

This thesis will also discuss the neglected subject of how research into these separate topics of text reading and isolated word recognition can be integrated via parafoveal processing. It presents six experiments designed to investigate how our responses to a central word are affected by varying its relationship with simultaneously presented parafoveal information.

Experiment 1 introduced the Flanking Letters Lexical Decision task, in which a lexical decision was made to words flanked by bigrams either orthographically related or unrelated to the response word; the results indicated that there is parafoveal orthographic priming, but did not support the ‘slot-based’ coding theory, as letter order was unimportant. Experiments 2-4 involved eye-tracking of participants who read sentences containing a boundary change that allowed the presentation of an orthographically related word in parafoveal vision. Experiment 2 demonstrated that an orthographically related word at position n+1 reduces first-pass fixations on word n, indicating parallel processing of these words. Experiment 4 replicated this result, and also showed that altering the letter identity of word n+1 reduced orthographic priming whereas altering letter order did not, indicating that slot-based coding of letters does not occur during reading. However, Experiment 3 found that an orthographically related word presented at position n-1 did not prime word n, signifying the influence of reading direction on parafoveal processing. Experiment 5 investigated whether the parallel processing that words undergo during text reading conditions our representations of isolated words; lexical decision times to words flanked by bigrams that formed plausible or implausible contexts did not differ. Lastly, one possible cause of the reading disorder dyslexia is under- or over-processing of parafoveal information; Experiment 6 therefore replicated Experiment 1 with a sample of dyslexia sufferers, but found no interaction between reading ability and parafoveal processing.
Overall, the results of this thesis lead to the conclusion that there is extensive processing of parafoveal information during both reading (indicating parallel processing) and word recognition (contraindicating slot-based coding), and that underpinning both our reading and word recognition processes is the flexibility of our information-gathering mechanisms.
79

Semi-supervised lexical acquisition for wide-coverage parsing

Thomforde, Emily Jane January 2013 (has links)
State-of-the-art parsers suffer from incomplete lexicons, as evidenced by the fact that they all contain built-in methods for dealing with out-of-lexicon items at parse time. Since new labelled data is expensive to produce and no amount of it will conquer the long tail, we attempt to address this problem by leveraging the enormous amount of raw text available for free, and expanding the lexicon offline, with a semi-supervised word learner. We accomplish this with a method similar to self-training, where a fully trained parser is used to generate new parses with which the next generation of parser is trained.

This thesis introduces Chart Inference (CI), a two-phase word-learning method with Combinatory Categorial Grammar (CCG), operating on the level of the partial parse as produced by a trained parser. CI uses the parsing model and lexicon to identify the CCG category type for one unknown word in a context of known words by inferring the type of the sentence using a model of end punctuation, then traversing the chart from the top down, filling in each empty cell as a function of its mother and its sister.

We first specify the CI algorithm, and then compare it to two baseline word-learning systems over a battery of learning tasks. CI is shown to outperform the baselines in every task, and to function in a number of applications, including grammar acquisition and domain adaptation. This method performs consistently better than self-training, and improves upon the standard POS-backoff strategy employed by the baseline StatCCG parser by adding new entries to the lexicon.

The first learning task establishes lexical convergence over a toy corpus, showing that CI's ability to accurately model a target lexicon is more robust to initial conditions than either of the baseline methods. We then introduce a novel natural language corpus based on children's educational materials, which is fully annotated with CCG derivations. We use this corpus as a testbed to establish that CI is capable in principle of recovering the whole range of category types necessary for a wide-coverage lexicon. The complexity of the learning task is then increased, using the CCGbank corpus, a version of the Penn Treebank, showing that CI improves as its initial seed corpus is increased. The next experiment uses CCGbank as the seed and attempts to recover missing question-type categories in the TREC question answering corpus. The final task extends the coverage of the CCGbank-trained parser by running CI over the raw text of the Gigaword corpus. Where appropriate, a fine-grained error analysis is also undertaken to supplement the quantitative evaluation of the parser performance with deeper reasoning as to the linguistic points of the lexicon and parsing model.
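Reading only the abstract, the top-down chart step it describes, inferring an unknown word's category from its mother and known sister cells, can be pictured with the toy sketch below. The category algebra and names here are assumptions made for illustration; the thesis's actual CI operates over full CCG charts with a trained StatCCG parsing model.

```python
# Toy illustration of the mother/sister inference the abstract describes.
# Not the thesis's algorithm: real CI uses a parsing model over full CCG charts.

def infer_cell(mother, sister, sister_on_right):
    """Toy CCG inference: if a cell's mother has category X and its known
    sister has category Y, the unknown daughter is X/Y when the sister sits
    to its right, and X\\Y when the sister sits to its left."""
    return f"({mother}/{sister})" if sister_on_right else f"({mother}\\{sister})"

# Infer the category of one unknown word in a context of known words,
# starting from a sentence type assumed at the top of the chart
# (in CI this root type is inferred from a model of end punctuation).
sentence_type = "S"
known_sister = "NP"
print(infer_cell(sentence_type, known_sister, sister_on_right=True))   # (S/NP)
print(infer_cell(sentence_type, known_sister, sister_on_right=False))  # (S\NP)
```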
80

Aspectual complex predicates in Punjabi

Akhtar, Raja Nasim January 2000 (has links)
No description available.
