About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Computer assisted grammar construction

Shih, Hsue-Hueh January 1995 (has links)
No description available.
2

An investigation into statistically-based lexical ambiguity resolution

Sutton, Stephen January 1992 (has links)
No description available.
3

Syntactic pre-processing in single-word prediction for disabled people

Wood, Matthew Edward John January 1996 (has links)
No description available.
4

The contribution of phonological processes to implicit memory for verbal stimuli

Carney, Rosemary Gai January 2002 (has links)
No description available.
5

A computational model of task oriented discourse

Elliot, Mark James January 1995 (has links)
No description available.
6

Temporal information in newswire articles : an annotation scheme and corpus study

Setzer, Andrea January 2002 (has links)
Many natural language processing applications, such as information extraction, question answering, topic detection and tracking, would benefit significantly from the ability to accurately position reported events in time, either relatively with respect to other events or absolutely with respect to calendrical time. However, relatively little work has been done to date on the automatic extraction of temporal information from text. Before we can progress to automatically position reported events in time, we must gain an understanding of the mechanisms used to do this in language. This understanding can be promoted through the development of an annotation scheme, which allows us to identify the textual expressions conveying events, times and temporal relations in a corpus of 'real' text. This thesis describes a fine-grained annotation scheme with which we can capture all events, times and temporal relations reported in a text. To aid the application of the scheme to text, a graphical annotation tool has been developed. This tool not only allows easy markup of sophisticated temporal annotations, it also contains an interactive, inference-based component supporting the gathering of temporal relations. The annotation scheme and the tool have been evaluated through the construction of a trial corpus during a pilot study. In this study, a group of annotators was supplied with a description of the annotation scheme and asked to apply it to a trial corpus. The pilot study showed that the annotation scheme was difficult to apply, but is feasible with improvements to the definition of the annotation scheme and the tool. Analysis of the resulting trial corpus also provides preliminary results on the relative extent to which different linguistic mechanisms, explicit and implicit, are used to convey temporal relational information in text.
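The scheme details are given in the thesis itself; as a rough illustration of the kind of information a fine-grained temporal annotation captures, the sketch below encodes events, time expressions and temporal relations for a toy sentence. The class names, relation labels and example are assumptions for exposition, not the thesis's actual scheme.

```python
from dataclasses import dataclass

# Illustrative only: a toy encoding of events, time expressions and temporal
# relations, loosely in the spirit of a fine-grained temporal annotation
# scheme. The actual scheme described in the thesis differs in detail.

@dataclass
class Event:
    eid: str
    text: str        # textual expression conveying the event

@dataclass
class TimeExpression:
    tid: str
    text: str
    value: str       # normalised calendrical value, e.g. "1999-08-12"

@dataclass
class TemporalRelation:
    source: str      # id of an event or time expression
    target: str
    relation: str    # e.g. "BEFORE", "AFTER", "INCLUDES", "SIMULTANEOUS"

# "The ferry sank on Thursday after it collided with a cargo ship."
events = [Event("e1", "sank"), Event("e2", "collided")]
times = [TimeExpression("t1", "Thursday", "1999-08-12")]
relations = [
    TemporalRelation("e1", "t1", "INCLUDED_IN"),  # the sinking happened on Thursday
    TemporalRelation("e2", "e1", "BEFORE"),       # the collision preceded the sinking
]
```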
7

Focusing : a dual systems account for the apparent hemispheric lateralisation of language

Wray, Alison Margaret January 1987 (has links)
A model termed the 'Focusing Hypothesis' is presented. It is proposed that language processing is shared by an analytic and a holistic system, according to a task-specific balance of demand and efficiency. The analytic system could function alone, but it is more economical, in normal communication, for holistic processing to operate up to clausal level and analysis to deal with the evaluation of propositions. The severe limitations on the abilities of the holistic system originate from its use of formulae to recognise familiar words in familiar structures. Where problems arise, the analytic system 'trouble-shoots', by focusing attention onto the language, at the expense of propositional focus. The relative involvement of the two systems is variable, according to the strategy selected from a task-specific strategy option range; the strategy option range and preferences within it are built up as a response to the environmental requirements placed on the individual. Apparent evidence for left hemisphere lateralised language is re-examined in the light of this hypothesis, which proposes that the test environment of most psycholinguistic and clinical assessments induces a language-focusing strategy and thus deactivates the right hemisphere (holistic) mechanisms. It is predicted that careful modifications to the methods of test administration could reveal right hemisphere activity by permitting it to occur. Support for the hypothesis is drawn from the literature relating to neurophysiological (dynamic) studies and from the reported symptoms of left and right hemisphere damaged patients. Accounts of polyglot (bilingual) acquisition and storage and of differential language loss in polyglot aphasia are also examined. Output processing is examined with reference to one specific hypothesis (Pawley & Syder 1983) which closely aligns with the one for input presented by the Focusing Hypothesis. Two experiments attempt to examine contrasts in strategy as a function of age (Experiment I) and stimulus type (Experiment II). Neither displays strong patterns of the kind predicted to be associated with contrasts in hemispheric superiority according to strategy choice, and it is suggested that, despite the attempt, the experimental designs failed to enable consistent access to the proposition-focused strategies held to be operational in normal communication, that is, those involving holistic processing.
8

Combining Text Structure and Meaning to Support Text Mining

McDonald, Daniel Merrill January 2006 (has links)
Text mining methods strive to make unstructured text more useful for decision making. As part of the mining process, language is processed prior to analysis. Processing techniques have often focused primarily on either text structure or text meaning in preparing documents for analysis. As approaches have evolved over the years, increases in the use of lexical semantic parsing usually have come at the expense of full syntactic parsing. This work explores the benefits of combining structure and meaning or syntax and lexical semantics to support the text mining process. Chapter two presents the Arizona Summarizer, which includes several processing approaches to automatic text summarization. Each approach has varying usage of structural and lexical semantic information. The usefulness of the different summaries is evaluated in the finding stage of the text mining process. The summary produced using structural and lexical semantic information outperforms all others in the browse task. Chapter three presents the Arizona Relation Parser, a system for extracting relations from medical texts. The system is a grammar-based system that combines syntax and lexical semantic information in one grammar for relation extraction. The relation parser attempts to capitalize on the high precision performance of semantic systems and the good coverage of the syntax-based systems. The parser performs in line with the top reported systems in the literature. Chapter four presents the Arizona Entity Finder, a system for extracting named entities from text. The system greatly expands on the combination grammar approach from the relation parser. Each tag is given a semantic and syntactic component and placed in a tag hierarchy. Over 10,000 tags exist in the hierarchy. The system is tested on multiple domains and is required to extract seven additional types of entities in the second corpus. The entity finder achieves a 90 percent F-measure on the MUC-7 data and an 87 percent F-measure on the Yahoo data where additional entity types were extracted. Together, these three chapters demonstrate that combining text structure and meaning in algorithms to process language has the potential to improve the text mining process. A lexical semantic grammar is effective at recognizing domain-specific entities and language constructs. Syntax information, on the other hand, allows a grammar to generalize its rules when possible. Balancing performance and coverage in light of the world's growing body of unstructured text is important.
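The F-measure figures quoted above combine precision and recall into a single score; the short function below shows how such a score is computed for entity extraction. It is a generic illustration of the standard formula, not code from the systems described.

```python
def f_measure(true_positives: int, false_positives: int, false_negatives: int,
              beta: float = 1.0) -> float:
    """Balanced F-measure over extracted entities.

    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R).
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Example: 900 entities extracted correctly, 100 spurious, 100 missed gives
# precision = recall = 0.9, hence a "90 percent F-measure".
print(round(f_measure(900, 100, 100), 2))  # 0.9
```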
9

Financial information extraction using pre-defined and user-definable templates in the Lolita system

Constantino, Marco January 1997 (has links)
Financial operators today have access to an extremely large amount of data, both quantitative and qualitative, real-time or historical, and can use this information to support their decision-making process. Quantitative data are largely processed by automatic computer programs, often based on artificial intelligence techniques, that produce quantitative analysis, such as historical price analysis or technical analysis of price behaviour. In contrast, little progress has been made in the processing of qualitative data, which mainly consists of financial news articles from financial newspapers or on-line news providers. As a result, financial market players are overloaded with qualitative information which is potentially extremely useful but, due to the lack of time, is often ignored. The goal of this work is to reduce the qualitative data-overload of the financial operators. The research involves identifying the information in the source financial articles which is relevant for the financial operators' investment decision-making process and implementing the associated templates in the LOLITA system. The system should process a large number of source articles and extract specific templates according to the relevant information located in the source articles. The project also involves the design and implementation in LOLITA of a user-definable template interface that allows users to easily design new templates using sentences in natural language. This allows user-defined information extraction from source texts, and differs from most existing information extraction systems, which require the developers to code the templates directly in the system. The results of the research have shown that the system performed well in the extraction of financial templates from source articles, which would allow the financial operator to reduce his qualitative data-overload. The results have also shown that the user-definable template interface is a viable approach to user-defined information extraction. A trade-off has been identified between the ease of use of the user-definable template interface and the loss of performance compared to hand-coded templates.
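As a rough illustration of what filling a financial template involves, the sketch below extracts a toy "takeover" template with regular expressions. The template fields, pattern and example sentence are assumptions for exposition; LOLITA fills its templates through full natural language analysis, not surface pattern matching of this kind.

```python
import re
from typing import Optional

# Illustrative only: a crude regular-expression stand-in for one financial
# "takeover" template with buyer, target and price slots.
TAKEOVER_PATTERN = re.compile(
    r"(?P<buyer>[A-Z][\w&. ]+?) (?:agreed to acquire|acquired|is to buy) "
    r"(?P<target>[A-Z][\w&. ]+?) for (?P<price>[\d.]+ (?:million|billion) \w+)"
)

def fill_takeover_template(sentence: str) -> Optional[dict]:
    """Return a filled template (buyer, target, price) or None if no match."""
    match = TAKEOVER_PATTERN.search(sentence)
    return match.groupdict() if match else None

print(fill_takeover_template(
    "Alpha Corp agreed to acquire Beta Ltd for 250 million dollars."
))
# {'buyer': 'Alpha Corp', 'target': 'Beta Ltd', 'price': '250 million dollars'}
```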
10

Robust processing for constraint-based grammar formalisms

Fouvry, Frederik January 2003 (has links)
No description available.
