About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Understanding and improving object-oriented software through static software analysis: a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science in the University of Canterbury

Irwin, Warwick. January 2007 (has links)
Thesis (Ph. D.)--University of Canterbury, 2007. / Typescript (photocopy). Includes bibliographical references (p. 191-197). Also available via the World Wide Web.
22

Learning for semantic parsing with kernels under various forms of supervision

Kate, Rohit Jaivant, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
23

Learning for semantic parsing and natural language generation using statistical machine translation techniques

Wong, Yuk Wah, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
24

Parallel Parsing in a Multiprocessor Environment

Sarkar, Dilip 01 January 1988 (has links) (PDF)
Parsing in a multiprocessor environment is considered. Two models for asynchronous bottom-up parallel parsing are presented. A method for estimating speedup in asynchronous bottom-up parallel parsing is developed, and it is used to estimate the speedup obtainable by bottom-up parallel parsing of Pascal-like languages. It is found that bottom-up parallel parsing algorithms can attain a maximum speedup of O(L^(1/2)) with O(L^(1/2)) processors, where L is the number of tokens in the string being parsed. Hence, the bottom-up parallel parsing technique does not yield good speedup. A new parsing technique is proposed for parsing a class of block-structured languages; its novelty is that it is inherently parallel. By applying this new technique, a string of L tokens can be parsed in O(log L) time with O(L/log L) processors. The parsing algorithm builds on a parenthesis-matching algorithm developed here, which finds the matching of a sequence of parentheses in O(log L) time with O(L/log L) processors. Thus, the new parsing algorithm is cost optimal.
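The parenthesis-matching step is the heart of the O(log L) result. The following is a minimal sketch of the underlying idea only, written sequentially for clarity; it is not the thesis's cost-optimal multiprocessor algorithm. Nesting depth is a running sum of +1/-1 (computable by a parallel prefix scan), and an opening parenthesis matches the next closing parenthesis at the same depth.

```python
# Depth-based parenthesis matching: depths come from a prefix sum of +1/-1,
# and parentheses at the same depth pair up left to right. Both steps reduce
# to scans in the parallel setting, which is where the O(log L) time and
# O(L/log L) processor bounds come from.

def match_parens(s):
    depths, depth = [], 0
    for c in s:
        if c == '(':
            depths.append(depth)   # depth before an opening parenthesis
            depth += 1
        else:
            depth -= 1
            depths.append(depth)   # depth after a closing parenthesis
    match, open_at_depth = {}, {}
    for i, c in enumerate(s):
        if c == '(':
            open_at_depth.setdefault(depths[i], []).append(i)
        else:
            j = open_at_depth[depths[i]].pop()
            match[i], match[j] = j, i
    return match

print(match_parens("(()())"))   # {2: 1, 1: 2, 4: 3, 3: 4, 5: 0, 0: 5}
```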
25

The Design and Implementation of a Prolog Parser Using Javacc

Gupta, Pankaj 08 1900 (has links)
Operatorless Prolog text is LL(1) in nature, and any standard LL parser generator tool can be used to parse it. However, Prolog text that conforms to the ISO Prolog standard allows the definition of dynamic operators. Since Prolog operators can be defined at run time, operator symbols are not present in the grammar rules of the language, and unless the parser generator allows for some flexibility in the specification of the grammar rules, it is very difficult to generate a parser for such text. In this thesis we discuss the existing parsing methods and their modified versions for parsing languages with dynamic operator capabilities. Implementation details of a parser that uses Javacc as the parser generator tool to parse standard Prolog text are provided. The output of the parser is an abstract syntax tree that reflects the correct precedence and associativity rules among the various operators (static and dynamic) of the language. Empirical results show that a Prolog parser generated by a parser generator like Javacc is comparable in efficiency to a hand-coded parser.
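The core difficulty can be illustrated with a small sketch: because operators are defined at run time, precedence and associativity must live in a mutable table consulted during parsing rather than in the grammar itself. The sketch below shows the general technique (precedence climbing over a run-time operator table), not the thesis's Javacc implementation; the table entries are a hypothetical minimal subset of the ISO infix operators, where lower priority binds tighter and xfx/xfy/yfx control associativity.

```python
# Precedence climbing driven by a run-time operator table, one standard way
# to parse terms with dynamic operators. Operands are plain atoms here; a
# real ISO Prolog parser also handles prefix/postfix operators and functors.

ops = {
    ':-': (1200, 'xfx'), ';': (1100, 'xfy'), ',': (1000, 'xfy'),
    '=':  (700, 'xfx'),  '+': (500, 'yfx'),  '-': (500, 'yfx'),
    '*':  (400, 'yfx'),
}

def parse(tokens, max_prio=1200):
    left, rest = tokens[0], tokens[1:]
    left_prio = 0                              # an atom has priority 0
    while rest and rest[0] in ops:
        name = rest[0]
        prio, typ = ops[name]
        left_max = prio if typ == 'yfx' else prio - 1
        if prio > max_prio or left_prio > left_max:
            break
        right_max = prio if typ == 'xfy' else prio - 1
        right, rest = parse(rest[1:], right_max)
        left, left_prio = (name, left, right), prio
    return left, rest

ops['===>'] = (800, 'xfx')                     # a dynamic operator, as via op/3
print(parse('a + b * c ===> d'.split())[0])
# ('===>', ('+', 'a', ('*', 'b', 'c')), 'd')
```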
26

Semi-automatic acquisition of domain-specific semantic structures.

January 2000 (has links)
Siu, Kai-Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 99-106). / Abstracts in English and Chinese. / Contents:
Chapter 1 Introduction (p.1): 1.1 Thesis Outline (p.5)
Chapter 2 Background (p.6): 2.1 Natural Language Understanding (p.6); 2.1.1 Rule-based Approaches (p.7); 2.1.2 Stochastic Approaches (p.8); 2.1.3 Phrase-Spotting Approaches (p.9); 2.2 Grammar Induction (p.10); 2.2.1 Semantic Classification Trees (p.11); 2.2.2 Simulated Annealing (p.12); 2.2.3 Bayesian Grammar Induction (p.12); 2.2.4 Statistical Grammar Induction (p.13); 2.3 Machine Translation (p.14); 2.3.1 Rule-based Approach (p.15); 2.3.2 Statistical Approach (p.15); 2.3.3 Example-based Approach (p.16); 2.3.4 Knowledge-based Approach (p.16); 2.3.5 Evaluation Method (p.19)
Chapter 3 Semi-Automatic Grammar Induction (p.20): 3.1 Agglomerative Clustering (p.20); 3.1.1 Spatial Clustering (p.21); 3.1.2 Temporal Clustering (p.24); 3.1.3 Free Parameters (p.26); 3.2 Post-processing (p.27); 3.3 Chapter Summary (p.29)
Chapter 4 Application to the ATIS Domain (p.30): 4.1 The ATIS Domain (p.30); 4.2 Parameters Selection (p.32); 4.3 Unsupervised Grammar Induction (p.35); 4.4 Prior Knowledge Injection (p.40); 4.5 Evaluation (p.43); 4.5.1 Parse Coverage in Understanding (p.45); 4.5.2 Parse Errors (p.46); 4.5.3 Analysis (p.47); 4.6 Chapter Summary (p.49)
Chapter 5 Portability to Chinese (p.50): 5.1 Corpus Preparation (p.50); 5.1.1 Tokenization (p.51); 5.2 Experiments (p.52); 5.2.1 Unsupervised Grammar Induction (p.52); 5.2.2 Prior Knowledge Injection (p.56); 5.3 Evaluation (p.58); 5.3.1 Parse Coverage in Understanding (p.59); 5.3.2 Parse Errors (p.60); 5.4 Grammar Comparison Across Languages (p.60); 5.5 Chapter Summary (p.64)
Chapter 6 Bi-directional Machine Translation (p.65): 6.1 Bilingual Dictionary (p.67); 6.2 Concept Alignments (p.68); 6.3 Translation Procedures (p.73); 6.3.1 The Matching Process (p.74); 6.3.2 The Searching Process (p.76); 6.3.3 Heuristics to Aid Translation (p.81); 6.4 Evaluation (p.82); 6.4.1 Coverage (p.83); 6.4.2 Performance (p.86); 6.5 Chapter Summary (p.89)
Chapter 7 Conclusions (p.90): 7.1 Summary (p.90); 7.2 Future Work (p.92); 7.2.1 Suggested Improvements on Grammar Induction Process (p.92); 7.2.2 Suggested Improvements on Bi-directional Machine Translation (p.96); 7.2.3 Domain Portability (p.97); 7.3 Contributions (p.97)
Bibliography (p.99); Appendix A Original SQL Queries (p.107); Appendix B Induced Grammar (p.109); Appendix C Seeded Categories (p.111)
27

Realization of automatic concept extraction for Chinese conceptual information retrieval = 中文槪念訊息檢索中自動槪念抽取的實踐 (Zhong wen gai nian xun xi jian suo zhong zi dong gai nian chou qu de shi jian)

January 1998 (has links)
Wai Ip Lam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 84-87). / Text in English; abstract also in Chinese. / Contents:
Chapter 1 Introduction (p.1)
Chapter 2 Background (p.5): 2.1 Information Retrieval (p.5); 2.1.1 Index Extraction (p.6); 2.1.2 Other Approaches to Extracting Indexes (p.7); 2.1.3 Conceptual Information Retrieval (p.8); 2.1.4 Information Extraction (p.9); 2.2 Natural Language Parsing (p.9); 2.2.1 Linguistics-based (p.10); 2.2.2 Corpus-based (p.11)
Chapter 3 Concept Extraction (p.13): 3.1 Concepts in Sentences (p.13); 3.1.1 Semantic Structures and Thematic Roles (p.13); 3.1.2 Syntactic Functions (p.14); 3.2 Representing Concepts (p.15); 3.3 Application to Conceptual Information Retrieval (p.18); 3.4 Overview of Our Concept Extraction Model (p.20); 3.4.1 Corpus Training (p.21); 3.4.2 Sentence Analyzing (p.22)
Chapter 4 Noun Phrase Detection (p.23): 4.1 Significance of Noun Phrase Detection (p.23); 4.1.1 Noun Phrases versus Terminals in Parse Trees (p.23); 4.1.2 Quantitative Analysis of Applying Noun Phrase Detection (p.26); 4.2 An Algorithm for Chinese Noun Phrase Partial Parsing (p.28); 4.2.1 The Hybrid Approach (p.28); 4.2.2 CNP3, the Chinese NP Partial Parser (p.30)
Chapter 5 Rule Extraction and SVO Parsing (p.35): 5.1 Annotation of Corpora (p.36); 5.1.1 Components of Chinese Sentence Patterns (p.36); 5.1.2 Annotating Sentence Structures (p.37); 5.1.3 Illustrative Examples (p.38); 5.2 Parsing with Rules Obtained Directly from Corpora (p.43); 5.2.1 Extracting Rules (p.43); 5.2.2 Parsing (p.44); 5.3 Using Word Specific Information (p.45)
Chapter 6 Generalization of Rules (p.48): 6.1 Essence of Chinese Linguistics on Generalization (p.49); 6.1.1 Classification of Chinese Sentence Patterns (p.50); 6.1.2 Revision of Chinese Verb Phrase Classification (p.52); 6.2 Initial Generalization (p.53); 6.2.1 Generalizing Rules (p.55); 6.2.2 Dealing with Alternative Results (p.58); 6.2.3 Parsing (p.58); 6.2.4 An Illustrative Example (p.59); 6.3 Further Generalization (p.60)
Chapter 7 Experiments on SVO Parsing (p.62): 7.1 Experimental Setup (p.63); 7.2 Effect of Adopting Noun Phrase Detection (p.65); 7.3 Results of Generalization (p.68); 7.4 Reliability Evaluation (p.69); 7.4.1 Convergence Sequence Tests (p.69); 7.4.2 Cross Evaluation Tests (p.72); 7.5 Overall Performance (p.75)
Chapter 8 Conclusions (p.79): 8.1 Summary (p.79); 8.2 Contribution (p.81); 8.3 Future Directions (p.81); 8.3.1 Improvements in Parsing (p.81); 8.3.2 Concept Representations (p.82); 8.3.3 Non-IR Applications (p.83)
Bibliography (p.84); Appendix (p.88); Appendix A The Extended Part of Speech Tag Set (p.88)
28

Cross-Lingual Transfer of Natural Language Processing Systems

Rasooli, Mohammad Sadegh January 2019 (has links)
Accurate natural language processing systems rely heavily on annotated datasets. In the absence of such datasets, transfer methods can help to develop a model by transferring annotations from one or more rich-resource languages to the target language of interest. These methods are generally divided into two approaches: 1) annotation projection from translation data, also known as parallel data, using supervised models in rich-resource languages, and 2) direct model transfer from annotated datasets in rich-resource languages. In this thesis, we demonstrate different methods for the transfer of dependency parsers and sentiment analysis systems. We propose an annotation projection method that performs well in scenarios for which a large amount of in-domain parallel data is available. We also propose a method combining annotation projection and direct transfer that can leverage a minimal amount of information from a small out-of-domain parallel dataset to develop highly accurate transfer models. Furthermore, we propose an unsupervised syntactic reordering model to improve the accuracy of dependency parser transfer for non-European languages. Finally, we conduct a diverse set of experiments for the transfer of sentiment analysis systems in different data settings. A summary of our contributions is as follows:
* We develop accurate dependency parsers using parallel text in an annotation projection framework. We make use of the fact that the density of word alignments is a valuable indicator of reliability in annotation projection (see the sketch after this abstract).
* We develop accurate dependency parsers in the absence of a large amount of parallel data. We use the Bible data, which is orders of magnitude smaller than a conventional parallel dataset, to provide minimal cues for creating cross-lingual word representations. Our model is also capable of boosting the performance of annotation projection when a large amount of parallel data is available. Our model develops cross-lingual word representations that go beyond the traditional delexicalized direct transfer methods. Moreover, we propose a simple but effective word translation approach that brings in explicit lexical features from the target language in our direct transfer method.
* We develop different syntactic reordering models that can change the source treebanks in rich-resource languages, thus preventing the parser from learning an ill-suited model for an unrelated target language. Our experimental results show substantial improvements on non-European languages.
* We develop transfer methods for sentiment analysis in different data availability scenarios. We show that we can leverage cross-lingual word embeddings to create accurate sentiment analysis systems in the absence of annotated data in the target language of interest.
We believe that the novelties introduced in this thesis demonstrate the usefulness of transfer methods. This is appealing in practice, especially since we suggest eliminating the requirement of annotating new datasets for low-resource languages, where annotation is expensive, if not impossible, to obtain.
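As a schematic sketch of the projection idea, the snippet below projects POS tags rather than dependency structures to keep the example short, and uses a simple coverage ratio as a stand-in for the thesis's more refined notion of alignment density; the function name and threshold are illustrative, not taken from the thesis.

```python
# Schematic annotation projection across word alignments. 'alignment' is a
# list of (source_index, target_index) pairs as produced by a word aligner;
# the density filter reflects the observation that denser alignments yield
# more reliable projections.

def project_tags(src_tags, alignment, tgt_len, min_density=0.8):
    tgt_tags = [None] * tgt_len
    for s, t in alignment:
        tgt_tags[t] = src_tags[s]          # copy the aligned source tag over
    density = sum(tag is not None for tag in tgt_tags) / max(tgt_len, 1)
    return tgt_tags if density >= min_density else None   # drop sparse ones

print(project_tags(['DET', 'NOUN', 'VERB'], [(0, 0), (1, 1), (2, 2)], 3))
# ['DET', 'NOUN', 'VERB']
```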
29

A robust unification-based parser for Chinese natural language processing.

January 2001 (has links)
Chan Shuen-ti Roy. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 168-175). / Abstracts in English and Chinese. / Contents:
Chapter 1 Introduction (p.12): 1.1 The nature of natural language processing (p.12); 1.2 Applications of natural language processing (p.14); 1.3 Purpose of study (p.17); 1.4 Organization of this thesis (p.18)
Chapter 2 Organization and methods in natural language processing (p.20): 2.1 Organization of natural language processing system (p.20); 2.2 Methods employed (p.22); 2.3 Unification-based grammar processing (p.22); 2.3.1 Generalized Phrase Structure Grammar (GPSG) (p.27); 2.3.2 Head-driven Phrase Structure Grammar (HPSG) (p.31); 2.3.3 Common drawbacks of UBGs (p.33); 2.4 Corpus-based processing (p.34); 2.4.1 Drawback of corpus-based processing (p.35)
Chapter 3 Difficulties in Chinese language processing and its related works (p.37): 3.1 A glance at the history (p.37); 3.2 Difficulties in syntactic analysis of Chinese (p.37); 3.2.1 Writing system of Chinese causes segmentation problem (p.38); 3.2.2 Words serving multiple grammatical functions without inflection (p.40); 3.2.3 Word order of Chinese (p.42); 3.2.4 The Chinese grammatical word (p.43); 3.3 Related works (p.45); 3.3.1 Unification grammar processing approach (p.45); 3.3.2 Corpus-based processing approach (p.48); 3.4 Restatement of goal (p.50)
Chapter 4 SERUP: Statistical-Enhanced Robust Unification Parser (p.54)
Chapter 5 Step One: automatic preprocessing (p.57): 5.1 Segmentation of lexical tokens (p.57); 5.2 Conversion of date, time and numerals (p.61); 5.3 Identification of new words (p.62); 5.3.1 Proper nouns: Chinese names (p.63); 5.3.2 Other proper nouns and multi-syllabic words (p.67); 5.4 Defining smallest parsing unit (p.82); 5.4.1 The Chinese sentence (p.82); 5.4.2 Breaking down the paragraphs (p.84); 5.4.3 Implementation (p.87)
Chapter 6 Step Two: grammar construction (p.91): 6.1 Criteria in choosing a UBG model (p.91); 6.2 The grammar in details (p.92); 6.2.1 The PHON feature (p.93); 6.2.2 The SYN feature (p.94); 6.2.3 The SEM feature (p.98); 6.2.4 Grammar rules and feature principles (p.99); 6.2.5 Verb phrases (p.101); 6.2.6 Noun phrases (p.104); 6.2.7 Prepositional phrases (p.113); 6.2.8 "Ba2" and "Bei4" constructions (p.115); 6.2.9 The terminal node S (p.119); 6.2.10 Summary of phrasal rules (p.121); 6.2.11 Morphological rules (p.122)
Chapter 7 Step Three: resolving structural ambiguities (p.128): 7.1 Sources of ambiguities (p.128); 7.2 The traditional practices: an illustration (p.132); 7.3 Deficiency of current practices (p.134); 7.4 A new point of view: Wu (1999) (p.140); 7.5 Improvement over Wu (1999) (p.142); 7.6 Conclusion on semantic features (p.146)
Chapter 8 Implementation, performance and evaluation (p.148): 8.1 Implementation (p.148); 8.2 Performance and evaluation (p.150); 8.2.1 The test set (p.150); 8.2.2 Segmentation of lexical tokens (p.150); 8.2.3 New word identification (p.152); 8.2.4 Parsing unit segmentation (p.156); 8.2.5 The grammar (p.158); 8.3 Overall performance of SERUP (p.162)
Chapter 9 Conclusion (p.164): 9.1 Summary of this thesis (p.164); 9.2 Contribution of this thesis (p.165); 9.3 Future work (p.166)
References (p.168); Appendix I (p.176); Appendix II (p.181); Appendix III (p.183)
30

Lexical approaches to backoff in statistical parsing

Lakeland, Corrin, n/a January 2006 (has links)
This thesis develops a new method for predicting probabilities in a statistical parser so that more sophisticated probabilistic grammars can be used. A statistical parser uses a probabilistic grammar derived from a training corpus of hand-parsed sentences. The grammar is represented as a set of constructions - in a simple case these might be context-free rules. The probability of each construction in the grammar is then estimated by counting its relative frequency in the corpus. A crucial problem when building a probabilistic grammar is to select an appropriate level of granularity for describing the constructions being learned. The more constructions we include in our grammar, the more sophisticated a model of the language we produce. However, if too many different constructions are included, then our corpus is unlikely to contain reliable information about the relative frequency of many constructions. In existing statistical parsers two main approaches have been taken to choosing an appropriate granularity. In a non-lexicalised parser constructions are specified as structures involving particular parts-of-speech, thereby abstracting over individual words. Thus, in the training corpus two syntactic structures involving the same parts-of-speech but different words would be treated as two instances of the same event. In a lexicalised grammar the assumption is that the individual words in a sentence carry information about its syntactic analysis over and above what is carried by its part-of-speech tags. Lexicalised grammars have the potential to provide extremely detailed syntactic analyses; however, Zipf's law makes it hard for such grammars to be learned. In this thesis, we propose a method for optimising the trade-off between informative and learnable constructions in statistical parsing. We implement a grammar which works at a level of granularity in between single words and parts-of-speech, by grouping words together using unsupervised clustering based on bigram statistics. We begin by implementing a statistical parser to serve as the basis for our experiments. The parser, based on that of Michael Collins (1999), contains a number of new features of general interest. We then implement a model of word clustering, which we believe is the first to deliver vector-based word representations for an arbitrarily large lexicon. Finally, we describe a series of experiments in which the statistical parser is trained using categories based on these word representations.
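A toy sketch of the clustering idea follows; the tokenization, left/right bigram count vectors, normalisation, and k-means step are all placeholder assumptions, and the thesis's clustering model and vector representations are more sophisticated. The point it illustrates is the granularity trade-off: each word is represented by how often frequent words appear to its left or right, so words with similar contexts land in the same cluster, giving the grammar categories between raw words and bare part-of-speech tags.

```python
# Toy bigram-statistics word clustering: represent each word by counts of
# its left/right neighbours over a frequent-word vocabulary, then cluster.
# Assumes more distinct words than clusters; sizes are illustrative.

import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

def cluster_words(sentences, vocab_size=1000, k=50):
    freq = Counter(w for s in sentences for w in s)
    vocab = {w: i for i, (w, _) in enumerate(freq.most_common(vocab_size))}
    vecs = defaultdict(lambda: np.zeros(2 * len(vocab)))
    for s in sentences:
        for a, b in zip(s, s[1:]):
            if b in vocab: vecs[a][vocab[b]] += 1               # right context
            if a in vocab: vecs[b][len(vocab) + vocab[a]] += 1  # left context
    words = list(vecs)
    X = np.stack([vecs[w] / max(vecs[w].sum(), 1.0) for w in words])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return {w: int(c) for w, c in zip(words, labels)}
```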
