41

The use of multiple speech recognition hypotheses for natural language understanding.

January 2003 (has links)
Wang Ying. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 102-104). Abstracts in English and Chinese.
Contents:
Chapter 1: Introduction (1.1 Overview; 1.2 Thesis Goals; 1.3 Thesis Outline)
Chapter 2: Background (2.1 Speech Recognition; 2.2 Natural Language Understanding; 2.2.1 Rule-based Approach; 2.2.2 Corpus-based Approach; 2.3 Integration of Speech Recognition with NLU; 2.3.1 Word Graph; 2.3.2 N-best List; 2.4 The ATIS Domain; 2.5 Chapter Summary)
Chapter 3: Generation of Speech Recognition Hypotheses (3.1 Grammar Development for the OpenSpeech Recognizer; 3.2 Generation of Speech Recognition Hypotheses; 3.3 Evaluation of Speech Recognition Hypotheses; 3.3.1 Recognition Accuracy; 3.3.2 Concept Accuracy; 3.4 Results and Analysis; 3.5 Chapter Summary)
Chapter 4: Belief Networks for NLU (4.1 Problem Formulation; 4.2 The Original NLU Framework; 4.2.1 Semantic Tagging; 4.2.2 Concept Selection; 4.2.3 Bayesian Inference; 4.2.4 Thresholding; 4.2.5 Goal Identification; 4.3 Evaluation Method of Goal Identification Performance; 4.4 Baseline Result; 4.5 Chapter Summary)
Chapter 5: The Effects of Recognition Errors on NLU (5.1 Experiments; 5.1.1 Perfect Case - The Use of Transcripts; 5.1.2 Train on Recognition Hypotheses; 5.1.3 Test on Recognition Hypotheses; 5.1.4 Train and Test on Recognition Hypotheses; 5.2 Analysis of Results; 5.3 Chapter Summary)
Chapter 6: The Use of Multiple Speech Recognition Hypotheses for NLU (6.1 The Extended NLU Framework; 6.1.1 Semantic Tagging; 6.1.2 Recognition Confidence Score Normalization; 6.1.3 Concept Selection; 6.1.4 Bayesian Inference; 6.1.5 Combination with Confidence Scores; 6.1.6 Thresholding; 6.1.7 Goal Identification; 6.2 Experiments; 6.2.1 The Use of First Best Recognition Hypothesis; 6.2.2 Train on Multiple Recognition Hypotheses; 6.2.3 Test on Multiple Recognition Hypotheses; 6.2.4 Train and Test on Multiple Recognition Hypotheses; 6.3 Significance Testing; 6.4 Result Analysis; 6.5 Chapter Summary)
Chapter 7: Conclusions and Future Work (7.1 Conclusions; 7.2 Contribution; 7.3 Future Work)
Bibliography
Appendix A: Speech Recognition Hypotheses Distribution
Appendix B: Recognition Errors in Three Kinds of Queries
Appendix C: The Effects of Recognition Errors in N-Best List on NLU
Appendix D: Training on Multiple Recognition Hypotheses
Appendix E: Testing on Multiple Recognition Hypotheses
Appendix F: Hand-designed Grammar for ATIS
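A minimal, illustrative sketch of the kind of pipeline outlined in Chapter 6 above: concepts extracted from each hypothesis in an N-best list are weighted by normalized recognition confidence scores before a Bayes-style goal decision with thresholding. All names, probabilities, and the combination rule here are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch only: N-best concept evidence weighted by normalized
# confidence scores, followed by a naive-Bayes-style goal decision with
# rejection thresholding. Toy concept/goal names are assumptions.
from collections import defaultdict

def normalize_confidences(nbest):
    """nbest: list of (concept_set, raw_confidence) pairs for one utterance."""
    total = sum(score for _, score in nbest) or 1.0
    return [(concepts, score / total) for concepts, score in nbest]

def concept_evidence(nbest):
    """Accumulate per-concept weight across the N-best hypotheses."""
    evidence = defaultdict(float)
    for concepts, weight in normalize_confidences(nbest):
        for concept in concepts:
            evidence[concept] += weight
    return evidence

def identify_goal(evidence, likelihoods, priors, threshold=0.5):
    """Score each goal as prior times a product of soft concept likelihoods."""
    scores = {}
    for goal, prior in priors.items():
        score = prior
        for concept, weight in evidence.items():
            p = likelihoods.get((goal, concept), 0.05)  # toy P(concept | goal)
            score *= (1.0 - weight) + weight * p        # soften by evidence weight
        scores[goal] = score
    total = sum(scores.values()) or 1.0
    goal, best = max(scores.items(), key=lambda kv: kv[1])
    return goal if best / total >= threshold else None  # reject below threshold

# Toy N-best list for an ATIS-style query "show flights from Boston to Denver"
nbest = [({"FLIGHT", "FROM_CITY", "TO_CITY"}, 0.62),
         ({"FARE", "FROM_CITY", "TO_CITY"}, 0.21)]
priors = {"FLIGHT_INFO": 0.6, "FARE_INFO": 0.4}
likelihoods = {("FLIGHT_INFO", "FLIGHT"): 0.9, ("FLIGHT_INFO", "FROM_CITY"): 0.8,
               ("FLIGHT_INFO", "TO_CITY"): 0.8, ("FARE_INFO", "FARE"): 0.9,
               ("FARE_INFO", "FROM_CITY"): 0.6, ("FARE_INFO", "TO_CITY"): 0.6}
print(identify_goal(concept_evidence(nbest), likelihoods, priors))  # FLIGHT_INFO
```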
42

Semantic annotation of Chinese texts with message structures based on HowNet

Wong, Ping-wai. January 2007 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2007. / Title proper from title frame. Also available in printed format.
43

Generating natural language text in response to questions about database structure

McKeown, Kathleen R. January 1900 (has links)
Thesis (Ph. D.)--University of Pennsylvania, 1982. / Cover title. Includes bibliographical references and index.
44

TERRESA: a task-based message-driven parallel semantic network system

Lee, Chain-Wu. January 1900 (has links)
Thesis (Ph. D.)--State University of New York at Buffalo, 1999. / "January 25, 1999." Includes bibliographical references (leaves 201-209). Also available in print.
45

Integrating top-down and bottom-up approaches in inductive logic programming: applications in natural language processing and relational data mining

Tang, Lap Poon Rupert 28 August 2008 (has links)
Not available / text
46

Learning for information extraction: from named entity recognition and disambiguation to relation extraction

Bunescu, Razvan Constantin, 1975- 28 August 2008 (has links)
Information Extraction, the task of locating textual mentions of specific types of entities and their relationships, aims at representing the information contained in text documents in a structured format that is more amenable to applications in data mining, question answering, or the semantic web. The goal of our research is to design information extraction models that obtain improved performance by exploiting types of evidence that have not been explored in previous approaches. Since designing an extraction system through introspection by a domain expert is a laborious and time consuming process, the focus of this thesis will be on methods that automatically induce an extraction model by training on a dataset of manually labeled examples.

Named Entity Recognition is an information extraction task that is concerned with finding textual mentions of entities that belong to a predefined set of categories. We approach this task as a phrase classification problem, in which candidate phrases from the same document are collectively classified. Global correlations between candidate entities are captured in a model built using the expressive framework of Relational Markov Networks. Additionally, we propose a novel tractable approach to phrase classification for named entity recognition based on a special Junction Tree representation.

Classifying entity mentions into a predefined set of categories achieves only a partial disambiguation of the names. This is further refined in the task of Named Entity Disambiguation, where names need to be linked to their actual denotations. In our research, we use Wikipedia as a repository of named entities and propose a ranking approach to disambiguation that exploits learned correlations between words from the name context and categories from the Wikipedia taxonomy.

Relation Extraction refers to finding relevant relationships between entities mentioned in text documents. Our approaches to this information extraction task differ in the type and the amount of supervision required. We first propose two relation extraction methods that are trained on documents in which sentences are manually annotated for the required relationships. In the first method, the extraction patterns correspond to sequences of words and word classes anchored at two entity names occurring in the same sentence. These are used as implicit features in a generalized subsequence kernel, with weights computed through training of Support Vector Machines. In the second approach, the implicit extraction features are focused on the shortest path between the two entities in the word-word dependency graph of the sentence.

Finally, in a significant departure from previous learning approaches to relation extraction, we propose reducing the amount of required supervision to only a handful of pairs of entities known to exhibit or not exhibit the desired relationship. Each pair is associated with a bag of sentences extracted automatically from a very large corpus. We extend the subsequence kernel to handle this weaker form of supervision, and describe a method for weighting features in order to focus on those correlated with the target relation rather than with the individual entities. The resulting Multiple Instance Learning approach offers a competitive alternative to previous relation extraction methods, at a significantly reduced cost in human supervision. / text
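A minimal sketch of the shortest-dependency-path idea described in the abstract above: given a word-word dependency graph for a sentence, the path connecting two entity mentions is extracted and used as a relation feature. The toy sentence, the hand-written dependency edges, and the networkx-based path lookup are illustrative assumptions, not the thesis's kernel implementation.

```python
# Illustrative sketch: shortest path between two entities in a dependency graph.
# The edges below are hand-written rather than produced by a syntactic parser.
import networkx as nx

def dependency_path(edges, entity1, entity2):
    """edges: iterable of (head, dependent) word pairs; returns the word path."""
    graph = nx.Graph()                 # dependencies treated as undirected edges
    graph.add_edges_from(edges)
    return nx.shortest_path(graph, source=entity1, target=entity2)

# Toy sentence: "protesters seized several stations"
edges = [
    ("seized", "protesters"),   # subject dependency
    ("seized", "stations"),     # object dependency
    ("stations", "several"),    # modifier dependency
]
print(dependency_path(edges, "protesters", "stations"))
# ['protesters', 'seized', 'stations'] -> path feature linking the two entities
```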
47

Learning for semantic parsing with kernels under various forms of supervision

Kate, Rohit Jaivant, 1978- 28 August 2008 (has links)
Not available / text
48

Learning for semantic parsing and natural language generation using statistical machine translation techniques

Wong, Yuk Wah, 1979- 28 August 2008 (has links)
Not available
49

Automatic question generation: a syntactical approach to the sentence-to-question generation case

Ali, Husam Deeb Abdullah Deeb January 2012 (has links)
Humans are often not very skilled at asking good questions, and their performance is inconsistent across situations. Question Generation (QG) and Question Answering (QA) have therefore recently become two major challenges for the Natural Language Processing (NLP), Natural Language Generation (NLG), Intelligent Tutoring Systems, and Information Retrieval (IR) communities. In this thesis, we consider a form of the sentence-to-question generation task: given a sentence as input, the QG system generates a set of questions for which the sentence contains, implies, or needs answers. Since the input may be a complex sentence, our system first generates elementary sentences from it using a syntactic parser. A Part-of-Speech (POS) tagger and a Named Entity Recognizer (NER) are used to encode the necessary information. Based on the subject, verb, object, and preposition information, sentences are classified in order to determine the type of questions to be generated. We conduct extensive experiments on the TREC-2007 (Question Answering Track) dataset. The scenario for the main task in the TREC-2007 QA track was that an adult, native speaker of English was looking for information about a target of interest. Using the given target, we select the important sentences from the large sentence pool and generate possible questions from them. Once all the questions have been generated, we perform a recall-based evaluation: we count the overlap between our system-generated questions and the questions given in the TREC dataset, so that a topic receives a recall of 1.0 if our QG system generates all of the given TREC questions and 0.0 if it generates none of them. To validate the performance of our QG system, we took part in the First Question Generation Shared Task Evaluation Challenge (QGSTEC) in 2010. Experimental analysis and evaluation results, along with a comparison against the other QGSTEC 2010 participants, show the potential significance of our QG system. / x, 125 leaves : ill. ; 29 cm
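A minimal sketch of the recall-based evaluation described in the abstract above: for one topic, count how many of the reference TREC questions are matched by system-generated questions. Matching here is plain normalized string equality, which is an assumption; the thesis may use a looser overlap criterion.

```python
# Illustrative sketch: per-topic recall of generated questions against
# reference questions. Normalization and exact matching are assumptions.

def normalize(question):
    """Lowercase, trim whitespace, and drop a trailing question mark."""
    return " ".join(question.lower().strip().rstrip("?").split())

def topic_recall(generated, reference):
    """Fraction of reference questions that appear among generated questions."""
    generated_set = {normalize(q) for q in generated}
    matched = sum(1 for q in reference if normalize(q) in generated_set)
    return matched / len(reference) if reference else 0.0

# Toy example: two generated questions, two reference questions, one overlap.
generated = ["Who is the CEO of the company?", "When was the company founded?"]
reference = ["When was the company founded?", "Where is the company headquartered?"]
print(topic_recall(generated, reference))  # 0.5
```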
50

Integrating intention and convention to organize problem solving dialogues

Turner, Elise Hill 12 1900 (has links)
No description available.
