1

Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgements

Marton, Gregory 09 January 2006
TREC Definition and Relationship questions are evaluated on the basis of information nuggets that may be contained in system responses. Human evaluators provide informal descriptions of each nugget, and judgements (assignments of nuggets to responses) for each response submitted by participants. The best present automatic evaluation for these kinds of questions is Pourpre. Pourpre uses a stemmed unigram similarity of responses with nugget descriptions, yielding an aggregate result that is difficult to interpret, but is useful for relative comparison. Nuggeteer, by contrast, uses both the human descriptions and the human judgements, and makes binary decisions about each response, so that the end result is as interpretable as the official score. I explore n-gram length, use of judgements, stemming, and term weighting, and provide a new algorithm quantitatively comparable to, and qualitatively better than, the state of the art.
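The core decision described here can be sketched in a few lines: score a system response against a nugget's description (and, when available, responses already judged to contain it) by stemmed unigram overlap, and threshold the best score into a binary decision. A minimal sketch, assuming whitespace tokenization, a Porter stemmer, and an illustrative 0.5 threshold rather than Marton's exact choices:

```python
# A minimal sketch of Nuggeteer-style binary nugget judging.
# Assumptions (not the thesis's exact choices): whitespace tokenization,
# Porter stemming, unigram overlap, and a fixed 0.5 threshold.
from nltk.stem import PorterStemmer

_stemmer = PorterStemmer()

def _stems(text):
    """Lowercase, split on whitespace, and stem each token."""
    return {_stemmer.stem(tok) for tok in text.lower().split()}

def contains_nugget(response, description, judged_positives=(), threshold=0.5):
    """Decide whether `response` contains the nugget.

    The response is scored against the nugget description and against any
    responses human assessors already judged to contain the nugget; the
    best overlap is compared to the threshold, giving a binary decision.
    """
    best = 0.0
    for reference in (description, *judged_positives):
        ref_stems = _stems(reference)
        if ref_stems:
            best = max(best, len(_stems(response) & ref_stems) / len(ref_stems))
    return best >= threshold
```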
2

Functional inferences over heterogeneous data

Nuamah, Kwabena Amoako January 2018
Inference enables an agent to create new knowledge from old or discover implicit relationships between concepts in a knowledge base (KB), provided that appropriate techniques are employed to deal with ambiguous, incomplete and sometimes erroneous data. The ever-increasing volumes of KBs on the web, available for use by automated systems, present an opportunity to leverage the available knowledge in order to improve the inference process in automated query answering systems. This thesis focuses on the FRANK (Functional Reasoning for Acquiring Novel Knowledge) framework, which responds to queries where no suitable answer is readily contained in any available data source, using a variety of inference operations. Most question answering and information retrieval systems assume that answers to queries are stored in some form in the KB, thereby limiting the range of answers they can find. We take an approach motivated by rich forms of inference using techniques, such as regression, for prediction. For instance, FRANK can answer "what country in Europe will have the largest population in 2021?" by decomposing Europe geo-spatially, using regression on country populations for past years, and selecting the country with the largest predicted value. Our technique, which we refer to as Rich Inference, combines heuristics, logic and statistical methods to infer novel answers to queries. It also determines what facts are needed for inference, searches for them, and then integrates the diverse facts and their formalisms into a local query-specific inference tree. Our primary contribution in this thesis is the inference algorithm on which FRANK works. This includes (1) the process of recursively decomposing queries in a way that allows variables in the query to be instantiated by facts in KBs; (2) the use of aggregate functions to perform arithmetic and statistical operations (e.g., prediction) to infer new values from child nodes; and (3) the estimation and propagation of uncertainty values into the returned answer, based on errors introduced by noise in the KBs or by the aggregate functions. We also discuss many of the core concepts and modules that constitute FRANK. We explain FRANK's internal "alist" representation, which gives it the required flexibility to tackle different kinds of problems with minimal changes to its internal representation. We discuss the grammar for a simple query language that allows users to express queries in a formal way, so that we avoid the complexities of natural language queries, a problem that falls outside the scope of this thesis. We evaluate the framework with datasets from open sources.
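The decompose-then-aggregate pattern in this abstract can be sketched compactly. Below, a toy KB and a toy geospatial part-of map stand in for FRANK's real data sources and alist machinery; the recursion answers a "largest population in Europe" style query by solving each part and applying a MAX aggregate. Everything named here is an illustrative assumption, not FRANK's actual API, and the regression step for future years is omitted.

```python
# A schematic sketch of recursive decomposition with an aggregate function,
# as described in the abstract. The KB, the part-of map, and the error
# handling are toy assumptions; FRANK's real alists and KBs are far richer.
KB = {
    ("France", "population", 2021): (65.4e6, 0.02),   # (value, rel. error)
    ("Germany", "population", 2021): (83.2e6, 0.02),
}
PARTS = {"Europe": ["France", "Germany"]}  # toy geospatial decomposition

def solve(subject, prop, time):
    """Return (answer_subject, value, relative_error) for a query."""
    if (subject, prop, time) in KB:            # leaf: fact found in the KB
        value, err = KB[(subject, prop, time)]
        return subject, value, err
    # Interior node: decompose the subject and aggregate child answers.
    children = [solve(part, prop, time) for part in PARTS[subject]]
    # MAX aggregate: the winning child's uncertainty propagates upward.
    return max(children, key=lambda child: child[1])

print(solve("Europe", "population", 2021))
# -> ('Germany', 83200000.0, 0.02)
```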
3

Cross-Lingual Question Answering for Corpora with Question-Answer Pairs

Huang, Shiuan-Lung 02 August 2005
Question answering from a corpus of question-answer (QA) pairs accepts a user question in a natural language and retrieves relevant QA pairs from the corpus. Most existing question answering techniques are monolingual in nature. That is, the language used for expressing a user question is identical to that of the QA pairs in the corpus. However, with the globalization of business environments and advances in Internet technology, more and more online information and knowledge are stored in the question-answer pair format on the Internet or intranets in different languages. To facilitate users' access to these QA-pair documents using natural language queries in such a multilingual environment, there is a pressing need for the support of cross-lingual question answering (CLQA). In response, this study designs a thesaurus-based CLQA technique. We empirically evaluate our proposed CLQA technique, using a monolingual question answering technique and a machine-translation-based CLQA technique as performance benchmarks. Our empirical evaluation results show that our proposed CLQA technique achieves satisfactory effectiveness when the monolingual question answering technique is used as a performance reference. Moreover, our results suggest that our thesaurus-based CLQA technique significantly outperforms the benchmark machine-translation-based CLQA technique.
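In outline, a thesaurus-based CLQA matcher maps each source-language question term to target-language terms through a bilingual thesaurus and then ranks QA pairs by overlap with the translated term set. The sketch below uses a two-entry toy thesaurus and corpus; the data and the bag-of-words scoring are illustrative assumptions, not the thesis's actual design.

```python
# A toy sketch of thesaurus-based cross-lingual QA-pair retrieval.
# The bilingual thesaurus, the QA corpus, and the overlap scoring are
# illustrative assumptions, not the technique evaluated in the thesis.
THESAURUS = {"退貨": ["return", "refund"], "運費": ["shipping", "cost"]}
QA_PAIRS = [
    ("What is your return policy?", "Items may be returned within 30 days."),
    ("How much does shipping cost?", "Shipping costs five dollars."),
]

def retrieve(question_terms):
    """Translate question terms via the thesaurus, then rank QA pairs."""
    translated = {t for term in question_terms
                  for t in THESAURUS.get(term, [])}  # unknown terms dropped
    def score(pair):
        words = set((pair[0] + " " + pair[1]).lower().replace("?", "").split())
        return len(translated & words)
    return max(QA_PAIRS, key=score)

print(retrieve(["退貨"]))  # -> the return-policy QA pair
```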
4

Question answering using document tagging and question classification

Dubien, Stephen, University of Lethbridge. Faculty of Arts and Science January 2005
Question answering (QA) is a relatively new area of research. QA systems retrieve answers to questions, whereas information retrieval systems (search engines) retrieve documents. This means that question answering systems may well be the next generation of search engines. What is left to be done to allow QA to be the next generation of search engines? The answer is higher accuracy, which can be achieved by investigating methods of question answering. I took the approach of designing a question answering system that is based on document tagging and question classification. Question classification extracts useful information from the question about how to answer it. Document tagging extracts useful information from the documents, which is used in finding the answer to the question. We used different available systems to tag the documents. Our system classifies the questions using manually developed rules. I also investigated different ways in which both these methods can be used to answer questions, and found that our methods had accuracy comparable to some systems that use deeper processing techniques. This thesis includes investigations into the modules of a question answering system and gives insights into how to develop a question answering system based on document tagging and question classification. I also evaluated our current system with the questions from the TREC 2004 question answering track. / viii, 139 leaves ; 29 cm.
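Rule-based question classification of the kind described here can be illustrated in a few lines: match the question's opening words against hand-written patterns and emit an expected answer type, which then tells the answer extractor which document tags to look for. The patterns and type labels below are illustrative assumptions, not the thesis's actual rule set.

```python
# A toy sketch of manually developed question-classification rules.
# The patterns and answer-type labels are illustrative assumptions.
RULES = [
    (("who",), "PERSON"),
    (("where",), "LOCATION"),
    (("when",), "DATE"),
    (("how", "many"), "NUMBER"),
    (("how", "much"), "QUANTITY"),
]

def classify(question):
    """Return the expected answer type for a question, or OTHER."""
    tokens = question.lower().rstrip("?").split()
    for pattern, answer_type in RULES:
        if tuple(tokens[:len(pattern)]) == pattern:
            return answer_type
    return "OTHER"

print(classify("How many moons does Mars have?"))  # -> 'NUMBER'
```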
5

Answer set programming with clause learning

Ward, Jeffrey Alan, January 2004
Thesis (Ph. D.)--Ohio State University, 2004. / Title from first page of PDF file. Document formatted into pages; contains xv, 170 p. : ill. Advisors: Timothy J. Long and John S. Schlipf, Department of Computer Science and Engineering. Includes bibliographical references (p. 165-170).
6

Computational Natural Language Inference: Robust and Interpretable Question Answering

Sharp, Rebecca January 2017
We address the challenging task of computational natural language inference, by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects the question with its answer, and we learn to approximate this inference with machine learning. In particular, here we present four approaches to question answering, each of which shows a significant improvement in performance over baseline methods. In our first approach, we make use of the underlying discourse structure inherent in free text (i.e., whether the text contains an explanation, elaboration, contrast, etc.) in order to increase the amount of training data for (and subsequently the performance of) a monolingual alignment model. In our second work, we propose a framework for training customized lexical semantics models such that each one represents a single semantic relation. We use causality as a use case, and demonstrate that our customized model is able both to identify causal relations and to significantly improve our ability to answer causal questions. We then propose two approaches that seek to answer questions by learning to rank human-readable justifications for the answers, such that the model selects the answer with the best justification. The first uses a graph-structured representation of the background knowledge and performs information aggregation to construct multi-sentence justifications. The second reduces pre-processing costs by limiting itself to a single sentence and using a neural network to learn a latent representation of the background knowledge. For each of these, we show that, in addition to a significant improvement in correctly answering questions, we also outperform a strong baseline in terms of the quality of the answer justifications given.
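The justification-ranking idea in the last two approaches reduces to: for each candidate answer, score its best supporting sentence, and return the answer whose justification scores highest. The sketch below substitutes a Jaccard word-overlap scorer for the learned graph-based and neural ranking models in the dissertation; the scorer and toy data are illustrative assumptions only.

```python
import re

# A schematic sketch of answering by ranking justifications. The Jaccard
# word-overlap scorer stands in for the learned ranking models described
# in the abstract; the toy data is illustrative.
def overlap(a, b):
    wa = set(re.findall(r"\w+", a.lower()))
    wb = set(re.findall(r"\w+", b.lower()))
    return len(wa & wb) / max(1, len(wa | wb))

def best_answer(question, candidates):
    """candidates: dict mapping each answer to its justification sentences."""
    def best_justification_score(answer):
        return max(overlap(question + " " + answer, j)
                   for j in candidates[answer])
    return max(candidates, key=best_justification_score)

candidates = {
    "Mercury": ["Mercury is the closest planet to the Sun."],
    "Venus": ["Venus is the hottest planet in the solar system."],
}
print(best_answer("Which planet is closest to the Sun?", candidates))
# -> 'Mercury'
```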
7

Automated question answering : template-based approach

Sneiders, Eriks January 2002
The rapid growth in the development of Internet-based information systems increases the demand for natural language interfaces that are easy to set up and maintain. Unfortunately, the problem of understanding natural language queries is far from being solved. Therefore this research proposes a simpler task: matching a one-sentence-long user question to a number of question templates, which cover the knowledge domain of the information system, without in-depth understanding of the user question itself. The research started with development of an FAQ (Frequently Asked Question) answering system that provides pre-stored answers to user questions asked in ordinary English. The language processing technique developed for FAQ retrieval does not analyze user questions. Instead, analysis is applied to FAQs in the database long before any user questions are submitted. Thus, the work of FAQ retrieval is reduced to keyword matching without understanding the questions, and the system still creates an illusion of intelligence. Further, the research adapted the FAQ answering technique to a question-answering interface for a structured database, e.g., a relational database. The entity-relationship model of the database is covered with an exhaustive collection of question templates - dynamic, parameterized "frequently asked questions" - that describe the entities, their attributes, and the relationships in the form of natural language questions. Unlike a static FAQ, a question template contains entity slots - free space for data instances that represent the main concepts in the question. In order to answer a user question, the system finds matching question templates and data instances that fill the entity slots. The associated answer templates create the answer. Finally, the thesis introduces a generic model of template-based question answering which is a summary and generalization of the features common to the above systems: they (i) split the application-specific knowledge domain into a number of question-specific knowledge domains, (ii) attach a question template, whose answer is known in advance, to each knowledge domain, and (iii) match the submitted user question to each question template within the context of its own knowledge domain.

Keywords: automated question answering, FAQ answering, question-answering system, template-based question answering, question template, natural language based interface
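A question template with entity slots can be pictured as a pattern whose named slots are filled from the database: a matching template plus its associated answer template yields the answer. The regular-expression slot syntax, the course table, and the answer template below are illustrative assumptions, not Sneiders' actual representation.

```python
import re

# A toy sketch of template-based QA with entity slots. The template
# syntax, the course table, and the answer template are illustrative
# assumptions, not the thesis's actual representation.
TEMPLATES = [
    (r"when does (?P<course>[\w ]+) start\??",
     "{course} starts on {start_date}."),
]
COURSES = {"database systems": {"start_date": "2002-09-02"}}

def answer(question):
    """Match the question against each template and fill entity slots."""
    for pattern, answer_template in TEMPLATES:
        match = re.fullmatch(pattern, question.strip().lower())
        if match and match.group("course") in COURSES:
            slots = COURSES[match.group("course")]
            return answer_template.format(course=match.group("course"), **slots)
    return None

print(answer("When does Database Systems start?"))
# -> 'database systems starts on 2002-09-02.'
```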
8

Topic indexing and retrieval for open domain factoid question answering

Ahn, Kisuh January 2009
Factoid Question Answering is an exciting area of Natural Language Engineering that has the potential to replace one major use of search engines today. In this dissertation, I introduce a new method of handling factoid questions whose answers are proper names. The method, Topic Indexing and Retrieval, addresses two issues that prevent current factoid QA systems from realising this potential: they can’t satisfy users’ demand for almost immediate answers, and they can’t produce answers based on evidence distributed across a corpus. The first issue arises because the architecture common to QA systems is not easily scaled to heavy use, since so much of the work is done on-line: text retrieved by information retrieval (IR) undergoes expensive and time-consuming answer extraction while the user awaits an answer. If QA systems are to become as heavily used as popular web search engines, this massive processing bottleneck must be overcome. The second issue, how to make use of the distributed evidence in a corpus, is relevant when no single passage in the corpus provides sufficient evidence for an answer to a given question. QA systems commonly look for a text span that contains sufficient evidence to both locate and justify an answer. But this will fail in the case of questions that require evidence from more than one passage in the corpus. The Topic Indexing and Retrieval method developed in this thesis addresses both of these issues for factoid questions with proper name answers by restructuring the corpus in such a way that it enables direct retrieval of answers using off-the-shelf IR. The method has been evaluated on 377 TREC questions with proper name answers and 41 questions that require multiple pieces of evidence from different parts of the TREC AQUAINT corpus. In the first evaluation, scores of 0.340 in Accuracy and 0.395 in Mean Reciprocal Rank (MRR) show that Topic Indexing and Retrieval performs well for this type of question. The second evaluation compares performance on the corpus of 41 multi-evidence questions by a question-factoring baseline method that can be used with the standard QA architecture and by my Topic Indexing and Retrieval method. The superior performance of the latter (MRR of 0.454 against 0.341) demonstrates its value in answering such questions.
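The restructuring step can be pictured as building one pseudo-document per candidate proper-name answer from every sentence that mentions it, then answering a question by retrieving topics rather than passages, so the top-ranked topic is itself the answer. The sketch below uses TF-IDF retrieval over a two-topic toy corpus as a stand-in for the thesis's off-the-shelf IR engine; all names and data are illustrative.

```python
# A minimal sketch of topic indexing: one pseudo-document per candidate
# proper-name answer, retrieved directly with off-the-shelf IR. The toy
# corpus and the TF-IDF retrieval model are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic_docs = {  # proper name -> concatenation of sentences mentioning it
    "Edinburgh": "Edinburgh is the capital of Scotland and hosts a festival.",
    "Glasgow": "Glasgow is the largest city in Scotland.",
}
names = list(topic_docs)
vectorizer = TfidfVectorizer().fit(topic_docs.values())
doc_matrix = vectorizer.transform(topic_docs.values())

def answer(question):
    """Retrieve the topic (proper name) most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return names[int(sims.argmax())]

print(answer("What is the capital of Scotland?"))  # -> 'Edinburgh'
```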
9

Acquiring syntactic and semantic transformations in question answering

Kaisser, Michael January 2010
One and the same fact in natural language can be expressed in many different ways, using different words and/or different syntax. This phenomenon, commonly called paraphrasing, is the main reason why Natural Language Processing (NLP) is such a challenging task. This becomes especially obvious in Question Answering (QA), where the task is to automatically answer a question posed in natural language, usually over a collection of natural language texts. It cannot be assumed that an answer sentence to a question uses the same words as the question, or that these words are combined in the same way by the same syntactic rules. In this thesis we describe methods that can help to address this problem. Firstly, we explore how lexical resources, namely FrameNet, PropBank and VerbNet, can be used to recognize a wide range of syntactic realizations that an answer sentence to a given question can have. We find that our methods based on these resources work well for web-based Question Answering. However, we identify two problems: (1) all three resources as yet have significant coverage issues; (2) these resources are not suitable for identifying answer sentences that show some form of indirect evidence. While the first problem currently hinders performance, it is not a theoretical problem that renders the approach unsuitable; it rather shows that more effort has to be made to produce more complete resources. The second problem is more persistent. Many valid answer sentences, especially in small, journalistic corpora, do not provide direct evidence for a question; rather, they strongly suggest an answer without logically implying it. Semantically motivated resources like FrameNet, PropBank and VerbNet cannot easily be employed to recognize such forms of indirect evidence. In order to investigate ways of dealing with indirect evidence, we used Amazon’s Mechanical Turk to collect over 8,000 manually identified answer sentences from the AQUAINT corpus for the over 1,900 TREC questions from the 2002 to 2006 QA tracks. The pairs of answer sentences and their corresponding questions form the QASP corpus, which we released to the public in April 2008. In this dissertation, we use the QASP corpus to develop an approach to QA based on matching dependency relations between answer candidates and question constituents in the answer sentences. By acquiring knowledge about syntactic and semantic transformations from dependency relations in the QASP corpus, additional answer candidates can be identified that could not be linked to the question with our first approach.
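The core matching step can be sketched with hand-written dependency triples standing in for a parser's output: align (head, relation) pairs shared by the question and the answer sentence, and wherever the question has a wh-word dependent, the aligned dependent in the answer sentence becomes an answer candidate. Everything below is an illustrative assumption, not the dissertation's actual representation, and it omits the learned transformations that extend the basic alignment.

```python
# A toy sketch of matching dependency relations between a question and an
# answer sentence. The hand-written (head, relation, dependent) triples
# stand in for real parser output; the alignment rule is illustrative.
WH_WORDS = {"who", "what", "where", "when"}

question_deps = {("invented", "nsubj", "who"),
                 ("invented", "dobj", "telephone")}
answer_deps = {("invented", "nsubj", "Bell"),
               ("invented", "dobj", "telephone")}

def candidate_answers(q_deps, a_deps):
    """Where the question has a wh-word under some (head, relation), the
    dependent under the same (head, relation) in the answer sentence is
    proposed as an answer candidate."""
    candidates = []
    for head, relation, dependent in q_deps:
        if dependent in WH_WORDS:
            candidates += [a_dep for a_head, a_rel, a_dep in a_deps
                           if (a_head, a_rel) == (head, relation)]
    return candidates

print(candidate_answers(question_deps, answer_deps))  # -> ['Bell']
```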
