  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Automated question answering : template-based approach

Sneiders, Eriks January 2002 (has links)
The rapid growth in the development of Internet-based information systems increases the demand for natural language interfaces that are easy to set up and maintain. Unfortunately, the problem of understanding natural language queries is far from being solved. Therefore this research proposes a simpler task of matching a one-sentence-long user question to a number of question templates, which cover the knowledge domain of the information system, without in-depth understanding of the user question itself.

The research started with development of an FAQ (Frequently Asked Question) answering system that provides pre-stored answers to user questions asked in ordinary English. The language processing technique developed for FAQ retrieval does not analyze user questions. Instead, analysis is applied to FAQs in the database long before any user questions are submitted. Thus, the work of FAQ retrieval is reduced to keyword matching without understanding the questions, and the system still creates an illusion of intelligence.

Further, the research adapted the FAQ answering technique to a question-answering interface for a structured database, e.g., a relational database. The entity-relationship model of the database is covered with an exhaustive collection of question templates - dynamic, parameterized "frequently asked questions" - that describe the entities, their attributes, and the relationships in the form of natural language questions. Unlike a static FAQ, a question template contains entity slots - free space for data instances that represent the main concepts in the question. In order to answer a user question, the system finds matching question templates and data instances that fill the entity slots. The associated answer templates create the answer.

Finally, the thesis introduces a generic model of template-based question answering which is a summary and generalization of the features common to the above systems: they (i) split the application-specific knowledge domain into a number of question-specific knowledge domains, (ii) attach a question template, whose answer is known in advance, to each knowledge domain, and (iii) match the submitted user question to each question template within the context of its own knowledge domain.

Keywords: automated question answering, FAQ answering, question-answering system, template-based question answering, question template, natural language based interface
12

Topic indexing and retrieval for open domain factoid question answering

Ahn, Kisuh January 2009 (has links)
Factoid Question Answering is an exciting area of Natural Language Engineering that has the potential to replace one major use of search engines today. In this dissertation, I introduce a new method of handling factoid questions whose answers are proper names. The method, Topic Indexing and Retrieval, addresses two issues that prevent current factoid QA systems from realising this potential: they cannot satisfy users' demand for almost immediate answers, and they cannot produce answers based on evidence distributed across a corpus. The first issue arises because the architecture common to QA systems is not easily scaled to heavy use, since so much of the work is done on-line: text retrieved by information retrieval (IR) undergoes expensive and time-consuming answer extraction while the user awaits an answer. If QA systems are to become as heavily used as popular web search engines, this massive processing bottleneck must be overcome. The second issue, how to make use of the distributed evidence in a corpus, is relevant when no single passage in the corpus provides sufficient evidence for an answer to a given question. QA systems commonly look for a text span that contains sufficient evidence to both locate and justify an answer. But this will fail in the case of questions that require evidence from more than one passage in the corpus. The Topic Indexing and Retrieval method developed in this thesis addresses both these issues for factoid questions with proper name answers by restructuring the corpus in such a way that it enables direct retrieval of answers using off-the-shelf IR. The method has been evaluated on 377 TREC questions with proper name answers and on 41 questions that require multiple pieces of evidence from different parts of the TREC AQUAINT corpus. In the first evaluation, scores of 0.340 in Accuracy and 0.395 in Mean Reciprocal Rank (MRR) show that Topic Indexing and Retrieval performs well for this type of question. A second evaluation compares performance on the corpus of 41 multi-evidence questions between a question-factoring baseline method, which can be used with the standard QA architecture, and my Topic Indexing and Retrieval method. The superior performance of the latter (MRR of 0.454 against 0.341) demonstrates its value in answering such questions.
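The Accuracy and MRR figures quoted above are standard rank-based evaluation metrics. A minimal sketch of how they are computed, using toy ranks rather than the thesis data:

```python
# Mean Reciprocal Rank (MRR) and Accuracy over a set of questions.
# Each entry is the 1-based rank of the first correct answer for one
# question, or None when no correct answer was returned.

def mrr(ranks):
    """Average of 1/rank over all questions; unanswered questions score 0."""
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

def accuracy(ranks):
    """Fraction of questions whose top-ranked (rank 1) answer is correct."""
    return sum(1 for r in ranks if r == 1) / len(ranks)

ranks = [1, 3, None, 2]            # hypothetical outcomes for four questions
print(round(mrr(ranks), 3))        # (1 + 1/3 + 0 + 1/2) / 4 = 0.458
print(accuracy(ranks))             # 1 of 4 questions correct at rank 1 = 0.25
```

MRR rewards systems that place a correct answer high in the ranked list even when it is not first, which is why the thesis reports both metrics.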
13

Learning Automatic Question Answering from Community Data

Wang, Di 21 August 2012 (has links)
Although traditional search engines can retrieve thousands or millions of web links related to input keywords, users still need to manually locate answers to their information needs in multiple returned documents or initiate further searches. Question Answering (QA) is an effective paradigm for addressing this problem: it automatically finds one or more accurate and concise answers to natural language questions. Existing QA systems often rely on off-the-shelf Natural Language Processing (NLP) resources and tools that are not optimized for the QA task. Additionally, they tend to require hand-crafted rules to extract properties from input questions, which in turn means that building comprehensive QA systems is costly in time and manpower. In this thesis, we study the potential of using Community Question Answering (cQA) archives as a central building block of QA systems. To that end, this thesis proposes two cQA-based query expansion and structured query generation approaches, one employed in Text-based QA and the other in Ontology-based QA. In addition, based on the above structured query generation method, an end-to-end open-domain Ontology-based QA system is developed and evaluated on a standard factoid QA benchmark.
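The general shape of cQA-based query expansion can be sketched as follows. The archive entries, the word-overlap similarity, and the term sets are toy assumptions for illustration, not the thesis' actual method or data:

```python
# Hedged sketch of query expansion from a community QA archive: find archived
# questions similar to the user question and merge in terms drawn from their
# threads, enriching the query before retrieval.

# Toy archive: (archived question, terms harvested from its answer thread).
ARCHIVE = [
    ("how do i fix a flat bicycle tire", {"puncture", "patch", "inner", "tube"}),
    ("best way to repair a flat tyre", {"repair", "pump", "valve"}),
    ("how to train a dog to sit", {"treat", "command", "reward"}),
]

def expand(question: str, top_k: int = 2) -> set[str]:
    """Rank archived questions by word overlap with the user question and
    union in the harvested terms of the top_k most similar ones."""
    q_words = set(question.lower().split())
    scored = sorted(
        ARCHIVE,
        key=lambda item: len(q_words & set(item[0].split())),
        reverse=True,
    )
    expansion = set()
    for _, terms in scored[:top_k]:
        expansion |= terms
    return q_words | expansion
```

A real system would use a stronger similarity measure than raw word overlap, but the pipeline shape (rank archive, harvest terms, expand query) is the same.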
14

Acquiring syntactic and semantic transformations in question answering

Kaisser, Michael January 2010 (has links)
One and the same fact in natural language can be expressed in many different ways by using different words and/or a different syntax. This phenomenon, commonly called paraphrasing, is the main reason why Natural Language Processing (NLP) is such a challenging task. This becomes especially obvious in Question Answering (QA), where the task is to automatically answer a question posed in natural language, usually against a text collection also consisting of natural language texts. It cannot be assumed that an answer sentence to a question uses the same words as the question, or that these words are combined in the same way by using the same syntactic rules. In this thesis we describe methods that can help to address this problem. Firstly we explore how lexical resources, namely FrameNet, PropBank and VerbNet, can be used to recognize a wide range of syntactic realizations that an answer sentence to a given question can have. We find that our methods based on these resources work well for web-based Question Answering. However we identify two problems: 1) all three resources as yet have significant coverage issues; 2) these resources are not suitable for identifying answer sentences that show some form of indirect evidence. While the first problem currently hinders performance, it is not a theoretical problem that renders the approach unsuitable; it rather shows that more effort has to be made to produce more complete resources. The second problem is more persistent. Many valid answer sentences, especially in small, journalistic corpora, do not provide direct evidence for a question; rather, they strongly suggest an answer without logically implying it. Semantically motivated resources like FrameNet, PropBank and VerbNet cannot easily be employed to recognize such forms of indirect evidence.
In order to investigate ways of dealing with indirect evidence, we used Amazon’s Mechanical Turk to collect over 8,000 manually identified answer sentences from the AQUAINT corpus to the over 1,900 TREC questions from the 2002 to 2006 QA tracks. The pairs of answer sentences and their corresponding questions form the QASP corpus, which we released to the public in April 2008. In this dissertation, we use the QASP corpus to develop an approach to QA based on matching dependency relations between answer candidates and question constituents in the answer sentences. By acquiring knowledge about syntactic and semantic transformations from dependency relations in the QASP corpus, additional answer candidates can be identified that could not be linked to the question with our first approach.
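The dependency-relation matching idea can be illustrated with a toy example. The hand-written (head, relation, dependent) triples below stand in for real parser output; the sentences, relation labels, and alignment rule are invented for illustration, not taken from the thesis:

```python
# Toy sketch of matching dependency relations between a question and a
# candidate answer sentence: the word that fills the question's WH-slot
# in an aligned relation is proposed as the answer candidate.

# Triples for "Who invented the telephone?" ("WHO" marks the open slot).
question_deps = {
    ("invented", "nsubj", "WHO"),
    ("invented", "dobj", "telephone"),
}

# Triples for "Alexander Graham Bell invented the telephone in 1876."
answer_deps = {
    ("invented", "nsubj", "Bell"),
    ("invented", "dobj", "telephone"),
    ("invented", "prep_in", "1876"),
}

def find_answer(q_deps, a_deps):
    """Align the question's WH-bearing relation with an answer relation
    sharing the same head and relation label; return the aligned dependent."""
    for head, rel, dep in q_deps:
        if dep == "WHO":
            for h2, r2, d2 in a_deps:
                if h2 == head and r2 == rel:
                    return d2
    return None
```

The transformations learned from the QASP corpus would, in effect, license alignments where the head or relation label differs between question and answer sentence, which this rigid sketch cannot handle.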
15

Formally-based tools and techniques for human-computer dialogues

Alexander, Heather January 1986 (has links)
With ever cheaper and more powerful technology, the proliferation of computer systems, and higher expectations of their users, the user interface is now seen as a crucial part of any interactive system. As the designers and users of interactive software have found, though, it can be both difficult and costly to create good interactive software. It is therefore appropriate to look at ways of "engineering" the interface as well as the application, which we choose to do by using the software engineering techniques of specification and prototyping. Formally specifying the user interface allows the designer to reason about its properties in the light of the many guidelines on the subject. Early availability of prototypes of the user interface allows the designer to experiment with alternative options and to elicit feedback from potential users. This thesis presents tools and techniques (collectively called SPI) for specifying and prototyping the dialogues between an interactive system and its users. They are based on a formal specification and rapid prototyping method and notation called me too, and were originally designed as an extension to me too. They have also been implemented under UNIX*, thus enabling a transition from the formal specification to its implementation. *UNIX is a trademark of AT&T Bell Laboratories.
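The specify-then-prototype idea can be illustrated with a dialogue modelled as a finite state machine. This is only a generic illustration of formally specified dialogues; it is not the me too notation or the SPI tools:

```python
# A human-computer dialogue specified as a finite state machine: states,
# user events, and a transition table. Executing the table is itself a
# rapid prototype of the specified dialogue.

TRANSITIONS = {
    ("start", "login"): "authenticating",
    ("authenticating", "ok"): "main_menu",
    ("authenticating", "fail"): "start",
    ("main_menu", "quit"): "done",
}

def run_dialogue(events):
    """Drive the dialogue through a sequence of user events; events with no
    transition leave the state unchanged, which the prototype makes visible."""
    state = "start"
    trace = [state]
    for e in events:
        state = TRANSITIONS.get((state, e), state)
        trace.append(state)
    return trace
```

Because the transition table is data, a designer can inspect its properties (e.g. reachability of every state) and run it interactively with test users before any real implementation exists.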
17

Query processing strategies in a distributed data base.

Bodorik, Peter. Carleton University. Dissertation. Engineering, Electrical. January 1985 (has links)
Thesis (Ph. D.)--Carleton University, 1985. / Also available in electronic format on the Internet.
18

Language modeling approaches to question answering

Banerjee, Protima. Han, Hyoil. January 2009 (has links)
Thesis (Ph.D.)--Drexel University, 2009. / Includes abstract and vita. Includes bibliographical references (leaves 187-198).
19

Understanding the sustainability of online question answering communities in China : the case of "Yahoo! Answers China"

Jin, Xiao Ling Kathy. January 2009 (has links) (PDF)
Thesis (Ph.D.)--City University of Hong Kong, 2009. / "Submitted to Department of Information Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 88-106).
20

A methodology for domain-specific conceptual data modeling and querying

Tian, Hao. January 2007 (has links)
Thesis (Ph. D.)--Georgia State University, 2007. / Rajshekhar Sunderraman, committee chair; Paul S. Katz, Yanqing Zhang, Ying Zhu, committee members. Electronic text (128 p. : ill.) : digital, PDF file. Description based on contents viewed Oct. 15, 2007; title from file title page. Includes bibliographical references (p. 124-128).
