1

A computational framework for question processing in community question answering services

January 2014 (has links)
Community Question Answering (CQA) services, such as Yahoo! Answers and Baidu Zhidao, provide a platform for a great number of users to ask and answer for their own needs. In recent years, however, the efficiency of CQA services for question solving and knowledge learning has been challenged by a sharp increase in the questions raised in the communities. To facilitate answerers' access to proper questions and help askers get information more efficiently, in this thesis we propose a computational framework for question processing in CQA services. / The framework consists of three components: popularity analysis and prediction, routing, and structuralization. The first component analyzes the factors affecting question popularity and observes that the interaction of users and topics leads to differences in question popularity. Based on these findings, we propose a mutual reinforcement-based label propagation algorithm to predict question popularity using features of question texts and asker profiles. Empirical results demonstrate that our algorithm is more effective in distinguishing high-popularity questions from low-popularity ones than other state-of-the-art baselines. / The second component aims to route new questions to potential answerers in CQA services. The proposed question routing (QR) framework considers both answerer expertise and answerer availability. To estimate answerer expertise, we propose three models. The first is derived from the query likelihood language model, and the latter two utilize answer quality to refine the first. To estimate answerer availability, we employ an autoregressive model. Experimental results demonstrate that leveraging answer quality can greatly improve the performance of QR. In addition, utilizing similar answerers' answer quality on similar questions provides more accurate expertise estimation and thus better QR performance. Moreover, answerer availability estimation further boosts the performance of QR. / Expertise estimation plays a key role in QR. However, current approaches employ full profiles to estimate all answerers' expertise, which is ineffective and time-consuming. To address this problem, we construct category-answerer indexes for filtering irrelevant answerers and develop category-sensitive language models for estimating answerer expertise. Experimental results show that, first, category-answerer indexes produce a much shorter list of relevant answerers to be routed, substantially reducing computational costs; and second, category-sensitive language models obtain more accurate expertise estimates than state-of-the-art baselines. / In the third component, we propose a novel hierarchical entity-based approach to structuralize questions in CQA services. Traditional list-based organization of questions is not effective for content browsing and knowledge learning due to the large volume of documents. To address this problem, we utilize a large-scale entity repository and construct a three-step framework to structuralize questions into "cluster entity trees" (CETs). Experimental results show the effectiveness of the framework in constructing CETs. We further evaluate the performance of CETs on knowledge organization from both user and system aspects. From the user aspect, our user study demonstrates that, with CET-based organization, users perform significantly better in knowledge learning than with list-based organization. From the system aspect, CETs substantially boost the performance of question search through re-ranking. / In summary, this thesis contributes both a conceptual framework and an empirical foundation to question processing in CQA services. / Li, Baichuan. / Thesis (Ph.D.) Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 138-161). / Abstracts also in Chinese.
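
The query likelihood approach to answerer expertise described in this abstract is a standard retrieval idea; the following is a minimal sketch, assuming a bag-of-words answerer profile and Jelinek-Mercer smoothing. The weight `lam`, the toy profiles, and the function names are invented for illustration, not details from the thesis.

```python
import math
from collections import Counter

def expertise_score(question_terms, profile_terms, collection_counts,
                    collection_size, lam=0.8):
    """Log query-likelihood of a question under an answerer's profile,
    smoothed against the whole collection (Jelinek-Mercer)."""
    profile_counts = Counter(profile_terms)
    profile_size = max(len(profile_terms), 1)
    score = 0.0
    for w in question_terms:
        p_profile = profile_counts[w] / profile_size
        p_collection = collection_counts[w] / collection_size
        score += math.log(lam * p_profile + (1 - lam) * p_collection + 1e-12)
    return score

# Route a new question to the highest-scoring answerer (toy data).
profiles = {
    "alice": "python machine learning classifier model training".split(),
    "bob": "cooking recipes baking bread".split(),
}
collection = [w for p in profiles.values() for w in p]
coll_counts, coll_size = Counter(collection), len(collection)
question = "how to train a machine learning model".split()
ranked = sorted(profiles, key=lambda a: expertise_score(
    question, profiles[a], coll_counts, coll_size), reverse=True)
print(ranked[0])  # -> "alice"
```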
2

The process of question answering

Lehnert, Wendy G. January 1977 (has links)
Thesis--Yale. / Includes bibliographical references (leaves 469-472).
3

Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgements

Marton, Gregory 09 January 2006 (has links)
TREC Definition and Relationship questions are evaluated on the basis of information nuggets that may be contained in system responses. Human evaluators provide informal descriptions of each nugget, and judgements (assignments of nuggets to responses) for each response submitted by participants. The best present automatic evaluation for these kinds of questions is Pourpre. Pourpre uses a stemmed unigram similarity of responses with nugget descriptions, yielding an aggregate result that is difficult to interpret but is useful for relative comparison. Nuggeteer, by contrast, uses both the human descriptions and the human judgements, and makes binary decisions about each response, so that the end result is as interpretable as the official score. I explore n-gram length, use of judgements, stemming, and term weighting, and provide a new algorithm quantitatively comparable to, and qualitatively better than, the state of the art.
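
A rough sketch of the stemmed-unigram matching that Pourpre-style scoring relies on, with a thresholded binary decision per response in the spirit of Nuggeteer; the crude stemmer and the threshold value are invented stand-ins, not either system's actual implementation.

```python
import re

def stem(word):
    # Crude suffix-stripping stand-in for a real stemmer (e.g., Porter).
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def tokens(text):
    return {stem(w) for w in re.findall(r"[a-z0-9]+", text.lower())}

def unigram_similarity(response, nugget_description):
    """Fraction of stemmed nugget-description terms found in the response."""
    nugget_terms = tokens(nugget_description)
    if not nugget_terms:
        return 0.0
    return len(nugget_terms & tokens(response)) / len(nugget_terms)

def contains_nugget(response, nugget_description, threshold=0.5):
    # Binary assignment of a nugget to a response.
    return unigram_similarity(response, nugget_description) >= threshold

print(contains_nugget("He was born in Hawaii in 1961",
                      "born in Hawaii"))  # True
```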
4

Functional inferences over heterogeneous data

Nuamah, Kwabena Amoako January 2018 (has links)
Inference enables an agent to create new knowledge from old or discover implicit relationships between concepts in a knowledge base (KB), provided that appropriate techniques are employed to deal with ambiguous, incomplete and sometimes erroneous data. The ever-increasing volumes of KBs on the web, available for use by automated systems, present an opportunity to leverage the available knowledge in order to improve the inference process in automated query answering systems. This thesis focuses on the FRANK (Functional Reasoning for Acquiring Novel Knowledge) framework that responds to queries where no suitable answer is readily contained in any available data source, using a variety of inference operations. Most question answering and information retrieval systems assume that answers to queries are stored in some form in the KB, thereby limiting the range of answers they can find. We take an approach motivated by rich forms of inference using techniques, such as regression, for prediction. For instance, FRANK can answer “what country in Europe will have the largest population in 2021?" by decomposing Europe geo-spatially, using regression on country population for past years and selecting the country with the largest predicted value. Our technique, which we refer to as Rich Inference, combines heuristics, logic and statistical methods to infer novel answers to queries. It also determines what facts are needed for inference, searches for them, and then integrates the diverse facts and their formalisms into a local query-specific inference tree. Our primary contribution in this thesis is the inference algorithm on which FRANK works. This includes (1) the process of recursively decomposing queries in a way that allows variables in the query to be instantiated by facts in KBs; (2) the use of aggregate functions to perform arithmetic and statistical operations (e.g., prediction) to infer new values from child nodes; and (3) the estimation and propagation of uncertainty values into the returned answer based on errors introduced by noise in the KBs or errors introduced by aggregate functions. We also discuss many of the core concepts and modules that constitute FRANK. We explain the internal “alist” representation of FRANK that gives it the required flexibility to tackle different kinds of problems with minimal changes to its internal representation. We discuss the grammar for a simple query language that allows users to express queries in a formal way, such that we avoid the complexities of natural language queries, a problem that falls outside the scope of this thesis. We evaluate the framework with datasets from open sources.
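
A toy illustration of the decompose-predict-aggregate pattern behind the Europe-population example above, assuming made-up population figures; the plain least-squares helper and the dictionary KB are invented for illustration and do not reflect FRANK's actual alist machinery.

```python
def linear_regression(points):
    """Ordinary least squares fit y = a*x + b over (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    a = sxy / sxx
    return a, mean_y - a * mean_x

def predict(points, year):
    a, b = linear_regression(points)
    return a * year + b

# Toy KB: past populations (millions) per country -- illustrative numbers only.
kb = {
    "Germany": [(2017, 82.7), (2018, 82.9), (2019, 83.1)],
    "France": [(2017, 66.9), (2018, 67.2), (2019, 67.4)],
}

# Decompose "largest population in Europe in 2021" geo-spatially into
# per-country subqueries, predict each via regression, aggregate with max.
answer = max(kb, key=lambda country: predict(kb[country], 2021))
print(answer)  # -> "Germany"
```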
5

Cross-Lingual Question Answering for Corpora with Question-Answer Pairs

Huang, Shiuan-Lung 02 August 2005 (has links)
Question answering from a corpus of question-answer (QA) pairs accepts a user question in a natural language and retrieves relevant QA pairs from the corpus. Most existing question answering techniques are monolingual in nature: the language used for expressing a user question is identical to that of the QA pairs in the corpus. However, with the globalization of business environments and advances in Internet technology, more and more online information and knowledge are stored in question-answer pair format on the Internet or intranets in different languages. To facilitate users' access to these QA-pair documents using natural language queries in such a multilingual environment, there is a pressing need for support of cross-lingual question answering (CLQA). In response, this study designs a thesaurus-based CLQA technique. We empirically evaluate our proposed CLQA technique, using a monolingual question answering technique and a machine translation-based CLQA technique as performance benchmarks. Our empirical evaluation results show that our proposed CLQA technique achieves satisfactory effectiveness when using the monolingual question answering technique as a performance reference. Moreover, the results suggest our proposed thesaurus-based CLQA technique significantly outperforms the benchmark machine translation-based CLQA technique.
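
A minimal sketch of the thesaurus-based idea, assuming query terms are expanded through a bilingual thesaurus and stored QA pairs are scored by term overlap; the thesaurus entries and scoring function are invented for illustration, not the study's actual technique.

```python
from collections import Counter

# Toy bilingual thesaurus mapping source-language terms to
# target-language candidates (invented entries for illustration).
thesaurus = {
    "preis": ["price", "cost"],
    "lieferung": ["delivery", "shipping"],
}

def translate_query(source_terms):
    """Expand each source-language term into all thesaurus candidates."""
    target_terms = []
    for term in source_terms:
        target_terms.extend(thesaurus.get(term, []))
    return target_terms

def score(qa_pair_text, query_terms):
    counts = Counter(qa_pair_text.lower().split())
    return sum(counts[t] for t in query_terms)

qa_corpus = [
    "What is the delivery time for overseas orders?",
    "How do I reset my password?",
]
query = translate_query(["lieferung"])
best = max(qa_corpus, key=lambda qa: score(qa, query))
print(best)  # -> the delivery-time QA pair
```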
6

KGScore-Open: Leveraging Knowledge Graph Semantics For Open-QA Evaluation

Hausman, Nicholas 01 June 2024 (has links) (PDF)
Evaluating active Question Answering (QA) systems, as users ask questions outside of the original testing data, has proven difficult, because answer quality is hard to gauge without ground truth responses. We propose KGScore-Open, a configurable system capable of scoring questions and answers in Open Domain Question Answering (Open-QA) without ground truth answers by leveraging DBPedia, a Knowledge Graph (KG) derived from Wikipedia, to score question-answer pairs. The system maps entities from questions and answers to DBPedia nodes, constructs a Knowledge Graph based on these entities, and calculates a relatedness score. Our system is validated on multiple datasets, achieving up to 83% accuracy in differentiating relevant from irrelevant answers in the Natural Questions dataset, 55% accuracy in classifying correct versus incorrect answers (hallucinations) in the TruthfulQA and HaluEval datasets, and 54% accuracy on the QA-Eval task using the EVOUNA dataset. The contributions of this work include a novel scoring system indicating both relevancy and answer confidence in Open-QA without the need for ground truth answers, demonstrated efficacy across various tasks, and an extendable framework applicable to different KGs for evaluating QA systems in other domains.
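
A rough sketch of scoring an answer by graph relatedness between question and answer entities, assuming entity linking has already happened; the toy edge list and inverse-path-length measure are invented stand-ins for DBPedia and for KGScore-Open's actual relatedness computation.

```python
from collections import deque

# Toy undirected KG over DBPedia-like entities (edges invented for illustration).
edges = {
    "Barack_Obama": ["United_States", "Democratic_Party"],
    "United_States": ["Barack_Obama", "Washington_D.C."],
    "Democratic_Party": ["Barack_Obama"],
    "Washington_D.C.": ["United_States"],
    "Photosynthesis": ["Chlorophyll"],
    "Chlorophyll": ["Photosynthesis"],
}

def distance(a, b):
    """Breadth-first shortest-path length between two entities, or None."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nxt in edges.get(node, []):
            if nxt == b:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

def relatedness(question_entities, answer_entities):
    """Average inverse path length between question and answer entities."""
    scores = []
    for q in question_entities:
        for a in answer_entities:
            d = distance(q, a)
            scores.append(0.0 if d is None else 1.0 / (d + 1))
    return sum(scores) / len(scores) if scores else 0.0

print(relatedness(["Barack_Obama"], ["Washington_D.C."]))  # related, > 0
print(relatedness(["Barack_Obama"], ["Photosynthesis"]))   # unrelated, 0.0
```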
7

Question answering using document tagging and question classification

Dubien, Stephen, University of Lethbridge. Faculty of Arts and Science January 2005 (has links)
Question answering (QA) is a relatively new area of research. QA systems retrieve answers to questions, rather than the documents returned by information retrieval systems (search engines). This means that question answering systems may become the next generation of search engines. What is left to be done to allow QA to be the next generation of search engines? The answer is higher accuracy, which can be achieved by investigating methods of question answering. I took the approach of designing a question answering system based on document tagging and question classification. Question classification extracts useful information from the question about how to answer it. Document tagging extracts useful information from the documents, which is used in finding the answer to the question. We used different available systems to tag the documents, and our system classifies the questions using manually developed rules. I also investigated different ways of combining these two methods to answer questions and found that our methods had accuracy comparable to some systems that use deeper processing techniques. This thesis includes investigations into the modules of a question answering system and gives insights into how to develop a question answering system based on document tagging and question classification. I also evaluated our current system with the questions from the TREC 2004 question answering track. / viii, 139 leaves ; 29 cm.
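
A minimal sketch of question classification with manually developed rules, the kind of component this abstract describes; the patterns and answer types below are invented examples, not the thesis's actual rule set.

```python
import re

# Hand-written rules mapping question patterns to expected answer types.
# These example rules are illustrative only.
RULES = [
    (re.compile(r"^who\b", re.I), "PERSON"),
    (re.compile(r"^where\b", re.I), "LOCATION"),
    (re.compile(r"^when\b|\bwhat year\b", re.I), "DATE"),
    (re.compile(r"^how (many|much)\b", re.I), "QUANTITY"),
    (re.compile(r"^(what|which)\b", re.I), "ENTITY"),
]

def classify(question):
    """Return the expected answer type for a question, defaulting to ENTITY."""
    for pattern, answer_type in RULES:
        if pattern.search(question):
            return answer_type
    return "ENTITY"

print(classify("Who wrote Hamlet?"))               # PERSON
print(classify("How many moons does Mars have?"))  # QUANTITY
```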
8

Answer set programming with clause learning

Ward, Jeffrey Alan, January 2004 (has links)
Thesis (Ph. D.)--Ohio State University, 2004. / Title from first page of PDF file. Document formatted into pages; contains xv, 170 p. : ill. Advisors: Timothy J. Long and John S. Schlipf, Department of Computer Science and Engineering. Includes bibliographical references (p. 165-170).
9

The deductive pathfinder creating derivation plans for inferential question-answering /

Klahr, Philip, January 1975 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1975. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 157-162).
10

Computational Natural Language Inference: Robust and Interpretable Question Answering

Sharp, Rebecca January 2017 (has links)
We address the challenging task of computational natural language inference, by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects the question with its answer, and we learn to approximate this inference with machine learning. In particular, here we present four approaches to question answering, each of which shows a significant improvement in performance over baseline methods. In our first approach, we make use of the underlying discourse structure inherent in free text (i.e., whether the text contains an explanation, elaboration, contrast, etc.) in order to increase the amount of training data for (and subsequently the performance of) a monolingual alignment model. In our second approach, we propose a framework for training customized lexical semantics models such that each one represents a single semantic relation. We use causality as a use case, and demonstrate that our customized model is able both to identify causal relations and to significantly improve our ability to answer causal questions. We then propose two approaches that seek to answer questions by learning to rank human-readable justifications for the answers, such that the model selects the answer with the best justification. The first uses a graph-structured representation of the background knowledge and performs information aggregation to construct multi-sentence justifications. The second reduces pre-processing costs by limiting itself to a single sentence and using a neural network to learn a latent representation of the background knowledge. For each of these, we show that in addition to significant improvements in correctly answering questions, we also outperform a strong baseline in terms of the quality of the answer justifications given.
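
A loose sketch of selecting the answer with the best-scoring justification, as the last two approaches above describe; the lexical-overlap features and fixed weights are invented stand-ins for the learned ranking models in the thesis.

```python
def justification_features(question, justification):
    """Simple lexical-overlap features between question and justification."""
    q = set(question.lower().split())
    j = set(justification.lower().split())
    overlap = len(q & j) / max(len(q), 1)
    return [overlap, len(j) / 50.0]  # coverage plus a crude length feature

def score(features, weights=(2.0, 0.5)):
    return sum(f * w for f, w in zip(features, weights))

def answer_by_best_justification(question, candidates):
    """candidates: list of (answer, [justification, ...]) pairs.
    Pick the answer whose best justification scores highest."""
    def best(candidate):
        _, justifications = candidate
        return max(score(justification_features(question, j))
                   for j in justifications)
    return max(candidates, key=best)[0]

candidates = [
    ("mitochondria", ["the mitochondria produce energy for the cell"]),
    ("nucleus", ["the nucleus stores genetic material"]),
]
print(answer_by_best_justification(
    "what part of the cell produces energy", candidates))  # mitochondria
```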
