  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Identifying reputation collectors in community question answering (CQA) sites: Exploring the dark side of social media

Roy, P.K., Singh, J.P., Baabdullah, A.M., Kizgin, Hatice, Rana, Nripendra P. 08 August 2019 (has links)
This research aims to identify users who post, and encourage others to post, low-quality and duplicate content on community question answering sites. The good guys, called Caretakers, and the bad guys, called Reputation Collectors, are characterised by their behaviour, answering pattern and reputation points. The proposed system is developed and analysed over the publicly available Stack Exchange data dump. A graph-based methodology is employed to derive the characteristics of Reputation Collectors and Caretakers. Results reveal that Reputation Collectors are the primary sources of low-quality answers as well as of answers to duplicate questions posted on the site. Caretakers answer a limited number of challenging questions and fetch maximum reputation from those questions, whereas Reputation Collectors answer many low-quality and duplicate questions to gain reputation points. We have developed algorithms to identify the Caretakers and Reputation Collectors of the site. Our analysis finds that 1.05% of Reputation Collectors post 18.88% of low-quality answers. This study extends previous research by identifying Reputation Collectors and how they collect their reputation points.
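The behavioural split between Caretakers and Reputation Collectors can be sketched as a simple per-user heuristic over answer statistics. This is only an illustration of the intuition: the thesis derives the roles from a graph-based analysis of the Stack Exchange dump, and the thresholds and field names below are hypothetical.

```python
def classify_user(answers):
    """Classify a user from their answer history.

    answers: list of dicts with an integer 'score' and a boolean
    'is_duplicate_target' (the answered question was a duplicate).
    Thresholds are illustrative assumptions, not the thesis's values.
    """
    n = len(answers)
    if n == 0:
        return "inactive"
    low_quality = sum(1 for a in answers if a["score"] <= 0)
    duplicates = sum(1 for a in answers if a["is_duplicate_target"])
    avg_score = sum(a["score"] for a in answers) / n
    # Reputation Collectors: many answers, a large share of which are
    # low-quality or target duplicate questions.
    if n >= 50 and (low_quality + duplicates) / n > 0.5:
        return "reputation_collector"
    # Caretakers: few answers, each earning a high score.
    if n <= 20 and avg_score >= 5:
        return "caretaker"
    return "regular"
```

A real implementation would replace these flat thresholds with the graph-derived characteristics the abstract describes.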
92

Recommending best answer in a collaborative question answering system

Chen, Lin January 2009 (has links)
The World Wide Web has become a medium for people to share information. People use Web-based collaborative tools such as question answering (QA) portals, blogs/forums, email and instant messaging to acquire information and to form online communities. In an online QA portal, a user asks a question and other users can provide answers based on their knowledge, with the question usually being answered by many users. It can become overwhelming and/or time- and resource-consuming for a user to read all of the answers provided for a given question. Thus, there exists a need for a mechanism to rank the provided answers so users can focus on reading only the good-quality answers. The majority of online QA systems use user feedback to rank users' answers, and the user who asked the question can decide on the best answer. Other users who did not participate in answering the question can also vote to determine the best answer. However, ranking the best answer via this collaborative method is time consuming and requires the ongoing involvement of users to provide the needed feedback. The objective of this research is to discover a way to recommend the best answer, as part of a ranked list of answers for a posted question, automatically and without the need for user feedback. The proposed approach combines a non-content-based reputation method and a content-based method to solve the problem of recommending the best answer to the user who posted the question. The non-content method assigns a score to each user which reflects the user's reputation level in using the QA portal system. Each user is assigned two types of non-content-based reputation scores: a local reputation score and a global reputation score. The local reputation score plays an important role in deciding the reputation level of a user for the category in which the question is asked. The global reputation score indicates the prestige of a user across all of the categories in the QA system.
Due to the possibility of user cheating, such as awarding the best answer to a friend regardless of the answer quality, a content-based method for determining the quality of a given answer is proposed alongside the non-content-based reputation method. Answers for a question from different users are compared with an ideal (or expert) answer using traditional Information Retrieval and Natural Language Processing techniques. Each answer provided for a question is assigned a content score according to how well it matches the ideal answer. To evaluate the performance of the proposed methods, each recommended best answer is compared with the best answer determined by one of the most popular link analysis methods, Hyperlink-Induced Topic Search (HITS). The proposed methods yield high accuracy, as shown by Kendall and Spearman correlation scores. The reputation method outperforms the HITS method in terms of recommending the best answer. The inclusion of the reputation score with the content score improves the overall performance, which is measured through the use of Top-n match scores.
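The combination of a reputation score with a content score can be illustrated with a minimal bag-of-words sketch. This is a hypothetical simplification: the thesis uses richer IR/NLP techniques and separate local/global reputation scores, and the weighting `alpha` is an assumption for illustration.

```python
import math
from collections import Counter


def cosine_sim(a, b):
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_answers(answers, ideal, reputations, alpha=0.5):
    """Rank (user, text) answer pairs by a weighted mix of the
    answerer's reputation and the answer's similarity to an ideal answer."""
    scored = [(alpha * reputations.get(user, 0.0)
               + (1 - alpha) * cosine_sim(text, ideal), user, text)
              for user, text in answers]
    return sorted(scored, reverse=True)
```

With `alpha=0.5`, a well-matched answer from a low-reputation user can still outrank an irrelevant answer from a high-reputation user, which mirrors the cheating-resistance motivation above.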
93

Odpovídání na otázky nad strukturovanými daty / Question Answering over Structured Data

Birger, Mark January 2017 (has links)
This thesis deals with question answering over structured data. In most cases, structured data are represented as linked graphs, but hiding the underlying structure of the data is essential if such systems are to be used as natural-language interfaces. A question answering system was designed and developed as part of this work. In contrast to traditional question answering systems based on linguistic analysis or statistical methods, our system explores the provided graph and generates semantic mappings from input question-answer pairs. The developed system is independent of the data structure, but for evaluation purposes we used datasets from Wikidata and DBpedia. The quality of the resulting system and of the investigated approach was evaluated using a prepared dataset and standard metrics.
94

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo 18 May 2017 (has links)
Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information, such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge base. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required.
Humans have adapted their search behaviour to current Web data via access paradigms such as keyword search so as to retrieve high-quality results; hence, most Web users expect only Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing users to query data via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural-language queries with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to, and the combination of, knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, addressing the Semantic Gap between the unstructured Document Web and the structured Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis. This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities.
Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated using different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. Data preparation being such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as exemplified by current mobile applications, requires systems to deeply understand the underlying user information need.
In conclusion, the natural language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full-texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA based on combining structured RDF and unstructured full-text data sources.
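The Evaluation Gap discussed above ultimately comes down to scoring annotation output with agreed-upon measures across datasets. A common choice for entity annotation is micro-averaged precision, recall and F1, sketched below; this is an illustration of the measure, not GERBIL's actual implementation.

```python
def micro_prf(gold_sets, pred_sets):
    """Micro-averaged precision/recall/F1 over documents, where each
    document contributes a set of gold and predicted entity annotations."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)   # correct annotations
        fp += len(pred - gold)   # spurious annotations
        fn += len(gold - pred)   # missed annotations
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

Micro-averaging pools counts before dividing, so large documents weigh more than small ones; macro-averaging (per-document scores averaged afterwards) is the usual alternative.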
95

Can Wizards be Polyglots: Towards a Multilingual Knowledge-grounded Dialogue System

Liu, Evelyn Kai Yan January 2022 (has links)
Research on open-domain, knowledge-grounded dialogue systems has been advancing rapidly due to the paradigm shift introduced by large language models (LLMs). While these strides have improved the performance of dialogue systems, their scope remains mostly monolingual and English-centric. The lack of multilingual in-task dialogue data further discourages research in this direction. This thesis explores the use of transfer learning techniques to extend English-centric dialogue systems to multiple languages. In particular, this work focuses on five typologically diverse languages, such that well-performing models could generalize to other languages in the same language families as the target languages, widening the accessibility of the systems to speakers of various languages. I propose two approaches: the Multilingual Retrieval-Augmented Dialogue Model (xRAD) and the Multilingual Generative Dialogue Model (xGenD). xRAD is adopted from a pre-trained multilingual question answering (QA) system and comprises a neural retriever and a multilingual generation model. Prior to response generation, the retriever fetches relevant knowledge and passes the retrieved passages to the generator as part of the dialogue context. This approach can incorporate knowledge into conversational agents, thus improving the factual accuracy of a dialogue model. In addition, xRAD has an advantage over xGenD because of its modularity, which allows the fusion of QA and dialogue systems so long as appropriate pre-trained models are employed. On the other hand, xGenD takes advantage of an existing English dialogue model and performs a zero-shot cross-lingual transfer by training sequentially on English dialogue and multilingual QA datasets. Both automated and human evaluation were carried out to measure the models' performance against a machine translation baseline.
The results showed that xRAD outperformed xGenD significantly and surpassed the baseline in most metrics, particularly in terms of relevance and engagingness. While xRAD's performance was promising to some extent, a detailed analysis revealed that the generated responses were not actually grounded in the retrieved paragraphs. Suggestions are offered to mitigate this issue, which could hopefully lead to significant progress in multilingual knowledge-grounded dialogue systems in the future.
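The retrieve-then-condition pattern that xRAD follows can be sketched in a few lines. The toy lexical retriever and the tag format below are assumptions for illustration only; xRAD itself uses a neural retriever and a multilingual neural generator.

```python
from collections import Counter


def retrieve(query, passages, k=2):
    """Toy lexical retriever: score passages by word overlap with the query."""
    q = Counter(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: sum(q[w] for w in p.lower().split()),
                    reverse=True)
    return scored[:k]


def build_generator_input(dialogue_history, query, passages):
    """Condition the generator on retrieved knowledge by prepending it
    to the dialogue context, as retrieval-augmented models commonly do."""
    knowledge = retrieve(query, passages)
    return "\n".join(["[knowledge]"] + knowledge
                     + ["[dialogue]"] + dialogue_history + [query])
```

The grounding problem noted in the abstract shows up precisely here: nothing forces the generator to actually use the `[knowledge]` segment of its input.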
96

[en] A QUESTION-ANSWERING CONVERSATIONAL AGENT WITH RECOMMENDATIONS BASED ON A DOMAIN ONTOLOGY / [pt] UM AGENTE CONVERSACIONAL PERGUNTA-RESPOSTA COM RECOMENDAÇÕES BASEADAS EM UMA ONTOLOGIA DE DOMÍNIO

JESSICA PALOMA SOUSA CARDOSO 05 November 2020 (has links)
[en] The offer of services through conversational interfaces, or chatbots, has become increasingly popular, with applications that range from banking apps and ticket booking to database queries. However, given the massive amount of data available in some domains, users may find it difficult to formulate queries and retrieve the information they want. This dissertation investigates and evaluates the use of recommendations in the search for information in a movie database through a chatbot. In this work, we implement a chatbot using frameworks and techniques from the area of natural language processing (NLP). For the recognition of entities and intents, we use the RASA NLU framework. For the identification of relations between those entities, we use Transformer networks. In addition, we propose different strategies for recommendations based on the domain ontology. To evaluate this work, we conducted an empirical study with volunteer users to assess the impact of the recommendations on chatbot use and the acceptance of the technology through a survey based on the Technology Acceptance Model (TAM). Lastly, we discuss the results of this study, its limitations, and avenues for future improvements.
97

Knowledge acquisition from user reviews for interactive question answering

Konstantinova, Natalia January 2013 (has links)
Nowadays, the effective management of information is extremely important in all spheres of our lives, and applications such as search engines and question answering systems help users find the information they need. However, even when assisted by these applications, people sometimes struggle to find what they want. For example, when choosing a product, customers can be confused by the need to consider many features before they can reach a decision. Interactive question answering (IQA) systems can help customers in this process by answering questions about products and initiating a dialogue with the customers when their needs are not clearly defined. The focus of this thesis is how to design an interactive question answering system that will assist users in choosing the product they are looking for, in an optimal way, when a large number of similar products are available. Such an IQA system is based on selecting a set of characteristics (also referred to as product features in this thesis) that describe the relevant product, and narrowing the search space. We believe that the order in which these characteristics are presented in such IQA sessions is of high importance; therefore, they need to be ranked so that the dialogue selects the product in an efficient manner. The research question investigated in this thesis is whether product characteristics mentioned in user reviews are important for a person who is likely to purchase a product and can therefore be used when designing an IQA system. We focus our attention on products such as mobile phones; however, the proposed techniques can be adapted to other types of products if the data is available. Methods from natural language processing (NLP) fields such as coreference resolution, relation extraction and opinion mining are combined to produce various rankings of phone features.
The research presented in this thesis employs two corpora, specifically collected for this thesis, which contain texts related to mobile phones: a corpus of Wikipedia articles about mobile phones and a corpus of mobile phone reviews published on the Epinions.com website. Parts of these corpora were manually annotated with coreference relations, mobile phone features and relations between mentions of the phone and its features. The annotation is used to develop a coreference resolution module as well as a machine learning-based relation extractor. Rule-based methods for the identification of coreference chains describing the phone are designed and thoroughly evaluated against the annotated gold standard. Machine learning is used to find links between mentions of the phone (identified by coreference resolution) and phone features; it determines whether a given phone feature belongs to the phone mentioned in the same sentence or not. In order to find the best rankings, this thesis investigates several settings. One of the hypotheses tested here is that the relatively low results of the proposed baseline are caused by noise introduced by sentences which are not directly related to the phone and its features. To test this hypothesis, only sentences which contained mentions of the mobile phone and a phone feature linked to it were processed to produce rankings of the phone features. Selection of the relevant sentences is based on the results of coreference resolution and relation extraction. Another hypothesis is that opinionated sentences are a good source for ranking the phone features; to investigate this, a sentiment classification system is also employed to distinguish between features mentioned in positive and negative contexts. The detailed evaluation and error analysis of the proposed methods form an important part of this research and ensure that the results provided in this thesis are reliable.
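The opinionated-sentence hypothesis can be illustrated with a crude count-based ranker. The word lists and exact-token matching below are hypothetical stand-ins; the thesis pipeline uses proper coreference resolution, relation extraction and a trained sentiment classifier rather than keyword lookups.

```python
from collections import Counter

# Illustrative opinion lexicons (assumptions, not the thesis's resources).
POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}


def rank_features(sentences, features):
    """Rank product features by how often they are mentioned in
    opinionated review sentences."""
    counts = Counter()
    for s in sentences:
        words = set(s.lower().split())
        if words & (POSITIVE | NEGATIVE):  # sentence carries an opinion
            for f in features:
                if f in words:
                    counts[f] += 1
    return [f for f, _ in counts.most_common()]
```

Features that reviewers never mention in opinionated contexts drop out of the ranking entirely, which is the behaviour the hypothesis predicts for unimportant characteristics.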
98

Computers and Natural Language: Will They Find Happiness Together?

Prall, James W. January 1985 (has links)
Permission from the author to release this work as open access is pending. Please contact the ICS library if you would like to view this work.
99

Répondre à des questions à réponses multiples sur le Web / Answering multiple answer questions from the Web

Falco, Mathieu-Henri 22 May 2014 (has links)
Question answering systems find and extract a precise answer to a question asked in natural language. Both current question answering systems and the evaluation campaigns that assess them generally assume that a single answer is expected for a question. Our corpus studies show that this is rarely the case, especially when answers are sought on the Web rather than in a frozen collection of documents. We therefore focus on questions expecting multiple correct answers from the Web, through Citron, a French-language question answering system developed in this work. Citron extracts multiple distinct answers to open-domain factual questions and identifies and extracts the shifting criterion (date, location) that is the source of the answer multiplicity. Our corpus studies show that the answers to such questions are often located in structures such as tables and lists, which cannot be analysed automatically without suitable preprocessing. Consequently, we also developed the Kitten tool, which extracts the textual content of HTML documents and identifies, analyses and formats these structures. Finally, we carried out two experiments with users. The first evaluated both Citron and human beings on a multiple-answer extraction task: results show that Citron was faster than the humans and that the quality gap between answers extracted by Citron and by humans was reasonable. The second experiment evaluated user satisfaction regarding the presentation of multiple answers: results show that users prefer Citron's presentation, which aggregates the answers and adds the shifting criterion (when one exists), over the presentation used in evaluation campaigns.
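The table structures that answers to multiple-answer questions often live in can be recovered from raw HTML with a small parser. The sketch below, built on Python's standard `html.parser`, only hints at what the Kitten tool does; Kitten also handles lists, formatting and messier real-world markup.

```python
from html.parser import HTMLParser


class TableExtractor(HTMLParser):
    """Collect each <tr> as a list of the text found in its <td>/<th> cells."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data.strip())

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append(" ".join(t for t in self._cell if t))
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None


def extract_tables(html):
    parser = TableExtractor()
    parser.feed(html)
    return parser.rows
```

Once rows are available as lists of cell texts, a QA system can align a column with the question focus and treat the remaining columns as candidate shifting criteria (dates, locations).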
100

Uma arquitetura de question-answering instanciada no domínio de doenças crônicas / A question-answering architecture instantiated on the domains of chronic disease

Almansa, Luciana Farina 08 August 2016 (has links)
The medical record describes the health condition of patients and helps experts make decisions about treatment. Biomedical scientific knowledge can improve the prevention and treatment of diseases. However, searching for relevant knowledge can be a hard task: it takes time, and healthcare research is constantly being updated. Many healthcare professionals also have a stressful routine, working in different hospitals or medical offices and caring for many patients per day. The goal of this project is to design a Question Answering Framework (QASF) to support faster and more precise searches for information on epigenetics, chronic diseases and thyroid images. To develop the proposal, we reuse two frameworks previously developed by our research group, SisViDAS and FREDS, which are being exploited to compose the document processing module; the other modules (question and answer processing) are being completely developed. The QASF was evaluated with a reference collection and statistical performance measures; the results show a precision of around 0.7 at a recall of 0.3, with two hundred articles retrieved and analysed. Considering that the questions inserted into the QASF are long (seventy terms per question on average) and complex, the QASF presents satisfactory results. This project intends to reduce the time spent by health professionals searching for information of interest, since QA systems provide direct and precise answers to a question asked by the user in natural language.
