About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Solving University entrance assessment using information retrieval / Resolvendo Vestibular utilizando recuperação de informação

Igor Cataneo Silveira, 05 July 2018
Answering questions posed in natural language is a key task in Artificial Intelligence. However, producing a successful Question Answering (QA) system is challenging, since it requires text understanding, information retrieval, information extraction and text production. The task is made even harder by the difficulty of collecting reliable datasets and of evaluating techniques, two pivotal points for machine learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA where systems must select the correct answer from a small set of alternatives. One particularly interesting type of MCQA is solving standardized tests, such as foreign-language proficiency exams, elementary school science exams and university entrance exams. These exams provide challenging, easy-to-evaluate multiple-choice questions of varying difficulty over large but limited domains. The Exame Nacional do Ensino Médio (ENEM) is a high-school-level exam taken every year by students all over Brazil. It is widely used by Brazilian universities as an entrance exam and is the world's second-largest university entrance examination by number of registered candidates. The exam consists of writing an essay and solving a multiple-choice test comprising questions on four major topics: Humanities, Language, Science and Mathematics. Questions within each major topic are not segmented by standard school disciplines (e.g. Geography, Biology) and often require interdisciplinary reasoning. Moreover, previous editions of the exam and their solutions are freely available online, making it a suitable benchmark for MCQA. In this work we automate solving the ENEM, focusing for simplicity on purely textual questions that do not require mathematical reasoning. We formulate the problem of answering a multiple-choice question as finding the candidate answer most similar to the statement, and we investigate two approaches for measuring the textual similarity between candidate answer and statement. The first approach treats this as a Text Information Retrieval (IR) problem, that is, the problem of finding in a database the document most relevant to a query. Our queries are composed of the statement plus a candidate answer, and we use three different corpora as the database: the first comprises plain-text articles extracted from a dump of the Portuguese-language Wikipedia; the second contains only the text given in the question's header; and the third is composed of pairs of questions and correct answers extracted from past ENEM assessments. The second approach is based on Word Embeddings (WE), a method for learning vector representations of words such that semantically similar words have nearby vectors. WE is used in two ways: to augment the IR queries by adding words that the WE model considers related to those already in the query, and to create vector representations of the statement and the candidate answers. Using these vector representations we answer questions either directly, by selecting the candidate answer that maximizes cosine similarity to the statement, or indirectly, by extracting features from the representations and feeding them into a classifier that decides which alternative is the answer. Alongside these two approaches, we investigate how to enhance them using WordNet, a structured lexical database in which words are connected by relations such as synonymy and hypernymy.
Finally, we combine different configurations of the two approaches and their WordNet variations in an ensemble of solvers found by greedy search; the ensemble chooses an answer by majority vote of its components. The first approach achieved an average accuracy of 24% using the headers, 25% using the pairs database and 26.9% using Wikipedia. The second approach achieved 26.6% using WE indirectly and 28% directly. The ensemble achieved 29.3% accuracy. These results, slightly above random guessing (20%), suggest that these techniques capture some of the skills necessary to solve standardized tests. However, more sophisticated techniques that perform text understanding and common-sense reasoning may be required to reach human-level performance.
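As a concrete illustration of the "direct" word-embedding strategy described above, the following is a minimal Python sketch, not the thesis's code: it averages the word vectors of the statement and of each alternative and picks the alternative with the highest cosine similarity. The embeddings lookup (a word-to-vector dictionary) stands in for an assumed pre-trained Portuguese model; all names are illustrative.

    import numpy as np

    def avg_vector(text, embeddings, dim=50):
        # Average the vectors of the words we have embeddings for.
        vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    def answer(statement, alternatives, embeddings):
        # Choose the alternative whose averaged vector is closest
        # to the statement's averaged vector.
        s = avg_vector(statement, embeddings)
        scores = [cosine(s, avg_vector(a, embeddings)) for a in alternatives]
        return int(np.argmax(scores))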
22

Improving Conversation Quality of Data-driven Dialog Systems and Applications in Conversational Question Answering

Baheti, Ashutosh, January 2020
No description available.
23

Automated question answering for clinical comparison questions

Leonhard, Annette Christa, January 2012
This thesis describes the development and evaluation of new automated Question Answering (QA) methods tailored to clinical comparison questions, giving clinicians a rank-ordered list of MEDLINE® abstracts targeted to natural-language drug comparison questions (e.g. "Have any studies directly compared the effects of Pioglitazone and Rosiglitazone on the liver?"). Three corpora were created to develop and evaluate a new QA system for clinical comparison questions called RetroRank. RetroRank takes the clinician's plain-text question as input, processes it and outputs a rank-ordered list of potential answer candidates, i.e. MEDLINE® abstracts, which is reordered using new post-retrieval ranking strategies to ensure that the most topically relevant abstracts are displayed as high in the result set as possible. RetroRank achieves a significant improvement over the PubMed recency baseline and performs as well as or better than previous approaches to post-retrieval ranking that rely on query frames and annotated data, such as the approach by Demner-Fushman and Lin (2007). RetroRank's performance shows that it is possible to use natural-language input and a fully automated approach to obtain answers to clinical drug comparison questions. The thesis also introduces two new, freely available evaluation corpora of clinical comparison questions with "gold standard" references, a valuable resource for future research in medical QA.
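The thesis's ranking strategies are its own; as a hedged sketch of what post-retrieval reranking of this kind can look like, one might boost retrieved abstracts that mention both compared drugs and explicit comparison cues. The cue list and weights below are assumptions for illustration, not RetroRank's actual strategy.

    COMPARISON_CUES = ("compared", "versus", "vs.", "comparison", "randomized")

    def rerank(abstracts, drug_a, drug_b):
        # Score each abstract by simple topical cues; Python's stable sort
        # preserves the original retrieval order among equally scored items.
        def score(abstract):
            text = abstract.lower()
            s = 2 if drug_a.lower() in text and drug_b.lower() in text else 0
            s += sum(cue in text for cue in COMPARISON_CUES)
            return s
        return sorted(abstracts, key=score, reverse=True)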
24

Productivity Considerations for Online Help Systems

Shultz, Charles R. (Charles Richard), 05 1900
The purpose of this study was to determine whether task type, task complexity, and search mechanism have a significant effect on task performance. The problem motivating this study is the potential for online help system designers to build systems that improve the performance of computer users when they need help.
26

Class-free answer typing

Pinchak, Christopher, 11 1900
Answer typing is an important aspect of the question answering process. Most commonly addressed with a fixed set of possible answer classes via question classification, answer typing influences which answers will ultimately be selected as correct. Answer typing introduces the concept of type-appropriate responses: responses that are believable as answers to a given question. Type-appropriateness is distinct from correctness, as there may be many type-appropriate responses that are not correct answers, and type-appropriate responses can exist even for queries that are not strictly questions. This work introduces class-free models of answer type for certain kinds of questions, as well as models of type-appropriateness useful in information retrieval. Models built for both open-ended noun-phrase questions and how-adjective questions evaluate the type-appropriateness of a candidate answer directly, rather than via an intermediary question class (as question classification does). Experiments show a meaningful improvement over alternative typing strategies for these kinds of questions. Ideas from these models are then applied outside question answering in an effort to improve traditional information retrieval results. Experiments comparing reranked results with those of the Google search engine show improvements in the rare situations where Google provides less-than-ideal results.
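To make the idea concrete, here is a speculative sketch (not Pinchak's actual model) of class-free typing: instead of assigning the question to a fixed class, each candidate is scored directly by how well its vector fits the question's focus words. The embeddings lookup is an assumed pre-trained word-vector model; names are illustrative.

    import numpy as np

    def type_score(candidate, focus_words, embeddings):
        # Higher means more type-appropriate; no fixed class inventory.
        if candidate not in embeddings:
            return 0.0
        c = embeddings[candidate]
        sims = []
        for w in focus_words:
            if w in embeddings:
                f = embeddings[w]
                denom = np.linalg.norm(c) * np.linalg.norm(f)
                sims.append(float(c @ f / denom) if denom else 0.0)
        return max(sims) if sims else 0.0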
27

Supporting novice application users in learning by trial and error and reading help

Andrade, Oscar Daniel, January 2009
Thesis (M.S.)--University of Texas at El Paso, 2009. / Title from title screen. Vita. CD-ROM. Includes bibliographical references. Also available online.
28

Handling inconsistency in databases and data integration systems

Bravo, Loreto, January 1900
Thesis (Ph.D.) - Carleton University, 2007. / Includes bibliographical references (p. 238-250). Also available in electronic format on the Internet.
29

Strojové učení pro odpovídání na otázky v češtině / Machine Learning for Question Answering in Czech

Pastorek, Peter, January 2020
This Master's thesis deals with teaching a neural network to answer questions in Czech. The neural networks are implemented in Python using the PyTorch library and are based on the LSTM architecture. They are trained on the Czech SQAD dataset. Because the Czech dataset is smaller than English datasets, I opted to extend the neural networks with algorithmic procedures. To make the algorithmic procedures easier to apply and to improve accuracy, I divide question answering into smaller subtasks.
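Since the abstract names the stack (Python, PyTorch, LSTM), a minimal sketch of the kind of LSTM question encoder involved might look as follows; the layer sizes and architecture are assumptions for illustration, not the thesis's actual model.

    import torch
    import torch.nn as nn

    class QuestionEncoder(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len) tensor of token indices
            x = self.embed(token_ids)      # (batch, seq_len, embed_dim)
            _, (h, _) = self.lstm(x)       # h: (1, batch, hidden_dim)
            return h.squeeze(0)            # fixed-size question vector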
30

Using Gaze Tracking to Tackle Duplicate Questions on Community Based Question Answering Websites: A Case Study of Ifixit

Gandhi, Pankti, 01 June 2018
The number of unanswered questions on Community-based Question Answering (CQA) websites has increased significantly due to the rising number of duplicate questions. This is a serious problem, one that could lead to the decline of such beneficial websites. This thesis presents novel avenues that use gaze-tracking technology and behavioral testing to tackle the problem. Based on prior studies of web search behavior, we hypothesized that adding contextual information (snippets) to the related questions proposed on the "Ask a Question" page of the CQA website iFixit would improve the asker's experience and reduce their tendency to post a new duplicate question. A first lab experiment, in which this web page was redesigned and compared to the original, was conducted with 8 participants. Results confirmed that participants were more likely to find an answer to their question on the redesigned page. A second experiment, conducted remotely with a larger sample of 74 participants, aimed to discover strategic attributes that increase the perceived similarity of question pairs. These attributes were used in a third lab experiment (20 participants) to redesign and assess the snippets from Experiment 1. Results indicated that snippets containing "symptom(s)" and "cause(s)" attributes constitute an incremental improvement over basic snippets: they are perceived as slightly more relevant and require significantly fewer gaze fixations on the asker's part.
