21

A methodology for domain-specific conceptual data modeling and querying

Tian, Hao. January 2007
Thesis (Ph. D.)--Georgia State University, 2007. / Rajshekhar Sunderraman, committee chair; Paul S. Katz, Yanqing Zhang, Ying Zhu, committee members. Electronic text (128 p. : ill.) : digital, PDF file. Description based on contents viewed Oct. 15, 2007; title from file title page. Includes bibliographical references (p. 124-128).
22

Hybrid Question Answering over Linked Data

Bahmid, Rawan 13 August 2018
The emergence of Linked Data in the form of RDF knowledge graphs has been one of the most recent evolutions of the Semantic Web. It has led to the development of question answering systems based on RDF and SPARQL that allow end users to access and benefit from these knowledge graphs. However, much of the information on the Web is still unstructured, which limits the ability to answer questions whose answers do not exist in any knowledge base. To tackle this issue, hybrid question answering has emerged as an important challenge: the task of answering questions by combining structured (RDF) and unstructured (text) knowledge sources into one answer. This thesis tackles hybrid question answering over natural language questions. It focuses on the analysis and improvement of an open-source system called HAWK, identifies its limitations, and provides solutions and recommendations in the form of a generic question-answering pipeline called HAWK_R. Our system relies mostly on heuristic methods, patterns, the ontological schema, and the knowledge base, and provides three main additions: question classification, annotation, and answer verification and ranking based on query content. Our results show a clear improvement over the original HAWK across several Question Answering over Linked Data (QALD) competitions. Moreover, our methods are not limited to HAWK and can also improve the performance of other question answering systems.
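The structured half of a hybrid pipeline like HAWK ultimately bottoms out in SPARQL queries against a knowledge graph. The minimal sketch below is not taken from HAWK itself; the endpoint and predicate choice are assumptions. It shows how one factual fragment of a question might be resolved against DBpedia:

```python
# Illustrative sketch (not HAWK's code): answer the structured part of a
# hybrid question by querying the DBpedia knowledge graph with SPARQL.
from SPARQLWrapper import SPARQLWrapper, JSON

def capital_of(country_uri: str) -> list[str]:
    """Look up dbo:capital for a country resource in DBpedia."""
    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
    endpoint.setQuery(f"""
        SELECT ?capital WHERE {{
            <{country_uri}> <http://dbpedia.org/ontology/capital> ?capital .
        }}
    """)
    endpoint.setReturnFormat(JSON)
    rows = endpoint.query().convert()["results"]["bindings"]
    return [row["capital"]["value"] for row in rows]

print(capital_of("http://dbpedia.org/resource/Canada"))
```

Answers that the knowledge graph cannot supply would instead fall back to the unstructured (text) side of the pipeline.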
23

Interpretable Question Answering using Deep Embedded Knowledge Reasoning to Solve Qualitative Word Problems

January 2020
abstract: One measure of a system's intelligence is Question Answering, as it requires the system to comprehend a question and reason over its knowledge base to answer it accurately. Qualitative word problems are an important subset of such problems, as they require a system to recognize and reason with qualitative knowledge expressed in natural language. Traditional approaches in this domain use multiple modules to parse a given problem and to perform the required reasoning. Recent approaches use large pre-trained language models, such as Bidirectional Encoder Representations from Transformers (BERT), for downstream question answering tasks through supervision. These approaches, however, either suffer from errors propagated between modules or are not interpretable with respect to the reasoning process employed. The solution proposed in this work aims to overcome these drawbacks through a single end-to-end trainable model that performs both the required parsing and reasoning. The parsing is achieved through an attention mechanism, whereas the reasoning is performed in vector space using soft logic operations. The model also enforces constraints in the form of auxiliary loss terms to increase the interpretability of the underlying reasoning process. The work achieves state-of-the-art accuracy on the QuaRel dataset and matches the state of the art on the QuaRTz dataset, with additional interpretability. / Dissertation/Thesis / Masters Thesis Computer Science 2020
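The abstract does not specify which soft logic operators the model uses; a common differentiable relaxation operates on truth values in [0, 1]. The sketch below illustrates the general idea under that assumption (product t-norm), not the thesis's exact formulation:

```python
# A minimal sketch of "soft logic" over truth values in [0, 1]. Whether the
# thesis uses exactly these operators is an assumption; the point is that
# reasoning over continuous values stays differentiable, unlike hard booleans.
import numpy as np

def soft_and(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    return p * q                      # product t-norm

def soft_or(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    return p + q - p * q              # probabilistic sum

def soft_not(p: np.ndarray) -> np.ndarray:
    return 1.0 - p

# "More friction AND friction produces heat" as a soft conjunction:
more_friction = np.array(0.9)         # confidence surface A has more friction
heat_increases = np.array(0.8)        # confidence heat rises with friction
print(soft_and(more_friction, heat_increases))   # ~0.72
```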
24

Deductive, Inductive and Abductive Reasoning over Natural Language Text: A Case Study with Adaptations, Behaviors and Variations in Organisms

January 2019
abstract: Question answering is a challenging problem and a long-term goal of Artificial Intelligence. Many approaches have been proposed to solve it, including end-to-end machine learning systems, Information Retrieval based approaches, and Textual Entailment. Despite being popular, these methods struggle with problems that require multi-level reasoning and combining independent pieces of knowledge. For example, a question like "What adaptation is necessary in intertidal ecosystems but not in reef ecosystems?" requires the system to consider the qualities, behavior, or features of an organism living in an intertidal ecosystem and compare them with those of an organism in a reef ecosystem to find the answer. The proposed solution targets a genre of questions based on "Adaptation, Variation and Behavior in Organisms", which requires several independent sets of knowledge along with reasoning over them. The method is implemented using Answer Set Programming and Natural Language Inference (which is based on machine learning) to find which of the given options is most probable to be the answer by matching it against the knowledge base. To evaluate this approach, a dataset of questions and a knowledge base in the domain of "Adaptation, Variation and Behavior in Organisms" were created. / Dissertation/Thesis / Masters Thesis Computer Science 2019
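The Natural Language Inference component can be pictured as scoring each option by how strongly the retrieved knowledge entails it. The sketch below shows only that scoring step with an off-the-shelf NLI model; the Answer Set Programming side is omitted, and the model choice and example text are assumptions, not the thesis's setup:

```python
# Hedged sketch: score each answer option by the probability that the
# supporting knowledge entails it, then pick the best-scoring option.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # labels: 0=contradiction, 1=neutral, 2=entailment
tok = AutoTokenizer.from_pretrained(MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_score(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    return logits.softmax(dim=-1)[0, 2].item()  # P(entailment)

knowledge = "Organisms in intertidal zones must tolerate periodic exposure to air."
options = ["tolerating dryness", "resisting strong light"]
print(max(options, key=lambda o: entailment_score(knowledge, o)))
```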
25

Visual Question Answering in the Medical Domain

Sharma, Dhruv 21 July 2020
Medical images are extremely complicated to comprehend for a person without expertise. The limited number of practitioners across the globe often face the issue of fatigue due to the high number of cases. This fatigue, physical and mental, can induce human errors during diagnosis. In such scenarios, an additional opinion can help boost the decision-maker's confidence. Thus, it becomes crucial to have a reliable Visual Question Answering (VQA) system that can provide a "second opinion" on medical cases. However, most VQA systems today cater to general real-world problems and are not specifically tailored for medical images. Moreover, a VQA system for medical images must cope with the limited amount of training data available in this domain. In this thesis, we develop a deep learning-based model for VQA on medical images that takes these challenges into account. Our MedFuseNet system aims at maximizing learning with minimal complexity by breaking the problem into simpler tasks and weaving everything together to predict the answer. We tackle two types of answer prediction: categorization and generation. We conduct an extensive set of both quantitative and qualitative analyses to evaluate the performance of MedFuseNet. Our results conclude that MedFuseNet outperforms other state-of-the-art methods available in the literature for these tasks. / Master of Science / In this thesis, we propose an end-to-end deep learning-based system, MedFuseNet, for predicting the answer to an input query about a medical image, handling both close-ended and open-ended question-answer pairs. We conduct an extensive analysis to evaluate the performance of MedFuseNet. Our results conclude that MedFuseNet outperforms other state-of-the-art methods available in the literature for these tasks.
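The "categorization" flavor of answer prediction treats VQA as classification over a fixed answer vocabulary. The toy sketch below shows that setup only; MedFuseNet's actual encoders and fusion are more sophisticated, and every dimension and layer here is illustrative:

```python
# Minimal PyTorch sketch of VQA-as-classification: encode image and question
# separately, fuse the two vectors, and predict an answer class. This is NOT
# MedFuseNet's architecture; it only illustrates the overall shape of the task.
import torch
import torch.nn as nn

class TinyVQA(nn.Module):
    def __init__(self, vocab_size=5000, n_answers=100, dim=256):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, dim))
        self.q_emb = nn.Embedding(vocab_size, dim)
        self.classifier = nn.Linear(2 * dim, n_answers)

    def forward(self, image, question_ids):
        img = self.img_enc(image)                     # (B, dim)
        q = self.q_emb(question_ids).mean(dim=1)      # mean-pooled question, (B, dim)
        fused = torch.cat([img, q], dim=-1)           # simple late fusion
        return self.classifier(fused)                 # answer logits

model = TinyVQA()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 100])
```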
26

Advancing Chart Question Answering with Robust Chart Component Recognition

Zheng, Hanwen 13 August 2024
The task of comprehending charts [1, 2, 3] presents significant challenges for machine learning models due to the diverse and intricate shapes of charts. The chart extraction task ensures the precise identification of key components, while the chart question answering (ChartQA) task integrates visual and textual information to facilitate accurate responses to queries based on the chart's content. To approach ChartQA, this research focuses on two main aspects. First, we introduce ChartFormer, an integrated framework that simultaneously identifies and classifies every chart element. ChartFormer extends beyond traditional data visualization by identifying descriptive components such as the chart title, legend, and axes, providing a comprehensive understanding of the chart's content. It is particularly effective for complex instance segmentation tasks involving a wide variety of object classes with unique visual structures, and it utilizes an end-to-end transformer architecture, which enhances its ability to handle the intricacies of diverse and distinct object features. Second, we present Question-guided Deformable Co-Attention (QDCAt), which facilitates multimodal fusion by incorporating question information into a deformable offset network and enhancing the visual representation from ChartFormer through a deformable co-attention block. / Master of Science / Real-world data often encompasses multimodal information, blending textual descriptions with visual representations. Charts, in particular, pose a significant challenge for machine learning models due to their condensed and complex structure. Existing multimodal methods often neglect these graphics, failing to integrate them effectively. To address this gap, we introduce ChartFormer, a unified framework designed to enhance chart understanding through instance segmentation, and a novel Question-guided Deformable Co-Attention (QDCAt) mechanism. This approach seamlessly integrates visual and textual features for chart question answering (ChartQA), allowing for more comprehensive reasoning. ChartFormer excels at identifying and classifying chart components such as bars, lines, pies, titles, legends, and axes. The QDCAt mechanism further enhances multimodal fusion by aligning textual information with visual cues and dynamically adjusting attention based on the question context, ensuring that the model focuses on the most relevant parts of the chart. Extensive experiments demonstrate that ChartFormer and QDChart significantly outperform their baseline models in chart component recognition and ChartQA, by 3.2% in mAP and 15.4% in accuracy, respectively. These results highlight the efficacy of our approach in providing a robust solution for detailed visual data interpretation, applicable to a wide range of domains, from scientific research to financial analysis and beyond.
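To give a concrete sense of "question-guided" attention, the sketch below weights chart-region features by their relevance to a question vector using plain scaled dot-product attention. QDCAt's deformable-offset machinery is omitted, so this is a simplified illustration of the fusion idea, not the published mechanism:

```python
# Sketch: the question embedding acts as the query over chart-region features,
# so visual evidence is weighted by relevance to the question before fusion.
import math
import torch

def question_guided_attention(question_vec, region_feats):
    # question_vec: (B, d); region_feats: (B, R, d) for R chart regions
    scores = torch.einsum("bd,brd->br", question_vec, region_feats)
    weights = torch.softmax(scores / math.sqrt(region_feats.size(-1)), dim=-1)
    return torch.einsum("br,brd->bd", weights, region_feats)  # fused (B, d)

fused = question_guided_attention(torch.randn(2, 256), torch.randn(2, 36, 256))
print(fused.shape)  # torch.Size([2, 256])
```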
27

Leveraging Structure for Effective Question Answering

Bonadiman, Daniele 25 September 2020
In this thesis, we focus on Answer Sentence Selection (A2S), the core task of retrieval-based question answering. A2S consists of selecting the sentences that answer user queries from a collection of documents retrieved by a search engine. Over more than two decades, several machine learning solutions have been proposed for this task, from simple approaches based on manual feature engineering, to more complex Structural Tree Kernel models, and recently Neural Network architectures. The latter, in particular, require little human effort as they can automatically extract relevant features from plain text. The development of neural architectures brought improvements in many areas of A2S, reaching unprecedented results and substantially increasing accuracy on almost all benchmark datasets. However, this has come at the cost of a huge increase in the number of parameters and the computational cost of the models, which leads to two drawbacks: the models require a massive amount of data to train effectively, and huge computational power to maintain an acceptable number of transactions per second in a production environment. Current state-of-the-art techniques for A2S use huge Transformer architectures with up to 340 million parameters, pre-trained on massive amounts of data, e.g., BERT. The latter and related models in the same family, such as RoBERTa, are general architectures, i.e., they can be applied to many NLP tasks without any architectural change. In contrast to this trend, we focus on specialized architectures for A2S that can effectively encode the local structure of the question and answer candidate as well as global information, i.e., the structure of the task and the context in which the answer candidate appears. In particular, we propose solutions to effectively encode both the local and the global structure of A2S in efficient neural network models: (i) we encode syntactic information in a fast CNN architecture, exploiting the capability of Structural Tree Kernels to encode syntactic structure; (ii) we propose an efficient model that can use semantic relational information between question and answer candidates by pretraining word representations on a relational knowledge base; (iii) we further extend this efficient approach to encode each answer candidate's contextual information, encoding all answer candidates in their original context; and lastly, (iv) we propose a solution to encode task-specific structure where it is available, for example in the community Question Answering task. The final model, which encodes different aspects of the task, achieves state-of-the-art performance on A2S compared with other efficient architectures. It is more efficient than attention-based architectures and outperforms BERT by two orders of magnitude in transactions per second during training and testing, processing 700 questions per second compared to BERT's 6 questions per second on a single GPU.
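As a concrete picture of the A2S task's inputs and outputs, the sketch below ranks candidate sentences against a question with a simple TF-IDF cosine baseline. The thesis's models are far richer (tree kernels in CNNs, relational pretraining, context encoding), so this is purely illustrative of the task, not of its methods:

```python
# Minimal A2S baseline: rank candidate sentences by TF-IDF cosine similarity
# to the question and return the top-scoring one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "Who invented the telephone?"
candidates = [
    "Alexander Graham Bell was awarded the first US patent for the telephone.",
    "The telephone exchange was commercialized in the late 1870s.",
    "Bell Labs was founded in 1925.",
]

vec = TfidfVectorizer()
X = vec.fit_transform([question] + candidates)   # row 0 is the question
scores = cosine_similarity(X[0], X[1:])[0]       # similarity to each candidate
print(candidates[scores.argmax()])
```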
28

Experimental analysis of database query languages

Fritchman, Barry Lynn January 1983
The increasing prominence of large interactive database systems and the increasing numbers of non-computer professionals who use these systems have made human factors in query languages an area of major importance. Whereas early systems emphasized the power and efficiency of the retrieval mechanism, it is now generally acknowledged that the query language itself should be easy to learn and use, resistant to user errors, and satisfying to the user. Many also believe that query languages can and should be subjected to controlled experimentation to determine the degree to which they embody these features. This paper attempts to analyze the issue of language syntax as applied to human factors in relational query languages. It proposes three query languages which are designed to represent points along a continuum from formal, artificial language to natural language. The importance of scientific testing of relevant hypotheses is emphasized throughout, and experimental guidelines are proposed as a means of evaluating the languages. The purpose of the paper is twofold: to develop design principles for the empirical investigation of query languages, and to apply these principles to the evaluation of specific languages. / M.S.
29

Solving University entrance assessment using information retrieval / Resolvendo Vestibular utilizando recuperação de informação

Silveira, Igor Cataneo 05 July 2018
Answering questions posed in natural language is a key task in Artificial Intelligence. However, producing a successful Question Answering (QA) system is challenging, since it requires text understanding, information retrieval, information extraction and text production. The task is made even harder by the difficulty of collecting reliable datasets and of evaluating techniques, two pivotal points for machine learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA where systems must select the correct answer from a small set of alternatives. One particularly interesting type of MCQA is solving standardized tests, such as foreign-language proficiency exams, elementary school science exams and university entrance exams. These exams provide easy-to-evaluate, challenging multiple-choice questions of varying difficulty about large, but limited, domains. The Exame Nacional do Ensino Médio (ENEM) is a high-school-level exam taken every year by students all over Brazil. It is widely used by Brazilian universities as an entrance exam and is the world's second biggest university entrance examination in number of registered candidates. The exam consists of writing an essay and solving a multiple-choice test comprising questions on four major topics: Humanities, Language, Science and Mathematics. Questions inside each major topic are not segmented by standard school disciplines (e.g. Geography, Biology, etc.) and often require interdisciplinary reasoning. Moreover, previous editions of the exam and their solutions are freely available online, making it a suitable benchmark for MCQA. In this work we automate solving the ENEM, focusing, for simplicity, on purely textual questions that do not require mathematical thinking. We formulate the problem of answering a multiple-choice question as finding the candidate answer most similar to the statement, and we investigate two approaches for measuring the textual similarity of candidate answer and statement. The first approach addresses this as a Text Information Retrieval (IR) problem, that is, the problem of finding in a database the document most relevant to a query. Our queries are made of the statement plus a candidate answer, and we use three different corpora as databases: the first comprises plain-text articles extracted from a dump of the Portuguese-language Wikipedia; the second contains only the text given in the question's header; and the third is composed of pairs of question and correct answer extracted from past ENEM assessments. The second approach is based on Word Embedding (WE), a method for learning vector representations of words such that semantically similar words have close vectors. WE is used in two ways: to augment IR queries by adding words related to those in the query according to the WE model, and to create vector representations of the statement and candidate answers. Using these vector representations, we answer questions either directly, by selecting the candidate answer that maximizes the cosine similarity to the statement, or indirectly, by extracting features from the representations and feeding them into a classifier that decides which alternative is the answer. Along with the two approaches, we investigate how to enhance them using WordNet, a structured lexical database in which words are connected according to relations such as synonymy and hypernymy.
Finally, we combine different configurations of the two approaches and their WordNet variations in an ensemble of algorithms found by a greedy search; the ensemble chooses an answer by majority vote of its components. The first approach achieved an average accuracy of 24% using the headers, 25% using the pairs database and 26.9% using Wikipedia. The second approach achieved 26.6% using WE indirectly and 28% directly. The ensemble achieved 29.3% accuracy. These results, slightly above random guessing (20%), suggest that these techniques can capture some of the skills necessary to solve standardized tests. However, more sophisticated techniques that perform text understanding and common-sense reasoning may be required to achieve human-level performance.
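The "direct" word-embedding method described above reduces to averaging word vectors and taking an argmax over cosine similarities. The sketch below is self-contained, with toy three-dimensional vectors standing in for a real pretrained embedding model:

```python
# Sketch of the direct WE method: embed statement and alternatives as mean
# word vectors, then pick the alternative with the highest cosine similarity.
import numpy as np

toy_vectors = {  # stand-ins for a real pretrained embedding model
    "rain":    np.array([0.9, 0.1, 0.0]),
    "water":   np.array([0.8, 0.2, 0.1]),
    "cycle":   np.array([0.7, 0.3, 0.2]),
    "volcano": np.array([0.0, 0.9, 0.4]),
    "lava":    np.array([0.1, 0.8, 0.5]),
}

def embed(text: str) -> np.ndarray:
    vecs = [toy_vectors[w] for w in text.lower().split() if w in toy_vectors]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

statement = "rain water cycle"
alternatives = ["water cycle", "volcano lava"]
print(max(alternatives, key=lambda alt: cosine(embed(statement), embed(alt))))
```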
30

Probabilistic Modeling in Community-based Question Answering Services

Zolaktaf Zadeh, Zeinab 29 February 2012
Community-based Question Answering (CQA) services enable members to ask questions and have them answered by the community. These services can rapidly create large archives of questions and answers, yet this information is rarely exploited. This thesis presents a new statistical topic model for modeling question-answering archives. The model explicitly captures topic dependency and correlation between questions and answers, and models the differences in their vocabulary. The model is applied to the task of Question Answering, and its performance is evaluated on a dataset extracted from the programming website Stack Overflow. Experimental results show that it retrieves the correct answer for a query question more accurately than the LDA model. The model has also been applied to Automatic Tagging, where comparisons with LDA show that it achieves better clustering performance for larger numbers of topics.
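As a reference point, the LDA baseline the thesis compares against can be sketched as follows: infer topic distributions for archived answers and for a query question, then rank answers by distance in topic space. The corpus, topic count, and distance measure below are illustrative assumptions:

```python
# Hedged sketch of an LDA retrieval baseline over a tiny toy archive: rank
# archived answers by Hellinger distance between topic distributions.
from gensim import corpora, models
from gensim.matutils import hellinger

answers = [
    "use a list comprehension to filter the items".split(),
    "configure the database connection pool size".split(),
]
dictionary = corpora.Dictionary(answers)
corpus = [dictionary.doc2bow(a) for a in answers]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

query_topics = lda[dictionary.doc2bow("how do i filter a list".split())]
best = min(range(len(answers)),
           key=lambda i: hellinger(query_topics, lda[corpus[i]]))
print(" ".join(answers[best]))
```

Unlike plain LDA, the proposed model also couples question and answer vocabularies, which is what the experiments credit for the improved retrieval.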
