1

Methods for measuring semantic similarity of texts

Gaona, Miguel Angel Rios January 2014
Measuring semantic similarity is a task needed in many Natural Language Processing (NLP) applications. For example, in Machine Translation evaluation, semantic similarity is used to assess the quality of the machine translation output by measuring the degree of equivalence between a reference translation and the machine translation output. The problem of semantic similarity (Corley and Mihalcea, 2005) is defined as measuring and recognising semantic relations between two texts. Semantic similarity covers different types of semantic relations, mainly bidirectional and directional. This thesis proposes new methods to address the limitations of existing work on both types of semantic relations. Recognising Textual Entailment (RTE) is a directional relation where a text T entails the hypothesis H (entailment pair) if the meaning of H can be inferred from the meaning of T (Dagan and Glickman, 2005; Dagan et al., 2013). Most RTE methods rely on machine learning algorithms. de Marneffe et al. (2006) propose a multi-stage architecture where a first stage determines an alignment between the T-H pairs, followed by an entailment decision stage. A limitation of such approaches is that instead of recognising a non-entailment, an alignment that fits an optimisation criterion will be returned, but the alignment by itself is a poor predictor of non-entailment. We propose an RTE method following a multi-stage architecture, where both stages are based on semantic representations. Furthermore, instead of using simple similarity metrics to predict the entailment decision, we use a Markov Logic Network (MLN). The MLN is based on rich relational features extracted from the output of the predicate-argument alignment structures between T-H pairs. This MLN learns to reward pairs with similar predicates and similar arguments, and to penalise pairs otherwise. The proposed methods show promising results. A source of errors was found to be the alignment step, which has low coverage. However, we show that when an alignment is found, the relational features improve the final entailment decision. The task of Semantic Textual Similarity (STS) (Agirre et al., 2012) is defined as measuring the degree of bidirectional semantic equivalence between a pair of texts. The STS evaluation campaigns use datasets that consist of pairs of texts from NLP tasks such as Paraphrasing and Machine Translation evaluation. Methods for STS are commonly based on computing similarity metrics between the pair of sentences, where the similarity scores are used as features to train regression algorithms. Existing methods for STS achieve high performance on certain tasks but poor results on others, particularly on unknown (surprise) tasks. Our solution to this unbalanced performance is to model STS in the context of Multi-task Learning using Gaussian Processes (MTL-GP) (Álvarez et al., 2012) and state-of-the-art STS features (Šarić et al., 2012). We show that the MTL-GP outperforms previous work on the same datasets.
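The pipeline sketched in the abstract, similarity scores between sentence pairs used as features for a regression model, can be illustrated in a few lines of Python. This is only a hedged sketch of the general STS paradigm, not the thesis's MTL-GP system; the feature set, the toy data, and the use of scikit-learn's Ridge regression are all assumptions made for the example.

    import re
    from difflib import SequenceMatcher

    import numpy as np
    from sklearn.linear_model import Ridge

    def sts_features(s1, s2):
        """A few simple similarity scores for a sentence pair (illustrative)."""
        t1 = set(re.findall(r"\w+", s1.lower()))
        t2 = set(re.findall(r"\w+", s2.lower()))
        jaccard = len(t1 & t2) / len(t1 | t2) if t1 | t2 else 0.0
        containment = len(t1 & t2) / len(t1) if t1 else 0.0
        char_sim = SequenceMatcher(None, s1.lower(), s2.lower()).ratio()
        len_ratio = min(len(t1), len(t2)) / max(len(t1), len(t2), 1)
        return [jaccard, containment, char_sim, len_ratio]

    # Toy training data: (sentence 1, sentence 2, gold similarity in [0, 5]).
    pairs = [
        ("A man is playing a guitar.", "A man plays the guitar.", 4.8),
        ("A dog runs in the park.", "The stock market fell today.", 0.2),
        ("Two kids are cooking.", "Children are preparing food.", 4.0),
    ]
    X = np.array([sts_features(a, b) for a, b, _ in pairs])
    y = np.array([gold for _, _, gold in pairs])

    # Similarity scores become features for a regressor, as the abstract describes.
    model = Ridge(alpha=1.0).fit(X, y)
    print(model.predict([sts_features("A woman sings.", "A lady is singing.")]))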
2

Validation de réponses dans un système de questions réponses / Answer validation in question answering system

Grappy, Arnaud 08 November 2011
With the growth of the knowledge available on the Internet came the difficulty of actually finding a piece of information. Search engines return Web pages that are supposed to contain the desired information, given a set of keywords, but the user still has to find the right query and examine the returned documents. Question answering systems aim to return a concise answer directly from a question asked in natural language, generally accompanied by a text snippet that is supposed to justify it. For example, for the question "Who is the director of Avatar?", the answer "James Cameron" may be returned together with "James Cameron directed Avatar.". This thesis focuses on answer validation, which automatically determines whether an answer is valid, i.e. whether it is correct (it actually answers the question) and justified by the text passage. Validation improves question answering systems by returning only valid answers to the user. Approaches to recognising valid answers fall into two broad categories: approaches that use a specific representation formalism of the question and the passage, whose structures are then compared; and machine learning approaches that combine different lexical and syntactic features. To identify the phenomena underlying answer validation, we participated in the creation of a manually annotated corpus. These phenomena are of various kinds, such as paraphrase and coreference; the relevant information may also be spread over several sentences, or even missing from the passages containing the answer. A second corpus study, on questions, examined the pieces of information that must be checked to detect that an answer is valid; it showed that the three most frequent phenomena are verification of the answer type and of the date and place contained in the question. These studies informed the design of our answer validation system, which relies on a combination of features. Some features deal with the presence of the question words in the passage, which signals that the question's information is present; date information receives special treatment, an answer being marked invalid if the passage does not contain the date given in the question. Other features, including the proximity between the question words and the answer in the passage, capture how the question's words relate to one another within the passage. The second main kind of verification measures the compatibility between the answer and the question. Many questions expect an answer of a particular type: the example question above expects a director, and the question "Which president succeeded Jacques Chirac?" expects an instance of president. If the answer is not of the expected type, it is incorrect. Since this information may be absent from the justifying passage, it is searched for in other documents by combining several methods: statistical features that compute how often the answer and the type co-occur in documents, features based on named entity recognizers, syntactic patterns, and the structure of Wikipedia pages. Type checking is particularly effective, achieving 80% correct detections. The validation of answers also proved its worth in an evaluation campaign: at AVE 2008, the system ranked among the best across all languages. The final contribution was to integrate the validation module into a question answering system, QAVAL. In this setting, the many answers extracted by QAVAL are ranked by the validation module, which is no longer used to accept or reject answers but to assign each one a confidence score. QAVAL can be used to search newspaper articles as well as articles from the Web. The results are good, exceeding those obtained by a simple ranking of the answers by nearly 50%.
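To make the combination of criteria concrete, here is a minimal, purely illustrative sketch of the three kinds of features described above: question-term coverage, a hard date check, and question/answer proximity. The tokenization, stop-word list, weights, and the validate function itself are invented for the example and are not the thesis's actual implementation.

    import re

    STOP = {"the", "a", "an", "of", "is", "who", "what", "which", "did"}

    def tokens(text):
        return re.findall(r"\w+", text.lower())

    def date_tokens(text):
        # Treat four-digit years as dates (a deliberately crude stand-in).
        return {t for t in tokens(text) if re.fullmatch(r"1[0-9]{3}|20[0-9]{2}", t)}

    def validate(question, answer, passage):
        q_terms = set(tokens(question)) - STOP
        p_tokens = tokens(passage)

        # Criterion 1: proportion of question terms present in the passage.
        coverage = len(q_terms & set(p_tokens)) / len(q_terms) if q_terms else 0.0

        # Criterion 2: a date in the question must also appear in the passage,
        # otherwise the answer is rejected outright.
        q_dates = date_tokens(question)
        if q_dates and not (q_dates & date_tokens(passage)):
            return 0.0

        # Criterion 3: proximity between answer and question terms in the passage.
        a_pos = [i for i, t in enumerate(p_tokens) if t in set(tokens(answer))]
        q_pos = [i for i, t in enumerate(p_tokens) if t in q_terms]
        proximity = (1.0 / (1.0 + min(abs(i - j) for i in a_pos for j in q_pos))
                     if a_pos and q_pos else 0.0)

        return 0.7 * coverage + 0.3 * proximity  # invented weights

    print(validate("Who is the director of Avatar?", "James Cameron",
                   "James Cameron directed Avatar in 2009."))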
3

Reconhecimento de implicação textual em português / Recognizing textual entailment in Portuguese

Fonseca, Erick Rocha 03 May 2018
Recognizing Textual Entailment (RTE) consists of automatically identifying whether a text passage in natural language is true based on the content of another one. This problem has been studied in Natural Language Processing (NLP) for some years, and gained prominence recently with the availability of annotated data in larger quantities and the development of deep learning methods. This doctoral research had the goal of developing resources and methods for RTE, especially for Portuguese. During its execution, the ASSIN corpus was compiled, the first to provide data for training and evaluating RTE systems in Portuguese, and the workshop of the same name was organized, gathering researchers interested in this theme. Moreover, computational experiments were carried out with different techniques for RTE, on English and Portuguese data. A new RTE model, TEDIN (Tree Edit Distance Network), was developed. The model is based on the concept of syntactic tree edit distance, already explored in other RTE works; its distinguishing feature is to combine explicit linguistic knowledge representation with the flexibility and representational capacity of neural networks. An RTE model based on classical machine learning and feature engineering, Infernal, was also developed. TEDIN's experimental results were below those of other models from the literature, and a careful analysis of its behavior shows the difficulty of modelling differences between syntactic trees. Infernal, on the other hand, obtained positive results on ASSIN, setting the new state-of-the-art for RTE in Portuguese.
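As a rough illustration of the classical feature-engineering approach behind a system like Infernal, the following sketch derives a few generic pair features and trains a standard classifier on toy data. The features and data are stand-ins of our own; the thesis's actual feature set is richer and not reproduced here.

    import re

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def tok(s):
        return re.findall(r"\w+", s.lower())

    def pair_features(t, h):
        t_set, h_set = set(tok(t)), set(tok(h))
        overlap = len(t_set & h_set) / len(h_set) if h_set else 0.0  # H coverage
        new_in_h = len(h_set - t_set)          # hypothesis words unseen in T
        len_diff = len(tok(t)) - len(tok(h))   # entailing texts are often longer
        return [overlap, new_in_h, len_diff]

    # Toy (text, hypothesis, entails?) pairs.
    pairs = [
        ("The dog chased the cat in the yard.", "The dog chased the cat.", 1),
        ("The dog slept all day.", "The cat chased the dog.", 0),
        ("Maria bought a red car yesterday.", "Maria bought a car.", 1),
        ("Maria bought a car.", "Maria sold her bicycle.", 0),
    ]
    X = np.array([pair_features(t, h) for t, h, _ in pairs])
    y = np.array([label for _, _, label in pairs])

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([pair_features("John ate an apple.", "John ate fruit.")]))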
4

Textual entailment for modern standard Arabic

Alabbas, Maytham Abualhail Shahed January 2013
This thesis explores a range of approaches to the task of recognising textual entailment (RTE), i.e. determining whether one text snippet entails another, for Arabic, where we are faced with an exceptional level of lexical and structural ambiguity. To the best of our knowledge, this is the first attempt to carry out this task for Arabic. Tree edit distance (TED) has been widely used as a component of natural language processing (NLP) systems that attempt the goal above, with the distance between pairs of dependency trees taken as a measure of the likelihood that one entails the other. Such a technique relies on having accurate linguistic analyses, and obtaining such analyses for Arabic is notoriously difficult. To overcome these problems we have investigated strategies for improving tagging and parsing based on system combination techniques; these strategies lead to substantially better performance than any of the contributing tools. We also describe a semi-automatic technique for creating a first dataset for RTE for Arabic using an extension of the 'headline-lead paragraph' technique, since, again to the best of our knowledge, no such datasets are available. We sketch the difficulties inherent in judgments by volunteer annotators, and describe a regime to ameliorate some of these. The major contribution of this thesis is the introduction of two ways of improving the standard TED: (i) we present a novel approach, extended TED (ETED), which extends the standard TED algorithm for calculating the distance between two trees by allowing operations to apply to subtrees, rather than just to single nodes. This leads to useful improvements over the performance of the standard TED for determining entailment. The key here is that subtrees tend to correspond to single information units; by treating operations on subtrees as less costly than the corresponding set of individual node operations, ETED concentrates on entire information units, which are a more appropriate granularity than individual words for considering entailment relations. (ii) We use the artificial bee colony (ABC) algorithm to automatically estimate the cost of edit operations for single nodes and subtrees and to determine thresholds, since assigning an appropriate cost to each edit operation manually is a tricky task. The current findings are encouraging: these extensions substantially improve F-score and accuracy, achieving a better RTE model than a number of string-based algorithms and the standard TED approaches. The relative performance of the standard techniques on our Arabic test set replicates the results reported for these techniques on English test sets. We have also applied ETED with ABC to the English RTE2 test set, where it again outperforms the standard TED.
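The following sketch computes the standard TED between two toy dependency trees, assuming the third-party zss package (an implementation of the Zhang-Shasha algorithm). The unit costs and the threshold are placeholders; the thesis instead estimates per-operation costs and thresholds with the ABC algorithm and allows edits over whole subtrees (ETED), which this sketch does not attempt.

    from zss import Node, simple_distance  # pip install zss (Zhang-Shasha TED)

    # "the cat sleeps" vs "the black cat sleeps" as toy dependency trees
    # rooted at the verb (structures invented for the example).
    t = Node("sleeps").addkid(Node("cat").addkid(Node("the")))
    h = (Node("sleeps")
         .addkid(Node("cat").addkid(Node("the")).addkid(Node("black"))))

    # Under unit costs the trees differ by one insertion ("black").
    dist = simple_distance(t, h)
    print(dist)

    # A crude entailment decision thresholds the distance; the thesis instead
    # learns costs and thresholds automatically with the ABC algorithm.
    THRESHOLD = 2
    print("entails" if dist <= THRESHOLD else "no entailment")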
5

Mesures de similarité distributionnelle asymétrique pour la détection de l’implication textuelle par généralité / Asymmetric Distributional Similarity Measures to Recognize Textual Entailment by Generality

Pais, Sebastião 06 December 2013
Textual Entailment aims at capturing major semantic inference needs across applications in Natural Language Processing. Since 2005, in the Recognising Textual Entailment (RTE) task, systems have been asked to judge automatically whether the meaning of a portion of text, the Text T, entails the meaning of another text, the Hypothesis H. This thesis focuses on a particular case of entailment: entailment by generality. Arguing that there are various types of entailment, we introduce the paradigm of Textual Entailment by Generality, which can be defined as the entailment from a specific sentence towards a more general sentence; in this setting, the Text T entails the Hypothesis H because H is more general than T. We propose unsupervised, language-independent methods for Recognizing Textual Entailment by Generality. To this end, we present an informative asymmetric measure called the Simplified Asymmetric InfoSimba, which we combine with different asymmetric association measures to recognize this specific case of entailment. In summary, this thesis introduces a new concept of entailment, entailment by generality, and consequently the new task of recognizing entailment by generality, a new direction of research in Natural Language Processing.
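The directional intuition can be illustrated with any asymmetric association measure: unlike a symmetric score, P(y|x) and P(x|y) can differ, and the difference can indicate a direction from specific to general. The sketch below is a generic stand-in built on toy co-occurrence counts, not the Simplified Asymmetric InfoSimba itself.

    from collections import Counter
    from itertools import combinations

    # Toy document collection, each document reduced to a set of terms.
    docs = [
        {"dog", "animal", "barks"},
        {"dog", "animal", "pet"},
        {"cat", "animal", "pet"},
        {"animal", "zoo"},
    ]
    count = Counter()
    joint = Counter()
    for d in docs:
        count.update(d)
        joint.update(frozenset(p) for p in combinations(sorted(d), 2))

    def p_cond(y, x):
        """P(y | x) estimated from document co-occurrence counts."""
        return joint[frozenset((x, y))] / count[x] if count[x] else 0.0

    # "dog" almost always co-occurs with "animal", but not vice versa:
    # asymmetric evidence that "animal" is the more general term.
    print(p_cond("animal", "dog"))  # high
    print(p_cond("dog", "animal"))  # lower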
6

Textual Inference for Machine Comprehension / Inférence textuelle pour la compréhension automatique

Gleize, Martin 07 January 2016
With the ever-growing mass of published text, natural language understanding stands as one of the most sought-after goals of artificial intelligence. In natural language, not every fact expressed in the text is explicit: human readers naturally infer what is missing through various intuitive linguistic skills, common sense or domain-specific knowledge, and life experience. Natural Language Processing (NLP) systems do not have these capabilities. Unable to draw inferences to fill the gaps in the text, they cannot truly understand it. This dissertation focuses on this problem and presents our work on the automatic resolution of textual inferences in the context of machine reading. A textual inference is simply defined as a relation between two fragments of text: a human reading the first can reasonably infer that the second is true. Many different NLP tasks more or less directly evaluate systems on their ability to recognize textual inference. Within this multiplicity of evaluation frameworks, inferences themselves are not one and the same and present a wide variety of types. We reflect on inferences for NLP from a theoretical standpoint and present two contributions addressing these levels of diversity: an abstract contextualized inference task encompassing most NLP inference-related tasks, and a novel hierarchical taxonomy of textual inferences based on their difficulty. Automatically recognizing textual inference currently almost always involves a machine learning model, trained to use various linguistic features on a labeled dataset of textual inference samples. However, data specific to complex inference phenomena is not currently abundant enough for systems to directly learn world knowledge and commonsense reasoning. Instead, systems focus on learning how to use the syntactic structure of sentences to align the words of two semantically related sentences. To extend what systems know of the world, they include external background knowledge, often improving their results; but this knowledge is usually added on top of other features, and rarely well integrated into sentence structure. The main contributions of this thesis address this concern, with the aim of solving complex natural language understanding tasks. Starting from the hypothesis that a simpler lexicon should make it easier to compare the meaning of two sentences, we present a passage retrieval method using structured lexical expansion backed by a simplifying dictionary. This simplification hypothesis is tested again in a contribution on textual entailment: syntactic paraphrases are extracted from the same dictionary and repeatedly applied to the first sentence to turn it into the second. We then present a kernel-based machine learning method for recognizing sentence rewritings, with a notion of types able to encode lexical-semantic knowledge. This approach is effective on three tasks: paraphrase identification, textual entailment, and question answering. In our last contribution we address its lack of scalability while keeping most of its strengths. Reading comprehension tests are used for evaluation: these multiple-choice questions on short texts constitute the most practical way to assess textual inference within a complete context. Our system is founded on an efficient tree edit algorithm, and the features extracted from edit sequences are used to build two classifiers for the validation and invalidation of answer candidates. This approach reached second place in the "Entrance Exams" challenge at CLEF 2015.
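The simplification hypothesis of the first contribution, mapping both query and passages into a simpler vocabulary before comparing them, can be sketched as follows. The dictionary entries and the scoring function are toy assumptions of our own, not the thesis's structured lexical expansion.

    import re

    SIMPLIFY = {  # hypothetical simplifying-dictionary entries
        "automobile": "car", "purchase": "buy", "residence": "home",
        "physician": "doctor", "commence": "start",
    }

    def simplify(text):
        """Map each word to its simpler equivalent when one is known."""
        return {SIMPLIFY.get(w, w) for w in re.findall(r"\w+", text.lower())}

    def score(query, passage):
        q, p = simplify(query), simplify(passage)
        return len(q & p) / len(q) if q else 0.0

    passages = [
        "She decided to purchase a new automobile.",
        "The physician started the examination.",
    ]
    query = "Did she buy a car?"
    best = max(passages, key=lambda p: score(query, p))
    # The first passage wins: "purchase"/"automobile" match "buy"/"car"
    # only after both sides are mapped into the simpler vocabulary.
    print(best)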
7

Modelo de reconhecimento de vinculação textual baseado em regras linguísticas e informações morfossintáticas voltado para ambientes virtuais de ensino e aprendizagem / A model for recognizing textual entailment based on linguistic rules and morphosyntactic information for virtual teaching and learning environments

Flores, Evandro Metz January 2014
The fast evolution of information and communication technologies has enabled the development of teaching and learning modalities, such as distance education, that reach people previously unable to attend higher education. An important aspect of these modalities is the extensive use of digital mediation resources, which can generate a volume of data so large that the teachers involved in the interaction often cannot make productive use of it manually. This context creates the need, and the opportunity, for tools that automate part of this work. One such possibility is verifying the correctness of textual answers, where the goal is to identify entailment between textual samples, for example between different textual answers to the same question. Although they achieve good results, the techniques currently applied to this problem have shortcomings that reduce their accuracy or suitability in several contexts: few approaches can still recognize the entailment when the verbal inflection changes, and others cannot identify important information or where in the sentence that information is found. Moreover, few works are adapted to Portuguese. This work proposes a model for recognizing textual entailment based on linguistic rules and morphosyntactic information, aimed at virtual teaching and learning environments, which seeks to overcome these problems through a new approach combining syntactic analysis, morphology, linguistic rules, detection of voice inflection (active versus passive), negation handling, and the use of synonyms. The work also presents a prototype developed to evaluate the proposed model. The results obtained so far are promising, allowing the entailment between different textual samples to be identified with relevant accuracy and flexibility.
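As a hedged illustration of rule-based entailment checks of the kind described, the sketch below applies a negation-mismatch rule and a crude passive-to-active normalization before measuring content-word overlap. The patterns, the threshold, and the English examples are invented; the thesis works on Portuguese with full morphosyntactic analysis, which this sketch does not attempt.

    import re

    NEGATIONS = {"not", "never", "no"}

    def tokens(s):
        return re.findall(r"\w+", s.lower())

    def normalize_voice(toks):
        # "X was <verb> by Y" -> "Y <verb> X" (crude passive-to-active rule).
        m = re.match(r"(.+) was (\w+) by (.+)", " ".join(toks))
        return tokens(f"{m.group(3)} {m.group(2)} {m.group(1)}") if m else toks

    def linked(answer, reference):
        a = normalize_voice(tokens(answer))
        r = normalize_voice(tokens(reference))
        if (set(a) & NEGATIONS) != (set(r) & NEGATIONS):
            return False  # one statement is negated, the other is not
        content = lambda ts: {t for t in ts if t not in NEGATIONS and len(t) > 2}
        overlap = len(content(a) & content(r)) / max(len(content(r)), 1)
        return overlap >= 0.6  # invented threshold

    print(linked("The exam was graded by the teacher.",
                 "the teacher graded the exam"))   # True: voices normalized
    print(linked("The teacher did not grade the exam.",
                 "the teacher graded the exam"))   # False: negation mismatch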
