1.
Structured representation of images for language generation and image retrieval. Elliott, Desmond. January 2015.
A photograph typically depicts an aspect of the real world, such as an outdoor landscape, a portrait, or an event. The task of creating abstract digital representations of images has received a great deal of attention in the computer vision literature because it is rarely useful to work directly with the raw pixel data. The challenge of working with raw pixel data is that small changes in lighting can result in different digital images, which is not typically useful for downstream tasks such as object detection. One approach to representing an image is automatically extracting and quantising visual features to create a bag-of-terms vector. The bag-of-terms vector helps overcome the problems with raw pixel data, but this unstructured representation discards potentially useful information about the spatial and semantic relationships between the parts of the image.

The central argument of this thesis is that capturing and encoding the relationships between parts of an image will improve the performance of extrinsic tasks, such as image description or search. We explore this claim in the restricted domain of images representing events, such as riding a bicycle or using a computer.

The first major contribution of this thesis is the Visual Dependency Representation: a novel structured representation that captures the prominent region–region relationships in an image. The key idea is that images depicting the same events are likely to have similar spatial relationships between the regions contributing to the event. This representation is inspired by dependency syntax for natural language, which directly captures the relationships between the words in a sentence. We also contribute a data set of images annotated with multiple human-written descriptions, labelled image regions, and gold-standard Visual Dependency Representations, and explain how the gold-standard representations can be constructed by trained human annotators.
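The region–region spatial relationships at the heart of the Visual Dependency Representation can be illustrated with a minimal sketch. The (x, y, w, h) bounding-box format and the small relation inventory below are illustrative assumptions, not the thesis's actual annotation scheme:

```python
def spatial_relation(parent, child):
    """Label the spatial relation from a child region to its parent.

    Regions are (x, y, w, h) bounding boxes; the relation inventory
    here ("above", "below", "on", "beside") is a simplified stand-in
    for the labels a Visual Dependency Representation would use.
    """
    px, py, pw, ph = parent
    cx, cy, cw, ch = child
    # Horizontal overlap between the two boxes
    overlap = max(0, min(px + pw, cx + cw) - max(px, cx))
    if overlap / min(pw, cw) > 0.5:
        if cy + ch <= py:
            return "above"
        if cy >= py + ph:
            return "below"
        return "on"
    return "beside"

# Hypothetical regions from an annotated "riding a bicycle" image
person = (50, 20, 40, 80)    # x, y, width, height
bicycle = (45, 90, 60, 40)
print(spatial_relation(bicycle, person))  # prints "on": the person rests on the bicycle
```

Images depicting the same event would then tend to share the same relation labels between the participating regions, which is what makes the representation predictable and comparable.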
The second major contribution of this thesis is an approach to automatically predicting Visual Dependency Representations using a graph-based statistical dependency parser. A dependency parser is typically used in Natural Language Processing to automatically predict the dependency structure of a sentence. In this thesis we use a dependency parser to predict the Visual Dependency Representation of an image because we are working with a discrete image representation – that of image regions. Our approach can exploit features from the region annotations and the description to predict the relationships between objects in an image. In a series of experiments using gold-standard region annotations, we report significant improvements in labelled and unlabelled directed attachment accuracy over a baseline that assumes there are no relationships between objects in an image.

Finally, we find significant improvements in two extrinsic tasks when we represent images as Visual Dependency Representations predicted from gold-standard region annotations. In an image description task, we show significant improvements in automatic evaluation measures and human judgements compared to state-of-the-art models that use either external text corpora or region proximity to guide the generation process. In the query-by-example image retrieval task, we show a significant improvement in Mean Average Precision and the precision of the top 10 images compared to a bag-of-terms approach.

We also perform a correlation analysis of human judgements against automatic evaluation measures for the image description task. The automatic measures are standard measures adopted from the machine translation and summarization literature. The main finding of the analysis is that unigram BLEU is less correlated with human judgements than Smoothed BLEU, Meteor, or skip-bigram ROUGE.
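The retrieval metrics reported above, Mean Average Precision and the precision of the top 10 images, can be sketched as follows; the image ids and relevance sets are made up for illustration:

```python
def average_precision(ranked, relevant):
    """Average precision for one query: mean of precision@k taken at
    each rank k where a relevant image appears."""
    hits, total = 0, 0.0
    for k, image_id in enumerate(ranked, start=1):
        if image_id in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

def precision_at(ranked, relevant, k=10):
    """Fraction of the top-k ranked images that are relevant."""
    return sum(1 for image_id in ranked[:k] if image_id in relevant) / k

# Toy retrieval run: two queries over hypothetical image ids
runs = [
    (["a", "b", "c", "d"], {"a", "c"}),  # AP = (1/1 + 2/3) / 2
    (["d", "b", "a", "c"], {"b"}),       # AP = 1/2
]
print(round(mean_average_precision(runs), 4))  # prints 0.6667
```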
2.
Um analisador sintático neural multilíngue baseado em transições [A multilingual transition-based neural syntactic parser]. Costa, Pablo Botton da. 24 January 2017.
Funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

A dependency parser induces a model capable of extracting the correct dependency tree from an input natural-language sentence. Multilingual techniques are increasingly used in Natural Language Processing (NLP) (BROWN et al., 1995; COHEN; DAS; SMITH, 2011), especially in the dependency parsing task. Intuitively, a multilingual parser can be seen as a vector of parsers, each trained individually on one language. However, this approach is very costly in processing time and resources. As an alternative, many parsing techniques have been developed to address this problem (MCDONALD; PETROV; HALL, 2011; TACKSTROM; MCDONALD; USZKOREIT, 2012; TITOV; HENDERSON, 2007), but all of them depend on word alignment (TACKSTROM; MCDONALD; USZKOREIT, 2012) or word clustering, which increases complexity, since it is difficult to induce alignments between words and syntactic resources (TSARFATY et al., 2013; BOHNET et al., 2013a). A simple solution proposed recently (NIVRE et al., 2016a) uses a universally annotated corpus to reduce the complexity associated with the construction of a multilingual parser. In this context, this work presents a universal model for dependency parsing: the NNParser. Our model is a modification of Chen and Manning (2014) with a greedier and more accurate model for capturing distributional representations (MIKOLOV et al., 2011). The NNParser reached 93.08% UAS on the English Penn Treebank (WSJ) and better results than the state-of-the-art Stack LSTM parser for Portuguese (87.93% vs. 86.2% LAS) and Spanish (86.95% vs. 85.7% LAS) on the Universal Dependencies corpus.

A dependency parser is a model capable of extracting the dependency structure of a natural-language sentence. In Natural Language Processing (NLP), multilingual methods have been used more and more (BROWN et al., 1995; COHEN; DAS; SMITH, 2011), including in the dependency parsing task. Intuitively, a multilingual parser can be seen as a vector of parsers trained individually on each language. However, working with this vector becomes infeasible because of its high demand for resources. As an alternative, several parsing methods have been proposed (MCDONALD; PETROV; HALL, 2011; TACKSTROM; MCDONALD; USZKOREIT, 2012; TITOV; HENDERSON, 2007), but all depend on word alignment (TACKSTROM; MCDONALD; USZKOREIT, 2012) or on clustering techniques, which also increases the complexity of the model (TSARFATY et al., 2013; BOHNET et al., 2013a). A simple solution emerged recently with the construction of universal resources (NIVRE et al., 2016a). These universal resources have the potential to reduce the complexity of building a multilingual model, since no mapping between the different annotation schemes of the languages is needed. Along these lines, this work presents a model for universal dependency parsing: the NNParser. The model is a modification of the proposal of Chen and Manning (2014) with a greedier and more accurate model for capturing distributed representations (MIKOLOV et al., 2011). In the experiments presented here, the NNParser reached 93.08% UAS for English on the Penn Treebank corpus and better results than the state of the art, the Stack LSTM, for Portuguese (87.93% vs. 86.2% LAS) and Spanish (86.95% vs. 85.7% LAS) on the UD 1.2 corpus.
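The UAS and LAS figures quoted above are attachment scores. A minimal sketch of their computation, using a toy head-indexed tree format rather than real CoNLL-U input:

```python
def attachment_scores(gold, predicted):
    """Unlabelled and labelled attachment scores (UAS, LAS).

    Each tree is a list of (head_index, relation_label) pairs, one per
    token, with head 0 denoting the root; this toy format stands in
    for a real CoNLL-U parse.
    """
    assert len(gold) == len(predicted)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, predicted))  # head correct
    las = sum(g == p for g, p in zip(gold, predicted))        # head and label correct
    return uas / n, las / n

# "She reads books": token heads (0 = root) with relation labels
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "nmod")]  # right head, wrong label
uas, las = attachment_scores(gold, pred)
print(uas, las)  # UAS 1.0 (all heads right), LAS 2/3 (one label wrong)
```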
3.
DEEP LEARNING BASED METHODS FOR AUTOMATIC EXTRACTION OF SYNTACTIC PATTERNS AND THEIR APPLICATION FOR KNOWLEDGE DISCOVERY. Mdahsanul Kabir (16501281). 03 January 2024.
Semantic pairs, which consist of related entities or concepts, serve as the foundation for comprehending the meaning of language in both written and spoken forms. These pairs make it possible to grasp the nuances of relationships between words, phrases, or ideas, forming the basis for more advanced language tasks like entity recognition, sentiment analysis, machine translation, and question answering. They allow us to infer causality, identify hierarchies, and connect ideas within a text, ultimately enhancing the depth and accuracy of automated language processing.

Nevertheless, extracting semantic pairs from sentences poses a significant challenge, which motivates the use of syntactic dependency patterns (SDPs). Fortunately, semantic relationships adhere to distinct SDPs when connecting pairs of entities. Recognizing this fact underscores the importance of extracting these SDPs, particularly for specific semantic relationships like hyponym-hypernym, meronym-holonym, and cause-effect associations. The automated extraction of such SDPs carries substantial advantages for various downstream applications, including entity extraction, ontology development, and question answering. Unfortunately, this pivotal facet of pattern extraction has remained relatively overlooked by researchers in natural language processing (NLP) and information retrieval.

To address this gap, I introduce ASPER, an attention-based supervised deep learning model designed to extract SDPs that denote semantic relationships between entities within a given sentential context. I rigorously evaluate the performance of ASPER across three distinct semantic relations: hyponym-hypernym, cause-effect, and meronym-holonym, utilizing six datasets. My experimental findings demonstrate ASPER's ability to automatically identify an array of SDPs that mirror the presence of these semantic relationships within sentences, outperforming existing pattern extraction methods by a substantial margin.

Second, I use the SDPs to extract semantic pairs from sentences, choosing to extract cause-effect entities from medical literature. This task is instrumental in compiling various causality relationships, such as those between diseases and symptoms, medications and side effects, and genes and diseases. Existing solutions excel in sentences where cause and effect phrases are straightforward, such as named entities, single-word nouns, or short noun phrases. However, in the complex landscape of medical literature, cause and effect expressions often extend over several words, stumping existing methods and resulting in incomplete extractions that provide low-quality, non-informative, and at times conflicting information. To overcome this challenge, I introduce PatternCausality, an innovative unsupervised method for extracting cause and effect phrases, tailored explicitly for medical literature. PatternCausality employs a set of cause-effect dependency patterns as templates to identify the key terms within cause and effect phrases. It then utilizes a novel phrase extraction technique to produce comprehensive and meaningful cause and effect expressions from sentences. Experiments conducted on a dataset constructed from PubMed articles reveal that PatternCausality significantly outperforms existing methods, achieving an order-of-magnitude improvement in F-score over the best-performing alternatives. I also develop various PatternCausality variants that utilize diverse phrase extraction methods, all of which surpass existing approaches. PatternCausality and its variants also exhibit notable performance improvements in extracting cause and effect entities in a domain-neutral benchmark dataset, wherein cause and effect entities are confined to single-word nouns or noun phrases of one to two words.

Nevertheless, PatternCausality operates within an unsupervised framework and relies heavily on SDPs, motivating me to explore a supervised approach. Although SDPs play a pivotal role in semantic relation extraction, pattern-based methodologies remain unsupervised, and the multitude of potential patterns within a language can be overwhelming. Furthermore, patterns do not consistently capture the broader context of a sentence, leading to the extraction of false-positive semantic pairs. As an illustration, consider the hyponym-hypernym pattern "the w of u", which correctly extracts a semantic pair from a phrase like "the village of Aasu" but fails for "the moment of impact". The root cause of this limitation lies in the pattern's inability to capture the nuanced meaning of words and phrases in a sentence and their contextual significance. These observations spurred my exploration of a third model, DepBERT, a dependency-aware supervised transformer model. DepBERT's primary contribution lies in introducing the underlying dependency structure of sentences to a language model with the aim of enhancing token classification performance. To achieve this, I first reframe the task of semantic pair extraction as a token classification problem. DepBERT harnesses both the tree-like structure of dependency patterns and the masked language architecture of transformers, marking a significant milestone, as most large language models (LLMs) predominantly focus on semantics and word co-occurrence while neglecting the crucial role of dependency architecture.

In summary, my overarching contributions in this thesis are threefold. First, I validate the significance of the dependency architecture within various components of sentences and publish SDPs that incorporate these dependency relationships. Second, I employ these SDPs in a practical medical domain to extract vital cause-effect pairs from sentences. Finally, I integrate dependency relations into a deep learning model, enhancing the understanding of language and the extraction of valuable semantic associations.
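A syntactic dependency pattern is, at its core, a path through a parsed sentence connecting an entity pair. As a minimal sketch (with a hand-built, hypothetical head map standing in for the output of a real parser), the shortest dependency path between two tokens can be recovered like this:

```python
def dependency_path(heads, a, b):
    """Shortest dependency path between tokens a and b.

    `heads` maps each 1-based token index to its head (0 = root), a toy
    stand-in for a full parse; the path runs up from a to the lowest
    common ancestor, then down to b.
    """
    def ancestors(i):
        chain = [i]
        while heads[i] != 0:
            i = heads[i]
            chain.append(i)
        return chain

    up_a, up_b = ancestors(a), ancestors(b)
    common = next(i for i in up_a if i in up_b)  # lowest common ancestor
    # climb from a to the common ancestor, then descend toward b
    return up_a[:up_a.index(common) + 1] + list(reversed(up_b[:up_b.index(common)]))

# "the village of Aasu": 1 the, 2 village, 3 of, 4 Aasu
heads = {1: 2, 2: 0, 3: 2, 4: 3}   # hypothetical UD-style head indices
print(dependency_path(heads, 4, 2))  # prints [4, 3, 2]: Aasu -> of -> village
```

Labelling the arcs along such a path with their dependency relations yields the kind of pattern ("the w of u") discussed above, and it is exactly this path structure that DepBERT feeds into the transformer alongside the token sequence.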