261

Ontology Learning and Information Extraction for the Semantic Web

Kavalec, Martin January 2006
The work gives an overview of its three main topics: the semantic web, information extraction and ontology learning. A method for identifying relevant information on web pages is described and experimentally tested on the pages of companies offering products and services. The method is based on an analysis of sample web pages and their position in the Open Directory catalogue. Furthermore, a modification of an association rule mining algorithm is proposed and experimentally tested. In addition to identifying a relation between ontology concepts, it suggests a possible name for the relation.
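As a rough illustration of the kind of technique the abstract mentions, the sketch below mines simple association rules between co-occurring terms and keeps high-support, high-confidence pairs as candidate relations between concepts. The toy documents, thresholds and term sets are assumptions for illustration; the thesis's actual modification of the algorithm is not reproduced here.

```python
# Minimal sketch of association-rule mining over term co-occurrences,
# illustrating the general idea of suggesting relations between ontology
# concepts; the thresholds and the toy corpus are assumptions, not the
# thesis's actual data or algorithm.
from itertools import combinations
from collections import Counter

documents = [                       # each document reduced to a set of extracted terms
    {"company", "product", "price", "offer"},
    {"company", "service", "support", "offer"},
    {"product", "price", "discount"},
    {"company", "product", "offer"},
]

MIN_SUPPORT = 0.5                   # fraction of documents containing both terms
MIN_CONFIDENCE = 0.6                # P(consequent | antecedent)

term_counts = Counter(t for doc in documents for t in doc)
pair_counts = Counter(frozenset(p) for doc in documents
                      for p in combinations(sorted(doc), 2))

n_docs = len(documents)
for pair, count in pair_counts.items():
    support = count / n_docs
    if support < MIN_SUPPORT:
        continue
    a, b = tuple(pair)
    for lhs, rhs in ((a, b), (b, a)):
        confidence = count / term_counts[lhs]
        if confidence >= MIN_CONFIDENCE:
            print(f"{lhs} -> {rhs}  support={support:.2f} confidence={confidence:.2f}")
```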
262

Investigação de métodos de desambiguação lexical de sentidos de verbos do português do Brasil / Research of word sense disambiguation methods for verbs in Brazilian Portuguese

Marco Antonio Sobrevilla Cabezudo 28 August 2015
A Desambiguação Lexical de Sentido (DLS) consiste em determinar o sentido mais apropriado da palavra em um contexto determinado, utilizando-se um repositório de sentidos pré-especificado. Esta tarefa é importante para outras aplicações, por exemplo, a tradução automática. Para o inglês, a DLS tem sido amplamente explorada, utilizando diferentes abordagens e técnicas, contudo, esta tarefa ainda é um desafio para os pesquisadores em semântica. Analisando os resultados dos métodos por classes gramaticais, nota-se que todas as classes não apresentam os mesmos resultados, sendo que os verbos são os que apresentam os piores resultados. Estudos ressaltam que os métodos de DLS usam informações superficiais e os verbos precisam de informação mais profunda para sua desambiguação, como frames sintáticos ou restrições seletivas. Para o português, existem poucos trabalhos nesta área e só recentemente tem-se investigado métodos de uso geral. Além disso, salienta-se que, nos últimos anos, têm sido desenvolvidos recursos lexicais focados nos verbos. Nesse contexto, neste trabalho de mestrado, visou-se investigar métodos de DLS de verbos em textos escritos em português do Brasil. Em particular, foram explorados alguns métodos tradicionais da área e, posteriormente, foi incorporado conhecimento linguístico proveniente da VerbNet.Br. Para subsidiar esta investigação, o córpus CSTNews foi anotado com sentidos de verbos usando a WordNet-Pr como repositório de sentidos. Os resultados obtidos mostraram que os métodos de DLS investigados não conseguiram superar o baseline mais forte e que a incorporação de conhecimento da VerbNet.Br produziu melhorias nos métodos, porém, estas melhorias não foram estatisticamente significantes. Algumas contribuições deste trabalho de mestrado foram um córpus anotado com sentidos de verbos, a criação de uma ferramenta que auxilie a anotação de sentidos, a investigação de métodos de DLS e o uso de informações específicas de verbos (provenientes da VerbNet.Br) na DLS de verbos. / Word Sense Disambiguation (WSD) aims at identifying the most appropriate sense of a word in a given context, using a pre-specified sense repository. This task is important for other applications, such as Machine Translation. For English, WSD has been widely studied using different approaches and techniques; however, it remains a challenge for researchers in semantics. Analyzing the performance of the methods by part of speech, one notes that not all classes obtain the same results, and the worst results are obtained for verbs. Studies highlight that WSD methods use shallow information, whereas verbs need deeper information for their disambiguation, such as syntactic frames or selectional restrictions. For Portuguese, there are few works on WSD, and only recently have general-purpose methods been investigated. In addition, lexical resources focused on verbs have been developed in recent years. In this context, this master's research investigated WSD methods for verbs in texts written in Brazilian Portuguese. In particular, traditional WSD methods were explored and, subsequently, linguistic knowledge from VerbNet.Br was incorporated into them. To support this research, the CSTNews corpus was annotated with verb senses using WordNet-Pr as the sense repository. The results showed that the explored WSD methods did not outperform the strongest baseline, and that incorporating VerbNet.Br knowledge yielded improvements in the methods, although these improvements were not statistically significant.
Some contributions of this work are a corpus annotated with verb senses, a tool to support sense annotation, the investigation of WSD methods for verbs, and the use of verb-specific information (from VerbNet.Br) in the WSD of verbs.
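As one example of the "traditional methods" the abstract refers to, the sketch below implements a simplified Lesk-style disambiguator that picks the sense whose gloss overlaps most with the context. The tiny sense inventory is invented and merely stands in for a real repository such as WordNet-Pr.

```python
# A minimal sketch of a classical (Lesk-style) WSD method: pick the sense
# whose definition shares the most words with the target word's context.
# The tiny sense inventory below is invented for illustration; it is not
# WordNet-Pr or the CSTNews annotation.
SENSES = {
    "bank": {
        "bank.n.01": "financial institution that accepts deposits and lends money",
        "bank.n.02": "sloping land beside a body of water such as a river",
    }
}

def lesk(word: str, context: str) -> str:
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES.get(word, {}).items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(lesk("bank", "they moored the boat on the river bank near the water"))
# -> bank.n.02
```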
263

Sumarização automática de opiniões baseada em aspectos / Automatic aspect-based opinion summarization

Roque Enrique López Condori 24 August 2015
A sumarização de opiniões, também conhecida como sumarização de sentimentos, é a tarefa que consiste em gerar automaticamente sumários para um conjunto de opiniões sobre uma entidade específica. Uma das principais abordagens para gerar sumários de opiniões é a sumarização baseada em aspectos. A sumarização baseada em aspectos produz sumários das opiniões para os principais aspectos de uma entidade. As entidades normalmente referem-se a produtos, serviços, organizações, entre outros, e os aspectos são atributos ou componentes das entidades. Nos últimos anos, essa tarefa tem ganhado muita relevância diante da grande quantidade de informação online disponível na web e do interesse cada vez maior em conhecer a avaliação dos usuários sobre produtos, empresas, pessoas e outros. Infelizmente, para o Português do Brasil, pouco se tem pesquisado nessa área. Nesse cenário, neste projeto de mestrado, investigou-se o desenvolvimento de alguns métodos de sumarização de opiniões com base em aspectos. Em particular, foram implementados quatro métodos clássicos da literatura, extrativos e abstrativos. Esses métodos foram analisados em cada uma de suas fases e, como consequência dessa análise, produziram-se duas propostas para gerar sumários de opiniões. Essas duas propostas tentam utilizar as principais vantagens dos métodos clássicos para gerar melhores sumários. A fim de analisar o desempenho dos métodos implementados, foram realizados experimentos em função de três medidas de avaliação tradicionais da área: informatividade, qualidade linguística e utilidade do sumário. Os resultados obtidos mostram que os métodos propostos neste trabalho são competitivos com os métodos da literatura e, em vários casos, os superam. / Opinion summarization, also known as sentiment summarization, is the task of automatically generating summaries for a set of opinions about a specific entity. One of the main approaches to generating opinion summaries is aspect-based opinion summarization, which generates summaries of the opinions about the main aspects of an entity. Entities may be products, services, organizations or others, and aspects are their attributes or components. In recent years, this task has gained much importance because of the large amount of opinionated information available on the web and the increasing interest in learning users' evaluations of products, companies, people and others. Unfortunately, for Brazilian Portuguese, little research has been done in this area. In this scenario, this master's project investigated the development of some aspect-based opinion summarization methods. In particular, four classical methods from the literature, both extractive and abstractive, were implemented. These methods were analyzed in each of their phases and, as a result of this analysis, two proposals for generating opinion summaries were produced. Both proposals attempt to exploit the main advantages of the classical methods to generate better summaries. In order to analyze the performance of the implemented methods, experiments were carried out with three traditional evaluation measures of the area: informativeness, linguistic quality and usefulness of the summary. The results show that the methods proposed in this work are competitive with the classical methods and, in many cases, outperform them.
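A minimal extractive sketch of aspect-based summarization, in the spirit of the classical methods discussed above: sentences are grouped by aspect keywords, polarity is tallied with a tiny lexicon, and one representative sentence is kept per aspect. The aspect keywords, sentiment lexicon and reviews are invented for illustration.

```python
# Minimal sketch of extractive aspect-based opinion summarization: group
# review sentences by aspect keywords, tally polarity with a tiny sentiment
# lexicon, and keep one representative sentence per aspect. Keywords,
# lexicon and reviews are invented for illustration only.
from collections import defaultdict

ASPECTS = {"battery": ["battery", "charge"], "screen": ["screen", "display"]}
POSITIVE, NEGATIVE = {"great", "good", "sharp"}, {"bad", "poor", "dies"}

reviews = [
    "The battery dies after two hours",
    "Great screen with a sharp display",
    "Battery life is good when dimmed",
]

summary = defaultdict(lambda: {"pos": 0, "neg": 0, "example": None})
for sentence in reviews:
    words = set(sentence.lower().split())
    for aspect, keywords in ASPECTS.items():
        if words & set(keywords):
            entry = summary[aspect]
            entry["pos"] += len(words & POSITIVE)
            entry["neg"] += len(words & NEGATIVE)
            entry["example"] = entry["example"] or sentence

for aspect, entry in summary.items():
    print(f"{aspect}: +{entry['pos']} / -{entry['neg']}  e.g. \"{entry['example']}\"")
```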
264

Método semi-automático de construção de ontologias parciais de domínio com base em textos. / Semi-automatic method for the construction of partial domain ontologies based on texts.

Luiz Carlos da Cruz Carvalheira 31 August 2007
Os recentes desenvolvimentos relacionados à gestão do conhecimento, à web semântica e à troca de informações eletrônicas por meio de agentes têm suscitado a necessidade de ontologias para descrever de modo formal conceituações compartilhadas a respeito dos mais variados domínios. Para que computadores e pessoas possam trabalhar em cooperação é necessário que as informações por eles utilizadas tenham significados bem definidos e compartilhados. Ontologias são instrumentos viabilizadores dessa cooperação. Entretanto, a construção de ontologias envolve um processo complexo e longo de aquisição de conhecimento, o que tem dificultado a utilização desse tipo de solução em mais larga escala. Este trabalho apresenta um método de criação semi-automática de ontologias a partir do uso de textos de um domínio qualquer para a extração dos conceitos e relações presentes nesses textos. Baseando-se na comparação da freqüência relativa dos termos extraídos com os escritos típicos da língua e na extração de padrões lingüísticos específicos, este método identifica termos candidatos a conceitos e relações existentes entre eles, apresenta-os a um ontologista para validação e, ao final, disponibiliza a ontologia ratificada para publicação e uso especificando-a na linguagem OWL. / The recent developments related to knowledge management, the semantic web and the exchange of electronic information through the use of agents have increased the need for ontologies to describe, in a formal way, shared understanding of a given domain. For computers and people to work in cooperation, it is necessary that the information they use has well-defined and shared meanings. Ontologies are enablers of that cooperation. However, ontology construction remains a very complex and costly process, which has hindered its use on a wider scale. This work presents a method for the semi-automatic construction of ontologies that uses texts of any domain for the extraction of concepts and relations. By comparing the relative frequency of terms in the texts with their frequency in typical writings of the language and by extracting specific linguistic patterns, the method identifies candidate concepts and the relations between them, presents them to an ontologist for validation and, finally, specifies the ratified ontology in OWL for publication and use by other applications.
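A small sketch of the frequency-comparison idea described above: terms whose relative frequency in the domain texts greatly exceeds their frequency in a reference corpus are kept as candidate concepts, and a simple lexical pattern suggests relations. The counts, threshold and pattern are illustrative assumptions, not the method's actual parameters.

```python
# Minimal sketch of picking candidate concept terms by comparing their
# relative frequency in domain texts against a reference corpus (terms far
# more frequent in the domain are likely domain concepts), in the spirit of
# the method described above. Counts, threshold and the lexical pattern
# are illustrative assumptions.
import re

domain_counts = {"ontologia": 40, "agente": 25, "casa": 3, "conceito": 30}
domain_total = 5_000
reference_counts = {"ontologia": 2, "agente": 15, "casa": 300, "conceito": 10}
reference_total = 1_000_000

THRESHOLD = 20.0          # relative-frequency ratio above which a term is kept
candidates = []
for term, count in domain_counts.items():
    domain_rf = count / domain_total
    reference_rf = (reference_counts.get(term, 0) + 1) / reference_total  # +1 smoothing
    if domain_rf / reference_rf >= THRESHOLD:
        candidates.append(term)
print("candidate concepts:", candidates)

# A very simple lexical pattern for candidate relations ("X como Y" ~ "X such as Y").
text = "linguagens de ontologia como OWL são usadas na web semântica"
for hyper, hypo in re.findall(r"(\w+) como (\w+)", text):
    print(f"possible relation: {hypo} is-a {hyper}")
```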
265

A verb learning model driven by syntactic constructions / Um modelo de aquisição de verbos guiado por construções sintáticas

Machado, Mario Lúcio Mesquita January 2008
Desde a segunda metade do último século, as teorias cognitivas têm trazido algumas visões interessantes em relação ao aprendizado de linguagem. A aplicação destas teorias em modelos computacionais tem duplo benefício: por um lado, implementações computacionais podem ser usadas como uma forma de validação destas teorias; por outro lado, modelos computacionais podem alcançar uma performance melhorada a partir da adoção de estratégias de aprendizado cognitivamente plausíveis. Estruturas sintáticas são ditas fornecer uma pista importante para a aquisição do significado de verbos. Ainda, para um subconjunto particular de verbos muito frequentes e gerais - os assim-chamados light verbs - há uma forte ligação entre as estruturas sintáticas nas quais eles aparecem e seus significados. Neste trabalho, empregamos um modelo computacional para investigar estas propostas, em particular, considerando a tarefa de aquisição como um mapeamento entre um verbo desconhecido e referentes prototípicos para eventos verbais, com base na estrutura sintática na qual o verbo aparece. Os experimentos conduzidos ressaltaram alguns requerimentos para um aprendizado bem-sucedido, em termos de níveis de informação disponível para o aprendiz e da estratégia de aprendizado adotada. / Cognitive theories have, since the second half of the last century, brought some interesting views about language learning. Applying these theories in computational models has a double benefit: on the one hand, computational implementations can be used as a form of validation of these theories; on the other hand, computational models can achieve improved performance by adopting cognitively plausible learning strategies. Syntactic structures are said to provide an important cue for the acquisition of verb meaning. Moreover, for a particular subset of very frequent and general verbs – the so-called light verbs – there is a strong link between the syntactic structures in which they appear and their meanings. In this work, we used a computational model to further investigate these proposals, in particular looking at the acquisition task as a mapping between an unknown verb and prototypical referents for verbal events, on the basis of the syntactic structure in which the verb appears. The experiments conducted highlighted some requirements for successful learning, both in terms of the levels of information available to the learner and the learning strategy adopted.
266

Toponym resolution in text

Leidner, Jochen Lothar January 2007
Background. In the area of Geographic Information Systems (GIS), a shared discipline between informatics and geography, the term geo-parsing is used to describe the process of identifying names in text, which in computational linguistics is known as named entity recognition and classification (NERC). The term geo-coding is used for the task of mapping from implicitly geo-referenced datasets (such as structured address records) to explicitly geo-referenced representations (e.g., using latitude and longitude). However, present-day GIS systems provide no automatic geo-coding functionality for unstructured text. In Information Extraction (IE), processing of named entities in text has traditionally been seen as a two-step process comprising a flat text span recognition sub-task and an atomic classification sub-task; relating the text span to a model of the world has been ignored by evaluations such as MUC or ACE (Chinchor (1998); U.S. NIST (2003)). However, spatial and temporal expressions refer to events in space-time, and the grounding of events is a precondition for accurate reasoning. Thus, automatic grounding can improve many applications such as automatic map drawing (e.g. for choosing a focus) and question answering (e.g. for questions like How far is London from Edinburgh?, given a story in which both occur and can be resolved). Whereas temporal grounding has received considerable attention in the recent past (Mani and Wilson (2000); Setzer (2001)), robust spatial grounding has long been neglected. Concentrating on geographic names for populated places, I define the task of automatic Toponym Resolution (TR) as computing the mapping from occurrences of names for places as found in a text to a representation of the extensional semantics of the location referred to (its referent), such as a geographic latitude/longitude footprint. The task of mapping from names to locations is hard due to insufficient and noisy databases, and a large degree of ambiguity: common words need to be distinguished from proper names (geo/non-geo ambiguity), and the mapping between names and locations is ambiguous (London can refer to the capital of the UK or to London, Ontario, Canada, or to about forty other Londons on earth). In addition, names of places and the boundaries referred to change over time, and databases are incomplete. Objective. I investigate how referentially ambiguous spatial named entities can be grounded, or resolved, with respect to an extensional coordinate model robustly on open-domain news text. I begin by comparing the few algorithms proposed in the literature, and, comparing semiformal, reconstructed descriptions of them, I factor out a shared repertoire of linguistic heuristics (e.g. rules, patterns) and extra-linguistic knowledge sources (e.g. population sizes). I then investigate how to combine these sources of evidence to obtain a superior method. I also investigate the noise effect introduced by the named entity tagging step that toponym resolution relies on in a sequential system pipeline architecture. Scope. In this thesis, I investigate a present-day snapshot of terrestrial geography as represented in the gazetteer defined and, accordingly, a collection of present-day news text. I limit the investigation to populated places; geo-coding of artifact names (e.g. airports or bridges), compositional geographic descriptions (e.g. 40 miles SW of London, near Berlin), for instance, is not attempted. 
Historic change is a major factor affecting gazetteer construction and ultimately toponym resolution. However, this is beyond the scope of this thesis.
Method. While a small number of previous attempts have been made to solve the toponym resolution problem, these were either not evaluated, or evaluation was done by manual inspection of system output instead of curating a reusable reference corpus. Since the relevant literature is scattered across several disciplines (GIS, digital libraries, information retrieval, natural language processing) and descriptions of algorithms are mostly given in informal prose, I attempt to systematically describe them and aim at a reconstruction in a uniform, semi-formal pseudo-code notation for easier re-implementation. A systematic comparison leads to an inventory of heuristics and other sources of evidence. In order to carry out a comparative evaluation procedure, an evaluation resource is required. Unfortunately, to date no gold standard has been curated in the research community. To this end, a reference gazetteer and an associated novel reference corpus with human-labeled referent annotation are created. These are subsequently used to benchmark a selection of the reconstructed algorithms and a novel re-combination of the heuristics catalogued in the inventory. I then compare the performance of the same TR algorithms under three different conditions, namely applying them to (i) the output of human named entity annotation, (ii) automatic annotation using an existing Maximum Entropy sequence tagging model, and (iii) a naïve toponym lookup procedure in a gazetteer.
Evaluation. The algorithms implemented in this thesis are evaluated in an intrinsic or component evaluation. To this end, we define a task-specific matching criterion to be used with traditional Precision (P) and Recall (R) evaluation metrics. This matching criterion is lenient with respect to numerical gazetteer imprecision in situations where one toponym instance is marked up with different gazetteer entries in the gold standard and the test set, respectively, but where these refer to the same candidate referent, caused by multiple near-duplicate entries in the reference gazetteer.
Main Contributions. The major contributions of this thesis are as follows:
• A new reference corpus in which instances of location named entities have been manually annotated with spatial grounding information for populated places, and an associated reference gazetteer, from which the assigned candidate referents are chosen. This reference gazetteer provides numerical latitude/longitude coordinates (such as 51°32′ North, 0°5′ West) as well as hierarchical path descriptions (such as London > UK) with respect to a worldwide-coverage geographic taxonomy constructed by combining several large, but noisy gazetteers. This corpus contains news stories and comprises two sub-corpora, a subset of the REUTERS RCV1 news corpus used for the CoNLL shared task (Tjong Kim Sang and De Meulder (2003)), and a subset of the Fourth Message Understanding Contest (MUC-4; Chinchor (1995)), both available pre-annotated with gold-standard annotation. This corpus will be made available as a reference evaluation resource;
• a new method and implemented system to resolve toponyms that is capable of robustly processing unseen text (open-domain online newswire text) and grounding toponym instances in an extensional model using longitude and latitude coordinates and hierarchical path descriptions, using internal (textual) and external (gazetteer) evidence;
• an empirical analysis of the relative utility of various heuristic biases and other sources of evidence with respect to the toponym resolution task when analysing free news genre text;
• a comparison between a replicated method as described in the literature, which functions as a baseline, and a novel algorithm based on minimality heuristics; and
• several exemplary prototypical applications to show how the resulting toponym resolution methods can be used to create visual surrogates for news stories, a geographic exploration tool for news browsing, geographically-aware document retrieval and to answer spatial questions (How far...?) in an open-domain question answering system. These applications only have demonstrative character, as a thorough quantitative, task-based (extrinsic) evaluation of the utility of automatic toponym resolution is beyond the scope of this thesis and left for future work.
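A toy sketch of how two of the evidence sources catalogued above (a population-size bias and a preference for referents whose containing region is supported elsewhere in the document) can be combined to resolve toponyms. The gazetteer entries and the weighting are invented; this is not the thesis's system.

```python
# A small sketch (not the thesis's actual system) of combining two common
# toponym-resolution heuristics: a population-size bias and a preference for
# referents that share a containing region with other toponyms in the same
# document. The toy gazetteer and weights are invented.
GAZETTEER = {
    "London": [
        {"name": "London", "admin": "UK", "population": 8_900_000},
        {"name": "London", "admin": "Ontario, Canada", "population": 400_000},
    ],
    "Cambridge": [
        {"name": "Cambridge", "admin": "UK", "population": 124_000},
        {"name": "Cambridge", "admin": "Massachusetts, USA", "population": 118_000},
    ],
}

def resolve(toponyms, context_weight=1_000_000):
    # First pass: collect admin regions of all candidate readings in the document.
    regions = [c["admin"] for t in toponyms for c in GAZETTEER.get(t, [])]
    resolved = {}
    for t in toponyms:
        best = None
        for cand in GAZETTEER.get(t, []):
            # Population bias plus a bonus for regions supported elsewhere in the text.
            score = cand["population"] + context_weight * (regions.count(cand["admin"]) - 1)
            if best is None or score > best[0]:
                best = (score, cand)
        if best:
            resolved[t] = best[1]
    return resolved

for name, ref in resolve(["London", "Cambridge"]).items():
    print(f"{name} -> {ref['admin']}")   # both resolve to the UK readings
```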
267

Ontology learning from folksonomies.

January 2010
Chen, Wenhao. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 63-70). / Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Ontologies and Folksonomies --- p.1 / Chapter 1.2 --- Motivation --- p.3 / Chapter 1.2.1 --- Semantics in Folksonomies --- p.3 / Chapter 1.2.2 --- Ontologies with basic level concepts --- p.5 / Chapter 1.2.3 --- Context and Context Effect --- p.6 / Chapter 1.3 --- Contributions --- p.6 / Chapter 1.4 --- Structure of the Thesis --- p.8
Chapter 2 --- Background Study --- p.10 / Chapter 2.1 --- Semantic Web --- p.10 / Chapter 2.2 --- Ontology --- p.12 / Chapter 2.3 --- Folksonomy --- p.14 / Chapter 2.4 --- Cognitive Psychology --- p.17 / Chapter 2.4.1 --- Category (Concept) --- p.17 / Chapter 2.4.2 --- Basic Level Categories (Concepts) --- p.17 / Chapter 2.4.3 --- Context and Context Effect --- p.20 / Chapter 2.5 --- F1 Evaluation Metric --- p.21 / Chapter 2.6 --- State of the Art --- p.23 / Chapter 2.6.1 --- Ontology Learning --- p.23 / Chapter 2.6.2 --- Semantics in Folksonomy --- p.26
Chapter 3 --- Ontology Learning from Folksonomies --- p.28 / Chapter 3.1 --- Generating Ontologies with Basic Level Concepts from Folksonomies --- p.29 / Chapter 3.1.1 --- Modeling Instances and Concepts in Folksonomies --- p.29 / Chapter 3.1.2 --- The Metric of Basic Level Categories (Concepts) --- p.30 / Chapter 3.1.3 --- Basic Level Concepts Detection Algorithm --- p.31 / Chapter 3.1.4 --- Ontology Generation Algorithm --- p.34 / Chapter 3.2 --- Evaluation --- p.35 / Chapter 3.2.1 --- Data Set and Experiment Setup --- p.35 / Chapter 3.2.2 --- Quantitative Analysis --- p.36 / Chapter 3.2.3 --- Qualitative Analysis --- p.39
Chapter 4 --- Context Effect on Ontology Learning from Folksonomies --- p.43 / Chapter 4.1 --- Context-aware Basic Level Concepts Detection --- p.44 / Chapter 4.1.1 --- Modeling Context in Folksonomies --- p.44 / Chapter 4.1.2 --- Context Effect on Category Utility --- p.45 / Chapter 4.1.3 --- Context-aware Basic Level Concepts Detection Algorithm --- p.46 / Chapter 4.2 --- Evaluation --- p.47 / Chapter 4.2.1 --- Data Set and Experiment Setup --- p.47 / Chapter 4.2.2 --- Result Analysis --- p.49
Chapter 5 --- Potential Applications --- p.54 / Chapter 5.1 --- Categorization of Web Resources --- p.54 / Chapter 5.2 --- Applications of Ontologies --- p.55
Chapter 6 --- Conclusion and Future Work --- p.57 / Chapter 6.1 --- Conclusion --- p.57 / Chapter 6.2 --- Future Work --- p.59
Bibliography --- p.63
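The contents above revolve around detecting basic level concepts through a category-utility-style metric. The sketch below shows one standard formulation of category utility over tagged resources, under the assumption that the thesis uses a related measure; the toy folksonomy data are invented.

```python
# A small illustration of the category-utility idea behind basic-level
# concept detection in folksonomies: categories whose members share many
# predictable tags score highest. The formulation (Gluck & Corter-style
# category utility) and the toy tagged resources are assumptions, not the
# thesis's exact metric or data.
from collections import Counter

# Each resource (e.g., a bookmarked page) is described by its user tags.
resources = {
    "r1": {"animal", "dog", "pet"},
    "r2": {"animal", "dog", "puppy"},
    "r3": {"animal", "cat", "pet"},
    "r4": {"animal", "bird", "wild"},
}

def category_utility(category, all_resources):
    """CU(C) = P(C) * sum over tags f of [ P(f|C)^2 - P(f)^2 ]."""
    n, n_c = len(all_resources), len(category)
    p_c = n_c / n
    global_counts = Counter(t for tags in all_resources.values() for t in tags)
    member_counts = Counter(t for r in category for t in all_resources[r])
    return p_c * sum((member_counts[t] / n_c) ** 2 - (global_counts[t] / n) ** 2
                     for t in global_counts)

# Compare a "dog" grouping against the broader "animal" grouping.
for label, members in [("dog", ["r1", "r2"]), ("animal", ["r1", "r2", "r3", "r4"])]:
    print(label, round(category_utility(members, resources), 3))
# "dog" scores higher (0.375 vs 0.0), i.e. it behaves like a basic-level concept.
```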
268

Anotação automática de papéis semânticos de textos jornalísticos e de opinião sobre árvores sintáticas não revisadas / Automatic semantic role labeling on non-revised syntactic trees of journalistic and opinion texts

Hartmann, Nathan Siegle 25 June 2015
Contexto: A Anotação de Papéis Semânticos (APS) é uma tarefa da área de Processamento de Línguas Naturais (PLN) que permite detectar os eventos descritos nas sentenças e os participantes destes eventos (Palmer et al., 2010). A APS responde perguntas como Quem?, Quando?, Onde?, O quê?, e Por quê?, dentre outras e, sendo assim, é importante para várias aplicações de PLN. Para anotar automaticamente um texto com papéis semânticos, a maioria dos sistemas atuais emprega técnicas de Aprendizagem de Máquina (AM). Porém, alguns papéis semânticos são previsíveis e, portanto, não necessitam ser tratados via AM. Além disso, a grande maioria das pesquisas desenvolvidas em APS tem dado foco ao inglês, considerando as particularidades gramaticais e semânticas dessa língua, o que impede que essas ferramentas e resultados sejam diretamente transportados para outras línguas. Revisão da Literatura: Para o português do Brasil, há três trabalhos finalizados recentemente que lidam com textos jornalísticos, porém com performance inferior ao estado da arte para o inglês. O primeiro (Alva- Manchego, 2013) obteve 79,6 de F1 na APS sobre o córpus PropBank.Br; o segundo (Fonseca, 2013), sem fazer uso de um treebank para treinamento, obteve 68,0 de F1 sobre o córpus PropBank.Br; o terceiro (Sequeira et al., 2012) realizou anotação apenas dos papéis Arg0 (sujeito prototípico) e Arg1 (paciente prototípico) no córpus CETEMPúblico, com performance de 31,3 pontos de F1 para o primeiro papel e de 19,0 de F1 para o segundo. Objetivos: O objetivo desse trabalho de mestrado é avançar o estado da arte na APS do português brasileiro no gênero jornalístico, avaliando o desempenho de um sistema de APS treinado com árvores sintáticas geradas por um parser automático (Bick, 2000), sem revisão humana, usando uma amostragem do córpus PLN-Br. Como objetivo adicional, foi avaliada a robustez da tarefa de APS frente a gêneros diferentes, testando o sistema de APS, treinado no gênero jornalístico, em uma amostra de revisões de produtos da web. Esse gênero não foi explorado até então na área de APS e poucas de suas características foram formalizadas. Resultados: Foi compilado o primeiro córpus de opiniões sobre produtos da web, o córpus Buscapé (Hartmann et al., 2014). A diferença de performance entre um sistema treinado sobre árvores revisadas e outro sobre árvores não revisadas ambos no gênero jornalístico foi de 10,48 pontos de F1. A troca de gênero entre as fases de treinamento e teste, em APS, é possível, com perda de performance de 3,78 pontos de F1 (córpus PLN-Br e Buscapé, respectivamente). Foi desenvolvido um sistema de inserção de sujeitos não expressos no texto, com precisão de 87,8% no córpus PLN-Br e de 94,5% no córpus Buscapé. Foi desenvolvido um sistema, baseado em regras, para anotar verbos auxiliares com papéis semânticos modificadores, com confiança de 96,76% no córpus PLN-Br. Conclusões: Foi mostrado que o sistema de Alva-Manchego (2013), baseado em árvores sintáticas, desempenha melhor APS do que o sistema de Fonseca (2013), independente de árvores sintáticas. Foi mostrado que sistemas de APS treinados sobre árvores sintáticas não revisadas desempenham melhor APS sobre árvores não revisadas do que um sistema treinado sobre dados gold-standard. Mostramos que a explicitação de sujeitos não expressos nos textos do Buscapé, um córpus do gênero de opinião de produtos na web, melhora a performance da sua APS. 
Também mostramos que é possível anotar verbos auxiliares com papéis semânticos modificadores, utilizando um sistema baseado em regras, com alta confiança. Por fim, mostramos que o uso do sentido do verbo, como feature de AM, para APS, não melhora a performance dos sistemas treinados sobre o PLN-Br e o Buscapé, por serem córpus pequenos. / Background: Semantic Role Labeling (SRL) is a Natural Language Processing (NLP) task that enables the detection of events described in sentences and of the participants in these events (Palmer et al., 2010). SRL answers questions such as Who?, When?, Where?, What? and Why? (among others), which are important for several NLP applications. In order to automatically annotate a text with semantic roles, most current systems use Machine Learning (ML) techniques. However, some semantic roles are predictable and therefore do not need to be classified through ML. Although SRL is well advanced for English, grammatical and semantic particularities prevent the full reuse of its tools and results in other languages. Related work: For Brazilian Portuguese, three recently concluded studies perform SRL on journalistic texts. The first (Alva-Manchego, 2013) obtained 79.6 F1 on the SRL of the PropBank.Br corpus; the second (Fonseca, 2013), without using a treebank for training, obtained 68.0 F1 on the same corpus; and the third (Sequeira et al., 2012) annotated only the Arg0 (prototypical agent) and Arg1 (prototypical patient) roles on the CETEMPúblico corpus, with a performance of 31.3 F1 for the first semantic role and 19.0 for the second. None of them, however, reached the state of the art for English. Purpose: The goal of this master's dissertation was to advance the state of the art of SRL in Brazilian Portuguese. The training corpus used is from the journalistic genre, as in previous works, but the SRL annotation is performed on non-revised syntactic trees, i.e., trees generated by an automatic parser (Bick, 2000) without human revision, using a sample of the PLN-Br corpus. To evaluate the resulting SRL classifier on another text genre, a sample of product reviews from the web was used. Until now, product reviews were a genre not explored in SRL research, and few of their characteristics have been formalized. Results: The first corpus of web product reviews, the Buscapé corpus (Hartmann et al., 2014), was compiled. The difference in performance between a system trained on revised syntactic trees and another trained on non-revised trees, both from the journalistic genre, was 10.48 F1 points. Changing genres between the training and testing steps in SRL is possible, with a performance loss of 3.78 F1 points (PLN-Br and Buscapé corpora, respectively). A system to insert unexpressed subjects reached 87.8% precision on the PLN-Br corpus and 94.5% precision on the Buscapé corpus. A rule-based system was developed to annotate auxiliary verbs with modifier semantic roles (ArgMs), achieving 96.76% confidence on the PLN-Br corpus. Conclusions: First, we have shown that Alva-Manchego's (2013) SRL system, which is based on syntactic trees, performs better annotation than Fonseca's (2013) system, which does not depend on syntactic trees. Second, an SRL system trained on non-revised syntactic trees performs better on non-revised trees than a system trained on gold-standard data. Third, making unexpressed subjects explicit in the Buscapé texts improves their SRL performance.
Additionally, we show it is possible to annotate auxiliary verbs with modifier semantic roles using a rule-based system with high confidence. Last, we have shown that using the verb sense as an ML feature for SRL does not improve the performance of the systems trained on the PLN-Br and Buscapé corpora, since these corpora are small.
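A toy illustration of the kind of rule-based annotation of auxiliary verbs with modifier roles mentioned in the conclusions: an auxiliary or modal verb immediately preceding a main verb receives an ArgM label. The auxiliary lists, role labels and the POS-tagged sentence are invented; the dissertation's actual rules are not reproduced here.

```python
# A toy illustration (not the dissertation's system) of a rule that assigns
# modifier semantic roles to auxiliary verbs: an auxiliary/modal verb
# immediately preceding a main verb receives an ArgM label. Lists, labels
# and the POS-tagged sentence are invented for illustration.
AUXILIARY_ROLES = {
    "pode": "ArgM-MOD",  "deve": "ArgM-MOD",   # modality
    "foi":  "ArgM-TMP",  "tinha": "ArgM-TMP",  # tense/aspect support
}

def label_auxiliaries(tagged_sentence):
    """tagged_sentence: list of (token, POS) pairs; returns (token, role) pairs."""
    labels = []
    for i, (token, pos) in enumerate(tagged_sentence):
        role = "O"
        next_is_verb = i + 1 < len(tagged_sentence) and tagged_sentence[i + 1][1] == "V"
        if pos == "V" and token.lower() in AUXILIARY_ROLES and next_is_verb:
            role = AUXILIARY_ROLES[token.lower()]
        labels.append((token, role))
    return labels

sentence = [("O", "DET"), ("sistema", "N"), ("pode", "V"),
            ("melhorar", "V"), ("os", "DET"), ("resultados", "N")]
print(label_auxiliaries(sentence))
# [('O', 'O'), ('sistema', 'O'), ('pode', 'ArgM-MOD'), ('melhorar', 'O'), ...]
```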
269

Agrupamento semântico de aspectos para mineração de opinião / Semantic clustering of aspects for opinion mining

Vargas, Francielle Alves 29 November 2017
Com o rápido crescimento do volume de informações opinativas na web, extrair e sintetizar conteúdo subjetivo e relevante da rede é uma tarefa prioritária e que perpassa vários domínios da sociedade: político, social, econômico, etc. A organização semântica desse tipo de conteúdo é uma tarefa importante no contexto atual, pois possibilita um melhor aproveitamento desses dados, além de benefícios diretos tanto para consumidores quanto para organizações privadas e governamentais. A área responsável pela extração, processamento e apresentação de conteúdo subjetivo é a mineração de opinião, também chamada de análise de sentimentos. A mineração de opinião é dividida em níveis de granularidade de análise: o nível do documento, o nível da sentença e o nível de aspectos. Neste trabalho, atuou-se no nível mais fino de granularidade, a mineração de opinião baseada em aspectos, que consiste de três principais tarefas: o reconhecimento e agrupamento de aspectos, a extração de polaridade e a sumarização. Aspectos são propriedades do alvo da opinião e podem ser implícitos e explícitos. Reconhecer e agrupar aspectos são tarefas críticas para mineração de opinião, no entanto, também são desafiadoras. Por exemplo, em textos opinativos, usuários utilizam termos distintos para se referir a uma mesma propriedade do objeto. Portanto, neste trabalho, atuamos no problema de agrupamento de aspectos para mineração de opinião. Para resolução deste problema, optamos por uma abordagem baseada em conhecimento linguístico. Investigou-se os principais fenômenos intrínsecos e extrínsecos em textos opinativos a fim de encontrar padrões linguísticos e insumos acionáveis para proposição de métodos automáticos de agrupamento de aspectos correlatos para mineração de opinião. Nós propomos, implementamos e comparamos seis métodos automáticos baseados em conhecimento linguístico para a tarefa de agrupamento de aspectos explícitos e implícitos. Um método inédito foi proposto para essa tarefa que superou os demais métodos implementados, especialmente o método baseado em léxico de sinônimos (baseline) e o modelo estatístico com base em word embeddings. O método proposto também não é dependente de uma língua ou de um domínio, no entanto, focamos no Português do Brasil e no domínio de produtos da web. / With the growing volume of opinionated information on the web, extracting and synthesizing subjective, relevant content from the web has become a priority task that spans different domains of society: political, social, economic, etc. The semantic organization of this type of content is very important nowadays, since it allows better use of these data and brings direct benefits to customers as well as to private and governmental organizations. The area responsible for extracting, processing and presenting subjective content is opinion mining, also known as sentiment analysis. Opinion mining is divided into granularity levels of analysis: the document, sentence and aspect levels. This research addressed the finest level of granularity, aspect-based opinion mining, which consists of three main tasks: aspect recognition and clustering, polarity extraction, and summarization. Aspects are the properties and parts of the evaluated object, and they may be implicit or explicit. Recognizing and clustering aspects are critical tasks for opinion mining; nonetheless, they are also challenging. For example, in reviews, users use distinct terms to refer to the same property of the object.
Therefore, this work focused on the aspect clustering task. To solve this problem, an approach based on linguistic knowledge was chosen. The main intrinsic and extrinsic phenomena in reviews were investigated in order to find linguistic patterns and actionable inputs that made it possible to propose automatic methods for clustering related aspects in opinion mining. Six automatic linguistic-based methods for explicit and implicit aspect clustering were proposed, implemented and compared. In addition, a new method was proposed for this task, which surpassed the other implemented methods, especially the synonym lexicon-based method (baseline) and a word-embeddings approach. The proposed method is also language- and domain-independent, although in this work it was tailored to Brazilian Portuguese and the web product domain.
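A minimal sketch of the synonym-lexicon baseline mentioned above: aspect terms are placed in the same cluster when the lexicon lists them as synonyms of a common head term. The lexicon and the aspect terms are invented for illustration.

```python
# Minimal sketch of the synonym-lexicon baseline for aspect clustering:
# aspect terms are grouped together when the lexicon lists them as synonyms
# of the same head term. The lexicon and the aspect terms are invented;
# the dissertation's methods are linguistically richer than this.
SYNONYMS = {
    "preço": {"preço", "valor", "custo"},
    "entrega": {"entrega", "envio", "frete"},
}

def cluster_aspects(terms):
    clusters, leftovers = {}, []
    for term in terms:
        for head, synset in SYNONYMS.items():
            if term in synset:
                clusters.setdefault(head, []).append(term)
                break
        else:
            leftovers.append(term)           # terms not covered by the lexicon
    return clusters, leftovers

clusters, leftovers = cluster_aspects(["valor", "custo", "envio", "frete", "tela"])
print(clusters)    # {'preço': ['valor', 'custo'], 'entrega': ['envio', 'frete']}
print(leftovers)   # ['tela']
```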
270

Automated spatiotemporal and semantic information extraction for hazards

Wang, Wei 01 July 2014
This dissertation explores three research topics related to automated spatiotemporal and semantic information extraction about hazard events from Web news reports and other social media. The dissertation makes a unique contribution of bridging geographic information science, geographic information retrieval, and natural language processing. Geographic information retrieval and natural language processing techniques are applied to extract spatiotemporal and semantic information automatically from Web documents, to retrieve information about patterns of hazard events that are not explicitly described in the texts. Chapters 2, 3 and 4 can be regarded as three standalone journal papers. The research topics covered by the three chapters are related to each other, and are presented in a sequential way. Chapter 2 begins with an investigation of methods for automatically extracting spatial and temporal information about hazards from Web news reports. A set of rules is developed to combine the spatial and temporal information contained in the reports based on how this information is presented in text in order to capture the dynamics of hazard events (e.g., changes in event locations, new events occurring) as they occur over space and time. Chapter 3 presents an approach for retrieving semantic information about hazard events using ontologies and semantic gazetteers. With this work, information on the different kinds of events (e.g., impact, response, or recovery events) can be extracted as well as information about hazard events at different levels of detail. Using the methods presented in Chapter 2 and 3, an approach for automatically extracting spatial, temporal, and semantic information from tweets is discussed in Chapter 4. Four different elements of tweets are used for assigning appropriate spatial and temporal information to hazard events in tweets. Since tweets represent shorter, but more current information about hazards and how they are impacting a local area, key information about hazards can be retrieved through extracted spatiotemporal and semantic information from tweets.
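A small sketch of the rule idea in Chapter 2: within each sentence of a report, recognized place names are paired with the nearest date expression so that an event can be situated in space and time. The date regex, toy gazetteer and sample text are assumptions for illustration only.

```python
# A small sketch of pairing spatial and temporal expressions found in the
# sentences of a report, in the spirit of the rules described above. The
# regex, the toy gazetteer and the sample text are illustrative assumptions,
# not the dissertation's rule set.
import re

GAZETTEER = {"Iowa City": (41.66, -91.53), "Cedar Rapids": (41.98, -91.67)}
DATE_PATTERN = re.compile(r"\b(?:January|February|March|April|May|June|July|"
                          r"August|September|October|November|December) \d{1,2}, \d{4}\b")

report = ("Flooding was reported in Cedar Rapids on June 12, 2008. "
          "By June 15, 2008 the river crest reached Iowa City.")

events = []
for sentence in report.split(". "):
    dates = DATE_PATTERN.findall(sentence)
    for place, coords in GAZETTEER.items():
        if place in sentence and dates:
            events.append({"place": place, "coords": coords, "date": dates[0]})

for e in events:
    print(f"{e['date']}: {e['place']} {e['coords']}")
```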
