STRUCTURED LEARNING WITH INCREMENTAL FEATURE INDUCTION AND SELECTION FOR PORTUGUESE DEPENDENCY PARSING

Milanes Barroso, Yanely 09 November 2016
Natural language processing requires solving several tasks of increasing complexity, which involve learning to associate structures such as graphs and sequences with a given text. For instance, dependency parsing involves learning a tree that describes the dependency-based syntactic structure of a given sentence. A widely used method to improve domain knowledge representation in this task is to consider combinations of features, called templates, which encode useful information with a nonlinear pattern. The total number of possible feature combinations for a given template grows exponentially in the number of features and can result in computational intractability. From a statistical point of view, it can also lead to overfitting. In this scenario, a technique is required that avoids overfitting and reduces the feature set. A common approach to this task is based on scoring a parse tree using a linear function of a defined set of features. It is well known that sparse linear models simultaneously address the feature selection problem and the estimation of a linear model by combining a small subset of the available features. In this case, sparseness helps control overfitting and performs the selection of the most informative features, which reduces the feature set. Due to its flexibility, robustness and simplicity, the perceptron algorithm is one of the most popular linear discriminant methods used to learn such complex representations.
This algorithm can be modified to produce sparse models and to handle nonlinear features. We propose the incremental learning of the combination of a sparse linear model with an induction procedure for nonlinear variables in a structured prediction scenario. The sparse linear model is obtained through a modification of the perceptron algorithm. The induction method is Entropy-Guided Feature Generation. The empirical evaluation is performed using the Portuguese dependency parsing data set from the CoNLL 2006 Shared Task. The resulting parser attains 92.98 per cent accuracy, which is competitive with state-of-the-art systems. In its regularized version, it reaches an accuracy of 92.83 per cent, shows a striking reduction of 96.17 per cent in the number of binary features, and reduces the learning time by almost 90 per cent compared to its non-regularized version.
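The core ideas the abstract names, conjunction-template features and a perceptron modified to stay sparse, can be sketched in a few lines. This is an illustrative toy under invented feature names and thresholds, not the thesis implementation (which works over whole dependency trees, not binary labels):

```python
# Toy sketch: conjunction templates + a mistake-driven update with pruning.
# Feature names, signs, and the prune threshold are all invented for the demo.
from itertools import combinations

def conjoin(features, max_order=2):
    """Expand atomic features into conjunctions up to max_order."""
    feats = sorted(set(features))
    out = set(feats)
    for order in range(2, max_order + 1):
        for combo in combinations(feats, order):
            out.add("&".join(combo))
    return out

def perceptron_update(weights, features, gold_sign, pred_sign, prune_below=1e-9):
    """Mistake-driven perceptron update over conjoined features, followed by
    pruning weights that have cancelled to (near) zero to keep the model sparse."""
    if gold_sign != pred_sign:
        for f in conjoin(features):
            weights[f] = weights.get(f, 0.0) + float(gold_sign)
    for f in [f for f, w in weights.items() if abs(w) < prune_below]:
        del weights[f]
    return weights

w = {}
perceptron_update(w, ["pos=NOUN", "lemma=casa"], gold_sign=+1, pred_sign=-1)
print(sorted(w))  # ['lemma=casa', 'lemma=casa&pos=NOUN', 'pos=NOUN']
```

The pruning step stands in for the sparsity-promoting regularization the abstract credits with the 96 per cent reduction in binary features.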

Aspect-Based Sentiment Analysis for Business Intelligence in E-commerce

Eriksson, Albin; Mauritzon, Anton January 2022
Many companies strive to make data-driven decisions. To achieve this, they need to explore new tools for Business Intelligence. The aim of this study was to examine the performance and usability of aspect-based sentiment analysis as a tool for Business Intelligence in e-commerce. The study was conducted in collaboration with Ellos Group AB, which supplied anonymized customer feedback data. The implementation consists of two parts: aspect extraction and sentiment classification. The first part, aspect extraction, was implemented using dependency parsing and various aspect-grouping techniques. The second part, sentiment classification, was implemented using the language model KB-BERT, a Swedish version of the BERT model. The method for aspect extraction achieved a satisfactory precision of 79.5% but a recall of only 27.2%. Moreover, the result for sentiment classification was unsatisfactory, with an accuracy of 68.2%. Although the results fell short of expectations, we conclude that aspect-based sentiment analysis in general is a valuable tool for Business Intelligence, both as a means of generating customer insights from previously unused data and as a way to increase productivity. However, it should only be used as a supportive tool and not replace existing processes for decision-making.
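The first stage, aspect extraction via dependency parsing, can be illustrated with a minimal rule: pair each adjectival modifier with the noun it attaches to. The hand-annotated parse and the toy lexicon below are invented for the demo; the thesis obtained parses from a real parser and sentiment from KB-BERT, not from a lexicon:

```python
# Minimal rule-based aspect extraction over a dependency parse.
# The parse is hand-annotated and the lexicon is a toy stand-in.
from collections import namedtuple

Token = namedtuple("Token", "idx form head deprel")

LEXICON = {"fast": +1, "friendly": +1, "slow": -1, "poor": -1}  # toy polarity lexicon

def extract_aspects(tokens):
    """Pair each adjectival modifier (amod) with the noun it modifies,
    attaching a crude lexicon polarity to each (aspect, opinion) pair."""
    pairs = []
    for tok in tokens:
        if tok.deprel == "amod":
            head = tokens[tok.head]
            pairs.append((head.form, tok.form, LEXICON.get(tok.form, 0)))
    return pairs

# "slow delivery but friendly support", hand-parsed for the example
sent = [
    Token(0, "slow", 1, "amod"),
    Token(1, "delivery", 1, "root"),
    Token(2, "but", 4, "cc"),
    Token(3, "friendly", 4, "amod"),
    Token(4, "support", 1, "conj"),
]
print(extract_aspects(sent))  # [('delivery', 'slow', -1), ('support', 'friendly', 1)]
```

The low recall the study reports is easy to motivate from this sketch: opinions expressed outside simple modifier relations (e.g. "delivery took forever") slip past purely syntactic rules.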

Dynamic Programming Algorithms for Semantic Dependency Parsing

Axelsson, Nils January 2017
Dependency parsing can be a useful tool for allowing computers to parse text. In 2015, Kuhlmann and Jonsson proposed a logical deduction system that parses to non-crossing dependency graphs with an asymptotic time complexity of O(n³), where n is the length of the sentence to parse. This thesis extends that deduction system; the extended system introduces certain crossing edges while maintaining an asymptotic time complexity of O(n⁴). To extend the deduction system, fifteen logical item types are added to the five proposed by Kuhlmann and Jonsson. These item types allow the deduction system to introduce crossing edges while guaranteeing acyclicity. The number of inference rules in the deduction system increases from the 19 proposed by Kuhlmann and Jonsson to 172, mainly because of the larger number of combinations of the now 20 item types. The results are a modest increase in coverage on test data (by roughly 10 percentage points, i.e. from approximately 70% to 80%) and a placement comparable to that of Kuhlmann and Jonsson by the SemEval 2015 task 18 metrics. With the method employed to introduce crossing edges, derivational uniqueness is impossible to maintain. The graph class to which the extended algorithm, QAC, parses is hard to define; it is therefore compared empirically to 1-endpoint-crossing graphs and graphs with a page number of two or less, against which it achieves lower coverage on test data. The QAC graph class is not limited by page number or number of crossings. The takeaway of the thesis is that extending a very minimal deduction system is not necessarily the best approach, and that it may be better to start with a strong idea of the graph class to which the extended algorithm should parse. Additionally, several alternative ways of extending Kuhlmann and Jonsson's system are proposed.
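The flavor of such chart-based O(n³) deduction over non-crossing structures can be conveyed with a much-simplified interval DP: find a maximum-weight set of arcs that neither cross nor share an endpoint. This is far weaker than the thesis's deduction system (which handles full dependency graphs and, in the extension, certain crossings), and the score matrix is invented for the demo:

```python
# Simplified interval DP in the spirit of non-crossing chart parsing:
# choose arcs (i, j), no two crossing, each position in at most one arc,
# maximizing total score. Runs in O(n^3) over the score matrix.
from functools import lru_cache

def best_noncrossing(score):
    """score[i][j] is the gain of drawing arc i--j (i < j)."""
    n = len(score)

    @lru_cache(maxsize=None)
    def best(i, j):  # best total over positions i..j inclusive
        if i >= j:
            return 0.0
        candidates = [best(i + 1, j)]  # leave position i arc-free
        for k in range(i + 1, j + 1):
            # Attach i to k; everything strictly inside (i, k) must nest,
            # which is exactly what rules out crossing arcs.
            candidates.append(score[i][k] + best(i + 1, k - 1) + best(k + 1, j))
        return max(candidates)

    return best(0, n - 1)

s = [[0, 2, 4],
     [0, 0, 5],
     [0, 0, 0]]
print(best_noncrossing(s))  # 5.0: arc 1--2 beats arc 0--2, and 0--1 + nothing
```

The nesting split in the recurrence is the structural guarantee; the thesis's extra item types exist precisely to relax it and admit selected crossing edges without losing acyclicity.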
