1

Sentiment analysis and transfer learning using recurrent neural networks : an investigation of the power of transfer learning / Sentimentanalys och överföringslärande med neuronnät

Pettersson, Harald. January 2019.
In the field of data mining, transfer learning is the method of transferring knowledge from one domain to another. Using reviews from prisjakt.se, a Swedish price comparison site, and hotels.com, this work investigates how the similarities between domains affect the results of transfer learning when using recurrent neural networks. We test several domains with different characteristics, e.g. size and lexical similarity. Only relatively similar domains were used in this work: the same target function was sought and all reviews were in Swedish. Regardless, the results are conclusive; transfer learning is often beneficial, but it is highly dependent on the features of the domains and how they compare with each other.
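As a rough illustration of the setup the abstract describes, the sketch below outlines RNN-based transfer learning for sentiment classification in PyTorch: a classifier is first trained on source-domain reviews and then fine-tuned on the target domain. The vocabulary size, hyperparameters, and the source_loader/target_loader objects are hypothetical placeholders, not the thesis's actual configuration.

```python
# Minimal sketch of RNN-based transfer learning for sentiment analysis.
# Assumes PyTorch; vocabulary size, hyperparameters, and the data loaders
# (source_loader, target_loader) are illustrative placeholders.
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # positive / negative

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)          # (batch, seq, embed)
        _, (hidden, _) = self.rnn(embedded)           # hidden: (1, batch, hidden)
        return self.classifier(hidden[-1])            # (batch, 2)

def train(model, loader, epochs=3, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for token_ids, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(token_ids), labels)
            loss.backward()
            optimizer.step()

# Transfer learning: pre-train on the source domain (e.g. prisjakt.se
# reviews), then continue training on the target domain (e.g. hotels.com).
# model = SentimentRNN(vocab_size=50_000)
# train(model, source_loader)   # source-domain pre-training
# train(model, target_loader)   # target-domain fine-tuning
```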
2

Sentiment analysis of Swedish reviews and transfer learning using Convolutional Neural Networks

Sundström, Johan. January 2018.
Sentiment analysis is a field within machine learning that focuses on determining the contextual polarity of subjective information. It is a technique that can be used to analyze the "voice of the customer" and has been applied successfully to English-language opinionated information such as customer reviews, political opinions and social media data. A major problem with machine learning models is that they are domain dependent and will therefore not perform well in other domains. Transfer learning, or domain adaptation, is a research field that studies a model's ability to transfer knowledge across domains. In the extreme case a model trains on data from one domain, the source domain, and tries to make accurate predictions on data from another domain, the target domain. The deep learning model Convolutional Neural Network (CNN) has in recent years gained much attention due to its performance in computer vision, both for in-domain classification and for transfer learning. It has also performed well on natural language processing problems but has not been investigated to the same extent for transfer learning in this area. The purpose of this thesis has been to investigate how well suited the CNN is for cross-domain sentiment analysis of Swedish reviews. The research was conducted by investigating how the model performs when trained with data from different domains and with varying amounts of source and target data. Additionally, the impact of different text representations on the model's transferability was also studied. This study has shown that a CNN without pre-trained word embeddings is not well suited for transfer learning, since it performs worse than a traditional logistic regression model. Substituting 20% of the source training data with target data can in many of the test cases boost performance by 7-8%, both for the logistic regression and the CNN model. Using pre-trained word embeddings produced by a word2vec model increases the CNN's transferability as well as its in-domain performance, and it outperforms the logistic regression model and the CNN without pre-trained embeddings in the majority of test cases.
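To make the two CNN variants in the abstract concrete, here is a minimal sketch, assuming PyTorch and a Kim-style text CNN, of a convolutional sentiment classifier that can optionally be initialized with pre-trained word2vec vectors; the weight matrix w2v_weights and all hyperparameters are illustrative placeholders rather than the thesis's exact models.

```python
# Minimal sketch of a text CNN for sentiment classification, optionally
# initialized with pre-trained word2vec vectors. Assumes PyTorch; the
# embedding matrix `w2v_weights` (shape vocab_size x embed_dim) would be
# produced by a separately trained word2vec model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_filters=100,
                 kernel_sizes=(3, 4, 5), w2v_weights=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        if w2v_weights is not None:
            # Initializing with pre-trained vectors is the variant reported
            # to transfer better across domains.
            self.embedding.weight.data.copy_(w2v_weights)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), 2)

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)   # (batch, embed, seq)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

# In the cross-domain experiments described above, roughly 20% of the
# source-domain training examples would be replaced with target-domain
# examples before training the classifier.
```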
3

Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain: Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach / BERT för frågebesvaring inom telekomdomänen: Anpassning till telekomdomänen av en BERT-baserad språkmodell genom ELECTRA-förträningsmetoden

Holm, Henrik. January 2021.
The Natural Language Processing (NLP) research area has seen notable advancements in recent years, one being the ELECTRA model, which improves the sample efficiency of BERT pre-training by introducing a discriminative pre-training approach. Most publicly available language models are trained on general-domain datasets. Thus, research is lacking for niche domains with domain-specific vocabulary. In this paper, the process of adapting a BERT-like model to the telecom domain is investigated. For efficiency in training the model, the ELECTRA approach is selected. For measuring target-domain performance, the Question Answering (QA) downstream task within the telecom domain is used. Three domain adaptation approaches are considered: (1) continued pre-training on telecom-domain text starting from a general-domain checkpoint, (2) pre-training on telecom-domain text from scratch, and (3) pre-training from scratch on a combination of general-domain and telecom-domain text. Findings indicate that approach 1 is both inexpensive and effective, as target-domain performance gains are seen already after small amounts of training, while generalizability is retained. Approach 2 shows the highest performance on the target-domain QA task by a wide margin, albeit at the expense of generalizability. Approach 3 combines the benefits of the former two by achieving good performance on QA both in the general domain and the telecom domain, while also allowing for a tokenization vocabulary well suited to both domains. In conclusion, the suitability of a given domain adaptation approach is shown to depend on the available data and computational budget. The results highlight the clear benefits of domain adaptation, even when the QA task is learned through behavioral fine-tuning on a general-domain QA dataset because insufficient amounts of labeled target-domain data are available. / In recent years, bidirectional language models such as BERT have achieved great success in the field of language technology. Several further developments of BERT have been produced, among them ELECTRA, whose novel discriminative training process shortens training time. The majority of research in the area is carried out on general-domain data. In other words, there is room for new knowledge in domains with domain-specific language. In this work, methods for adapting a bidirectional language model to the telecom domain are explored. To ensure high efficiency in the pre-training stage, the ELECTRA model is used. Performance in the target domain is measured using a question answering dataset for the telecom area. Three domain adaptation methods are examined: (1) continued pre-training on telecom text of a model pre-trained on the general domain; (2) pre-training from scratch on telecom text; and (3) pre-training from scratch on a combination of telecom and general-domain text. The experiments show that method 1 is both cost-effective and advantageous from a performance perspective. Already after a short period of continued pre-training, clear improvements in question answering within the target domain can be observed, while generalizability is retained. Approach 2 shows the highest performance in the target domain, albeit with a markedly worse ability to generalize. Method 3 combines the advantages of the two previous methods through high performance both in the target domain and in the general domain. At the same time, the method allows the use of a tokenizer vocabulary well suited to both domains.
In summary, the suitability of a domain adaptation method is determined by the particular situation, the data at hand, and the available computational resources. The results demonstrate the clear gains that domain adaptation can bring, even when the question answering task is learned by training on a dataset drawn from the general domain because of insufficient amounts of question answering data in the target domain.
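As a rough illustration of the ELECTRA-style discriminative pre-training that approach 1 continues on telecom text, the sketch below uses the Hugging Face transformers library. The google/electra-small checkpoints, the example sentence, the greedy generator sampling, and the single update step are assumptions made for brevity and do not reproduce the thesis's actual training setup.

```python
# Minimal sketch of ELECTRA-style continued pre-training on domain text
# (approach 1: start from general-domain checkpoints). Assumes the
# Hugging Face transformers library; checkpoint names, masking rate, and
# the single illustrative step are placeholders, not the thesis's setup.
import torch
from transformers import ElectraTokenizerFast, ElectraForMaskedLM, ElectraForPreTraining

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

text = "The eNodeB handles radio resource management in the LTE access network."
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"].clone()

# Mask ~15% of the tokens (excluding special tokens) for the generator.
mask = (torch.rand(input_ids.shape) < 0.15) \
       & (input_ids != tokenizer.cls_token_id) \
       & (input_ids != tokenizer.sep_token_id)
masked_ids = input_ids.masked_fill(mask, tokenizer.mask_token_id)

# The generator proposes replacements for the masked positions
# (greedy here for brevity; ELECTRA samples from the generator).
with torch.no_grad():
    gen_logits = generator(masked_ids, attention_mask=inputs["attention_mask"]).logits
sampled = gen_logits.argmax(dim=-1)
corrupted_ids = torch.where(mask, sampled, input_ids)

# The discriminator is trained to detect which tokens were replaced.
labels = (corrupted_ids != input_ids).long()
out = discriminator(corrupted_ids, attention_mask=inputs["attention_mask"], labels=labels)
out.loss.backward()  # one illustrative update step; a real run loops over a domain corpus
```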
