261

Text and Speech Alignment Methods for Speech Translation Corpora Creation : Augmenting English LibriVox Recordings with Italian Textual Translations

Della Corte, Giuseppe January 2020 (has links)
The recent rise of end-to-end speech translation models requires a new generation of parallel corpora, composed of a large number of source-language speech utterances aligned with their target-language textual translations. We hereby show a pipeline and a set of methods to collect hundreds of hours of English audiobook recordings and align them with their Italian textual translations, using exclusively public domain resources gathered semi-automatically from the web. The pipeline consists of three main areas: text collection, bilingual text alignment, and forced alignment. For the text collection task, we show how to automatically find e-book titles in a target language by using machine translation, web information retrieval, and named entity recognition and translation techniques. For the bilingual text alignment task, we investigated three methods: the Gale–Church algorithm in conjunction with a small hand-crafted bilingual dictionary; the Gale–Church algorithm in conjunction with a bigger bilingual dictionary automatically inferred through statistical machine translation; and bilingual text alignment by computing the vector similarity of multilingual embeddings of concatenations of consecutive sentences. Our findings indicate that the consecutive-sentence-embedding similarity approach manages to improve the alignment of difficult sentences by indirectly performing sentence re-segmentation. For the forced alignment task, we give a theoretical overview of the preferred method depending on the properties of the text to be aligned with the audio, suggesting and using a TTS-DTW (text-to-speech and dynamic time warping) based approach in our pipeline. The result of our experiments is a publicly available multi-modal corpus composed of about 130 hours of English speech aligned with its Italian textual translation and split into 60,561 triplets of English audio, English transcript, and Italian textual translation. We also post-processed the corpus to extract 40 MFCC features from the audio segments and released them as a dataset.
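
The embedding-based alignment idea lends itself to a short illustration. Below is a minimal sketch, assuming the sentence-transformers library and the multilingual LaBSE model (the thesis does not specify its exact encoder or windowing logic): candidate spans of up to two consecutive sentences on each side are embedded, and each source span is paired with the most cosine-similar target span, which is how merged spans can indirectly re-segment difficult sentences.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def candidate_spans(sentences, max_merge=2):
    """Yield (text, (start, end)) for single sentences and concatenations
    of up to max_merge consecutive sentences."""
    for i in range(len(sentences)):
        for j in range(i + 1, min(i + max_merge, len(sentences)) + 1):
            yield " ".join(sentences[i:j]), (i, j)

def align(src_sentences, tgt_sentences, max_merge=2):
    """Pair each source span with the most cosine-similar target span."""
    src = list(candidate_spans(src_sentences, max_merge))
    tgt = list(candidate_spans(tgt_sentences, max_merge))
    src_emb = model.encode([t for t, _ in src], normalize_embeddings=True)
    tgt_emb = model.encode([t for t, _ in tgt], normalize_embeddings=True)
    sims = src_emb @ tgt_emb.T  # cosine similarities (rows are L2-normalised)
    return [(src[i][1], tgt[int(np.argmax(sims[i]))][1], float(sims[i].max()))
            for i in range(len(src))]

pairs = align(["I went home.", "It was late and I was tired."],
              ["Andai a casa.", "Era tardi.", "Ero stanco."])
print(pairs)  # the second English sentence may align to a merged Italian span
```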
262

A Probabilistic Tagging Module Based on Surface Pattern Matching

Eklund, Robert January 1993 (has links)
A problem with automatic tagging and lexical analysis is that it is never 100% accurate. In order to arrive at better figures, one needs to study the character of what is left untagged by automatic taggers. In this paper, the untagged residue output by the automatic analyser SWETWOL (Karlsson 1992) at Helsinki is studied. SWETWOL assigns tags to words in Swedish texts mainly through dictionary lookup. The contents of the untagged residue files are described and discussed, and possible ways of solving different problems are proposed. One method of tagging residual output is proposed and implemented: the left-stripping method, through which untagged words are stripped of their left-most letters, searched for in a dictionary and, if found, tagged according to the information in that dictionary. If the stripped word is not found in the dictionary, a match is sought in ending lexica containing statistical information about the word classes associated with that particular word form (i.e., final letter cluster, whether this is a grammatical suffix or not) and the relative frequency of each word class. If a match is found, the word is given graduated tagging according to the statistical information in the ending lexicon. If no match is found, the word is stripped of what is now its left-most letter and recursively searched for in the dictionary and ending lexica (in that order). The ending lexica employed in this paper are retrieved from a reversed version of Nusvensk Frekvensordbok (Allén 1970) and contain endings of between one and seven letters. The contents of the ending lexica are described and discussed to a certain degree. Programs working according to the principles described are run on files of untagged residual output. Appendices include, among other things, LISP source code, untagged and tagged files, the ending lexica containing one- and two-letter endings, and excerpts from the ending lexica containing three- to seven-letter endings.
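
The left-stripping method can be illustrated with a short sketch. The dictionary and ending lexica below are toy stand-ins (the original used SWETWOL's lexicon, endings derived from Nusvensk Frekvensordbok, and a LISP implementation); the control flow follows the description above: dictionary lookup first, then ending lexica, then strip another letter and try again.

```python
DICTIONARY = {"bil": ["NOUN"], "hoppa": ["VERB"]}      # hypothetical entries
ENDING_LEXICA = {                                      # ending -> word-class frequencies
    "or": {"NOUN": 0.9, "VERB": 0.1},
    "ar": {"NOUN": 0.6, "VERB": 0.4},
}
MAX_ENDING = 7  # the paper's lexica cover endings of one to seven letters

def left_strip_tag(word):
    """Strip left-most letters one at a time; at each step try the dictionary
    first, then the ending lexica (longest ending first), as described above."""
    for start in range(len(word)):
        suffix = word[start:]
        if suffix in DICTIONARY:                       # exact dictionary hit
            return {tag: 1.0 for tag in DICTIONARY[suffix]}
        for length in range(min(MAX_ENDING, len(suffix)), 0, -1):
            ending = suffix[-length:]
            if ending in ENDING_LEXICA:                # graduated (probabilistic) tags
                return dict(ENDING_LEXICA[ending])
    return {}                                          # residue stays untagged

print(left_strip_tag("sportbil"))  # strips "sport", finds "bil" -> {'NOUN': 1.0}
print(left_strip_tag("cyklar"))    # no dictionary hit; ending "ar" gives graduated tags
```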
263

Toward an on-line preprocessor for Swedish / Mot en on-line preprocessor för svenska

Wemmert, Oscar January 2017 (has links)
This bachelor thesis presents OPT (Open Parse Tool), a Java program that allows independent parsers and taggers to be run in sequence. For this thesis, the existing Java versions of Stagger and MaltParser were adapted for use as modules in this program, and OPT's performance was then compared to an existing alternative in active use (Språkbanken's Korp Corpus Pipeline, henceforth KCP). Execution speed was compared, and OPT's accuracy was coarsely assessed as either comparable or divergent to that of KCP. The same collection of documents containing natural text was fed through OPT and KCP in sequence, and execution time was recorded. The tagged output of OPT and KCP was then run through SCREAM (Sjöholm, 2012); if SCREAM produced comparable results between the two, the accuracy of OPT was considered comparable to KCP's. The results show that OPT completes its tagging and parsing of the documents in around 35 minutes, while KCP took over four hours. SCREAM performed almost exactly the same using the outputs of either program, except for one case in which OPT's output gave better results than KCP's. The accuracy of OPT was thus considered comparable to KCP's. The one divergent example cannot be fully understood or explained in this thesis, since the thesis treats SCREAM's internals mostly as a black box.
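
The core design of OPT, independent modules run in sequence over the same documents, can be sketched briefly. OPT itself is a Java program wrapping Stagger and MaltParser; the Python sketch below uses hypothetical stand-in modules and only illustrates the sequential-pipeline-with-timing idea, not OPT's actual interfaces.

```python
import time

class Module:
    """Interface for an independent processing step, cf. OPT's pluggable modules."""
    def process(self, document: str) -> str:
        raise NotImplementedError

class ToyTagger(Module):
    def process(self, document):
        return document + "\t<tagged>"   # stand-in for Stagger's POS tagging

class ToyParser(Module):
    def process(self, document):
        return document + "\t<parsed>"   # stand-in for MaltParser's dependency parsing

def run_pipeline(modules, documents):
    """Feed every document through the modules in sequence and time the run."""
    start = time.perf_counter()
    for module in modules:
        documents = [module.process(doc) for doc in documents]
    print(f"processed {len(documents)} documents in {time.perf_counter() - start:.3f} s")
    return documents

print(run_pipeline([ToyTagger(), ToyParser()], ["Jag äter frukost ."]))
```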
264

Den svenska callcenterbranschen och de tekniska lösningar som används : Branschanalys samt identifiering av problematiska dialogsystemsyttranden med hjälp av maskininlärning / The Swedish call center industry and the technologies it utilizes : Industry analysis and identification of problematic system utterances using machine learning

Wirström, Li, Huledal, Mattias January 2015 (has links)
Detta arbete består av två delar. Den första delen syftar till att beskriva och analysera callcenterbranschen i Sverige samt vilka faktorer som påverkar branschen och dess utveckling. Analysen grundar sig i två modeller: Porters fempunktsmodell och PEST. Fokus ligger på den del av branschen som består av kundtjänstverksamhet för att koppla till arbetets andra del. Analysen visar att branschen främst påverkas av hög konkurrens samt av valet mellan interna och externa kundtjänstlösningar hos de företag som behöver tillhandahålla kundtjänst. Analysen indikerar även att branschen kommer att fortsätta växa och att det finns en trend att företag i större utsträckning väljer att outsourca sin kundtjänst. Utvecklingen av de tekniska lösningar som används i callcenter, till exempel dialogsystem, efterfrågas av företagen då dessa är viktiga verktyg för att skapa en väl fungerande kundtjänst. Dagens digitala system har uppenbara utvecklingsområden. Det är ofta stora internationella företag eller internationella arbetslag som utvecklar de digitala systemen. Användningsområdet för dessa system sträcker sig dock långt utanför callcenterbranschen. Den andra delen handlar om att identifiera problematiska dialogsystemyttranden med hjälp av maskininlärning och inspireras av SpeDial, ett EU-projekt med syfte att förbättra dialogsystem. Yttranden från dialogsystemet kan anses problematiska till exempel för att systemet missuppfattat användarens avsikt. Syftet med arbetets andra del är att undersöka vilken eller vilka maskininlärningsmetoder i verktyget WEKA som lämpar sig bäst för att identifiera problematiska dialogsystemyttranden. De data som använts i arbetet kommer från en kundtjänstentré baserad på fritt tal, vilket innebär att användaren själv uppmanas beskriva sitt ärende för att kunna kopplas vidare till rätt avdelning inom kundtjänsten. Våra data har tillhandahållits av företaget Voice Provider som utvecklar, implementerar och underhåller kundtjänstsystem. Voice Provider kom vi i kontakt med via Institutionen för tal, musik och hörsel (TMH) vid Kungliga Tekniska högskolan, som deltar i SpeDial-projektet. Arbetet gick initialt ut på att förbereda tillhandahållen data för att den skulle kunna användas av maskininlärningsverktyget WEKA:s inbyggda klassificerare, varefter sex klassificerare valdes ut för vidare utvärdering. Resultaten visar att ingen av klassificerarna lyckades utföra uppgiften på ett fullt ut tillfredsställande sätt. Den som lyckades bäst var dock metoden Random Forest. Det är svårt att dra några ytterligare slutsatser från resultaten. / This work consists of two parts. The first part aims to describe and analyze the call center industry in Sweden and the factors that affect the industry and its development. The analysis is based on two models: Porter's five forces and PEST. The focus is mainly on the part of the industry that consists of customer service operations, to connect with the second part of the work. The analysis shows that the industry is mainly affected by high competition and by the choice that companies needing to provide customer service face between internal and external customer service solutions. The analysis also indicates that the industry will continue to grow and that there is a trend of companies increasingly choosing to outsource their customer service. The development of the technological solutions used in call centers, for example dialogue systems, is sought after by companies, as these are important tools for creating a well-functioning customer service. Today's digital systems have obvious areas for improvement. It is often large international companies or international teams that develop these digital systems. However, the area of use for such systems extends far beyond the call center industry. The second part involves identifying problematic dialogue system utterances using machine learning and is inspired by SpeDial, an EU project aimed at improving dialogue systems. Dialogue system utterances can be considered problematic when, for example, the system has misinterpreted the user's intention. The aim of the second part is to investigate which machine learning methods in the WEKA tool are best suited to identifying problematic dialogue system utterances. The data used in this work comes from a customer service entry point based on free speech, which means that the user is asked to describe their errand in order to be transferred to the right department within the customer service. Our data was provided by the company Voice Provider, which develops, implements and maintains customer service systems. We came in contact with Voice Provider through the Department of Speech, Music and Hearing (TMH) at the Royal Institute of Technology, which is involved in the SpeDial project. The work initially consisted of preparing the supplied data so that it could be used by the machine learning tool WEKA's built-in classifiers, after which six classifiers were selected for further evaluation. The results show that none of the classifiers managed to accomplish the task in a fully satisfactory manner. The most successful method, however, was Random Forest. It is difficult to draw any further conclusions from the results.
265

Comparative analysis for filtering toxic messages using machine learning models / Jämförande analys för filtrering av olämpliga meddelanden med maskininlärningsmodeller

Murman, Mats-Hjalmar, Lundin, Jacob January 2022 (has links)
Online communication has become prevalent within today's society. The issue with such platforms is that people can express whatever they want without repercussion. Consequently, toxicity on these platforms has become common. One approach to limiting such inappropriate messages is a filtering method. This thesis discusses how to create a toxicity filter using machine learning, along with an API for filtering messages using the models created. The study also analyses which models perform best in terms of three metrics: accuracy, precision and recall. The results indicate that KNN had the best results when predicting multiple variables, while SVC and Logistic Regression worked best on a single variable, making machine learning a viable method for filtering toxic messages. / Onlinekommunikation har blivit allmänt förekommande i dagens samhälle. Ett problem som har uppstått är att man kan säga vad som helst utan åtanke. En konsekvens av detta blir att opassande meddelanden förekommer i stor grad. För att begränsa olämpliga meddelanden kan ett filter användas. Rapporten kommer att diskutera hur ett sådant filter kan skapas med hjälp av maskininlärning och sedan implementeras i ett API. Denna rapport kommer även att analysera vilken modell som fungerar bäst med avseende på noggrannhet, precision och återkallelse. Resultaten av denna rapport visar att KNN hade bäst resultat för flera variabler medan Logistic Regression var bäst på en enskild variabel.
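
A minimal sketch of one plausible setup behind such a comparison is shown below: TF-IDF features with scikit-learn's KNN and logistic regression classifiers, scored on accuracy, precision and recall. The toy messages, labels and parameters are illustrative assumptions, not the thesis's actual corpus or configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

messages = ["you are awful", "have a nice day", "nobody likes you", "great match!"]
labels = [1, 0, 1, 0]  # 1 = toxic, 0 = acceptable (hypothetical labels)

X_train, X_test, y_train, y_test = train_test_split(
    messages, labels, test_size=0.5, random_state=0, stratify=labels)

vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=1)),
                  ("Logistic Regression", LogisticRegression())]:
    clf.fit(X_train_vec, y_train)
    predictions = clf.predict(X_test_vec)
    print(name,
          "accuracy:", accuracy_score(y_test, predictions),
          "precision:", precision_score(y_test, predictions, zero_division=0),
          "recall:", recall_score(y_test, predictions, zero_division=0))
```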
266

Exploring Transformer-Based Contextual Knowledge Graph Embeddings : How the Design of the Attention Mask and the Input Structure Affect Learning in Transformer Models

Holmström, Oskar January 2021 (has links)
The availability and use of knowledge graphs have become commonplace as a compact way to store information and look up facts. However, the discrete representation makes the knowledge graph unavailable for tasks that need a continuous representation, such as predicting relationships between entities, where the most probable relationship needs to be found. The need for a continuous representation has spurred the development of knowledge graph embeddings. The idea is to position the entities of the graph relative to each other in a continuous low-dimensional vector space so that their relationships are preserved, ideally leading to clusters of entities with similar characteristics. Several methods to produce knowledge graph embeddings have been created, from simple models that minimize the distance between related entities to complex neural models. Almost all of these embedding methods attempt to create an accurate static representation of each entity and relation. However, as with words in natural language, both entities and relations in a knowledge graph hold different meanings in different local contexts. With the recent development of Transformer models, and their success in creating contextual representations of natural language, work has been done to apply them to graphs. Initial results show great promise, but there are significant differences in architecture design across papers, and there is no clear direction on how Transformer models can best be applied to create contextual knowledge graph embeddings. Two of the main differences in previous work are how the attention mask is applied in the model and which input graph structures the model is trained on. This report explores how different attention masking methods and graph inputs affect a Transformer model (here, BERT) on a link prediction task for triples. Models are trained with five different attention masking methods, which restrict attention to varying degrees, and on three different input graph structures (triples, paths, and interconnected triples). The results indicate that a Transformer model trained with a masked language model objective performs strongest on the link prediction task when there are no restrictions on how attention is directed and when it is trained on graph structures that are sequential. This is similar to how models like BERT learn sentence structure after being exposed to a large number of training samples. For more complex graph structures, it is beneficial to encode information about the graph structure through how the attention mask is applied. There are also some indications that the input graph structure affects the model's ability to learn underlying characteristics of the knowledge graph it is trained on.
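
The contrast between unrestricted and structure-aware attention masking can be sketched concretely. The token layout and the particular structure-aware mask below are illustrative assumptions, not the thesis's exact designs; Hugging Face's BertModel accepts such per-token-pair masks as a 3D attention_mask of shape (batch, seq, seq).

```python
import torch

tokens = ["[CLS]", "barack", "obama", "born_in", "hawaii", "[SEP]"]
n = len(tokens)
# element id per token: 0 = special token, 1 = head entity, 2 = relation, 3 = tail entity
element = [0, 1, 1, 2, 3, 0]

# Unrestricted masking: every token may attend to every token.
full_mask = torch.ones(n, n, dtype=torch.long)

# Structure-aware masking: a token attends within its own triple element,
# to the relation tokens, and to the special tokens.
structured = torch.zeros(n, n, dtype=torch.long)
for i in range(n):
    for j in range(n):
        if element[i] == element[j] or element[j] in (0, 2):
            structured[i, j] = 1

print(structured)
# A 3D mask of shape (batch, seq, seq) can be passed straight to BertModel, e.g.
#   outputs = model(input_ids, attention_mask=structured.unsqueeze(0))
```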
267

Multilingual Zero-Shot and Few-Shot Causality Detection

Reimann, Sebastian Michael January 2021 (has links)
Relations that hold between causes and their effects are fundamental for a wide range of sectors. Automatically finding sentences that express such relations may, for example, be of great interest to the economy or political institutions. However, for many languages other than English, a lack of training resources for this task needs to be dealt with. In recent years, large pretrained transformer-based model architectures have proven very effective for tasks involving cross-lingual transfer, such as cross-lingual language inference as well as multilingual named entity recognition, POS-tagging and dependency parsing, which hints at similar potential for causality detection. In this thesis, we define causality detection as a binary labelling problem and use cross-lingual transfer to alleviate data scarcity for German and Swedish, using three different classifiers that rely either on multilingual sentence embeddings obtained from a pretrained encoder or on pretrained multilingual language models. The source language in most of our experiments is English; for Swedish, we also use a small German training set and a combination of English and German training data. We try out zero-shot transfer as well as making use of limited amounts of target language data, either as a development set or as additional training data in a few-shot setting. In the latter scenario, we explore the impact of varying sizes of training data. Moreover, the problem of data scarcity in our situation makes it necessary to work with data from different annotation projects, and we explore how much this impacts our results. For German as a target language, our results in a zero-shot scenario expectedly fall short in comparison with monolingual experiments, but F1-macro scores between 60 and 65 in cases where annotation did not differ drastically still signal that it was possible to transfer at least some knowledge. When introducing only small amounts of target language data, notable improvements were already observed, and with the full German training data of about 3,000 sentences combined with the most suitable English data set, the performance for German in some scenarios almost matches the state of the art for monolingual experiments on English. The best zero-shot performance on the Swedish data even outperformed the scores achieved for German. However, due to problems with the additional Swedish training data, we were not able to improve upon the zero-shot performance in a few-shot setting in the same manner as for German.
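
The zero-shot setup, fine-tuning on English and evaluating directly on German, can be sketched as below. XLM-R is used here as an example of a pretrained multilingual language model; the tiny in-memory datasets are placeholders for the actual annotated corpora, which are not reproduced.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny in-memory stand-ins for the annotated corpora (1 = causal, 0 = non-causal).
english_train = Dataset.from_dict({
    "text": ["The storm caused severe flooding.", "The sky is blue."],
    "label": [1, 0]})
german_test = Dataset.from_dict({
    "text": ["Der Sturm verursachte schwere Überschwemmungen.", "Der Himmel ist blau."],
    "label": [1, 0]})

def encode(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="causality-xlmr", num_train_epochs=1),
    train_dataset=english_train.map(encode, batched=True),
    eval_dataset=german_test.map(encode, batched=True),
)
trainer.train()            # fine-tuning sees only English data
print(trainer.evaluate())  # zero-shot evaluation on German
```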
268

The past, present or future? : A comparative NLP study of Naive Bayes, LSTM and BERT for classifying Swedish sentences based on their tense

Navér, Norah January 2021 (has links)
Natural language processing is a field in computer science that is becoming increasingly important. One important part of NLP is the ability to sort text into the past, present or future, depending on when an event occurred or will occur. The objective of this thesis was to use text classification to classify Swedish sentences based on their tense: past, present or future. Furthermore, the objective was also to compare how lemmatisation affects the performance of the models. The problem was tackled by implementing three machine learning models on both lemmatised and non-lemmatised data. The machine learning models were Naive Bayes, LSTM and BERT. The results showed that overall performance was affected negatively when the data was lemmatised. The best performing model was BERT, with an accuracy of 96.3%. The result is useful, as the best performing model had very high accuracy and performed well on newly constructed sentences. / Språkteknologi är ett område inom datavetenskap som har blivit allt viktigare. En viktig del av språkteknologi är förmågan att sortera texter till det förflutna, nuet eller framtiden, beroende på när en händelse skedde eller kommer att ske. Syftet med denna avhandling var att använda textklassificering för att klassificera svenska meningar baserat på deras tempus: dåtid, nutid eller framtid. Vidare var syftet även att jämföra hur lemmatisering påverkar modellernas prestanda. Problemet hanterades genom att implementera tre maskininlärningsmodeller på både lemmatiserade och icke lemmatiserade data. Maskininlärningsmodellerna var Naive Bayes, LSTM och BERT. Resultatet var att den övergripande prestandan påverkades negativt när datan lemmatiserades. Den bäst presterande modellen var BERT med en träffsäkerhet på 96,3 %. Resultatet var användbart eftersom den bäst presterande modellen hade mycket hög träffsäkerhet och fungerade bra på nykonstruerade meningar.
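
The lemmatisation comparison can be sketched as below: the same Naive Bayes model is trained on raw and lemmatised versions of the data. Stanza's Swedish pipeline serves as an example lemmatiser (the thesis does not specify its tooling), and the toy sentences make the intuition visible: lemmatisation collapses forms like "åt" and "äter" into "äta", destroying exactly the tense signal the classifier needs.

```python
import stanza
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# stanza.download("sv")  # one-time download of the Swedish models
nlp = stanza.Pipeline("sv", processors="tokenize,pos,lemma")

def lemmatise(text):
    doc = nlp(text)
    return " ".join(word.lemma for sent in doc.sentences for word in sent.words)

sentences = ["Jag åt frukost.", "Jag äter frukost.", "Jag ska äta frukost.",
             "Hon sprang hem.", "Hon springer hem.", "Hon ska springa hem."]
tenses = ["past", "present", "future"] * 2

for name, data in [("raw", sentences),
                   ("lemmatised", [lemmatise(s) for s in sentences])]:
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(data, tenses)  # toy setup: training accuracy only
    print(name, "training accuracy:", model.score(data, tenses))
```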
269

Grapheme-to-phoneme transcription of English words in Icelandic text

Ármannsson, Bjarki January 2021 (has links)
Foreign words, such as names, locations or sometimes entire phrases, are a problem for any system meant to convert graphemes to phonemes (g2p, i.e. converting written text into phonetic transcription). In this thesis, we investigate both rule-based and neural methods of phonetically transcribing English words found in Icelandic text, taking into account the rules and constraints of how foreign phonemes can be mapped into Icelandic phonology. We implement a rule-based system by compiling grammars into finite-state transducers. In deciding which rules to include, and in evaluating their coverage, we use a list of the most frequently found English words in a corpus of Icelandic text. The output of the rule-based system is then manually evaluated and corrected (when needed) and subsequently used as data to train a simple bidirectional LSTM g2p model. We train models both with and without length and stress labels included in the gold annotated data. Although neither model's scores are close to the state of the art for either Icelandic or English, both our rule-based system and our LSTM model show promising initial results and improve on the baseline of simply using an Icelandic g2p model, rule-based or neural, on English words. We find that the greater flexibility of the LSTM model seems to give it an advantage over our rule-based system when it comes to modeling certain phenomena, most notably the relations between graphemes and phonemes for English vowel sounds. Given that little previous work exists on g2p transcription specifically handling English words within Icelandic phonological constraints, and that the task remains unsolved, our findings provide a foundation for further research and contribute to improving g2p systems for Icelandic as a whole.
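
The rule-based approach can be illustrated with ordered string rewrites, as sketched below, rather than the compiled finite-state transducers used in the actual system. The grapheme-to-phoneme rules are toy assumptions about English-to-Icelandic mapping, not the thesis's grammar.

```python
# Ordered rewrite rules; longer grapheme sequences are listed first so that
# multi-letter graphemes win over their single-letter substrings.
RULES = [
    ("th", "θ"),   # the dental fricative exists in Icelandic
    ("sh", "s"),   # /ʃ/ has no Icelandic counterpart; approximated here with /s/
    ("ee", "i"),
    ("oo", "u"),
]

def g2p(word):
    """Greedy left-to-right, longest-match rewrite of graphemes to phonemes."""
    word = word.lower()
    output, i = [], 0
    while i < len(word):
        for graph, phone in RULES:
            if word.startswith(graph, i):
                output.append(phone)
                i += len(graph)
                break
        else:  # no rule matched: pass the letter through unchanged
            output.append(word[i])
            i += 1
    return "".join(output)

print(g2p("sheet"))  # -> "sit" under these toy rules
```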
270

Exploring Hybrid Topic Based Sentiment Analysis as Author Identification Method on Swedish Documents

Bremer, Jakob January 2021 (has links)
The Swedish national bank has had shifting policies regarding publicity and confidentiality in the publishing of texts written within the bank. For some time, texts written by commissioners within the bank were published anonymously. Later, the confidentiality policy was revoked and all documents were again published publicly. This has led to interest in possible shifts in the commissioners' attitudes toward the topics they discuss when writing anonymously versus publicly. Prompted by this interest, ongoing analyses use language technology to extract topics from the anonymous and public documents respectively. The aim is to find topics related to individual commissioners in order to identify, as accurately as possible, which of the anonymous documents was written by whom. To discover unique relations between the commissioners and the generated topics, this thesis proposes hybrid topic-based sentiment analysis as an author identification method, using the sentiment expressed toward topics as identifying features of commissioners. The results showed promise for the proposed approach, though substantial further research, including comparisons with other established author identification methods, is needed to confirm its efficacy, especially on documents whose topics are closely similar.
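
One way to make the hybrid topic-based sentiment idea concrete is sketched below: topics are extracted with LDA, each document receives a sentiment score, and the topic-sentiment product becomes a per-document feature vector that could characterize an author. The lexicon, documents and model choices are toy assumptions; the thesis's actual models and data are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

SENTIMENT = {"good": 1, "strong": 1, "bad": -1, "weak": -1}  # hypothetical lexicon

docs = ["inflation outlook is weak and bad",
        "housing market looks strong and good",
        "inflation policy is strong"]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_dist = lda.fit_transform(counts)  # per-document topic proportions

def doc_sentiment(doc):
    words = doc.split()
    return sum(SENTIMENT.get(w, 0) for w in words) / len(words)

# Feature vector per document: topic proportions weighted by document sentiment,
# so the same topic can carry opposite signs for different authors.
features = topic_dist * np.array([[doc_sentiment(d)] for d in docs])
print(features)
```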
