291

Classifying Urgency : A Study in Machine Learning for Classifying the Level of Medical Emergency of an Animal’s Situation

Strallhofer, Daniel, Ahlqvist, Jonatan January 2018 (has links)
This paper explores the use of Naive Bayes as well as Linear Support Vector Machines to classify a text based on its level of medical emergency. The primary source of test data is an online veterinary service's customer data. The aspects explored are whether a single text gives enough information for a medical decision to be made and whether alternative data-gathering processes would be preferable. Past research has shown that text classifiers based on Naive Bayes and SVMs can often give good results. We show how to optimize the results so that important decisions can be made with these classifications as a basis. Optimal data-gathering procedures are part of this optimization process. The business applications of such a venture are also discussed, since implementing such a system in an online medical service may affect customer flow, goodwill, cost/revenue, and online competitiveness.
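
As a rough illustration of the two classifier families the paper compares, a minimal scikit-learn sketch follows; the veterinary messages and urgency labels are invented stand-ins, not the service's actual customer data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the veterinary service's customer messages (hypothetical).
texts = [
    "My dog has been vomiting all night and refuses to drink water",
    "How often should I trim my cat's claws?",
    "The rabbit is bleeding heavily from its paw",
    "Which food brand do you recommend for an older horse?",
]
labels = ["urgent", "routine", "urgent", "routine"]

# Fit both classifier families on tf-idf features and compare their predictions.
for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["my cat suddenly cannot stand up"]))
```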
292

Towards a Model of General Text Complexity for Swedish

Falkenjack, Johan January 2018 (has links)
In an increasingly networked world, where the amount of written information is growing at a rate never before seen, the ability to read and absorb written information is of utmost importance for anything but a superficial understanding of life's complexities. That is an example of a sentence which is not very easy to read. It can be said to have a relatively high degree of text complexity. Nevertheless, the sentence is also true. It is important to be able to read and understand written materials. While not everyone might have a job where they have to read a lot, access to written material is necessary in order to participate in modern society. Most information, from news reporting to medical and governmental information, comes primarily in written form. But what makes the sentence at the start of this abstract so complex? We can probably all agree that the length is part of it. But then what? Researchers in the field of readability and text complexity analysis have been studying this question for almost 100 years, and that research has over time come to include many computational and data-driven methods within the field of computational linguistics. This thesis covers some of my contributions to this field of research, with a main focus on Swedish rather than English text. It aims to explore two primary questions: (1) Which linguistic features are most important when assessing text complexity in Swedish? and (2) How can we deal with the problem of data sparsity with regard to complexity-annotated texts in Swedish? The first question is tackled by exploring the task of identifying easy-to-read ("lättläst") text using classification with Support Vector Machines. A large set of linguistic features is evaluated with regard to predictive performance and is shown to separate easy-to-read texts from regular texts with very high accuracy. Meanwhile, using a genetic algorithm for variable selection, we find that almost the same accuracy can be reached with only 8 features. This implies that this classification problem is not very hard and that the results might not generalize to comparing less easy-to-read texts. This, in turn, brings us to the second question. Apart from easy-to-read labelled texts, data with text complexity annotations is very sparse, consisting of multiple small corpora that use different scales to label documents. To deal with this problem, we propose a novel statistical model. The model belongs to the larger family of Probit models, is implemented in a Bayesian fashion, and is estimated using a Gibbs sampler that extends a well-established Gibbs sampler for the Ordered Probit model. This model is evaluated using both simulated and real-world readability data with very promising results.
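
A minimal, simulation-based sketch of a Gibbs sampler for the Ordered Probit model (the well-established sampler that the proposed model extends) might look as follows. The two features, the sample size, and the absence of identification constraints are simplifications for illustration, and every complexity level is assumed to be observed in the data.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated readability data: 2 linguistic features, 3 ordered complexity levels.
n = 200
X = rng.normal(size=(n, 2))
z_true = X @ np.array([1.0, -0.5]) + rng.normal(size=n)
y = np.searchsorted([-0.5, 0.5], z_true)         # ordinal labels 0, 1, 2

beta, cuts = np.zeros(2), np.array([-1.0, 1.0])  # initial sampler state
XtX_inv = np.linalg.inv(X.T @ X)

for _ in range(2000):
    # 1. Latent scores: truncated normal on the interval implied by each label.
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    mu = X @ beta
    z = truncnorm.rvs(bounds[y] - mu, bounds[y + 1] - mu, loc=mu, random_state=rng)
    # 2. Coefficients: Gaussian full conditional under a flat prior (unit variance).
    beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
    # 3. Cutpoints: uniform between adjacent latent scores (Albert-Chib style);
    #    assumes every ordinal level occurs at least once in y.
    for k in range(len(cuts)):
        lo = max(z[y == k].max(), cuts[k - 1] if k > 0 else -np.inf)
        hi = min(z[y == k + 1].min(), cuts[k + 1] if k + 1 < len(cuts) else np.inf)
        cuts[k] = rng.uniform(lo, hi)

print("final draw of beta:", beta, "cutpoints:", cuts)
```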
293

Predictive maintenance using NLP and clustering support messages

Yilmaz, Ugur January 2022 (has links)
Communication with customers is a major part of the customer experience as well as a great source for data mining. More businesses are engaging with consumers via text messages: before 2020, 39% of businesses already used some form of text messaging to communicate with their consumers, and many more were expected to adopt the technology after 2020 [1]. Email response rates are merely 8%, compared to a response rate of 45% for text messaging [2]. A significant portion of this communication involves customer enquiries or support messages sent in both directions. According to estimates, more than 80% of today's data is stored in an unstructured format (such as text, image, audio, or video) [3], with a significant portion of it stated in ambiguous natural language. When analyzing such data, qualitative data analysis techniques are usually employed. To facilitate the automated examination of huge corpora of textual material, researchers have turned to natural language processing techniques [4]. In light of the statistics above, Billogram [5] has decided that support messages between creditors and recipients can be mined for predictive maintenance purposes, such as early identification of an outlier like a bug, defect, or wrongly built feature. As a one-sentence goal definition, Billogram is looking for an answer to "why are people reaching out to begin with?" This thesis project discusses implementing unsupervised clustering of support messages using natural language processing methods, together with performance metrics for the results, to answer Billogram's question. The research also covers intent recognition of the clustered messages in two different ways, one automatic and one semi-manual; the results are discussed and compared. The first approach, LDA with manual intent assignment, produced 100 topics with a coherence score of 0.293. The second approach produced 158 clusters with UMAP and HDBSCAN, with automatic intent recognition. Creating clusters will help identify issues that can become subjects of increased focus, automation, or even down-prioritizing, which places this research in the predictive maintenance [9] area. This study, which will improve over time with more iterations in the company, also contains preliminary work for "labeling" or "describing" clusters and their intents.
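
A condensed sketch of the second approach (clustering sentence embeddings with UMAP and HDBSCAN) is given below; the embedding model, the parameter values, and the toy messages are assumptions for illustration, not Billogram's actual pipeline or data.

```python
# pip install sentence-transformers umap-learn hdbscan   (assumed dependencies)
import hdbscan
import umap
from sentence_transformers import SentenceTransformer

# Toy support messages; the real corpus is creditor-recipient traffic (not public).
messages = [
    "The invoice amount is wrong, I was charged twice",
    "I was charged twice for the same invoice",
    "How do I change my payment due date?",
    "Can the due date on my invoice be moved?",
    "The app crashes when I open the payment page",
    "Payment page shows an error and then the app closes",
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(messages)

# Reduce dimensionality before density-based clustering, as in the described pipeline.
reduced = umap.UMAP(n_neighbors=3, n_components=2, random_state=42).fit_transform(embeddings)
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(reduced)  # -1 marks noise
print(labels)
```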
294

Text Simplification and Keyphrase Extraction for Swedish

Lindqvist, Ellinor January 2019 (has links)
Attempts have been made in Sweden to increase the readability of texts addressed to the public, and ongoing projects are still being conducted by disability associations, private companies, and Swedish authorities. In this thesis project, we explore automatic approaches to increase readability through text simplification and keyphrase extraction, with the goal of facilitating text comprehension and readability for people with reading difficulties. A combination of handwritten rules and monolingual machine translation was used to simplify the syntactic and lexical content of Swedish texts, and noun phrases were extracted to provide the reader with a short summary of the textual content. A user evaluation was conducted to compare the original and the simplified version of the same text. Several texts and their simplified versions were also evaluated using established readability metrics. Although a manual evaluation showed that the implemented rules generally worked as intended on the sentences they targeted, the results from the user evaluation and the readability metrics did not show improvements. We believe that further additions to the rule set, targeting a wider range of linguistic structures, have the potential to improve the results.
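
As a sketch of the keyphrase-extraction step, noun phrases can be approximated with a part-of-speech pattern pass over a Swedish spaCy pipeline; the model name and the ADJ*-NOUN pattern are illustrative choices, not the thesis's exact implementation.

```python
import spacy

# Assumes the Swedish pipeline is installed: python -m spacy download sv_core_news_sm
nlp = spacy.load("sv_core_news_sm")

def keyphrases(text):
    """Collect ADJ*-NOUN token sequences as candidate keyphrases (a rough
    stand-in for full noun-phrase chunking)."""
    doc = nlp(text)
    phrases, span = [], []
    for tok in list(doc) + [None]:        # None acts as a sentinel to flush the last span
        if tok is not None and tok.pos_ in ("ADJ", "NOUN"):
            span.append(tok)
            continue
        while span and span[-1].pos_ != "NOUN":
            span.pop()                    # drop trailing adjectives
        if span:
            phrases.append(" ".join(t.text for t in span))
        span = []
    return phrases

print(keyphrases("Den nya lättlästa texten innehåller korta meningar."))
```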
295

Keeping tabs on GPT-SWE : Classifying toxic output from generative language models for Swedish text generation / Monitorering av GPT-SWE : Klassificering av toxisk text från svenska generativa språkmodeller

Pettersson, Isak January 2022 (has links)
Disclaimer: This paper contains content that can be perceived as offensive or upsetting. Considerable progress has been made in artificial intelligence (AI) and natural language processing (NLP) in recent years. Neural language models (LMs) like Generative Pre-trained Transformer 3 (GPT-3) show impressive results, generating high-quality text seemingly written by a human. Neural language models are already applied in society, for example in creating chatbots or assisting with writing documents. As generative LMs are trained on large amounts of data from all kinds of sources, they can pick up toxic traits. GPT-3 has, for instance, been shown to generate text with social biases, racism, sexism, and toxic language. Therefore, filtering for toxic content is necessary to safely deploy models like GPT-3. GPT-3 is trained on and generates English text, but similar models for smaller languages have recently emerged. GPT-SWE is a novel model based on the same technical principles as GPT-3, able to generate Swedish text. Much like GPT-3, GPT-SWE has issues with generating toxic text. A promising approach for addressing this problem is to train a separate toxicity classification model that classifies the generated text as either toxic or safe. However, there is a substantial need for more research on toxicity classification for lower-resource languages, and previous studies for the Swedish language are sparse. This study explores the use of toxicity classifiers to filter Swedish text generated by GPT-SWE. This is investigated by creating and annotating a small Swedish toxicity dataset, which is used to fine-tune a Swedish BERT model. The best-performing toxicity classifier created in this work cannot be considered useful in an applied scenario. Nevertheless, the results encourage continued studies on BERT models pre-trained and fine-tuned in Swedish for building toxicity classifiers. The results also highlight the importance of high-quality datasets for fine-tuning and demonstrate the difficulties of toxicity annotation. Expert annotators, distinct and well-defined annotation guidelines, and fine-grained labels are recommended. The study also provides insights into the potential of active learning methods for creating datasets in lower-resource languages. Implications and potential solutions regarding toxicity in generative LMs are also discussed.
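
A minimal fine-tuning sketch with Hugging Face transformers is shown below, assuming the public KB-BERT checkpoint as the Swedish BERT model; the two toy texts stand in for the annotated toxicity dataset, which is not public.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

name = "KB/bert-base-swedish-cased"  # assumed checkpoint; the thesis's model may differ
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Toy stand-ins for the annotated dataset: 0 = safe, 1 = toxic.
data = Dataset.from_dict({
    "text": ["Tack för ett trevligt samtal", "Du är helt värdelös"],
    "label": [0, 1],
}).map(lambda ex: tok(ex["text"], truncation=True, padding="max_length", max_length=64))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="toxicity-clf", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=data,
).train()
```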
296

Automatic Podcast Chapter Segmentation : A Framework for Implementing and Evaluating Chapter Boundary Models for Transcribed Audio Documents / Automatisk kapitelindelning för podcasts : Ett ramverk för att implementera och utvärdera segmenteringsmodeller för ljuddokument

Feldstein Jacobs, Adam January 2022 (has links)
Podcasts are an exponentially growing audio medium in which useful and relevant content should be served, which requires new methods of information sorting. This thesis is the first to address the problem of segmenting podcasts into chapters (structurally and topically coherent sections). Podcast segmentation is a harder problem than segmenting structured text, due to spontaneous speech and transcription errors from automatic speech recognition systems. This thesis used author-provided timestamps from podcast descriptions as labels to perform supervised learning: binary classification on sentences from podcast transcripts. A general framework is delivered for creating a dataset with 21 436 podcast episodes, training a supervised model, and evaluating it. The framework addresses technical challenges such as high data imbalance (there are few chapter transitions per episode) and finding an appropriate context size (how many sentences the model sees during inference). The proposed model outperformed a baseline model both in quantitative metrics and in a human evaluation with 100 transitions. The solution provided in this thesis can be used to chapterize podcasts, which has many downstream applications, such as segment sorting, summarization, and information retrieval.
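
The core data construction, turning author-provided timestamps into sentence-level binary labels with a surrounding context window, can be sketched as follows; the transcript, boundary index, and context size are invented for illustration.

```python
def make_examples(sentences, boundary_indices, context=2):
    """Turn a transcript into (context window, is-chapter-start) training pairs.

    sentences        -- the transcript split into sentences
    boundary_indices -- sentence indices where author timestamps mark a new chapter
    context          -- sentences of context on each side (an assumed value)
    """
    boundaries = set(boundary_indices)
    examples = []
    for i in range(len(sentences)):
        window = sentences[max(0, i - context): i + context + 1]
        # Note the label imbalance: almost all sentences are negatives, so a real
        # training set would downsample the negative class.
        examples.append((" ".join(window), int(i in boundaries)))
    return examples

transcript = ["Welcome back to the show.", "Today we talk about NLP.",
              "First, a word from our sponsor.", "Now, let's dive into topic one.",
              "Topic one is podcast segmentation."]
for text, label in make_examples(transcript, boundary_indices=[3]):
    print(label, "|", text)
```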
297

An Automated Discharge Summary System Built for Multiple Clinical English Texts by Pre-trained DistilBART Model

Alaei, Sahel January 2023 (has links)
The discharge summary is an important document, summarizing a patient's medical information during their hospital stay, and it is crucial for communication between clinicians and primary care physicians. Creating a discharge summary is necessary but time-consuming for physicians. Using technology to automatically generate discharge summaries can help physicians concentrate on their patients rather than on writing clinical summarization notes and discharge summaries. This master's thesis contributes to research on building a transformer-based model for automated discharge summaries with a pre-trained DistilBART language model. The study addresses one main research question: How effective is the pre-trained DistilBART language model in predicting an automated discharge summary for multiple clinical texts? The research strategy is experimental, and the dataset is MIMIC-III. ROUGE scores were selected to evaluate the effectiveness of the model, and the results are compared with those of a baseline BART model implemented on the same dataset in other recent research. This study treats multiple-document summarization as the process of combining multiple inputs into a single input, which is then summarized. The findings indicate an improvement in ROUGE-2 and ROUGE-Lsum for the DistilBART model compared with the baseline BART model. One important limitation, however, was computational resource constraints. The study also provides ethical considerations and some recommendations for future work.
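
A sketch of the summarize-and-score loop, using a public DistilBART checkpoint and the evaluate library, is shown below; the clinical note and reference are fabricated, since MIMIC-III requires credentialed access.

```python
import evaluate
from transformers import pipeline

# Public DistilBART checkpoint as a stand-in; the thesis fine-tunes on MIMIC-III.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Fabricated clinical note (MIMIC-III data cannot be reproduced here).
note = ("Patient admitted with chest pain. ECG showed no acute changes. "
        "Troponins negative on serial testing. Treated with aspirin and "
        "discharged in stable condition with cardiology follow-up.")
generated = summarizer(note, max_length=40, min_length=10)[0]["summary_text"]

# ROUGE scores against a (fabricated) reference summary.
rouge = evaluate.load("rouge")
reference = "Chest pain ruled out for myocardial infarction; discharged stable."
print(rouge.compute(predictions=[generated], references=[reference]))
```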
298

Monolingual and Cross-Lingual Survey Response Annotation

Zhao, Yahui January 2023 (has links)
Multilingual natural language processing (NLP) is increasingly recognized for its potential in processing diverse text-type data, including data from social media, reviews, and technical reports. Multilingual language models like mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP. Notwithstanding their capabilities, the performance of these models largely relies on the availability of annotated training data. This thesis employs the multilingual pre-trained model XLM-R to examine its efficacy in sequence labelling of open-ended survey responses on democracy across multilingual surveys. Traditional annotation practices have been labour-intensive and time-consuming, with limited attempts at automation. Previous studies have often translated multilingual data into English, bypassing the challenges and nuances of the native languages. Our study explores automatic multilingual annotation at the token level for democracy survey responses in five languages: Hungarian, Italian, Polish, Russian, and Spanish. The results reveal promising F1 scores, indicating the feasibility of using multilingual models for such tasks. However, the performance of these models is closely tied to the quality and nature of the training set. This research paves the way for future experiments and model adjustments, underscoring the importance of refining training data and optimizing model techniques for improved classification accuracy.
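
A minimal sketch of token-level sequence labelling with XLM-R via Hugging Face transformers follows; the three-label head is untrained here and the Spanish response is a toy example, so the predictions are meaningless until the model is fine-tuned on annotated survey data.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# xlm-roberta-base with a freshly initialized 3-label head; the thesis's actual
# tag set and fine-tuned weights are not reproduced here.
name = "xlm-roberta-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=3)

text = "La democracia garantiza elecciones libres"  # toy Spanish survey response
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # shape: (1, seq_len, num_labels)
pred = logits.argmax(-1)[0]

# Pair each subword token with its predicted label id.
print(list(zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), pred.tolist())))
```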
299

Domain Knowledge and Representation Learning for Centroid Initialization in Text Clustering with k-Means : An exploratory study / Domänkunskap och Representationsinlärning för Centroidinitialisering vid Textklustering med k-Means : En utforskande studie

Yu, David January 2023 (has links)
Text clustering is the problem of partitioning texts into homogeneous clusters, for example based on their sentiment value. Two techniques for addressing the problem are representation learning, in particular language representation models, and clustering algorithms. State-of-the-art language models are based on neural networks, in particular the Transformer architecture, and are used to transform a text into a point in a high-dimensional vector space. The texts are then clustered using a clustering algorithm; a recognized partitional clustering algorithm is k-Means, whose goal is to find centroids that represent the clusters (partitions) by minimizing a distance measure. Two influential parameters of k-Means are the number of clusters and the initial centroids, and multiple heuristics exist for selecting them. Using domain knowledge is a common heuristic when it is available, e.g., setting the number of clusters to the number of dataset labels. This project further explores that idea. The main contribution of the thesis is an investigation of domain knowledge and representation learning as a heuristic for centroid initialization in k-Means: initial centroids were obtained by applying a representation learning technique to the dataset labels. The project analyzed a Swedish dataset with views towards different aspects of Swedish immigration and a Swedish-translated movie review dataset, using six Swedish-compatible language models and two versions of k-Means. Clustering quality was measured using eight metrics related to cohesion, separation, external entropy, and accuracy. By employing the proposed heuristic, six out of eight metrics improved compared to the baseline, and the improvements were observed across the six language models and both datasets. Additionally, the evaluation metrics indicated that the proposed heuristic leaves room for future improvement.
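
The proposed heuristic, seeding k-Means centroids with embeddings of the dataset labels, can be sketched as below; the embedding model and the toy review sentences are assumptions, not the thesis's exact setup.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# A Swedish-compatible multilingual encoder (an assumed choice).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Toy movie-review sentences standing in for the Swedish-translated dataset.
texts = ["Filmen var fantastisk, jag grät av glädje",
         "Urusel film, bortkastade två timmar",
         "Skådespeleriet var enastående och rörande",
         "Handlingen var tråkig och förutsägbar"]
label_names = ["positiv recension", "negativ recension"]  # dataset labels as domain knowledge

X = model.encode(texts)
init_centroids = model.encode(label_names)   # label embeddings seed the centroids
km = KMeans(n_clusters=len(label_names), init=init_centroids, n_init=1).fit(X)
print(km.labels_)
```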
300

Neural maskinöversättning av gawarbati / Neural machine translation for Gawarbati

Gillholm, Katarina January 2023 (has links)
Recent neural models have led to huge improvements in machine translation, but performance is still suboptimal for languages without large parallel datasets, so-called low-resource languages. Gawarbati is a small, threatened low-resource language with only 5,000 parallel sentences available. This thesis uses transfer learning and hyperparameters optimized for small datasets to explore the possibilities and limitations of neural machine translation from Gawarbati to English. Transfer learning, in which a parent model was first trained on Hindi-English parallel data, improved results by 1.8 BLEU and 1.3 chrF. Hyperparameters optimized for small datasets increased BLEU by 0.6 but decreased chrF by 1. Combining transfer learning with hyperparameters optimized for small datasets decreased performance by 0.5 BLEU and 2.2 chrF. The neural models outperform word-based statistical machine translation and GPT-3. The best-performing model achieved only 2.8 BLEU and 19 chrF, which illustrates the limitations of machine translation for low-resource languages and the critical need for more data. / VR 2020-01500
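
BLEU and chrF as reported above can be computed with sacrebleu; the sentence pairs here are fabricated, since the Gawarbati test data is not public.

```python
import sacrebleu

# One toy hypothesis per source sentence, with a single reference stream.
hypotheses = ["the water in the river is very cold",
              "he went to the village yesterday"]
references = [["the water in the river is very cold",
               "yesterday he walked to the village"]]

print("BLEU:", sacrebleu.corpus_bleu(hypotheses, references).score)
print("chrF:", sacrebleu.corpus_chrf(hypotheses, references).score)
```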
