
Descriptive Labeling of Document Clusters / Deskriptiv märkning av dokumentkluster

Österberg, Adam January 2022 (has links)
Labeling is the process of giving a set of data a descriptive name. This thesis dealt with documents with no additional information and aimed at clustering them using topic modeling and labeling them using Wikipedia as a secondary source. Labeling documents is a new field with many potential solutions. This thesis examined one method in a practical setting. Unstructured data was preprocessed and clustered using a topic model. Frequent words from each cluster were used to generate a search query sent to Wikipedia, where titles and categories from the most relevant pages were stored as candidate labels. Each candidate label was evaluated based on the frequency of common cluster words among the candidate labels. The frequency was weighted proportionally to the relevance of the original Wikipedia article, which was based on its order of appearance in the search results. The five labels with the highest scores were chosen to describe the cluster. The clustered documents consisted of exam questions that students use to practice before a course exam. Each question in the cluster was scored by someone experienced in the relevant topic by evaluating whether one of the five labels correctly described the content. The method proved unreliable, with only one course receiving labels considered descriptive for most of its questions. A significant problem was the closely related data, with all documents belonging to one overarching category instead of a dataset containing independent topics. However, for one dataset, 80% of the documents received a descriptive label, indicating that labeling using secondary sources has potential but needs to be investigated further. / Märkning handlar om att ge okända data en beskrivning. I denna uppsats behandlas data i form av dokument som utan ytterligare information klustras med temamodellering samt märks med hjälp av Wikipedia som en sekundär källa. Märkning av dokument är ett nytt forskningsområde med flera tänkbara vägar framåt. I denna uppsats undersöks en möjlig metod i en praktisk miljö. Dokumenten förbehandlas och grupperas i kluster med hjälp av en temamodell. Vanliga ord från varje kluster används sedan för att generera en sökfråga som skickas till Wikipedia där titlar och kategorier från de mest relevanta sidorna lagras som kandidater. Varje kandidat utvärderas sedan baserat på frekvensen av kandidatordet bland titlarna i klustret och relevansen av den ursprungliga Wikipedia-artikeln. Relevansen av artiklarna baserades på i vilken ordning de dök upp i sökresultatet. De fem märkningarna med högst poäng valdes ut för att beskriva klustret. De klustrade dokumenten bestod av tentamensfrågor som studenter använder sig av för att träna inför ett prov. Varje fråga i klustret utvärderades av någon med erfarenhet av det i frågan behandlade ämnet. Utvärderingen baserades på om någon av de fem märkningarna ansågs beskriva innehållet. Metoden visade sig vara opålitlig med endast en kurs som erhöll märkningar som ansågs beskrivande för majoriteten av dess frågor. Ett stort problem var att data var nära relaterad med alla dokument tillhörande en övergripande kategori i stället för oberoende ämnen. För en datamängd fick dock 80 % av dokumenten en beskrivande etikett. Detta visar att märkning med hjälp av sekundära källor har potential, men behöver undersökas ytterligare.
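A minimal sketch of the label-scoring step described above (not the thesis's code; the function, variable names and example data are invented for illustration): candidate labels drawn from Wikipedia titles and categories are scored by how many of the cluster's frequent words they contain, weighted by the inverse rank of the source article in the search results, and the five best are kept.

```python
def score_candidate_labels(cluster_words, candidates):
    """Rank candidate labels for one cluster.

    cluster_words: frequent words of the cluster, e.g. ["integral", "derivative", ...]
    candidates: list of (label, search_rank) pairs, where search_rank is the
                1-based position of the source Wikipedia article in the results.
    """
    scores = {}
    for label, rank in candidates:
        weight = 1.0 / rank                      # earlier search hits count more
        label_tokens = label.lower().split()
        hits = sum(1 for w in cluster_words if w.lower() in label_tokens)
        scores[label] = scores.get(label, 0.0) + weight * hits
    # keep the five highest-scoring labels as the cluster description
    return sorted(scores, key=scores.get, reverse=True)[:5]

# Hypothetical example
print(score_candidate_labels(
    ["integral", "derivative", "limit"],
    [("Calculus", 1), ("Derivative", 2), ("Mathematical analysis", 3)],
))
```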

The Influence of Language Models on Decryption of German Historical Ciphers

Sikora, Justyna January 2022 (has links)
This thesis assesses the influence of language models on the decryption of historical German ciphers. Previous research on language identification and cleartext detection indicates that it is beneficial to use historical language models (LMs) when dealing with historical ciphers, as they can outperform models trained on present-day data. To date, no systematic investigation has considered the impact of choosing different LMs for the decryption of ciphers. Therefore, we conducted a series of experiments with the aim of exploring this assumption. Using historical data from the HistCorp collection and Project Gutenberg, we created 3-gram, 4-gram and 5-gram models, and constructed substitution ciphers for testing the models. The results show that in most cases language models trained on historical data perform better than the larger modern models, while the 4-gram models gave the most consistent results for the tested ciphers.
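As a rough illustration of the kind of model compared here (not the thesis's implementation; the corpus path is a placeholder), a character 4-gram model with add-one smoothing can be trained on historical text and used to score how plausible a candidate decryption looks:

```python
from collections import Counter
import math

def train_char_ngram(text, n=4):
    """Count character n-grams and their (n-1)-character histories."""
    text = " " * (n - 1) + text.lower()
    ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    histories = Counter(text[i:i + n - 1] for i in range(len(text) - n + 1))
    return ngrams, histories

def log_prob(text, ngrams, histories, n=4, alphabet_size=30):
    """Add-one smoothed log-probability of a candidate plaintext."""
    text = " " * (n - 1) + text.lower()
    score = 0.0
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        hist = gram[:-1]
        score += math.log((ngrams[gram] + 1) / (histories[hist] + alphabet_size))
    return score

# Higher score = the candidate decryption looks more like historical German.
corpus = open("histcorp_german.txt", encoding="utf-8").read()   # hypothetical path
ngrams, histories = train_char_ngram(corpus, n=4)
print(log_prob("das ist ein geheimnis", ngrams, histories, n=4))
```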

Italianising English words with G2P techniques in TTS voices. An evaluation of different models

Grassini, Francesco January 2024 (has links)
Text-to-speech voices have come a long way in terms of naturalness and are closer to sounding human than ever. However, the pronunciation of foreign words remains one of the problems that persist. The experiments conducted in this thesis focus on using grapheme-to-phoneme (G2P) models to tackle this issue and, more specifically, to adjust the erroneous pronunciation of English words to an Italian English accent in Italian-speaking voices. We curated a dataset of words collected during recording sessions with an Italian voice actor reading general conversational sentences, and manually transcribed their pronunciation in Italian English. In the second stage, we augmented the dataset by collecting the most common surnames in Great Britain and the United States, phonetically transcribing them with a rule-based phoneme mapping algorithm previously deployed by the company, and then manually adjusting the pronunciations to Italian English. Thirdly, using the massively multilingual ByT5 model, a Transformer G2P model pre-trained on 100 languages, as well as its tokenizer-dependent versions T5_base and T5_small, and an LSTM with attention based on OpenNMT, we performed 10-fold cross-validation on the curated dataset. The results show that augmenting the data benefitted every model. In terms of PER, WER and accuracy, the transformer-based ByT5_small strongly outperformed its T5_small and T5_base counterparts even with a third or two-thirds of the training data. The second-best performing model, the attention-based LSTM built with the OpenNMT framework, also outperformed the T5 models and was the 'lightest' in terms of trainable parameters (2M) compared to ByT5 (299M) and the T5 models (60M and 200M).
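For reference, PER and WER are edit-distance-based metrics; a small, self-contained way to compute them for G2P output (illustrative code, not the thesis's evaluation script; WER is taken here in its common G2P sense of the share of words with at least one phoneme error) is:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences of symbols."""
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        diag, row[0] = row[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(row[j] + 1,        # deletion
                      row[j - 1] + 1,    # insertion
                      diag + (r != h))   # substitution or match
            diag, row[j] = row[j], cur
    return row[-1]

def per(ref_phonemes, hyp_phonemes):
    """Phoneme error rate for one word: edit distance / reference length."""
    return edit_distance(ref_phonemes, hyp_phonemes) / len(ref_phonemes)

def wer(refs, hyps):
    """Word error rate: share of words not transcribed exactly right."""
    wrong = sum(r != h for r, h in zip(refs, hyps))
    return wrong / len(refs)

# Hypothetical Italian-English pronunciation of "smith"
print(per(["s", "m", "i", "t"], ["z", "m", "i", "t"]))        # 0.25
print(wer([["s", "m", "i", "t"]], [["z", "m", "i", "t"]]))    # 1.0
```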

Standardising Text Complexity : Applying and Optimising a Multi-Scale Probit Model for Cross-Source Classification and Ranking in Swedish

Andersson, Elsa January 2024 (has links)
The increasing accessibility of texts has highlighted the need to differentiate them based on complexity. For instance, individuals with reading disabilities such as dyslexia often face greater challenges with complex texts. Similarly, teachers may wish to use texts of varying complexity for different grade levels. These scenarios underscore the necessity of developing a method for classifying texts by their difficulty level. Text complexity here refers to the characteristics of a text that determine its difficulty, independent of the reader. The scarcity of Swedish texts suitable for traditional text complexity classification methods poses a significant challenge that needs to be tackled. The Multi-Scale Probit model employs a Bayesian approach to classify and rank texts of varying complexity from multiple sources. This thesis implements the Multi-Scale Probit model on linguistic features of Swedish easy-to-read books and investigates data augmentation and feature regularisation as optimisation methods for text complexity assessment. Multi-Scale and Single Scale Probit models are implemented using different ratios of training data, and then compared. The results indicate that the Multi-Scale Probit model outperforms a baseline model and that the multi-scale approach generally surpasses the single scale approach. The first optimisation method demonstrates that data augmentation is a viable approach to enhance performance using available data. The second optimisation method reveals that a feature selection step can improve both the performance and computational efficiency of the model. Overall, the findings suggest that the Multi-Scale Probit model is an effective method for classifying and ranking new texts, though there is room for further performance improvements.
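For background, the standard ordered probit links a latent complexity score to ordinal classes through thresholds; the Multi-Scale variant described above can be read as giving each text source its own thresholds while sharing the regression weights (the formula below is the textbook ordered probit, not the thesis's exact specification):

```latex
% Latent score: z_i = x_i^{\top}\beta + \varepsilon_i, \quad \varepsilon_i \sim N(0, 1)
% Observed complexity class: y_i = k when \tau_{k-1} < z_i \le \tau_k
P(y_i = k \mid x_i) = \Phi(\tau_k - x_i^{\top}\beta) - \Phi(\tau_{k-1} - x_i^{\top}\beta),
\qquad -\infty = \tau_0 < \tau_1 < \dots < \tau_K = \infty
```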

Finding structure in passwords : Using transformer models for password segmentation

Eneberg, Lina January 2024 (has links)
Passwords are a common feature of everyone’s everyday life. One person has on average 80 accounts for which they are supposed to use different passwords. Remembering all these passwords is difficult and leads to people reusing, or reusing with slight modification, passwords on many accounts. Studies on memory show that information relating to something personal is more easily remembered. This is likely the reason why many people use passwords relating to either self, relatives, lovers, friends, or pets. Hackers will most often use either brute force or dictionary attacks to crack a password. These techniques can be quite time consuming, so using machine learning could be a faster and easier approach. Segmenting someone’s previous passwords into meaningful units often reveals personal information about the creator and can thus be used as a basis for password guessing. This report focuses on evaluating different sizes of the GPT-SW3 model, which uses a transformer architecture, on password segmentation. The purpose is to find out if the GPT-SW3 model is suitable to use as a password segmenter and, by extension, if it can be used for password guessing. As training data, a list of passwords collected from a security breach on a platform called RockYou was used. The passwords were segmented by the author to provide the model with a correct answer to learn from. The evaluation metric, Exact Match, checks if the model’s prediction is the same as that of the author. There were no positive results when training GPT-SW3, most likely because of technical limitations. As the results are rather insufficient, future studies are required to prove or disprove the assumptions this thesis is based on.
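The Exact Match criterion is straightforward to state in code; the sketch below is illustrative only, with an invented example segmentation:

```python
def exact_match(gold_segmentations, predicted_segmentations):
    """Fraction of passwords whose predicted segmentation equals the gold one."""
    correct = sum(g == p for g, p in zip(gold_segmentations, predicted_segmentations))
    return correct / len(gold_segmentations)

# Hypothetical example: "ilovemydog1987" split into meaningful units
gold = [["i", "love", "my", "dog", "1987"]]
pred = [["i", "love", "mydog", "1987"]]
print(exact_match(gold, pred))  # 0.0 -- the prediction differs from the gold split
```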

Debiasing a Corpus with Semi-Supervised Topic Modelling : Swapping Gendered Words to Reduce Potentially Harmful Associations

Müller, Sal R. January 2024 (has links)
Gender biases are present in many NLP models. Such biased models can have large negative consequences for individuals. This work is a case study in which we attempt to reduce gender biases in a corpus consisting of Wikipedia articles about persons, in order to reduce them in models that would be trained on the corpus. For this, we apply two methods of modifying the corpus’s documents. Both methods replace gendered words (such as ‘mother’, ‘father’ and ‘parent’) with each other to change the contexts in which they each appear. The analysis and comparison of those two methods show that one of them is indeed suited to reducing gender biases. By modifying 35% of the corpus’s documents, the contexts of gendered words appear equal across the three considered genders (feminine, masculine and non-binary). This is confirmed by the performance of coreference resolution models trained with word embeddings fine-tuned on the corpus before and after modifying it. Evaluating these models on schemas specifically designed to expose gender biases in coreference resolution models shows that the model using the modified corpus is indeed less gender-biased than the original. Our analysis further shows that the method does not compromise the corpus’s overall quality.
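A minimal sketch of the swapping idea, assuming a hand-made mapping between feminine, masculine and gender-neutral variants (the thesis's actual word lists and replacement rules are not reproduced here):

```python
import random

# Hypothetical equivalence sets: each tuple holds the feminine, masculine and
# gender-neutral variant of one concept.
GENDERED_SETS = [
    ("mother", "father", "parent"),
    ("she", "he", "they"),
    ("daughter", "son", "child"),
]
LOOKUP = {word: group for group in GENDERED_SETS for word in group}

def swap_gendered_words(tokens, rng=random):
    """Replace every gendered token with a randomly chosen other variant."""
    out = []
    for tok in tokens:
        group = LOOKUP.get(tok.lower())
        if group:
            alternatives = [w for w in group if w != tok.lower()]
            out.append(rng.choice(alternatives))
        else:
            out.append(tok)
    return out

print(swap_gendered_words("she is a mother".split()))
```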

Extraction of word senses from bilingual resources using graph-based semantic mirroring / Extraktion av ordbetydelser från tvåspråkiga resurser med grafbaserad semantisk spegling

Lilliehöök, Hampus January 2013 (has links)
In this thesis we retrieve semantic information that exists implicitly in bilingual data. We gather input data by repeatedly applying the semantic mirroring procedure. The data is then represented by vectors in a large vector space. A resource of synonym clusters is then constructed by performing K-means centroid-based clustering on the vectors. We evaluate the result manually, using dictionaries, and against WordNet, and discuss prospects and applications of this method. / I det här arbetet utvinner vi semantisk information som existerar implicit i tvåspråkig data. Vi samlar indata genom att upprepa proceduren semantisk spegling. Datan representeras som vektorer i en stor vektorrymd. Vi bygger sedan en resurs med synonymkluster genom att applicera K-means-algoritmen på vektorerna. Vi granskar resultatet för hand med hjälp av ordböcker, och mot WordNet, och diskuterar möjligheter och tillämpningar för metoden.
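A condensed sketch of the clustering step with scikit-learn's K-means (the vectors here are random placeholders standing in for the semantic-mirroring representations):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is the vector-space representation of one word obtained from
# repeated semantic mirroring (replaced here by random data for illustration).
words = ["big", "large", "huge", "small", "tiny", "little"]
vectors = np.random.rand(len(words), 50)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for word, cluster_id in zip(words, kmeans.labels_):
    print(cluster_id, word)  # words sharing a cluster are candidate synonyms
```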

Den offentliga dagboken : Vilka uttrycksmedel använder sig gymnasieungdomar av på dagboksbloggar? / The public diary : What means of expression do high school students use in their diary blogs?

Karlsson, Jessica January 2008 (has links)
Internet har sedan starten öppnat nya portar för kommunikation. En av de allra populäraste just nu är att blogga. Att uttrycka sig språkligt har kommit att bli så mycket mer än bara att använda sig av ord. På bloggen ges möjlighet att tillföra bild, film, färg och att använda olika typografiska medel, såsom att kursivera eller göra text fetstilt. Element som alla bidrar till hur text tolkas. Utifrån fjorton dagboksbloggar och totalt 289 blogginlägg har min uppsats syftat till att undersöka hur framställning på dessa bloggar, tillhörande gymnasieelever, skett. Mina frågeställningar lyder: (1) Hur använder sig gymnasieungdomar av olika uttrycksmedel för att estetiskt och kreativt skapa ett blogginlägg på så kallade dagboksbloggar? (2) Hur används rubriksättning, bild, film, färg och olika stilformat på texten för att skapa kommunikation och olika uttryck på blogginläggen? (3) Hur förhåller sig gymnasieungdomars dagboksblogg till den traditionella dagboken vad det gäller utformning och kommunikationsmöjligheter? Genom en strukturalistisk analys, med utgångspunkt hos Jurij Lotman, har jag gripit mig an blogginläggen på olika plan där jag både undersökt detaljer i texten och övergripande utformning. Jag har funnit att dagboksbloggen och dagboken skiljer sig på flera plan. Främst i fråga om kommunikationen som sker öppet på dagboksbloggen. Språkligt utmärker sig bloggen främst genom att ord och meningar betonas genom fetstilt och kursiv text, både för att göra texten mer lättövergriplig men också för att betona uttryck. Smileys och andra känslouttryck visar i sin tur hur ungdomarna undviker missförstånd på ett sätt som inte kräver bearbetning av texten. Jag vill säga att uppsatsen visar på hur en vidgad syn på språklighet och kommunikation idag är nödvändig, i och med de nya medel som tillkommit i dagens IT-samhälle. / The Internet has, since its beginning, opened new doors for communication. One of the most popular forms at the moment is blogging. Expressing yourself in writing has become so much more than just using words. Blogs make it possible to add pictures, video, colour and more, and to use typographical devices such as italics and bold type. All these elements contribute to how a text is read and interpreted. Based on fourteen diary blogs written by high school students and 289 posts in total, my thesis examines how these blogs are composed. The questions I have based my thesis on are: (1) How do high school students use different means of expression to aesthetically and creatively create posts on so-called diary blogs? (2) How are headlines, pictures, film, colour and different typographical devices used to create communication and different expressions in the posts? (3) How does the diary blog relate to the traditional diary with regard to form and possibilities for communication? Through a structuralist analysis based on Jurij Lotman's work, I have approached the posts on several levels, examining both details in the text and the overall design. I have found that the diary blog and the diary differ on several levels, foremost in the way communication takes place openly on the diary blog. Linguistically, the diary blog distinguishes itself from diaries mainly by the way words and sentences can be emphasised with italics and bold type, both to make the text easier to take in and to stress certain expressions. Smileys and other kinds of emotional expressions, in turn, are used by the bloggers to avoid misunderstandings without having to rework the text.
The thesis shows that a broader view of language and communication is necessary today, given the new means of expression that have emerged in the IT society.

Evaluating and comparing different key phrase-based web scraping methods for training domain-specific fasttext models / Utvärdering och jämförelse av olika nyckelfrasbaserade webbskrapningsmetoder för att träna domänspecifika fasttextmodeller

Book, Love January 2023 (has links)
The demand for automation of simple tasks is constantly increasing. While some tasks are easy to automate because the logic is fixed and the process is streamlined, other tasks are harder because the performance of the task is heavily reliant on the judgment of a human expert. Matching a consultant to an offer from a client is one such task, in which case the expert is either a manager of the consultants or someone within HR at the company. One way to approach this task is to model the specific domain of interest using natural language processing. If we can capture the relationships between relevant skills and phrases within the specific domain, we could potentially use the resulting embeddings in a consultant-to-offer matching scheme. In this paper, we propose a key phrase-based web scraping approach to collect the data we need for a domain-specific corpus. To retrieve the key phrases needed as prompts for web scraping, we propose using the transformer-based library KeyBERT on limited domain-specific in-house data belonging to the consultant firm B3 Indes, in order to retrieve the most important phrases in their respective contexts. Facebook's Word2vec-based language model fasttext is then used on the processed corpus to create the fixed word embeddings. We also investigate several approaches for selecting the right key phrases for web scraping in a human similarity comparison scheme, as well as comparisons to a larger pretrained general-domain fasttext model. We show that utilizing key phrases for a domain-specific fasttext model could be beneficial compared to using a larger pretrained model, although the results are not consistently conclusive under the current analytical framework. The results also indicate that KeyBERT is beneficial when selecting the key phrases compared to randomized sampling of relevant phrases; however, these results are not conclusive either. / Automatisering av enkla uppgifter efterfrågas alltmer. Medan vissa uppgifter är lätta att automatisera eftersom logiken är fast och processen är tydlig, är andra svårare eftersom utförandet av uppgiften starkt beror på en människas expertis. Att matcha en konsult till ett erbjudande från en klient är en sådan uppgift, där experten är antingen en chef för konsulterna eller någon inom HR på företaget. En metod för att hantera denna uppgift är att modellera det specifika området av intresse med hjälp av maskininlärningsbaserad språkteknologi. Om vi kan fånga relationerna mellan relevanta färdigheter och fraser inom det specifika området, skulle vi potentiellt kunna använda de resulterande inbäddningarna i en matchningsprocess mellan konsulter och uppdrag. I denna rapport föreslås en nyckelordsbaserad webbskrapningsmetod för att samla in data som behövs för ett domänspecifikt korpus. För att hämta de nyckelord som behövs som input för webbskrapning föreslår vi att använda det transformatorbaserade biblioteket KeyBERT på begränsad domänspecifik data från konsultbolaget B3 Indes, detta för att hämta de viktigaste fraserna i deras respektive sammanhang. Sedan används Facebooks Word2vec-baserade språkmodell fasttext på det bearbetade korpuset för att skapa statiska inbäddningar. Vi undersöker också olika metoder för att välja rätt nyckelord för webbskrapning i en likhetsjämförelse mot mänskliga experter, samt jämförelser med en större förtränad fasttext-modell som inte är domänspecifik.
Vi visar att användning av nyckelord för webbskrapning för träning av en domänspecifik fasttext-modell skulle kunna vara fördelaktig jämfört med en förtränad modell, men resultaten är inte konsekvent signifikanta enligt det begränsade analytiska ramverket. Resultaten indikerar också att KeyBERT är fördelaktigt vid valet av nyckelord jämfört med slumpmässigt urval av relevanta fraser, men dessa resultat är inte heller helt entydiga.
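A condensed sketch of the pipeline under stated assumptions (file names, parameters and the query term are placeholders, not the thesis's settings): extract key phrases with KeyBERT, use them as scraping queries, and train an unsupervised fasttext model on the scraped corpus.

```python
import fasttext
from keybert import KeyBERT

# 1) Key phrase extraction from a domain-specific document (hypothetical file)
kw_model = KeyBERT()
doc = open("b3_job_description.txt", encoding="utf-8").read()
key_phrases = [kp for kp, _score in kw_model.extract_keywords(
    doc, keyphrase_ngram_range=(1, 2), top_n=10)]

# 2) key_phrases would be sent to a web scraper; the scraped pages are assumed
#    to be cleaned and concatenated into corpus.txt (one document per line).

# 3) Train a domain-specific fasttext model on the scraped corpus
model = fasttext.train_unsupervised("corpus.txt", model="skipgram", dim=100, epoch=10)
print(model.get_nearest_neighbors("javautvecklare", k=5))  # hypothetical query term
```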

Multi-modal Models for Product Similarity : Comparative evaluation of unimodal and multi-modal architectures for product similarity prediction and product retrieval / Multimodala modeller för produktlikhet

Frantzolas, Christos January 2023 (has links)
With the rapid growth of e-commerce, enabling effective product recommendation systems and improving product search for shoppers play a crucial role in driving customer satisfaction. Traditional product retrieval approaches have mainly relied on unimodal models focusing on text data. However, to capture auxiliary context and improve the accuracy of similarity predictions, it is important to explore architectures that can leverage additional sources of information, such as images. This thesis compares the performance of multi-modal and unimodal methods for product similarity prediction and product retrieval. Both approaches are applied to two e-commerce datasets, one containing English and the other Swedish product descriptions. A pre-trained multi-modal model called CLIP is used as a feature extractor. Different models are trained on CLIP embeddings using either text-only, image-only or image-text inputs. An extension of triplet loss with margins is tested, along with various training setups. Given the lack of similarity labels between products, product similarity prediction is studied by measuring the performance of a K-Nearest Neighbour classifier implemented on features extracted by the trained models. The results demonstrate that multi-modal architectures outperform unimodal models in predicting product similarity, and the same is true for product retrieval. Combining textual and visual information seems to lead to more accurate predictions than models relying on only one modality. The findings of this research have considerable implications for e-commerce platforms and recommendation systems, providing insights into the effectiveness of multi-modal models for product-related tasks. Overall, the study contributes to the existing body of knowledge by highlighting the advantages of leveraging multiple sources of information for deep learning. It also presents recommendations for designing and implementing effective multi-modal architectures. / I och med den snabba tillväxten av e-handel spelar att möjliggöra effektivare produktrekommendationssystem och att förbättra produktsök för konsumenter en viktig roll för att öka kundnöjdheten. Traditionella angreppssätt för produktsök har huvudsakligen förlitat sig på unimodala textmodeller. För att fånga ett bredare kontext och förbättra exaktheten av prediktioner av likhet mellan produkter är det viktigt att utforska arkitekturer som kan utnyttja fler informationskällor så som bilder. Den här avhandlingen jämför prestanda hos multimodala och unimodala metoder för produktlikhetsprediktioner och produktsök. Båda angreppssätten är tillämpade på två e-handelsdatamängder, en med engelska produktbeskrivningar och en med svenska. En förtränad multimodal modell kallad CLIP används för att skapa produktrepresentationer. Olika modeller har tränats på CLIPs representationer, antingen med enbart text, enbart bild eller både bild och text. En utökning av ett triplettmått med marginaler har testats som träningskriterium, i kombination med olika träningsinställningar. Givet en avsaknad av likhetsannoteringar mellan produkter så har produktlikhetsprediktion studerats genom att mäta prestandan av K-närmaste-grannar-klassificering genom att använda vektor-representationer från de tränade modellerna. Avhandlingens resultat visar att multimodala arkitekturer överträffar unimodala modeller för produktlikhetsprediktion.
Att kombinera textuell och visuell information verkar leda till mer korrekta prediktioner jämfört med modeller som förlitar sig på endast en modalitet. Forskningsresultaten har markanta implikationer för e-handelsplattformar och rekommendationssystem, genom att tillhandahålla insikter i multimodala modellers effektivitet i produktrelaterade uppgifter. Överlag så bidrar studien till den existerande litteraturen genom att förtydliga fördelarna av att utnyttja flera informationskällor för djupinlärning. Den resulterar också i rekommendationer för att designa och implementera effektiva multimodala modellarkitekturer.
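A simplified sketch of the evaluation idea (placeholder data, not the thesis's code): concatenate per-product CLIP text and image embeddings and measure how well a K-Nearest Neighbour classifier recovers product categories from each input type.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: per-product CLIP embeddings extracted beforehand
# (e.g. 512-dimensional text and image vectors) and a coarse category label.
n_products = 1000
text_emb = np.random.randn(n_products, 512)
image_emb = np.random.randn(n_products, 512)
labels = np.random.randint(0, 10, size=n_products)

settings = {
    "text only": text_emb,
    "image only": image_emb,
    "multi-modal": np.hstack([text_emb, image_emb]),  # simple fusion by concatenation
}
for name, features in settings.items():
    knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
    acc = cross_val_score(knn, features, labels, cv=5).mean()
    print(f"{name:>11}: {acc:.3f}")  # with real embeddings, multi-modal would be expected to lead
```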
