  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Comparison of methods applied to job matching based on soft skills

Elm, Emilia January 2020 (has links)
The expression ''Hire for attitude, train for skills'' motivates the creation of a matching program in which the soft qualities of companies and job seekers are measured and compared against each other. Are some methods better suited to this purpose than others, and how do they compare? By associating soft qualities with companies and job seekers, it is possible to generate a value for how well they match. To this end, data has been collected on several companies and job seekers. Their associated qualities are translated into numerical vectors that can be used for matching, where vectors that lie closer together are more similar than vectors separated by greater distances. Several methods for analyzing and comparing the qualities have been applied and compared, followed by a discussion of their suitability. One consequence of the lack of a proper standard for presenting the qualities of companies and job seekers is that the data is messy and varied. An expected conclusion from the results is that the most flexible method generates the most accurate results.
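The vector matching described above can be sketched with a minimal, hypothetical example: soft-skill tags are one-hot encoded against a shared vocabulary and compared with cosine similarity. The vocabulary, skill names, and encoding are illustrative assumptions, not the thesis's actual data or method.

```python
import math

def skills_to_vector(skills, vocabulary):
    # One-hot encode a set of soft skills against a fixed vocabulary.
    return [1.0 if term in skills else 0.0 for term in vocabulary]

def cosine_similarity(a, b):
    # Vectors closer together (higher cosine) are better matches.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

vocabulary = ["teamwork", "leadership", "curiosity", "empathy"]
company = skills_to_vector({"teamwork", "curiosity"}, vocabulary)
seeker = skills_to_vector({"teamwork", "empathy"}, vocabulary)
match_score = cosine_similarity(company, seeker)  # shared "teamwork" only -> 0.5
```

With messier real-world tags, the encoding step (normalising synonyms, weighting terms) matters far more than the distance function itself, which is one reason flexible methods fare better.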
2

Deep face recognition using imperfect facial data

Elmahmudi, Ali A.M., Ugail, Hassan 27 April 2019 (has links)
Today, computer-based face recognition is a mature and reliable mechanism that is practically utilised in many access control scenarios. As such, face recognition or authentication is predominantly performed using ‘perfect’ data of full frontal facial images. In reality, however, there are numerous situations where full frontal faces are not available — the imperfect face images that often come from CCTV cameras demonstrate the case in point. Hence, the problem of computer-based face recognition using partial facial data as probes is still a largely unexplored area of research. Given that humans and computers perform face recognition inherently differently, it is intriguing to understand how a computer favours various parts of the face when presented with the challenges of face recognition. In this work, we explore the question of face recognition using partial facial data. We do so through novel experiments that test the performance of machine learning using partial faces and other manipulations of face images, such as rotation and zooming, which we use as training and recognition cues. In particular, we study the recognition rate subject to various parts of the face, such as the eyes, mouth, nose and cheek. We also study the effect of facial rotation on recognition, as well as the effect of zooming out of the facial images. Our experiments are based on a state-of-the-art convolutional neural network architecture along with the pre-trained VGG-Face model, through which we extract features for machine learning. We then use two classifiers, namely cosine similarity and linear support vector machines, to test the recognition rates. We ran our experiments on two publicly available datasets, namely the controlled Brazilian FEI and the uncontrolled LFW dataset.
Our results show that individual parts of the face, such as the eyes, nose and cheeks, have low recognition rates, though recognition rates quickly go up when combinations of facial parts are presented as probes.
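A rough sketch of the cosine-similarity classifier used in such experiments: a probe's feature vector (in the paper, extracted with VGG-Face; here, a tiny made-up vector) is compared against a gallery of enrolled subjects and assigned the identity of the closest match. Subject names and feature values are invented for illustration.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def identify(probe, gallery):
    # gallery: {subject_id: feature_vector}; return the best-matching subject.
    return max(gallery, key=lambda sid: cosine(probe, gallery[sid]))

gallery = {"subject_a": [0.9, 0.1, 0.0], "subject_b": [0.1, 0.8, 0.3]}
probe = [0.85, 0.2, 0.05]  # e.g. features of an eyes-only crop
best = identify(probe, gallery)  # closest gallery vector wins
```

In practice the feature vectors have thousands of dimensions, but the nearest-neighbour decision rule is the same.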
3

Evaluation of the correlation between test cases dependency and their semantic text similarity

Andersson, Filip January 2020 (has links)
An important step in developing software is to test the system thoroughly. Testing software requires generating test cases, which can reach large numbers and must often be executed in the correct order. The information needed to schedule the test cases correctly is not always available, so getting the order right demands considerable manual work and valuable resources. By instead analyzing their test specifications, it could be possible to detect the functional dependencies between test cases. This study presents a natural language processing (NLP) based approach and performs cluster analysis on a set of test cases to evaluate the correlation between test case dependencies and their semantic similarities. After an initial feature selection, the similarities between test cases are calculated with the cosine distance function. The result of the similarity calculation is then clustered using the HDBSCAN clustering algorithm. The clusters represent relations between test cases: test cases with high similarity are placed in the same cluster, as they are expected to share dependencies. The clusters are then validated against a ground truth containing the correct dependencies. The result is an F-score of 0.7741. The approach in this study is applied to an industrial testing project at Bombardier Transportation in Sweden.
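The validation step can be illustrated by computing an F-score over predicted dependency pairs against a ground truth. The pairs below are invented for illustration; in the study, predictions come from HDBSCAN clusters rather than a hand-written set.

```python
def f_score(predicted_pairs, truth_pairs):
    # predicted_pairs / truth_pairs: sets of test-case id pairs that
    # (are predicted to / actually do) share a dependency.
    tp = len(predicted_pairs & truth_pairs)
    precision = tp / len(predicted_pairs) if predicted_pairs else 0.0
    recall = tp / len(truth_pairs) if truth_pairs else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

predicted = {("tc1", "tc2"), ("tc2", "tc3"), ("tc4", "tc5")}
truth = {("tc1", "tc2"), ("tc4", "tc5"), ("tc5", "tc6")}
score = f_score(predicted, truth)  # tp=2, precision=2/3, recall=2/3 -> 2/3
```

Deriving the pair sets from clusters (every pair within a cluster counts as "predicted dependent") is the straightforward bridge from the clustering output to this metric.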
4

Changing a user’s search experience by incorporating preferences of metadata / Ändra en användares sökupplevelse genom att inkorporera metadatapreferenser

Ali, Miran January 2014 (has links)
Implicit feedback is usually data that comes from users’ clicks, search queries and text highlights. It exists in abundance, but it is riddled with noise and requires advanced algorithms to be put to good use. Several findings suggest that factors such as click-through data and reading time could be used to create user behaviour models in order to predict a user's information need. This Master’s thesis aims to use click-through data and search queries together with heuristics to create a model that prioritises the metadata fields of documents in order to predict the information need of a user. Simply put, implicit feedback will be used to improve the precision of a search engine. The Master’s thesis was carried out at Findwise AB, a search engine consultancy firm. Documents from the INEX benchmark dataset were indexed into a search engine. Two different heuristics were proposed that increment the priority of different metadata fields based on the users’ search queries and clicks. It was assumed that the heuristics would be able to change the listing order of the search results. Evaluations were carried out for the two heuristics, with the unmodified search engine as the baseline for the experiment. The evaluations were based on simulating a user who issues queries and clicks on documents. The queries and documents, with manually tagged relevance, came from a dataset provided by INEX. It was expected that the listing order would change in a way that was favourable for the user: the top-ranking results would be documents that truly were in the interest of the user. The evaluations revealed that the heuristics and the baseline behave erratically, and the metrics never converged to any specific mean relevance. A statistical test revealed that there is no difference in accuracy between the heuristics and the baseline. These results mean that the proposed heuristics do not improve the precision of the search engine; several factors, such as the indexing of too much redundant metadata, could be responsible for this outcome. / Implicit feedback is usually data that comes from users' clicks, search queries and text highlights. This data exists in abundance, but it contains too much noise, and advanced algorithms are required to make use of it. Several findings suggest that factors such as click data and reading time can be used to build behaviour models that predict a user's information need. This thesis aims to use click data and search queries together with heuristics to create a model that prioritises metadata fields in documents so that the user's information need can be predicted. In short, implicit feedback is to be used to improve the precision of a search engine. The thesis was carried out at Findwise AB, a consultancy firm specialising in search solutions. Documents from the INEX evaluation dataset were indexed in a search engine. Two different heuristics were created to change the priority of the metadata fields based on users' search and click data. It was assumed that the heuristics would be able to change the order of the search results. Evaluations were performed for both heuristics, with the unmodified search engine as the baseline for the experiment. The evaluations simulated a user who issues queries and clicks on documents. These queries and documents, with manually tagged relevance data, came from a dataset provided by INEX. The evaluations showed that the behaviour of the heuristics and the baseline is random and erratic; neither heuristic converges towards any specific mean relevance. A statistical test shows no significant difference in measured accuracy between the heuristics and the baseline. These results mean that the heuristics do not improve the precision of the search engine. This outcome may be due to several factors, such as the indexing of redundant metadata.
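A minimal sketch of the field-boosting idea, assuming a toy document model (dicts of metadata fields) and an invented increment of 0.5 per matching click; the thesis's actual heuristics and weights may differ considerably.

```python
from collections import defaultdict

# Every metadata field starts with a neutral priority of 1.0.
field_weights = defaultdict(lambda: 1.0)

def record_click(query_terms, clicked_doc):
    # Heuristic: boost every metadata field whose value matched the query,
    # on the assumption that this field drove the user's click.
    for field, value in clicked_doc.items():
        if any(term in value.lower() for term in query_terms):
            field_weights[field] += 0.5

def score(query_terms, doc):
    # Weighted count of metadata fields that match the query.
    return sum(field_weights[f] for f, v in doc.items()
               if any(term in v.lower() for term in query_terms))

docs = [{"title": "Implicit feedback in search", "author": "Doe"},
        {"title": "Graph databases", "author": "Feedback, A."}]
record_click(["feedback"], docs[0])
ranked = sorted(docs, key=lambda d: score(["feedback"], d), reverse=True)
```

After one click, the "title" field outweighs "author", so the document matching on its title ranks first; with erratic click data, of course, such boosts can just as easily amplify noise, as the evaluation above suggests.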
5

Termediator-II: Identification of Interdisciplinary Term Ambiguity Through Hierarchical Cluster Analysis

Riley, Owen G. 23 April 2014 (has links) (PDF)
Technical disciplines are evolving rapidly, leading to changes in their associated vocabularies, and this evolving terminology causes confusion in interdisciplinary communication. Two causes of confusion are multiple definitions (overloaded terms) and synonymous terms; the formal names for these two problems are polysemy and synonymy. Termediator-I, a web application built on top of a collection of glossaries, uses definition count as a measure of term confusion. This tool was an attempt to identify confusing cross-disciplinary terms, but as more glossaries were added to the collection, the measure became ineffective. This thesis provides a measure of term polysemy. Term polysemy is effectively measured by semantically clustering the text concepts, or definitions, of each term and counting the number of resulting clusters. Hierarchical clustering uses a measure of proximity between the text concepts; three such measures are evaluated: cosine similarity, latent semantic indexing, and latent Dirichlet allocation. Two linkage types for determining cluster proximity during the hierarchical clustering process are also evaluated: complete linkage and average linkage. Crowdsourcing through a web application was unsuccessfully attempted to obtain a viable clustering threshold by public consensus. An alternative metric of polysemy, the convergence value, is identified and tested as a viable clustering threshold. Six resulting lists of terms ranked by cluster count based on convergence values are generated, one for each combination of similarity measure and linkage type. Each combination produces a competitive list, and no combination can be determined clearly superior. Semantic clustering successfully identifies polysemous terms, but each combination of similarity measure and linkage type provides slightly different results.
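The cluster-counting measure of polysemy can be sketched with a tiny average-linkage agglomerative clusterer over bag-of-words definitions. This sketch uses cosine distance only (the thesis also evaluates LSI and LDA), and the fixed 0.5 threshold stands in for the convergence value; the example definitions are invented.

```python
import math
from collections import Counter

def cosine_dist(a, b):
    # a, b: Counter bags of words for two definitions.
    terms = set(a) | set(b)
    dot = sum(a[t] * b[t] for t in terms)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / (na * nb) if na and nb else 0.0)

def average_linkage(c1, c2, vecs):
    # Mean pairwise distance between members of two clusters.
    return sum(cosine_dist(vecs[i], vecs[j])
               for i in c1 for j in c2) / (len(c1) * len(c2))

def cluster_count(definitions, threshold=0.5):
    # Merge the closest clusters until no pair is within the threshold;
    # the number of surviving clusters estimates the term's sense count.
    vecs = [Counter(d.lower().split()) for d in definitions]
    clusters = [[i] for i in range(len(vecs))]
    while len(clusters) > 1:
        pairs = [(average_linkage(a, b, vecs), ai, bi)
                 for ai, a in enumerate(clusters)
                 for bi, b in enumerate(clusters) if ai < bi]
        d, ai, bi = min(pairs)
        if d > threshold:
            break
        clusters[ai] = clusters[ai] + clusters[bi]
        del clusters[bi]
    return len(clusters)

senses = ["a computer network node", "a node in a graph structure",
          "a swelling on a plant stem"]
n_senses = cluster_count(senses)  # the two technical senses merge
```

The two computing-flavoured definitions fall into one cluster and the botany one stays apart, giving a cluster count of two for the overloaded term "node".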
6

How natural language processing can be used to improve digital language learning / Hur natural language processing kan användas för att förbättra digital språkinlärning

Kakavandy, Hanna, Landeholt, John January 2020 (has links)
The world is facing globalization, and with that, companies are growing and need to hire according to their needs. A great obstacle is the language barrier between job applicants and employers who want to hire competent candidates. One spark of light in this challenge is Lingio, which provides a product that digitally teaches profession-specific Swedish. Lingio intends to make its existing product more interactive, and this research paper examines aspects involved in that. This study evaluates system utterances planned to be used in Lingio’s product for language learners to practise with, and studies the feasibility of using cosine similarity as a natural language model for classifying the correctness of answers to these utterances. This report also examines whether it is best to use crowd-sourced material or a golden standard as the benchmark for a correct answer. The results indicate that a number of improvements and developments need to be made to the model for it to classify answers accurately, owing to its formulation and the complexity of human language. It is also concluded that Lingio's utterances might need further development to be effective for language learning, and that crowd-sourced material works better than a golden standard. The study makes several interesting observations from the collected data and analysis, aiming to contribute to further research in natural language engineering when it comes to text classification and digital language learning. / Globalization brings several consequences for growing companies. One of the challenges companies face is hiring enough competent staff. For many companies, the language barrier stands between them and the competence they need; job seekers often lack the language skills required to manage the job.
Lingio is a company that works on precisely this: its product is a digital application that teaches profession-specific Swedish, an effective solution for those who want to focus their language learning towards a job. The aim is to help Lingio in the development of its product, more specifically in the work of making it more interactive. This is done by examining the effectiveness of the application's utterances used for learning purposes and by using a language technology model to classify a user's answer to an utterance. Furthermore, it is analysed whether it is best to use a golden standard or material collected from surveys as the reference point for a correct utterance. The results show that the model has several weaknesses and needs to be developed in order to classify correctly, and that there is room for improvement in the utterances. It is also shown that material collected from surveys works better than a golden standard.
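Grading an answer against crowd-sourced references instead of a single golden standard can be sketched as follows: accept the answer if it is close enough to any reference. The cosine measure, threshold, and the (diacritic-free) example phrases are all illustrative assumptions, not Lingio's actual data.

```python
import math
from collections import Counter

def cosine(a, b):
    # a, b: Counter bags of words.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_correct(answer, references, threshold=0.5):
    # Crowd-sourced variant: accept if the answer is close to ANY reference
    # answer, rather than to one golden-standard sentence.
    vec = Counter(answer.lower().split())
    return max(cosine(vec, Counter(r.lower().split()))
               for r in references) >= threshold

references = ["jag arbetar som sjukskoterska", "jag ar sjukskoterska"]
correct = is_correct("jag arbetar pa sjukhus som sjukskoterska", references)
```

A bag-of-words cosine like this is exactly the kind of formulation the thesis finds too crude for human language: it cannot distinguish word order or negation, which is part of why the model needs further development.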
7

Experiments on deep face recognition using partial faces

Elmahmudi, Ali A.M., Ugail, Hassan January 2018 (has links)
Face recognition is a subject of great current interest in the area of visual computing. In the past, numerous face recognition and authentication approaches have been proposed, though the great majority of them use full frontal faces both for training machine learning algorithms and for measuring recognition rates. In this paper, we discuss some novel experiments to test the performance of machine learning, especially deep learning, using partial faces as training and recognition cues. This study thus differs sharply from the common approach of using the full face for recognition tasks. In particular, we study the recognition rate subject to various parts of the face, such as the eyes, mouth, nose and forehead. We use a convolutional neural network based architecture along with the pre-trained VGG-Face model to extract features for training. We then use two classifiers, namely cosine similarity and the linear support vector machine, to test the recognition rates. We ran our experiments on the Brazilian FEI dataset consisting of 200 subjects. Our results show that the cheek of the face has the lowest recognition rate, at 15%, while the (top, bottom and right) half and the 3/4 of the face have near 100% recognition rates. / Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
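The per-part recognition rates reported here (e.g. 15% for the cheek) are simply accuracy over a set of probes. A sketch with invented predictions for two face parts:

```python
def recognition_rate(predictions, truth):
    # Fraction of probes whose predicted identity matches the ground truth.
    hits = sum(1 for p, t in zip(predictions, truth) if p == t)
    return hits / len(truth)

# Hypothetical per-part predictions for five probes of subjects s1..s5.
truth = ["s1", "s2", "s3", "s4", "s5"]
by_part = {
    "cheek":    ["s3", "s2", "s1", "s5", "s4"],  # mostly wrong
    "top_half": ["s1", "s2", "s3", "s4", "s5"],  # perfect
}
rates = {part: recognition_rate(preds, truth)
         for part, preds in by_part.items()}
```

Running the same computation per face part, rotation angle, or zoom level yields the comparison tables such experiments report.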
8

A comparison of different methods in their ability to compare semantic similarity between articles and press releases / En jämförelse av olika metoder i deras förmåga att jämföra semantisk likhet mellan artiklar och pressmeddelanden

Andersson, Julius January 2022 (has links)
The goal of a press release is to have the information spread as widely as possible. A suitable approach to distributing the information is to target journalists who are likely to distribute it further. Deciding which journalists to target has traditionally been performed manually, without intelligent digital assistance, and has therefore been a time-consuming task. Machine learning can assist the user by predicting a ranking of journalists based on their written article most semantically similar to the press release. The purpose of this thesis was to compare different methods in their ability to compare semantic similarity between articles and press releases when used for the task of ranking journalists. Three methods were chosen for comparison: (1) TF-IDF together with cosine similarity, (2) TF-IDF together with soft-cosine similarity, and (3) sentence mover’s distance (SMD) together with SBERT. Based on the proposed heuristic success metric, both TF-IDF methods outperformed the SMD method. The best-performing method was TF-IDF with soft-cosine similarity. / The goal of a press release is to have the information spread to as many people as possible. A suitable approach to spreading the information is to target journalists who are likely to spread it further. The decision of which journalists to target has traditionally been made manually, without intelligent digital assistance, and has therefore been a time-consuming task. Machine learning can be used to assist the user by predicting a ranking of journalists based on their written article most semantically similar to the press release. The purpose of this thesis was to compare different methods in their ability to compare semantic similarity between articles and press releases when used to rank journalists. Three methods were chosen for comparison: (1) TF-IDF together with cosine similarity, (2)
TF-IDF together with soft-cosine similarity, and (3) sentence mover’s distance (SMD) together with SBERT. Based on the proposed heuristic success metric, both TF-IDF methods outperformed the SMD method. The best-performing method was TF-IDF with soft-cosine similarity.
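The first method, TF-IDF with cosine similarity, can be sketched end to end in a few lines. The journalist names, article snippets, and smoothed-idf variant below are illustrative assumptions, not the thesis's corpus or exact weighting.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists; returns one tf-idf Counter per document.
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed idf
    return [Counter({t: c * idf[t] for t, c in Counter(doc).items()})
            for doc in docs]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

articles = {"anna": "electric cars reshape the auto industry".split(),
            "bjorn": "local bakery wins national pastry award".split()}
release = "new electric car model announced by auto maker".split()
vecs = tfidf_vectors(list(articles.values()) + [release])
ranking = sorted(articles,
                 key=lambda j: cosine(vecs[list(articles).index(j)], vecs[-1]),
                 reverse=True)
```

Plain cosine only credits exact term overlap ("electric", "auto" here); soft-cosine additionally credits related but distinct terms, which is presumably why it edged out this baseline.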
9

Exploring State-of-the-Art Natural Language Processing Models with Regards to Matching Job Adverts and Resumes

Rückert, Lise, Sjögren, Henry January 2022 (has links)
The ability to automate the process of comparing and matching resumes with job adverts is a growing research field. This can be done with Natural Language Processing (NLP), the area of machine learning that enables a model to learn human language. This thesis explores and evaluates the application of the state-of-the-art NLP model SBERT to the task of comparing, and calculating a measure of similarity between, text extracted from resumes and adverts. It also investigates what type of data generates the best-performing model on said task. The results show that SBERT can quickly be trained on unlabeled data from the HR domain using a Triplet network, and achieves good results when tested on various tasks. The models are shown to be bilingual, to handle unseen vocabulary, and to capture the concept and descriptive context of entire sentences rather than solely single words. The conclusion is therefore that the models have a sound understanding of semantic similarity and relatedness. However, in some cases the models are also shown to become binary in their similarity calculations between inputs. Moreover, it is hard to tune a model that exhaustively covers a domain as diverse as HR. A model fine-tuned on clean and generic data extracted from adverts shows the best overall performance in terms of loss and consistency.
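The Triplet-network training mentioned above optimises a triplet loss: an anchor (say, an advert embedding) is pulled closer to a matching resume than to an unrelated one, by at least a margin. A minimal sketch with Euclidean distance and invented 2-d "embeddings" (real SBERT embeddings have hundreds of dimensions):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Zero when the positive is already closer to the anchor than the
    # negative by at least `margin`; positive otherwise.
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

advert = [0.0, 0.0]
matching_resume = [0.2, 0.1]    # should end up close to the advert
unrelated_resume = [3.0, 4.0]   # should end up far away
loss = triplet_loss(advert, matching_resume, unrelated_resume)
```

Because the loss needs only (anchor, similar, dissimilar) triples rather than labelled similarity scores, it suits unlabeled HR data where such triples can be mined heuristically.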
10

Word2vec2syn : Synonymidentifiering med Word2vec / Word2vec2syn : Synonym Identification using Word2vec

Pettersson, Tove January 2019 (has links)
Within natural language processing (NLP), synonym identification is one of the linguistic challenges that many take on. Fodina Language Technology AB is a company that has created a tool, Termograph, intended to collect terms within companies and keep internal language use consistent. A combination of language technology strategies constitutes the synonym identification, and Fodina wants broader coverage and more dynamics in the extraction process. This thesis therefore aimed to develop a new method for synonym identification, beyond the existing method combination. A pre-trained Word2vec model was used, and its built-in cosine similarity function was applied to retrieve synonyms and create clusters. The model was validated, tested and evaluated against the method combination. The validation showed that the model estimated within a reasonable human range on average 60.30% of the time, and Spearman's correlation showed a significantly strong correlation. The testing showed that 32% of the processed clusters contained matching synonym suggestions. The evaluation showed that, in the cases where the suggestions did not match, the model's synonym suggestions were correct in 5.73% of cases compared with 3.07% for the method combination. The inter-rater reliability showed a weak but present agreement, Fleiss' Kappa = 0.19, CI (0.06, 0.33). Despite some uncertainty in the results, opportunities for further use of Word2vec models within Fodina's synonym identification are nevertheless demonstrated. / One of the main challenges in the field of natural language processing (NLP) is synonym identification. Fodina Language Technology AB is the company behind the tool Termograph, which aims to collect terms and provide consistent language within companies. A combination of multiple methods from the field of language technology constitutes the synonym identification, and Fodina would like to improve the coverage and increase the dynamics of the working process.
The focus of this thesis was therefore to evaluate a new method for synonym identification beyond the combination already in use. A pre-trained Word2vec model was used, and for the synonym identification its built-in function for cosine similarity was applied in order to create clusters. The model was validated, tested and evaluated relative to the combination. The validation implied that the model made estimations within a fair human-based range on average 60.30% of the time, and Spearman's correlation indicated a strong significant correlation. The testing showed that 32% of the processed synonym clusters contained matching synonym suggestions. The evaluation showed that, in the cases where the clusters did not match, the synonym suggestions from the model were correct in 5.73% of all cases compared to 3.07% for the combination. The inter-rater reliability indicated a slight agreement, Fleiss’ Kappa = 0.19, CI (0.06, 0.33). Despite uncertainty in the results, opportunities for further use of Word2vec models within Fodina's synonym identification are nevertheless demonstrated.
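The inter-rater agreement figure (Fleiss' Kappa = 0.19) can be computed directly from a rating matrix. The matrix below, with three raters judging each suggested synonym as correct or not, is invented for illustration and is not the thesis's evaluation data.

```python
def fleiss_kappa(ratings):
    # ratings: one row per item, one column per category; each cell is the
    # number of raters who assigned that item to that category.
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Observed per-item agreement, averaged over items.
    p_items = [(sum(c * c for c in row) - n_raters)
               / (n_raters * (n_raters - 1)) for row in ratings]
    p_bar = sum(p_items) / n_items
    # Chance agreement from overall category proportions.
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Three raters, four synonym suggestions, categories (correct, incorrect).
ratings = [[3, 0], [2, 1], [1, 2], [3, 0]]
kappa = fleiss_kappa(ratings)  # low positive kappa: slight agreement
```

Values near 0 indicate agreement barely above chance, which is why a kappa of 0.19 is read as only "slight" agreement despite being significantly positive.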
