251

A Confirmatory Analysis for Automating the Evaluation of Motivation Letters to Emulate Human Judgment

Mercado Salazar, Jorge Anibal, Rana, S M Masud January 2021
Manually reading, evaluating, and scoring motivation letters as part of the admissions process is a time-consuming and tedious task for Dalarna University's program managers. An automated scoring system would provide them with relief as well as the ability to make much faster decisions when selecting applicants for admission. The aim of this thesis was to analyse current human judgment and attempt to emulate it using machine learning techniques. We used various topic modelling methods, such as Latent Dirichlet Allocation and Non-Negative Matrix Factorization, to find the most interpretable topics, build a bridge between topics and human-defined factors, and finally evaluate model performance by predicting scoring values and measuring accuracy using logistic regression, discriminant analysis, and other classification algorithms. Despite the fact that we were able to discover the meaning of almost all human factors on our own, the topic models' accuracy in predicting the overall score was unexpectedly low. Setting a threshold on the overall score to select applicants for admission yielded a good overall accuracy result, but did not yield consistently good precision or recall scores. During our investigation, we attempted to determine the possible causes of these unexpected results and discovered that not only are the limitations of topic modelling to blame, but human bias also plays a role.
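To make the pipeline concrete, here is a minimal sketch of NMF topic extraction from TF-IDF features with scikit-learn; the toy letters, vectorizer settings, and topic count are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch of NMF topic modelling on motivation letters;
# the data and parameters are illustrative, not the thesis pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

letters = ["I am motivated to study data science because ...",
           "My background in statistics prepared me for ..."]

# TF-IDF features are a common input representation for NMF topic models.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(letters)

nmf = NMF(n_components=2, random_state=0)   # one component per topic
W = nmf.fit_transform(X)                    # letter-topic weights
H = nmf.components_                         # topic-term weights

# Top terms per topic, to check interpretability against human-defined factors.
terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```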
252

Data-driven methods for estimation of dynamic OD matrices

Eriksson, Ina, Fredriksson, Lina January 2021
The idea behind this report is based on the fact that it is not only the number of users in the traffic network that is increasing; the number of connected devices, such as probe vehicles and mobile sources, has also increased dramatically in the last decade. These connected devices provide large-scale mobility data and new opportunities to analyze the current traffic situation as they traverse the network and continuously send out different types of information, such as Global Positioning System (GPS) data and Mobile Network Data (MND). Travel demand is often described in terms of an Origin-Destination (OD) matrix, which represents the number of trips from an origin zone to a destination zone in a geographic area. The aim of this master thesis is to develop and evaluate a data-driven method for estimation of dynamic OD matrices using unsupervised learning, sensor fusion and large-scale mobility data. Traditionally, OD matrices are estimated based on travel surveys and link counts. The problem is that these sources of information do not provide the quality required for online control of the traffic network. A method consisting of an offline process and an online process has therefore been developed. The offline process utilizes historical large-scale mobility data to improve an inaccurate prior OD matrix. The online process utilizes the results and tuning parameters from the offline estimation in combination with real-time observations to describe the current traffic situation. A simulation study on a toy network with synthetic data was used to evaluate the data-driven estimation method. Observations based on GPS data, MND and link counts were simulated via a traffic simulation tool. The results showed that the sensor fusion algorithms Kalman filter and Kalman filter smoothing can be used when estimating dynamic OD matrices. The results also showed that the quality of the data sources used for the estimation is of high importance. Aggregating large-scale mobility data such as GPS data and MND using the unsupervised learning method Principal Component Analysis (PCA) improves the quality of the large-scale mobility data and thus the estimation results. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
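For illustration, here is a minimal sketch of a linear Kalman-filter step for OD estimation, assuming the OD flows evolve as a random walk and the link counts are a linear function of the OD vector (y = Ax + noise); the assignment matrix, noise covariances, and prior are placeholders, not the thesis implementation.

```python
# A minimal Kalman-filter sketch for a dynamic OD state vector; all
# matrices below are illustrative placeholders, not the thesis setup.
import numpy as np

def kalman_step(x, P, y, A, Q, R):
    """One predict/update step for the OD state vector x."""
    # Predict: random-walk transition (state matrix = identity).
    x_pred = x
    P_pred = P + Q
    # Update with observed link counts y.
    S = A @ P_pred @ A.T + R              # innovation covariance
    K = P_pred @ A.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - A @ x_pred)
    P_new = (np.eye(len(x)) - K @ A) @ P_pred
    return x_new, P_new

# Toy example: 3 OD pairs observed through 2 links.
x = np.array([100.0, 50.0, 80.0])           # prior OD flows
P = np.eye(3) * 25.0                         # prior uncertainty
A = np.array([[1.0, 1.0, 0.0],               # link 1 carries OD pairs 1-2
              [0.0, 1.0, 1.0]])              # link 2 carries OD pairs 2-3
y = np.array([160.0, 140.0])                 # observed link counts
x, P = kalman_step(x, P, y, A, np.eye(3), np.eye(2) * 4.0)
print(x)
```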
253

Insurance Fraud Detection using Unsupervised Sequential Anomaly Detection

Hansson, Anton, Cedervall, Hugo January 2022
Fraud is a common crime within the insurance industry, and insurance companies want to quickly identify fraudulent claimants as they often result in higher premiums for honest customers. Due to the digital transformation, where the sheer volume and complexity of available data has grown, manual fraud detection is no longer suitable. This work aims to automate the detection of fraudulent claimants and gain practical insights into fraudulent behavior using unsupervised anomaly detection, which, compared to supervised methods, allows for a more cost-efficient and practical application in the insurance industry. To obtain interpretable results and benefit from the temporal dependencies in human behavior, we propose two variations of LSTM-based autoencoders to classify sequences of insurance claims. Autoencoders can provide feature importances that give insight into the models' predictions, which is essential when models are put to practice. This approach relies on the assumption that outliers in the data are fraudulent. The models were trained and evaluated on a dataset we engineered using data from a Swedish insurance company, where the few labeled frauds that existed were used solely for validation and testing. Experimental results show state-of-the-art performance, and further evaluation shows that the combination of autoencoders and LSTMs is efficient but has performance similar to the employed baselines. This thesis provides an entry point for interested practitioners to learn key aspects of anomaly detection within fraud detection by thoroughly discussing the subject at hand and the details of our work. / Conducted digitally via Zoom.
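A minimal sketch of an LSTM autoencoder for claim sequences, assuming fixed-length sequences of numeric claim features; the layer sizes, data, and anomaly threshold are illustrative, not the thesis models.

```python
# A minimal LSTM-autoencoder sketch that flags claimants with high
# reconstruction error as potential anomalies; all shapes are illustrative.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                    # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)          # h: (1, batch, hidden)
        # Repeat the final hidden state as decoder input at every time step.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder(n_features=5)
x = torch.randn(8, 10, 5)                    # 8 claimants, 10 claims each
recon = model(x)
# Per-claimant reconstruction error; high error flags a potential anomaly.
error = ((recon - x) ** 2).mean(dim=(1, 2))
flagged = error > error.mean() + 2 * error.std()
print(flagged)
```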
254

Cooperative security log analysis using machine learning : Analyzing different approaches to log featurization and classification

Malmfors, Fredrik January 2022
This thesis evaluates the performance of different machine learning approaches to log classification based on a dataset derived from simulating intrusive behavior towards an enterprise web application. The first experiment consists of performing attacks against the web app and correlating them with the logs to create a labeled dataset. The second experiment consists of one unsupervised model based on a variational autoencoder and four supervised models based on both conventional feature-engineering techniques with deep neural networks and embedding-based feature techniques followed by long short-term memory (LSTM) architectures and convolutional neural networks. With this dataset, the embedding-based approaches performed much better than the conventional one. The autoencoder did not perform well compared to the supervised models. To conclude, embedding-based approaches show promise even on datasets with characteristics different from natural language.
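A minimal sketch of an embedding-based log-line classifier with an LSTM, assuming log lines are tokenized to integer ids; the vocabulary size, dimensions, and labels are illustrative, not the thesis setup.

```python
# A minimal embedding + LSTM classifier sketch for tokenized log lines;
# the vocabulary, dimensions, and class labels are illustrative.
import torch
import torch.nn as nn

class LogClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)
        _, (h, _) = self.lstm(emb)
        return self.fc(h[-1])                # logits per log line

model = LogClassifier()
batch = torch.randint(1, 1000, (4, 20))      # 4 tokenized log lines
logits = model(batch)
print(logits.argmax(dim=1))                  # e.g. 0 = benign, 1 = intrusive
```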
255

3D Object Detection based on Unsupervised Depth Estimation

Manoharan, Shanmugapriyan 25 January 2022
Estimating depth and detecting object instances in 3D space are fundamental tasks in autonomous navigation, localization and mapping, robotic object manipulation, and augmented reality. RGB-D images and LiDAR point clouds are the most illustrative formats of depth information. However, depth sensors have many shortcomings, such as low effective spatial resolution and capturing a scene from only a single perspective. This thesis focuses on reproducing a denser and more comprehensive 3D scene structure for given monocular RGB images using depth estimation and 3D object detection. The first contribution of this thesis is a pipeline for depth estimation based on an unsupervised learning framework. The thesis proposes two architectures to analyze structure-from-motion and 3D geometric constraint methods. The proposed architectures were trained and evaluated using only RGB images, with no ground-truth depth data, and achieved better results than state-of-the-art methods. The second contribution is the application of the estimated depth map, which includes two algorithms: point cloud generation and collision avoidance. The predicted depth map and the RGB image are used to generate point cloud data using the proposed point cloud algorithm. The collision avoidance algorithm predicts the possibility of collision and provides a collision warning message based on decoding the color in the estimated depth map. This algorithm design is adaptable to different color maps with slight changes and perceives collision information across a sequence of frames. The third contribution is a two-stage pipeline to detect 3D objects from a monocular image. The first stage detects the 2D objects and crops the corresponding image patch, which is provided as input to the second stage, where a 3D regression network is trained to estimate the 3D bounding boxes of the target objects. Two architectures are proposed for this 3D regression network. This approach achieves better average precision than the state of the art for objects with truncation up to 15% or fully visible objects, and lower but comparable results for truncation above 30% or partly/fully occluded objects.
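A minimal sketch of point-cloud generation by back-projecting a depth map with camera intrinsics, one standard way to implement the point cloud step described here; the depth values and intrinsics are illustrative placeholders, not the thesis calibration.

```python
# A minimal back-projection sketch: pixels plus depth -> 3D points.
# The intrinsics (fx, fy, cx, cy) below are illustrative placeholders.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a (H, W) depth map to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole camera model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.random.uniform(1.0, 10.0, (480, 640))     # stand-in for a prediction
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape)                                  # (307200, 3)
```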
256

An Approach To Cluster And Benchmark Regional Emergency Medical Service Agencies

Kondapalli, Swetha 06 August 2020
No description available.
257

Optimisation of autoencoders for prediction of SNPs determining phenotypes in wheat

Nair, Karthik January 2021
The increase in demand for food has resulted in increased demand for tools that help streamline the plant breeding process in order to create new varieties of crops. Identifying the underlying genetic mechanisms of favourable characteristics is essential in order to make the best breeding decisions. In this project we have developed a modified autoencoder model which allows for lateral phenotype injection into the latent layer, in order to identify causal SNPs for phenotypes of interest in wheat. SNP and phenotype data for 500 samples of Lantmännen SW Seed provided by Lantmännen were used to train the network. An artificial phenotype created from a single SNP was used during training instead of a real phenotype, since the relationship between the phenotype and the SNP is then already known. The modified training model with lateral phenotype injection showed a significant increase in genotype concordance for the artificial phenotype when compared to the control model without phenotype injection. The causal SNP was successfully identified by using a concordance terrain graph, where the difference in concordance of individual SNPs between the modified model and the control model was plotted against the genomic position of each SNP. The model requires further testing to elucidate its behaviour for phenotypes linked to multiple SNPs.
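A minimal sketch of the genotype-concordance comparison described here, assuming genotypes coded 0/1/2 per SNP; the arrays are synthetic stand-ins, not the Lantmännen data, and the helper name is hypothetical.

```python
# A minimal sketch of per-SNP genotype concordance; data are synthetic.
import numpy as np

def snp_concordance(true_geno, recon_geno):
    """Per-SNP fraction of samples where the reconstruction matches."""
    return (true_geno == recon_geno).mean(axis=0)

true_geno = np.random.randint(0, 3, (500, 1000))    # 500 samples x 1000 SNPs
recon_geno = true_geno.copy()
recon_geno[:, 42] = np.random.randint(0, 3, 500)    # degrade one SNP

# A "concordance terrain" plots the concordance difference between the
# phenotype-injected model and the control model against genomic position;
# here we only compute one model's concordance profile.
print(snp_concordance(true_geno, recon_geno)[40:45])
```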
258

A MIP Approach for Community Detection in the Stochastic Block Model

BRENO SERRANO DE ARAUJO 04 November 2020
The Degree-Corrected Stochastic Block Model (DCSBM) is a popular model to generate random graphs with community structure given an expected degree sequence. The standard approach of community detection algorithms based on the DCSBM is to search for the model parameters which are the most likely to have produced the observed network data, via maximum likelihood estimation (MLE). Current techniques for the MLE problem are heuristics and therefore do not guarantee convergence to the optimum. We present mathematical programming formulations and exact solution methods that can provably find the model parameters and community assignments of maximum likelihood given an observed graph. We compare the proposed exact methods with classical heuristic algorithms based on expectation-maximization (EM). The solutions given by exact methods give us a principled way of recognizing when heuristic solutions are sub-optimal and of measuring how far they are from optimality.
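For context, the objective maximized in DCSBM-based community detection is commonly written as the profile log-likelihood of the Poisson DCSBM after the degree and affinity parameters are profiled out (the Karrer-Newman form; the thesis's exact formulation may differ):

```latex
% Profile log-likelihood of the Poisson DCSBM over community assignments g,
% up to additive constants: m_{rs} counts edges between groups r and s,
% and \kappa_r is the total degree of the nodes in group r.
\log L(g) = \sum_{r,s} m_{rs} \, \log \frac{m_{rs}}{\kappa_r \, \kappa_s}
```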
259

Customer segmentation of retail chain customers using cluster analysis

Bergström, Sebastian January 2019
In this thesis, cluster analysis was applied to data comprising customer spending habits at a retail chain in order to perform customer segmentation. The method used was a two-step cluster procedure: the first step consisted of feature engineering, a square-root transformation of the data in order to handle big spenders in the data set, and finally principal component analysis in order to reduce the dimensionality of the data set and thereby the effects of high dimensionality. The second step consisted of applying clustering algorithms to the transformed data. The methods used were K-means clustering, Gaussian mixture models in the MCLUST family, t-distributed mixture models in the tEIGEN family, and non-negative matrix factorization (NMF). For the NMF clustering a slightly different pre-processing step was taken; specifically, no PCA was performed. Clustering partitions were compared on the basis of the Silhouette index, the Davies-Bouldin index and subject matter knowledge, which revealed that K-means clustering with K = 3 produces the most reasonable clusters. This algorithm was able to separate the customers into different segments depending on how many purchases they made overall, and within these clusters some minor differences in spending habits are also evident. In other words, there is some support for the claim that the customer segments have some variation in their spending habits.
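A minimal sketch of the two-step procedure described above (square-root transform, PCA, then K-means with K = 3, compared via the silhouette index); the synthetic spending matrix and parameter choices are illustrative, not the retail data.

```python
# A minimal two-step segmentation sketch; the data are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

spend = np.random.gamma(shape=2.0, scale=50.0, size=(1000, 20))  # customers x categories

X = np.sqrt(spend)                        # dampen the effect of big spenders
X = PCA(n_components=5).fit_transform(X)  # reduce dimensionality

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))        # one index used to compare partitions
```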
260

Deep Scenario Generation of Financial Markets

Carlsson, Filip, Lindgren, Philip January 2020
The goal of this thesis is to explore a new clustering algorithm, VAE-clustering, and examine whether it can be applied to find differences in the distribution of stock returns, augment the distribution of a current portfolio of stocks, and see how it performs under different market conditions. The VAE-clustering method is, as mentioned, newly introduced and not widely tested, especially not on time series. The first step is therefore to see if and how well the clustering works. We first apply the algorithm to a dataset containing monthly time series of the power demand in Italy, where the purpose is to focus on how well the method works technically. Once the model works well and generates proper results on the Italian power demand data, we move on and apply it to stock return data. In the latter application we are unable to find meaningful clusters and are therefore unable to move forward towards the goal of the thesis. The results show that the VAE-clustering method is applicable to time series. The power demand has clear differences from season to season, and the model can successfully identify those differences. When it comes to the financial data, we hoped that the model would be able to find different market regimes based on time periods; the model is, however, not able to distinguish different time periods from each other. We therefore conclude that the VAE-clustering method is applicable to time series data, but that the structure and setting of the financial data in this thesis make it too hard to find meaningful clusters. The major finding is that the VAE-clustering method can be applied to time series. We highly encourage further research to determine whether the method can be successfully used on financial data in settings different from those tested in this thesis.
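Since "VAE-clustering" is not specified further here, the sketch below shows one common reading under stated assumptions: encode fixed-length windows with a (pre-trained) VAE encoder and cluster the latent means with K-means; the architecture and data are illustrative, not the thesis algorithm.

```python
# A minimal sketch of clustering time-series windows via a VAE latent space;
# the encoder is assumed already trained with a VAE loss, and all sizes
# and data below are illustrative stand-ins.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class Encoder(nn.Module):
    def __init__(self, window=24, latent=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent)       # latent mean head
        self.logvar = nn.Linear(32, latent)   # latent log-variance head

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

encoder = Encoder()                           # assume trained with a VAE loss
windows = torch.randn(365, 24)                # e.g. daily demand profiles
with torch.no_grad():
    mu, _ = encoder(windows)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(mu.numpy())
print(labels[:10])                            # e.g. candidate seasonal regimes
```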
