1

Exploring Alarm Data for Improved Return Prediction in Radios : A Study on Imbalanced Data Classification

Färenmark, Sofia January 2023
The global tech company Ericsson has been tracking the return rate of its products for over 30 years, using it as a key performance indicator (KPI). These KPIs play a critical role in making sound business decisions, identifying areas for improvement, and planning. To enhance the customer experience, the company places high value on being able to predict the number of returns each month in advance. However, predicting returns is a complex problem affected by multiple factors that determine when radios are returned. Analysts at the company have observed indications of a potential correlation between alarm data and the number of returns. This paper addresses the need for better prediction models to improve return rate forecasting for radios, utilizing alarm data. The alarm data, stored in an internal database, includes logs of activated alarms at various sites, technical and logistical information about the products, and historical records of returns. The problem is approached as a classification task, where radios are classified as either "return" or "no return" for a specific month, using the alarm dataset as input. However, because far fewer radios are returned than distributed, the dataset suffers from heavy class imbalance. The class imbalance problem has garnered considerable attention in the field of machine learning in recent years, as traditional classification models struggle to identify patterns in the minority class of imbalanced datasets. A method that specifically addresses class imbalance was therefore required to construct an effective prediction model for returns, and this paper adopts a systematic approach inspired by similar problems.
It applies the feature selection methods LASSO and Boruta, along with the resampling technique SMOTE, and evaluates various classifiers, including the support vector machine (SVM), random forest classifier (RFC), decision tree (DT), and a neural network (NN) with class weights, to identify the best-performing model. As accuracy is not a suitable evaluation metric for imbalanced datasets, the AUC and AUPRC values were calculated for all models to assess the impact of feature selection, weights, resampling techniques, and the choice of classifier. The best model was determined to be the NN with class weights, achieving a median AUC value of 0.93 and a median AUPRC value of 0.043. Likewise, both the LASSO+SVM+SMOTE and LASSO+RFC+SMOTE models demonstrated similar performance, with median AUC values of 0.92 and 0.93, and median AUPRC values of 0.038 and 0.041, respectively. The baseline AUPRC value for this dataset was 0.005. Furthermore, the results indicated that resampling techniques are necessary for successful classification of the minority class. Thorough pre-processing and a balanced split between the test and training sets are crucial before applying resampling, as this technique is sensitive to noisy data. While feature selection improved performance to some extent, it could also lead to unreliable results due to noise. The choice of classifier had less impact on model performance than resampling and feature selection.
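The abstract above leans on two ideas that are easy to show concretely: SMOTE creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbours, and the no-skill AUPRC baseline equals the positive-class prevalence (0.005 in the thesis's dataset). The sketch below illustrates both ideas in plain Python; it is not the thesis's implementation, the function name and the sample counts are hypothetical, and a real pipeline would apply SMOTE only to the training split.

```python
import random

def smote_oversample(minority, n_new, k=5, seed=0):
    """Illustrative SMOTE: create n_new synthetic minority samples by
    interpolating between a random minority point and one of its k
    nearest minority neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours of a within the minority class
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# Illustrative imbalance: with 50 returns among 10,000 radios, the
# AUPRC of a no-skill classifier equals the positive prevalence.
prevalence = 50 / 10_000
print(prevalence)  # → 0.005
```

Because the synthetic points lie on segments between existing minority points, SMOTE is sensitive to noisy or mislabelled minority samples, which is why the abstract stresses thorough pre-processing before resampling.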
2

Exploring DeepSEA CNN and DNABERT for Regulatory Feature Prediction of Non-coding DNA

Stachowicz, Jacob January 2021
Prediction and understanding of the regulatory effects of non-coding DNA is an extensive research area in genomics. Convolutional neural networks have been used with success in the past to predict regulatory features, making chromatin feature predictions based solely on non-coding DNA sequences. Non-coding DNA shares various similarities with human spoken language, which makes language models such as the Transformer attractive candidates for deciphering the non-coding DNA language. This thesis investigates how well the Transformer model, usually used for NLP problems, predicts chromatin features from genome sequences compared to convolutional neural networks. More specifically, the CNN DeepSEA, which is used for regulatory feature prediction based on non-coding DNA, is compared with the Transformer DNABERT. Further, this study explores the impact different parameters and training strategies have on performance. Other models (DeeperDeepSEA and DanQ) are also compared on the same tasks to give a broader basis for comparison. Lastly, the same experiments are conducted on modified versions of the dataset where the labels cover different amounts of the DNA sequence; this could prove beneficial to the Transformer model, which can capture long-range dependencies in natural language problems. The replication of DeepSEA was successful and gave results similar to the original model. The experiments used for DeepSEA were also conducted on DNABERT, DeeperDeepSEA, and DanQ. All the models were trained on the different datasets, and their results were compared. Lastly, a prediction-voting mechanism was implemented, which gave better results than the models individually. The results showed that DeepSEA performed slightly better than DNABERT with respect to AUC ROC. The Wilcoxon signed-rank test showed that, even though the two models obtained similar AUC ROC scores, there is a statistically significant difference between their prediction distributions.
This means that the models treat the dataset differently, which may be why combining their predictions yields good results. Due to the time cost of training the computationally heavy DNABERT, the best hyper-parameters and training strategies for the model were not found, only improved upon. The datasets used in this thesis were heavily imbalanced, which needs to be addressed in future projects. This project serves as a good continuation of the paper Whole-genome deep-learning analysis identifies contribution of non-coding mutations to autism risk, which uses the DeepSEA model to learn more about how specific mutations correlate with autism spectrum disorder.
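The prediction-voting mechanism the abstract mentions, combining DeepSEA and DNABERT outputs, can be illustrated as soft voting: a weighted average of each model's per-label probabilities. The sketch below is a minimal illustration of that idea, not the thesis's code; the function name and probability values are hypothetical.

```python
def soft_vote(prob_a, prob_b, weight_a=0.5):
    """Combine two models' per-label probabilities by weighted
    averaging (soft voting); model B gets weight 1 - weight_a."""
    wb = 1.0 - weight_a
    return [weight_a * pa + wb * pb for pa, pb in zip(prob_a, prob_b)]

# Hypothetical chromatin-feature probabilities for one input sequence.
deepsea = [0.875, 0.125, 0.5]
dnabert = [0.625, 0.25, 0.75]

print(soft_vote(deepsea, dnabert))  # → [0.75, 0.1875, 0.625]
```

Soft voting only helps when the models err on different inputs, which is consistent with the Wilcoxon signed-rank result: the two models' prediction distributions differ significantly even though their AUC ROC scores are similar.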
