31

Water Contamination Detection With Binary Classification Using Artificial Neural Networks

Lundholm, Christoffer, von Butovitsch, Nicholas January 2022 (has links)
Water contamination is a major source of disease around the world, so a reliable monitoring system for detecting harmful contamination in water distribution networks is a vital necessity. To measure potential contamination, a new sensor called an 'electric tongue' was developed at Linköping University, designed to measure various features of the water reliably. This project developed a supervised machine learning algorithm that uses an artificial neural network to detect anomalies in the system. Based on the available data, the algorithm can detect anomalies with an accuracy of around 99.98%. This was achieved with a binary classifier, which reconstructs a vector and compares it to the expected outcome. Despite the limitations of the problem and of the system's capabilities, binary classification is a potential solution to this problem. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
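As a hedged illustration only (the abstract publishes no code), a minimal sketch of a feed-forward binary classifier for sensor anomaly detection might look as follows in scikit-learn. The feature matrix, labels, and network size are hypothetical stand-ins for the 'electric tongue' data, and the reported 99.98% accuracy depends entirely on the thesis's own data.

```python
# A minimal sketch (not the authors' implementation) of a neural-network
# binary classifier for water-quality anomaly detection.
# X stands in for 'electric tongue' sensor readings; y marks anomalies (1)
# versus clean water (0). Both are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # placeholder sensor features
y = (X[:, 0] + X[:, 1] > 2).astype(int)   # placeholder anomaly labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Scale inputs, then train a small feed-forward network as the binary classifier.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```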
32

Neonatal Sepsis Detection Using Decision Tree Ensemble Methods: Random Forest and XGBoost

Al-Bardaji, Marwan, Danho, Nahir January 2022 (has links)
Neonatal sepsis is a potentially fatal medical condition caused by infection and is attributed to about 200,000 annual deaths globally. With healthcare systems facing constant challenges, there is potential for introducing machine learning models as a diagnostic tool that can be automated within existing workflows and would not entail more work for healthcare personnel. The Herlenius Research Team at Karolinska Institutet has collected neonatal sepsis data that has been used for the development of many machine learning models across several papers, but none have studied decision tree ensemble methods. In this paper, random forest and XGBoost models are developed and evaluated in order to assess their feasibility for clinical practice. The data contained 24 features of vital parameters that are easily collected through a patient monitoring system. The validation and evaluation procedure needed special consideration because the data was grouped at the patient level and imbalanced. The methods proposed in this paper have the potential to be generalized to other similar applications. Finally, measured by receiver operating characteristic area under the curve (ROC AUC), both models achieved around ROC AUC = 0.84. These results suggest that the random forest and XGBoost models are potentially feasible for clinical practice. Another insight was that both models seemed to perform better in simpler configurations, suggesting that future work could create a more explainable model. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
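A hedged sketch of the two evaluation concerns the abstract highlights (patient-level grouping and class imbalance) is shown below; the features, labels, and patient IDs are placeholders, not the paper's data or code.

```python
# Patient-grouped, imbalance-aware evaluation of a random forest, as the
# abstract describes. GroupKFold keeps each patient's recordings in one fold
# so the test score is not inflated by within-patient correlation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 24))            # placeholder for the 24 vital parameters
y = (rng.random(2000) < 0.1).astype(int)   # imbalanced labels (~10% positive)
groups = rng.integers(0, 100, size=2000)   # hypothetical patient IDs

aucs = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    # class_weight="balanced" is one way to account for the imbalance.
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))
print("mean ROC AUC:", np.mean(aucs))
```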
33

Predicting Purchase of Airline Seating Using Machine Learning

El-Hage, Sebastian January 2020 (has links)
With the continuing surge of digitalization within the travel industry and the increased demand for personalized services, understanding customer behaviour is becoming a requirement for travel agencies to survive. The number of studies addressing this problem is increasing, and machine learning is expected to be the enabling technique. This thesis trains two different models, a multi-layer perceptron and a support vector machine, to reliably predict whether a customer will add a seat reservation to their flight booking. The models are trained on a large dataset consisting of 69 variables and over 1.1 million historical bookings spanning 2017 to 2020. The results from the trained models are satisfactory, albeit not optimal: the models are able to classify the data with an accuracy of around 70%, with the multi-layer perceptron performing best on both evaluation metrics used, accuracy and F1 score. This shows that this type of problem is solvable with the techniques used. The results moreover suggest that further exploration of models and additional data could be of interest, since this could help increase the level of performance.
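A minimal sketch, on placeholder booking data, of the comparison the abstract describes (multi-layer perceptron versus support vector machine, scored on accuracy and F1) might look like this; the variable names, labels, and model settings are assumptions.

```python
# Compare an MLP and a linear SVM on the same (synthetic) booking data,
# reporting the two metrics the thesis uses: accuracy and F1 score.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 69))               # 69 booking variables (placeholder)
y = (X[:, 0] - X[:, 1] > 0).astype(int)       # placeholder: 1 = bought a seat reservation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), LinearSVC(random_state=0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```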
34

Customer Churn Prediction for PC Games : Probability of churn predicted for big-spenders using supervised machine learning

Tryggvadottir, Valgerdur January 2019 (has links)
Paradox Interactive is a Swedish video game developer and publisher with players all around the world. Paradox's largest platform in terms of number of players and revenue is the PC. The goal of this thesis was to build a churn prediction model to predict the probability of players churning, in order to know which players to focus on in retention campaigns. Since the purpose of churn prediction is to minimize loss due to customers churning, the focus was on big-spenders (whales) in Paradox PC games. To define which players are big-spenders, player spending over a 12-month rolling period (from 2016-01-01 until 2018-12-31) was investigated. Players spending more than the 95th percentile of the total spending for each period were defined as whales. Defining when a whale has churned, i.e. stopped being a big-spender in Paradox PC games, was done by looking at how many days had passed since the player bought something: a whale has churned if they have not bought anything for the past 28 days. Once data had been collected about the whales, the dataset was prepared for a number of different supervised machine learning methods. Logistic regression, L1-regularized logistic regression, decision tree, and random forest were the methods tested. Random forest performed best in terms of AUC, with AUC = 0.7162. The conclusion is that it seems to be possible to predict the probability of churning for Paradox whales. It might be possible to improve the model further by investigating more data and fine-tuning the definition of churn.
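The two labeling rules above (whale = above the 95th percentile of period spending; churned = no purchase in 28 days) are mechanical enough to sketch in pandas. The DataFrame, column names, and reference date below are hypothetical.

```python
# A hedged pandas sketch of the whale and churn labeling rules the abstract
# describes, on a toy purchases table.
import pandas as pd

purchases = pd.DataFrame({
    "player_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(["2018-01-05", "2018-11-20", "2018-03-01",
                            "2018-02-10", "2018-06-15", "2018-12-20"]),
    "amount": [120.0, 80.0, 5.0, 300.0, 150.0, 90.0],
})

# Total spending per player over the (here, single) 12-month period;
# whales spend above the 95th percentile of that distribution.
spend = purchases.groupby("player_id")["amount"].sum()
whales = spend[spend > spend.quantile(0.95)].index

# Churn flag: no purchase within 28 days of the reference date.
reference_date = pd.Timestamp("2018-12-31")
last_purchase = purchases.groupby("player_id")["date"].max()
churned = (reference_date - last_purchase).dt.days > 28

print("whales:", list(whales))
print(churned)
```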
35

Rare Events Predictions with Time Series Data

Eriksson, Jonas, Kuusela, Tuomas January 2024 (has links)
This study aims to develop models for predicting rare events, specifically elevated intracranial pressure (ICP) in patients with traumatic brain injury (TBI). Using time-series data of ICP, we created and evaluated several machine learning models, including K-nearest neighbors, random forest, and logistic regression, in order to predict ICP levels exceeding 20 mmHg, a critical threshold for medical intervention. The time-series data was segmented and transformed into a tabular format, with feature engineering applied to extract meaningful statistical characteristics. We framed the problem as a binary classification task, focusing on whether ICP levels exceeded the 20 mmHg threshold, and identified the optimal model by comparing the predictive performance of the algorithms. All models demonstrated good performance for predictions up to 30 minutes in advance, after which a significant decline in performance was observed. Within this timeframe, the models achieved Matthews correlation coefficient (MCC) scores ranging between 0.876 and 0.980, with the random forest models showing the highest performance. In contrast, logistic regression displayed a notable deviation at the 40-minute mark, recording an MCC score of 0.752. These results highlight the potential to provide reliable, real-time predictions of dangerous ICP levels up to 30 minutes in advance, which is crucial for timely and effective medical interventions.
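A hedged sketch of this windowing-and-labeling pipeline is shown below on a synthetic ICP trace; the window length, prediction horizon, and features are assumptions rather than the study's choices.

```python
# Segment an ICP time series into windows, extract simple statistical
# features, label each window by whether ICP later exceeds 20 mmHg,
# and score predictions with MCC, as the abstract outlines.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
icp = 18 + np.cumsum(rng.normal(0, 0.3, size=5000))  # synthetic ICP trace (mmHg)

window, horizon = 60, 30  # samples per feature window, samples ahead to predict
X, y = [], []
for start in range(0, len(icp) - window - horizon, window):
    seg = icp[start:start + window]
    future = icp[start + window:start + window + horizon]
    X.append([seg.mean(), seg.std(), seg.min(), seg.max()])  # tabular features
    y.append(int(future.max() > 20))  # 1 if ICP will exceed the 20 mmHg threshold
X, y = np.array(X), np.array(y)

split = int(0.7 * len(X))  # chronological split: train on past, test on future
clf = RandomForestClassifier(random_state=0).fit(X[:split], y[:split])
print("MCC:", matthews_corrcoef(y[split:], clf.predict(X[split:])))
```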
36

Bearing Diagnosis Using Fault Signal Enhancing Techniques and Data-driven Classification

Lembke, Benjamin January 2019 (has links)
Rolling element bearings are a vital part of much rotating machinery, including vehicles. A defective bearing can be a symptom of other problems in the machinery, and bearings themselves have a high failure rate. Early detection of bearing defects can therefore help to prevent malfunction which ultimately could lead to a total collapse. The thesis was done in collaboration with Scania, who want a better understanding of how external sensors, such as accelerometers, can be used for condition monitoring in their gearboxes. Defective bearings create vibrations with specific frequencies, known as Bearing Characteristic Frequencies (BCF) [23]. A key component of the proposed method is the identification and extraction of these frequencies from vibration signals recorded by accelerometers mounted near the monitored bearing. Three solutions are proposed for automatic bearing fault detection: two based on data-driven classification using Support Vector Machines, and one using only the computed characteristic frequencies of the considered bearing faults. Two types of features are developed as inputs to the data-driven classifiers. One is based on the extracted amplitudes of the BCF, the other on statistical properties of Intrinsic Mode Functions generated by an improved Empirical Mode Decomposition algorithm. To enhance the diagnostic information in the vibration signals, two pre-processing steps are proposed. Separation of the bearing signal from masking noise is done with the Cepstral Editing Procedure, which removes discrete frequencies from the raw vibration signal. Enhancement of the bearing signal is achieved by band-pass filtering and amplitude demodulation, with the frequency band produced by the band-selection algorithms Kurtogram and Autogram. The proposed methods are evaluated on two large public data sets of accelerometer data for bearing fault classification, and on a smaller data set collected from a Scania gearbox. The produced features achieved significant separation on both the public and the collected data. Manual detection of the induced defect on the outer race of the gearbox bearing was achieved. Due to the small amount of training data, the automatic solutions were only tested on the public data sets. Isolation performance of the correct bearing and fault mode among multiple bearings was investigated. One of the best trade-offs achieved was a 76.39% fault detection rate with an 8.33% false alarm rate; another was a 54.86% fault detection rate with a 0% false alarm rate.
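The band-pass filtering and amplitude demodulation step described above is classic envelope analysis, which can be sketched as follows; the sampling rate, frequency band, and synthetic signal are assumptions, and a fixed band stands in for the Kurtogram/Autogram band selection used in the thesis.

```python
# Envelope analysis sketch: band-pass filter the vibration signal,
# demodulate its amplitude via the Hilbert transform, and inspect the
# envelope spectrum for a peak at the bearing characteristic frequency.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 25600.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic fault signal: a 3 kHz resonance amplitude-modulated at a
# hypothetical BCF of 87 Hz, buried in noise.
bcf = 87.0
x = (1 + np.sin(2 * np.pi * bcf * t)) * np.sin(2 * np.pi * 3000 * t)
x += 0.5 * np.random.default_rng(0).normal(size=t.size)

# Band-pass around the resonance band (fixed here; Kurtogram/Autogram
# would select this band automatically).
b, a = butter(4, [2500, 3500], btype="bandpass", fs=fs)
x_band = filtfilt(b, a, x)

# Amplitude demodulation: the magnitude of the analytic signal.
envelope = np.abs(hilbert(x_band))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("peak of envelope spectrum at %.1f Hz" % freqs[spectrum.argmax()])
```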
37

Cooperative coevolutionary mixture of experts : a neuro ensemble approach for automatic decomposition of classification problems

Nguyen, Minh Ha, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW January 2006 (has links)
Artificial neural networks have been widely used for machine learning and optimization. A neuro ensemble is a collection of neural networks that works cooperatively on a problem. In the literature, it has been shown that by combining several neural networks, the generalization of the overall system can be enhanced beyond the separate generalization abilities of the individuals. Evolutionary computation can be used to search for a suitable architecture and weights for neural networks; when evolutionary computation is used to evolve a neuro ensemble, the result is usually known as an evolutionary neuro ensemble. In most real-world problems, we either know little about the problem or the problem is too complex to have a clear vision of how to decompose it by hand. Thus, it is usually desirable to have a method to automatically decompose a complex problem into a set of overlapping or non-overlapping sub-problems and assign one or more specialists (i.e. experts, learning machines) to each of these sub-problems. An important feature of neuro ensembles is automatic problem decomposition: some neuro ensemble methods are able to generate networks where each individual network is specialized on a unique sub-task, such as mapping a subspace of the feature space. In real-world problems, this is usually an important feature for a number of reasons, including: (1) it provides an understanding of the decomposition nature of a problem; (2) if a problem changes, one can replace the network associated with the subspace where the change occurs without affecting the overall ensemble; (3) if one network fails, the rest of the ensemble can still function in their subspaces; (4) if one learns the structure of one problem, it can potentially be transferred to other similar problems. In this thesis, I focus on classification problems and present a systematic study of a novel evolutionary neuro ensemble approach which I call cooperative coevolutionary mixture of experts (CCME). Cooperative coevolution (CC) is a branch of evolutionary computation where individuals in different populations cooperate to solve a problem and their fitness functions are calculated based on their reciprocal interactions. The mixture of experts model (ME) is a neuro ensemble approach which can generate networks that are specialized on different subspaces of the feature space. By combining CC and ME, I obtain a powerful framework that is able to automatically form the experts and train each of them. I show that the CCME method produces competitive results in terms of generalization ability without increasing the computational cost when compared to traditional training approaches. I also propose two different mechanisms for visualizing the resultant decomposition in high-dimensional feature spaces. The first is a simple one, where data are grouped based on the specialization of each expert and a color-map of the data records is visualized. The second relies on principal component analysis to project the feature space onto lower dimensions, whereby the decision boundaries generated by each expert are visualized through convex approximations. I also investigate the regularization effect of learning by forgetting on the proposed CCME, and show that learning by forgetting helps CCME to generate neuro ensembles of low structural complexity while maintaining their generalization abilities.
Overall, the thesis presents an evolutionary neuro ensemble method whereby (1) the generated ensemble generalizes well; (2) it is able to automatically decompose the classification problem; and (3) it generates networks with small architectures.
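As a hedged illustration of the mixture-of-experts building block that the thesis combines with cooperative coevolution, the sketch below shows the standard ME forward pass in numpy; the random weights stand in for parameters that CCME would evolve, and the coevolutionary training loop itself is omitted.

```python
# Mixture-of-experts forward pass: a gating network produces a softmax
# weighting over experts, and the ensemble output is the weighted sum
# of the expert outputs. Weights here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_experts, n_classes = 4, 3, 2

W_experts = rng.normal(size=(n_experts, n_features, n_classes))  # one linear expert each
W_gate = rng.normal(size=(n_features, n_experts))                # gating network

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_predict(x):
    """Combine expert outputs weighted by the gate; x has shape (n_features,)."""
    gate = softmax(x @ W_gate)                                   # (n_experts,) responsibilities
    expert_out = softmax(np.stack([x @ W for W in W_experts]))   # (n_experts, n_classes)
    return gate @ expert_out                                     # (n_classes,) class probabilities

x = rng.normal(size=n_features)
print(mixture_predict(x))
```

Each expert thereby specializes on the region of the feature space where the gate assigns it high responsibility, which is the automatic problem decomposition the abstract emphasizes.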
38

Near Sets in Set Pattern Classification

Uchime, Chidoteremndu Chinonyelum 06 February 2015 (has links)
This research is focused on the extraction of visual set patterns in digital images, using relational properties like nearness and similarity measures, as well as descriptive properties such as texture, colour, and image gradient directions. The problem considered in this thesis is the application of topology to visual set pattern discovery and, consequently, to pattern generation. A visual set pattern is a collection of motif patterns generated from different unique points in the set, called seed motifs. Each motif pattern is a descriptive neighbourhood of a seed motif; such a neighbourhood is a set of points that are descriptively near the seed motif. A new similarity distance measure, based on the dot product between image feature vectors, was introduced in this research for image classification with the generated visual set patterns. An application of this approach to pattern generation can be useful in content-based image retrieval and image classification.
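A minimal sketch of a dot-product-based similarity between feature vectors, in the spirit of the measure mentioned above, is given below; the exact definition and nearness threshold in the thesis may differ.

```python
# Normalized dot-product similarity between image feature vectors, and a
# simple 'descriptively near' predicate built on it. Feature vectors
# (e.g., texture, colour, gradient-direction descriptions) are placeholders.
import numpy as np

def dot_similarity(a, b):
    """Normalized dot product (cosine similarity) of two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def descriptively_near(a, b, eps=0.1):
    """Two descriptions are 'near' if their similarity distance is within eps."""
    return 1.0 - dot_similarity(a, b) <= eps

seed_motif = [0.8, 0.1, 0.4]   # hypothetical description of a seed motif
candidate  = [0.7, 0.2, 0.5]
print(dot_similarity(seed_motif, candidate), descriptively_near(seed_motif, candidate))
```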
39

Bank Customer Churn Prediction : A comparison between classification and evaluation methods

Tandan, Isabelle, Goteman, Erika January 2020 (has links)
This study aims to assess which supervised statistical learning method (random forest, logistic regression, or K-nearest neighbors) is best at predicting bank customer churn. Additionally, the study evaluates which cross-validation approach, k-fold cross-validation or leave-one-out cross-validation, yields the most reliable results. Predicting customer churn has increased in popularity since new technology, regulation, and changed demand have increased competition for banks, giving them all the more reason to acknowledge the importance of maintaining their customer base. The findings of this study are that an unrestricted random forest model estimated using k-fold cross-validation is preferable in terms of performance, computational efficiency, and theory. Although k-fold cross-validation and leave-one-out cross-validation yield similar results, k-fold cross-validation is preferable due to its computational advantages. For future research, methods that generate models with both good interpretability and high predictability would be beneficial, in order to combine knowledge of which customers end their engagement with an understanding of why. Moreover, interesting future research would be to analyze at which dataset size leave-one-out cross-validation and k-fold cross-validation yield the same results.
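A minimal sketch of the k-fold versus leave-one-out comparison on placeholder churn data follows; the dataset is kept small because leave-one-out fits one model per observation, which is exactly the computational disadvantage the study notes.

```python
# Score the same model with k-fold and leave-one-out cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # placeholder customer features
y = (rng.random(100) < 0.2).astype(int)   # 1 = churned (placeholder labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
kfold_acc = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)).mean()
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()   # one fit per observation
print("10-fold accuracy:", kfold_acc, "LOOCV accuracy:", loo_acc)
```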
40

Building Information Extraction and Refinement from VHR Satellite Imagery using Deep Learning Techniques

Bittner, Ksenia 26 March 2020 (has links)
Building information extraction and reconstruction from satellite images is an essential task for many applications related to 3D city modeling, planning, disaster management, navigation, and decision-making. Building information can be obtained and interpreted from several kinds of data, like terrestrial measurements, airplane surveys, and space-borne imagery. However, the latter acquisition method outperforms the others in terms of cost and worldwide coverage: space-borne platforms can provide imagery of remote places, which are inaccessible to other missions, at any time. Because the manual interpretation of high-resolution satellite images is tedious and time-consuming, their automatic analysis continues to be an intense field of research. At times, however, it is difficult to understand complex scenes with dense placement of buildings, where parts of buildings may be occluded by vegetation or other surrounding constructions, making their extraction or reconstruction even more difficult. Incorporating several data sources representing different modalities can make the problem more tractable. The goal of this dissertation is to integrate multiple high-resolution remote sensing data sources for automatic satellite imagery interpretation, with emphasis on building information extraction and refinement, whose challenges are addressed in the following. Building footprint extraction from Very High-Resolution (VHR) satellite images is an important but highly challenging task, due to the large diversity of building appearances and the relatively low spatial resolution of satellite data compared to airborne data. Many algorithms perform building footprint extraction using spectral-based or appearance-based criteria from single or fused data sources, with input features that are usually manually extracted, which limits their accuracy. Based on the advantages of recently developed Fully Convolutional Networks (FCNs), i.e., the automatic extraction of relevant features and dense classification of images, an end-to-end framework is proposed which effectively combines the spectral and height information from red, green, and blue (RGB), pan-chromatic (PAN), and normalized Digital Surface Model (nDSM) image data and automatically generates a full-resolution binary building mask. The proposed architecture consists of three parallel networks merged at a late stage, which helps in propagating fine detailed information from earlier layers to higher levels, in order to produce an output with high-quality building outlines. The performance of the model is examined on new unseen data to demonstrate its generalization capacity. The availability of detailed Digital Surface Models (DSMs), generated by dense matching and representing the elevation surface of the Earth, can improve the analysis and interpretation of complex urban scenarios. The generation of DSMs from VHR optical stereo satellite imagery leads to high-resolution DSMs which often suffer from mismatches, missing values, or blunders, resulting in coarse representation of building shapes. To overcome these problems, a methodology based on conditional Generative Adversarial Networks (cGANs) is developed for generating a good-quality Level of Detail (LoD) 2-like DSM with enhanced 3D object shapes directly from the low-quality photogrammetric half-meter-resolution satellite DSM input.
Various deep learning applications benefit from multi-task learning with multiple regression and classification objectives by taking advantage of the similarities between individual tasks. This work therefore examines such influences for important remote sensing applications, namely realistic elevation model generation and roof type classification from stereo half-meter-resolution satellite DSMs. Recently published deep learning architectures for both tasks are investigated, and a new end-to-end cGAN-based network is developed which combines the models that provide the best results for their individual tasks. To benefit from information provided by multiple data sources, a different cGAN-based workflow is proposed where the generative part consists of two encoders and a common decoder, which blends the intensity and height information within one network for the DSM refinement task. The inputs to the introduced network are single-channel photogrammetric DSMs with continuous values and pan-chromatic half-meter-resolution satellite images. Information fusion from different modalities helps in propagating fine details, completes inaccurate or missing 3D information about building forms, and improves the building boundaries, making them more rectilinear. Lastly, an additional comparison between the proposed methodologies for DSM enhancement is made to identify the most beneficial workflow and to verify the applicability of the resulting DSMs to different remote sensing approaches.
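As a hedged sketch of the two-encoder, common-decoder generator idea described above, the PyTorch module below fuses a height stream and an intensity stream before a shared decoder; the layer sizes and patch shapes are illustrative assumptions, not the dissertation's architecture.

```python
# Two-stream generator sketch for DSM refinement: one encoder for the
# photogrammetric DSM (height), one for the pan-chromatic image
# (intensity), concatenated and passed through a common decoder.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    )

class TwoStreamGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_dsm = encoder()    # height stream
        self.enc_pan = encoder()    # intensity stream
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),   # refined DSM
        )

    def forward(self, dsm, pan):
        fused = torch.cat([self.enc_dsm(dsm), self.enc_pan(pan)], dim=1)
        return self.decoder(fused)

g = TwoStreamGenerator()
dsm = torch.randn(1, 1, 128, 128)   # single-channel photogrammetric DSM patch
pan = torch.randn(1, 1, 128, 128)   # pan-chromatic image patch
print(g(dsm, pan).shape)            # torch.Size([1, 1, 128, 128])
```

In a full cGAN setup this generator would be trained against a discriminator conditioned on the inputs, which the sketch omits.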
