171

A COMPARATIVE STUDY OF DEEP-LEARNING APPROACHES FOR ACTIVITY RECOGNITION USING SENSOR DATA IN SMART OFFICE ENVIRONMENTS

Johansson, Alexander, Sandberg, Oscar January 2018
The purpose of the study is to compare three deep learning networks with each other to evaluate which network produces the highest prediction accuracy. Accuracy is measured as the networks try to predict the number of people in the room where observation takes place. In addition to comparing the three deep learning networks with each other, we also compare the networks with a traditional machine learning approach, in order to find out whether deep learning methods perform better than traditional methods. This study uses design and creation, a research methodology that places great emphasis on developing an IT product and uses the product as its contribution to new knowledge. The methodology has five phases; we chose to iterate between the development and evaluation phases. Observation is the data generation method used to collect data. Data generation lasted for three weeks, resulting in 31,287 rows of data recorded in our database. One of our deep learning networks produced an accuracy of 78.2%, while the two other approaches produced accuracies of 45.6% and 40.3% respectively. For our traditional method we used a decision tree with two different formulas, which produced accuracies of 61.3% and 57.2% respectively. The result of this thesis shows that, out of the three deep learning networks included in this study, only one is able to produce a higher predictive accuracy than the traditional ML approaches. This result does not necessarily mean that deep learning approaches in general are able to produce higher predictive accuracy than traditional machine learning approaches. Further work includes: further experimentation with the dataset and the networks' hyperparameters, gathering more data and properly validating it, and comparing more and other deep learning and machine learning approaches.
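The accuracy figures above can be read as the share of samples where the predicted occupant count matches the observed one. A minimal illustrative sketch (function name and data are invented for illustration, not taken from the thesis):

```python
def accuracy(predicted, observed):
    """Fraction of samples where the predicted occupant count is correct."""
    if len(predicted) != len(observed):
        raise ValueError("prediction/observation length mismatch")
    correct = sum(p == o for p, o in zip(predicted, observed))
    return correct / len(observed)

# Toy data: predicted vs. observed number of people in the room.
predicted = [0, 1, 2, 2, 0, 1, 3, 0]
observed  = [0, 1, 2, 1, 0, 1, 2, 0]
print(f"accuracy = {accuracy(predicted, observed):.1%}")  # 6 of 8 correct
```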
172

AN AGENT-BASED SYSTEMATIC ENSEMBLE APPROACH FOR AUTO AUCTION PREDICTION

Alfuhaid, Abdulaziz Ataallah January 2018
No description available.
173

A Statistical Analysis of Medical Data for Breast Cancer and Chronic Kidney Disease

Yang, Kaolee 05 May 2020
No description available.
174

Predicting Risk Level in Life Insurance Application : Comparing Accuracy of Logistic Regression, Decision Tree, Random Forest and Linear Support Vector Classifiers

Karthik Reddy, Pulagam, Veerababu, Sutapalli January 2023
Background: Over the last decade, there has been a significant rise in the life insurance industry. Every life insurance application is associated with some level of risk, which determines the premium charged. The process of evaluating this level of risk for a life insurance application is time-consuming, and in the present scenario it is hard for the insurance industry to process millions of life insurance applications. One potential approach is to use machine learning to establish a framework for evaluating the level of risk associated with a life insurance application. Objectives: The aim of this thesis is to perform two comparison studies. The first study compares the accuracy of the logistic regression classifier, decision tree classifier, random forest classifier and linear support vector classifier for evaluating the level of risk associated with a life insurance application. The second study identifies the impact of changes in the dataset on the accuracy of these selected classification models. Methods: An experimentation methodology was chosen to attain the aim of the thesis and address its research questions. The experimentation involved comparing four ML algorithms, namely the LRC, DTC, RFC and Linear SVC. These algorithms were trained, validated and tested on two datasets. A new dataset was created by replacing the "BMI" variable with the "Life Expectancy" variable. The four selected ML algorithms were compared based on their performance metrics, which included accuracy, precision, recall and f1-score. Results: Among the four selected machine learning algorithms, the random forest classifier attained the highest accuracy, with 53.79% and 52.80% on the unmodified and modified datasets respectively. Hence, it was the most accurate algorithm for predicting risk level in life insurance applications. The second best algorithm was the decision tree classifier, with 51.12% and 50.79% on the unmodified and modified datasets. The selected models attained higher accuracies when trained, validated and tested with the unmodified dataset. Conclusions: The random forest classifier scored the highest accuracy among the four selected algorithms on both the unmodified and modified datasets. The selected models attained higher accuracies when trained, validated and tested with the unmodified dataset compared to the modified one. Therefore, the unmodified dataset is more suitable for predicting risk level in life insurance applications.
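The abstract above compares models on accuracy, precision, recall and f1-score. These four metrics can all be derived from the binary confusion matrix; a self-contained sketch (function name and toy labels are invented for illustration):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and f1 from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: 6 applications, 1 = high risk, 0 = low risk.
metrics = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(metrics)
```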
175

Probability of Default Machine Learning Modeling : A Stress Testing Evaluation

Andersson, Tobias, Mentes, Mattias January 2023
This thesis aims to assist in the development of machine learning models tailored for stress testing. The main objective is to create models that can predict loan defaults while considering the impact of macroeconomic stress. By achieving this, Nordea can continue the development of machine learning models for stress testing by using these models as a basis for further advancement. The research begins with an analysis of historical loan data, encompassing diverse customer and macroeconomic variables that influence loan default rates. Leveraging machine learning algorithms, feature selection methods, data imbalance management and model training techniques, a set of predictive models is constructed. These models aim to capture the intricate relationships between the identified variables and loan defaults, ensuring their suitability for stress testing purposes. The subsequent phase of the research focuses on subjecting the developed models to simulated adverse economic conditions during stress testing. By evaluating the models' performance under various stressed scenarios, their predictive ability is assessed. This stress testing process allows us to analyse the models' ability to incorporate a stressed scenario in their predictions. The thesis concludes with an evaluation of the developed machine learning models and their ability to identify defaulted loans in a stressed macroeconomy. By creating these models specifically tailored for stress testing loans, we provide a basis for further development within the area of stress testing modeling.
176

Tillämpning av maskininlärning för att införa automatisk adaptiv uppvärmning genom en studie på KTH Live-In Labs lägenheter / Using machine learning to implement adaptive heating; A study on KTH Live-In Labs apartments

Åsenius, Ingrid January 2020
The purpose of this study is to investigate whether it is possible to decrease Sweden's energy consumption through adaptive heating that uses climate data and machine learning to detect occupancy in apartments. The study was carried out using environmental data from one of KTH Live-In Labs apartments. The data was first used to investigate whether occupancy can be detected through machine learning, and was then used as input to an adaptive heating model to investigate potential savings in energy consumption and heating costs. The results show that occupancy can be detected from environmental data, but not with 100% accuracy. The features with the greatest impact on detecting occupancy are light and carbon dioxide, and the best performing machine learning algorithm for the dataset used is the decision tree. The potential energy savings through adaptive heating were estimated to be up to 10.1%. The final part of the paper discusses how a value-creating service can be built around adaptive heating and its possibility of reaching the market.
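A decision tree over light and CO2 readings ultimately reduces to learned threshold splits. A toy hand-written stand-in for such a rule (the thresholds and function name here are invented for illustration; the thesis learned its splits from data rather than setting them by hand):

```python
def occupied(light_lux, co2_ppm, light_thresh=150.0, co2_thresh=600.0):
    """Toy occupancy rule: flag the room as occupied if either the light
    level or the CO2 concentration exceeds its (invented) threshold."""
    return light_lux > light_thresh or co2_ppm > co2_thresh

# Lights on, normal CO2 -> occupied; dark and low CO2 -> empty.
print(occupied(300.0, 450.0))  # True
print(occupied(20.0, 450.0))   # False
```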
177

An Analysis Of Misclassification Rates For Decision Trees

Zhong, Mingyu 01 January 2007
The decision tree is a well-known methodology for classification and regression. In this dissertation, we focus on minimizing the misclassification rate of decision tree classifiers. We derive the necessary equations that provide the optimal tree prediction, the estimated risk of the tree's prediction, and the reliability of the tree's risk estimation. We carry out an extensive analysis of the application of Lidstone's law of succession to the estimation of the class probabilities. In contrast to existing research, we not only compute the expected values of the risks but also calculate the corresponding reliability of the risk (measured by standard deviations). We also provide an explicit expression of the k-norm estimation of the tree's misclassification rate that combines both the expected value and the reliability. Furthermore, our proposed and proven theorem on k-norm estimation suggests an efficient pruning algorithm that has a clear theoretical interpretation, is easily implemented, and does not require a validation set. Our experiments show that our proposed pruning algorithm quickly produces accurate trees that compare very favorably with two other well-known pruning algorithms, CCP of CART and EBP of C4.5. Finally, our work provides a deeper understanding of decision trees.
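Lidstone's law of succession, mentioned above, smooths class-probability estimates in a leaf by adding a pseudo-count lam to each class: p_c = (n_c + lam) / (N + lam * K). A minimal sketch (function name and counts are illustrative, not from the dissertation):

```python
def lidstone(counts, lam=1.0):
    """Lidstone-smoothed class probabilities:
    p_c = (n_c + lam) / (N + lam * K), where N = sum of counts
    and K = number of classes. lam = 1 gives Laplace smoothing."""
    total = sum(counts)
    k = len(counts)
    return [(n + lam) / (total + lam * k) for n in counts]

# A leaf containing 3, 1 and 0 training samples of three classes:
# the unseen class still gets nonzero probability.
print(lidstone([3, 1, 0]))
```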
178

Using random forest and decision tree models for a new vehicle prediction approach in computational toxicology

Mistry, Pritesh, Neagu, Daniel, Trundle, Paul R., Vessey, J.D. 22 October 2015
Drug vehicles are chemical carriers that provide beneficial aid to the drugs they bear. Taking advantage of their favourable properties can potentially allow the safer use of drugs that are considered highly toxic. A means of vehicle selection without experimental trial would therefore save the industry time and money. Although machine learning is increasingly used in predictive toxicology, to our knowledge there is no reported work using machine learning techniques to model drug-vehicle relationships for vehicle selection to minimise toxicity. In this paper we demonstrate the use of data mining and machine learning techniques to process, extract and build models based on classifiers (decision trees and random forests) that allow us to predict which vehicle would be most suited to reduce a drug's toxicity. Using data acquired from the National Institute of Health's (NIH) Developmental Therapeutics Program (DTP), we propose a methodology using an area under a curve (AUC) approach that allows us to distinguish which vehicle provides the best toxicity profile for a drug and build classification models based on this knowledge. Our results show that we can achieve prediction accuracies of 80% using random forest models, whilst the decision tree models produce accuracies in the 70% region. We consider our methodology widely applicable within the scientific domain and beyond for comprehensively building classification models for the comparison of functional relationships between two variables.
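A random forest, as used above, aggregates the class predictions of many decision trees by majority vote at inference time. A minimal sketch of that voting step (names and toy predictions are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(tree_predictions):
    """Combine per-tree class predictions (one row per tree, one column
    per sample) by majority vote, as a random forest classifier does."""
    return [Counter(column).most_common(1)[0][0]
            for column in zip(*tree_predictions)]

# Three toy trees, each predicting a class for four samples.
tree_preds = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
]
print(majority_vote(tree_preds))  # [1, 0, 1, 1]
```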
179

Automated dust storm detection using satellite images. Development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database.

El-Ossta, Esam E.A. January 2013
Dust storms are natural hazards whose frequency has increased in recent years over the Sahara desert, Australia, the Arabian Desert, Turkmenistan and northern China, and the problem has worsened during the last decade. Dust storms increase air pollution and affect urban areas and farms as well as ground and air traffic. They damage human health, reduce temperatures, damage communication facilities, and reduce visibility, which delays both road and air traffic. Thus, it is important to know the causation, movement and radiation effects of dust storms. Monitoring and forecasting of dust storms is increasing in order to help governments reduce their negative impact. Satellite remote sensing is the most common method, but its use over sandy ground is still limited, as dust and sand share similar characteristics. Moreover, satellite remote sensing using true-colour images or estimates of aerosol optical thickness (AOT), and algorithms such as the deep blue algorithm, have limitations for identifying dust storms. Many researchers have studied the detection of dust storms during daytime in a number of different regions of the world, including China, Australia, America and North Africa, using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key elements of this study are to use data from the Moderate Resolution Imaging Spectroradiometers on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database. / Libyan Centre for Remote Sensing and Space Science / Appendix A was submitted with extra data files which are not available online.
180

Exploring Alarm Data for Improved Return Prediction in Radios : A Study on Imbalanced Data Classification

Färenmark, Sofia January 2023
The global tech company Ericsson has been tracking the return rate of their products for over 30 years, using it as a key performance indicator (KPI). These KPIs play a critical role in making sound business decisions, identifying areas for improvement, and planning. To enhance the customer experience, the company highly values the ability to predict the number of returns in advance each month. However, predicting returns is a complex problem affected by multiple factors that determine when radios are returned. Analysts at the company have observed indications of a potential correlation between alarm data and the number of returns. This paper aims to address the need for better prediction models to improve return rate forecasting for radios, utilizing alarm data. The alarm data, which is stored in an internal database, includes logs of activated alarms at various sites, along with technical and logistical information about the products, as well as historical records of returns. The problem is approached as a classification task, where radios are classified as either "return" or "no return" for a specific month, using the alarm dataset as input. However, due to the significantly smaller number of returned radios compared to distributed ones, the dataset suffers from a heavy class imbalance. The class imbalance problem has garnered considerable attention in the field of machine learning in recent years, as traditional classification models struggle to identify patterns in the minority class of imbalanced datasets. A method that specifically addresses the class imbalance problem was therefore required to construct an effective prediction model for returns, and this paper adopts a systematic approach inspired by similar problems. It applies the feature selection methods LASSO and Boruta, along with the resampling technique SMOTE, and evaluates various classifiers, including the support vector machine (SVM), random forest classifier (RFC), decision tree (DT), and a neural network (NN) with class weights, to identify the best-performing model. As accuracy is not a suitable evaluation metric for imbalanced datasets, the AUC and AUPRC values were calculated for all models to assess the impact of feature selection, weights, resampling techniques, and the choice of classifier. The best model was determined to be the NN with class weights, achieving a median AUC value of 0.93 and a median AUPRC value of 0.043. Likewise, both the LASSO+SVM+SMOTE and LASSO+RFC+SMOTE models demonstrated similar performance, with median AUC values of 0.92 and 0.93, and median AUPRC values of 0.038 and 0.041, respectively. The baseline AUPRC value for this dataset was 0.005. Furthermore, the results indicated that resampling techniques are necessary for successful classification of the minority class. Thorough pre-processing and a balanced split between the test and training sets are crucial before applying resampling, as this technique is sensitive to noisy data. While feature selection improved performance to some extent, it could also lead to unreliable results due to noise. The choice of classifier had less impact on model performance than resampling and feature selection.
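SMOTE, used above to counter class imbalance, creates synthetic minority samples by interpolating between a minority point and one of its minority neighbours. A simplified sketch that interpolates between random minority pairs rather than true k-nearest neighbours (function name and data are invented for illustration):

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate n_new synthetic minority samples by linear interpolation
    between two randomly chosen minority points. This is a simplified
    stand-in for SMOTE, which interpolates toward k-nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        gap = rng.random()  # position along the segment from a to b
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

# Three minority points in 2D; oversample to five synthetic points.
minority = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(smote_like(minority, 5))
```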
