281

Reconstruction of the ionization history from 21cm maps with deep learning

Mangena January 2020 (has links)
Masters of Science / Upcoming and ongoing 21cm surveys, such as the Square Kilometre Array (SKA), Hydrogen Epoch of Reionization Array (HERA) and Low Frequency Array (LOFAR), will enable imaging of the neutral hydrogen distribution on cosmological scales in the early Universe. These experiments are expected to generate huge imaging datasets that will encode more information than the power spectrum. This provides a unique alternative way to constrain the astrophysical and cosmological parameters, which might break the degeneracies in the power spectral analysis. The global history of reionization remains fairly unconstrained. In this thesis, we explore the viability of directly using the 21cm images to reconstruct and constrain the reionization history. Using Convolutional Neural Networks (CNN), we create a fast estimator of the global ionization fraction from the 21cm images as produced by our Large Semi-numerical Simulation (SimFast21). Our estimator is able to efficiently recover the ionization fraction (xHII) at several redshifts, z = 7, 8, 9, 10, with an accuracy of 99% as quantified by the coefficient of determination R², without being given any additional information about the 21cm maps. This approach, contrary to estimations based on the power spectrum, is model independent. When adding the thermal noise and instrumental effects from these 21cm arrays, the results are sensitive to the foreground removal level, affecting the recovery of high neutral fractions. We also observe a similar trend when combining all redshifts, but with an improved accuracy. Our analysis can easily be extended to place additional constraints on other astrophysical parameters such as the photon escape fraction. This work represents a step forward in extracting the astrophysical and cosmological information from upcoming 21cm surveys.
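The 99% accuracy quoted above is measured by the coefficient of determination R². A minimal sketch of that metric applied to predicted versus true ionization fractions (the values below are invented for illustration, not taken from the thesis):

```python
def r_squared(y_true, y_pred):
    # Coefficient of determination: R² = 1 - SS_res / SS_tot
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical ionization fractions x_HII at a few redshifts
x_true = [0.10, 0.35, 0.60, 0.85]
x_pred = [0.11, 0.34, 0.61, 0.84]
print(round(r_squared(x_true, x_pred), 4))  # → 0.9987
```

An R² close to 1 means the estimator explains almost all the variance in the true ionization fraction, which is how the thesis quantifies its recovery accuracy.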
282

Automatic Dispatching of Issues using Machine Learning / Automatisk fördelning av ärenden genom maskininlärning

Bengtsson, Fredrik, Combler, Adam January 2019 (has links)
Many software companies use issue tracking systems to organize their work. However, when working on large projects across multiple teams, a problem arises of finding the correct team to solve a certain issue. One team might detect a problem that must be solved by another team. This can take time from employees tasked with finding the correct team, and automating the dispatching of these issues can have large benefits for the company. In this thesis, machine learning methods, mainly convolutional neural networks (CNN) for text classification, have been applied to this problem. For natural language processing, both word- and character-level representations are commonly used. The results in this thesis suggest that the CNN learns different information depending on whether a word- or character-level representation is used. Furthermore, it was concluded that the CNN models performed at a level similar to the classical Support Vector Machine for this task. When compared to a human expert working with dispatching issues, the best CNN model performed at a similar level when given the same information. The high throughput of a computer model therefore suggests that automation of this task is very much possible.
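The word- versus character-level distinction the thesis draws can be made concrete by looking at the two token sequences a CNN's embedding layer would receive for the same issue text (a hedged sketch; the example issue text is invented):

```python
def word_tokens(text):
    # Word-level: one token per whitespace-separated word
    return text.lower().split()

def char_tokens(text):
    # Character-level: one token per character, spaces included
    return list(text.lower())

issue = "Build fails on team X"
print(word_tokens(issue))        # few, semantically rich tokens
print(len(char_tokens(issue)))   # many fine-grained tokens
```

Word tokens carry vocabulary-level meaning but suffer from out-of-vocabulary words; character tokens are robust to typos and rare identifiers at the cost of longer sequences, which is consistent with the thesis's finding that the two representations let the CNN learn different information.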
283

Automatic Melanoma Diagnosis in Dermoscopic Imaging Based on a Deep Learning System

Nie, Yali January 2021 (has links)
Melanoma is one of the deadliest forms of cancer. Unfortunately, its incidence rates have been increasing all over the world. One of the techniques used by dermatologists to diagnose melanomas is an imaging modality called dermoscopy. The skin lesion is inspected using a magnification device and a light source. This technique makes it possible for the dermatologist to observe subcutaneous structures that would be invisible otherwise. However, the use of dermoscopy is not straightforward, requiring years of practice. Moreover, the diagnosis is many times subjective and challenging to reproduce. Therefore, it is necessary to develop automatic methods that will help dermatologists provide more reliable diagnoses.  Since this cancer is visible on the skin, it is potentially detectable at a very early stage when it is curable. Recent developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in the clinical diagnostic ability to the point that it can detect melanoma in the clinic at the earliest stages. This technology’s global adoption has allowed the accumulation of extensive collections of dermoscopy images. The development of advanced technologies in image processing and machine learning has given us the ability to distinguish malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow earlier detection of melanoma and reduce a large number of unnecessary and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, a widespread implementation must await further technical progress in accuracy and reproducibility.  This thesis provides an overview of our deep learning (DL) based methods used in the diagnosis of melanoma in dermoscopy images. First, we introduce the background. 
Then, we give a brief overview of the state-of-the-art literature on melanoma interpretation. After that, a review is provided of the deep learning models for melanoma image analysis and the main techniques used to improve diagnostic performance. We also summarize our research results. Finally, we discuss the challenges and opportunities in automating the diagnostic procedures for melanocytic skin lesions, and end with conclusions and directions for future research.
284

Hybrid Model Approach to Appliance Load Disaggregation : Expressive appliance modelling by combining convolutional neural networks and hidden semi Markov models. / Hybridmodell för disaggregering av hemelektronik : Detaljerad modellering av elapparater genom att kombinera neurala nätverk och Markovmodeller.

Huss, Anders January 2015 (has links)
The increasing energy consumption is one of the greatest environmental challenges of our time. Residential buildings account for a considerable part of the total electricity consumption and are a sector shown to have large savings potential. Non Intrusive Load Monitoring (NILM), i.e. the deduction of the electricity consumption of individual home appliances from the total electricity consumption of a household, is a compelling approach to deliver appliance-specific consumption feedback to consumers. This enables informed choices and can promote sustainable and cost-saving actions. To achieve this, accurate and reliable appliance load disaggregation algorithms must be developed. This Master's thesis proposes a novel approach to the disaggregation problem, inspired by state-of-the-art algorithms in the field of speech recognition. Previous approaches, for sampling frequencies ≤ 1 Hz, have primarily focused on different types of hidden Markov models (HMMs) and occasionally the use of artificial neural networks (ANNs). HMMs are a natural representation of electric appliances; however, with a purely generative approach to disaggregation, essentially all appliances have to be modelled simultaneously. Due to the large number of possible appliances and the variations between households, this is a major challenge. It imposes strong restrictions on the complexity, and thus the expressiveness, of each appliance model to make inference algorithms feasible. In this thesis, disaggregation is treated as a factorisation problem where each appliance signal has to be extracted from its background. A hybrid model is proposed, where a convolutional neural network (CNN) extracts features that correlate with the state of a single appliance and the features are used as observations for a hidden semi-Markov model (HSMM) of the appliance.
Since this allows for modelling of a single appliance, it becomes computationally feasible to use a more expressive Markov model. As proof of concept, the hybrid model is evaluated on 238 days of 1 Hz power data, collected from six households, to predict the power usage of the households' washing machine. The hybrid model is shown to perform considerably better than a CNN alone, and it is further demonstrated how a significant increase in performance is achieved by including transitional features in the HSMM.
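The advantage of a semi-Markov model for appliances is that each state draws an explicit dwell time, rather than relying on a per-step self-transition probability as in an ordinary HMM. A minimal generative sketch of such an explicit-duration on/off appliance (the dwell-time ranges and power levels are invented for illustration, not taken from the thesis):

```python
import random

def simulate_appliance(steps, seed=0):
    # Explicit-duration (semi-Markov) on/off model: each state samples
    # a dwell time up front instead of a per-step self-transition.
    rng = random.Random(seed)
    dwell = {"off": (30, 120), "on": (10, 40)}   # dwell-time ranges in steps
    power = {"off": 0.0, "on": 2000.0}           # watts drawn in each state
    state, signal = "off", []
    while len(signal) < steps:
        d = rng.randint(*dwell[state])           # sample how long to stay
        signal.extend([power[state]] * d)
        state = "on" if state == "off" else "off"
    return signal[:steps]

trace = simulate_appliance(200)
print(len(trace), max(trace))
```

Because the duration distribution is explicit, appliances with characteristic cycle lengths (like a washing machine programme) are modelled far more faithfully than with geometric HMM dwell times.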
285

Detection and Segmentation of Brain Metastases with Deep Convolutional Networks

Losch, Max January 2015 (has links)
As deep convolutional networks (ConvNets) reach spectacular results on a multitude of computer vision tasks and perform almost as well as a human rater on the task of segmenting gliomas in the brain, I investigated their applicability to detecting and segmenting brain metastases. I trained networks of increasing depth to improve the detection rate and introduced a border-pair scheme to reduce oversegmentation. A constraint on the time for segmenting a complete brain scan required the use of fully convolutional networks, which reduced the time from 90 minutes to 40 seconds. Despite some noise and label errors present in the 490 full-brain MRI scans, the final network achieves a true positive rate of 82.8% and 0.05 misclassifications per slice, with a perfect detection score for all lesions greater than 3 mm. This work indicates that ConvNets are a suitable approach to both detecting and segmenting metastases, especially as further architectural extensions might improve the predictive performance even more.
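The two figures reported above (true positive rate and misclassifications per slice) come straight from raw detection counts; a minimal sketch of how they are computed (the counts below are invented for illustration, not the thesis data):

```python
def detection_metrics(true_positives, false_negatives, false_positives, n_slices):
    # True positive rate: fraction of actual lesions that were detected
    tpr = true_positives / (true_positives + false_negatives)
    # False detections averaged over all evaluated slices
    fp_per_slice = false_positives / n_slices
    return tpr, fp_per_slice

tpr, fps = detection_metrics(true_positives=82, false_negatives=17,
                             false_positives=50, n_slices=1000)
print(round(tpr, 3), fps)  # → 0.828 0.05
```

Reporting false positives per slice rather than a plain false positive rate is common in lesion detection, since the number of candidate locations per slice is ill-defined.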
286

Application of Deep-learning Method to Surface Anomaly Detection / Tillämpning av djupinlärningsmetoder för detektering av ytanomalier

Le, Jiahui January 2021 (has links)
In traditional industrial manufacturing, due to technological limitations, manual inspection methods are still used to detect product surface defects. This approach is slow and inefficient owing to human limitations and outdated technology. The aim of this thesis is to investigate whether the process can be automated using modern computer hardware and image-based classification of defects with different deep learning methods. Based on results from controlled experiments, the report concludes that it is possible to achieve a Dice coefficient of more than 81%.
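The Dice coefficient quoted above measures the overlap between a predicted defect mask and a ground-truth mask; a minimal sketch on flat binary masks (the masks are invented for illustration):

```python
def dice_coefficient(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal length
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 0, 1, 0]   # predicted defect pixels
truth = [1, 0, 0, 0, 1, 1]   # annotated defect pixels
print(round(dice_coefficient(pred, truth), 3))  # → 0.667
```

A Dice score of 1 means perfect overlap and 0 means none, so the thesis's 81% indicates the predicted defect regions largely coincide with the annotations.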
287

Optimizing Convolutional Neural Networks for Inference on Embedded Systems

Strömberg, Lucas January 2021 (has links)
Convolutional neural networks (CNN) are state-of-the-art machine learning models used for various computer vision problems, such as image recognition. As these networks normally need a vast number of parameters, they can be computationally expensive, which complicates deployment on embedded hardware, especially under constraints on, for instance, latency, memory, or power consumption. This thesis examines the CNN optimization methods pruning and quantization, in order to explore how they affect not only model accuracy but also possible inference latency speedup. Four baseline CNN models, based on popular and relevant architectures, were implemented and trained on the CIFAR-10 dataset. The networks were then quantized or pruned for various optimization parameters. All models can be successfully quantized to 5-bit weights and activations, or pruned to 70% sparsity, without any substantial effect on accuracy. The larger baseline models are generally more robust and can be quantized more aggressively; however, they are also more sensitive to low-bit activations. Moreover, for 8-bit integer quantization the networks were implemented on an ARM Cortex-A72 microprocessor, where inference latency was studied. These fixed-point models achieve up to 5.5x inference speedup on the ARM processor, compared to the 32-bit floating-point baselines. The larger models gain more speedup from quantization than the smaller ones. While the results are not necessarily generalizable to different CNN architectures or datasets, the insights obtained in this thesis can be used as starting points for further investigations into model optimization and its possible effects on accuracy and embedded inference latency.
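The two optimizations compared above can be sketched on a bare weight vector: magnitude pruning zeroes the smallest-magnitude weights until a target sparsity is reached, and symmetric uniform quantization snaps floats onto an n-bit integer grid. This is a simplified illustration of the general techniques, not the thesis's implementation:

```python
def prune(weights, sparsity):
    # Magnitude pruning: zero the smallest |w| until `sparsity` fraction are zero.
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize(weights, bits=8):
    # Symmetric uniform quantization to signed n-bit integers and back
    # (assumes at least one nonzero weight, so the scale is nonzero).
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

w = [0.1, -0.5, 0.05, 0.9, -0.02, 0.3, 0.7, -0.2, 0.15, 0.4]
print(sum(1 for x in prune(w, 0.7) if x == 0.0))  # 7 of 10 weights pruned
print(quantize([1.0, -0.5, 0.25], bits=5)[:1])
```

Pruning saves memory and (with sparse kernels) compute, while integer quantization trades a small rounding error for cheap fixed-point arithmetic, which is where the ARM speedups reported above come from.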
288

Predictive Maintenance in Smart Agriculture Using Machine Learning : A Novel Algorithm for Drift Fault Detection in Hydroponic Sensors

Shaif, Ayad January 2021 (has links)
The success of Internet of Things solutions has allowed the establishment of new applications such as smart hydroponic agriculture. One typical problem in such an application is the rapid degradation of the deployed sensors. Traditionally, this problem is resolved by frequent manual maintenance, which is considered ineffective and may harm the crops in the long run. The main purpose of this thesis was to propose a machine learning approach for automating the detection of sensor drift faults. In addition, the solution's operability was investigated in a cloud computing environment in terms of response time. This thesis proposes a detection algorithm that utilizes RNNs to predict sensor drifts from time-series data streams. The detection algorithm, named the Predictive Sliding Detection Window (PSDW), consists of both forecasting and classification models. Three different RNN algorithms, i.e., LSTM, CNN-LSTM, and GRU, were designed to predict sensor drifts using forecasting and classification techniques. The algorithms were compared against each other in terms of relevant accuracy metrics for forecasting and classification. The operability of the solution was investigated by developing a web server that hosted the PSDW algorithm on an AWS computing instance. The resulting forecasting and classification algorithms were able to make reasonably accurate predictions for this particular scenario. More specifically, the forecasting algorithms achieved relatively low RMSE values of ~0.6, while the classification algorithms obtained an average F1-score and accuracy of ~80%, but with a high standard deviation. However, the response time was ~5700% slower during the simulation of the HTTP requests. The obtained results suggest the need for future investigations to improve the accuracy of the models and to experiment with other computing paradigms for more reliable deployments.
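The principle behind a forecast-then-classify drift detector like the one described above can be sketched without any RNN: slide a window over the stream, forecast the next reading, and flag drift when the residual grows too large. The forecaster here is a trivial moving average standing in for the thesis's LSTM/GRU models, and the threshold and data are invented:

```python
def detect_drift(readings, window=5, threshold=1.0):
    # Slide a window over the stream; forecast the next value as the
    # window mean and flag drift once the absolute residual exceeds
    # the threshold.
    for i in range(window, len(readings)):
        forecast = sum(readings[i - window:i]) / window
        if abs(readings[i] - forecast) > threshold:
            return i  # index where drift is first flagged
    return None      # no drift detected

stable = [7.0] * 20                                      # healthy pH sensor
drifting = [7.0] * 10 + [7.0 + 0.5 * k for k in range(1, 11)]  # drifting sensor
print(detect_drift(stable), detect_drift(drifting))
```

A learned forecaster catches slow drifts far earlier than this moving average, since the forecast does not itself get dragged along by the drifting readings, which is the motivation for using RNN forecasting models.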
289

Evaluation of 3D motion capture data from a deep neural network combined with a biomechanical model

Rydén, Anna, Martinsson, Amanda January 2021 (has links)
Motion capture has in recent years attracted growing interest in many fields, from the game industry to sports analysis. The need for reflective markers and expensive multi-camera systems limits adoption, since they are costly and time-consuming to use. One solution could be a deep neural network trained to extract 3D joint estimations from a 2D video captured with a smartphone. This master thesis project has investigated the accuracy of a trained convolutional neural network, MargiPose, that estimates 25 joint positions in 3D from a 2D video, against a gold-standard multi-camera Vicon system. The project has also investigated whether the data from the deep neural network can be connected to a biomechanical modelling software, AnyBody, for further analysis. The final intention of this project was to analyze how accurate such a combination could be in golf swing analysis. The accuracy of the deep neural network has been evaluated with three parameters: marker position, angular velocity and kinetic energy for different segments of the human body. MargiPose delivers results with high accuracy (Mean Per Joint Position Error (MPJPE) = 1.52 cm) for a simpler movement, but for a more advanced motion such as a golf swing, MargiPose achieves less accuracy in marker distance (MPJPE = 3.47 cm). The mean difference in angular velocity shows that MargiPose has difficulties following segments that are occluded or move quickly, such as the wrists in a golf swing, where they both move fast and are occluded by other body segments. The conclusion of this research is that it is possible to connect data from a trained CNN with a biomechanical modelling software. The accuracy of the network is highly dependent on the intended use of the data. For the purpose of golf swing analysis, this could be a cost-effective solution which could enable motion analysis not only for professionals but also for interested beginners. MargiPose shows high accuracy when evaluating simple movements. However, when using it with the intention of analyzing a golf swing in a biomechanical modelling software, the outcome might be beyond the bounds of reliable results.
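The MPJPE values quoted above are simply the mean Euclidean distance between estimated and reference 3D joint positions; a minimal sketch (the joint coordinates are invented for illustration):

```python
import math

def mpjpe(estimated, reference):
    # Mean Per Joint Position Error: mean Euclidean distance over
    # corresponding 3D joints (same units in, same units out).
    dists = [math.dist(e, r) for e, r in zip(estimated, reference)]
    return sum(dists) / len(dists)

est = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]   # estimated joints, cm
ref = [(0.0, 0.0, 1.0), (10.0, 0.0, 3.0)]   # Vicon reference joints, cm
print(mpjpe(est, ref))  # (1.0 + 3.0) / 2 = 2.0
```

Because the error is averaged uniformly over joints, a few badly tracked joints (like the occluded wrists in a golf swing) can dominate the score, which matches the degradation from 1.52 cm to 3.47 cm reported above.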
290

Tyre sound classification with machine learning

Jabali, Aghyad, Mohammedbrhan, Husein Abdelkadir January 2021 (has links)
Having enough data about the usage of tyre types on the road can lead to a better understanding of the consequences of studded tyres for the environment. This paper is focused on training and testing a machine learning model which can be further integrated into a larger system for automating the data collection process. Different machine learning algorithms, namely CNN, SVM, and Random Forest, were compared in this experiment. The method used in this paper is empirical. First, sound data for studded and non-studded tyres was collected from three different locations in the city of Gävle, Sweden. A total of 760 Mel spectrograms from both classes were generated to train and test a well-known CNN model (AlexNet) in MATLAB. Sound features for both classes were extracted using JAudio to train and test models that use SVM and Random Forest classifiers in Weka. Unnecessary features were removed one by one from the list of features to improve the performance of the classifiers. The results show that the CNN achieved an accuracy of 84%, SVM has the best performance both with and without removing some audio features (94% and 92%, respectively), while Random Forest has 89% accuracy. The test data comprises 51% of the studded class and 49% of the non-studded class, and the SVM model achieved more than 94%. Therefore, it can be considered an acceptable result that can be used in practice.
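The Mel spectrograms used above rest on the Mel scale, a perceptual frequency mapping under which equal steps sound equally spaced to a listener. The commonly used conversion is m = 2595·log10(1 + f/700); the constants below are the widely used values for that formula, not parameters from this paper:

```python
import math

def hz_to_mel(f_hz):
    # Perceptual frequency mapping: compresses high frequencies,
    # roughly matching human pitch resolution.
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # Inverse mapping back to Hertz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

print(round(hz_to_mel(700.0), 1))  # → 781.2
```

A Mel spectrogram is built by pooling an ordinary spectrogram into bands equally spaced on this scale, so low frequencies, where tyre and road noise differ most, get finer resolution than high ones.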
