281

Segmentation of People and Vehicles in Dense Voxel Grids from Photon Counting LiDAR using 3D-Unet

Danielsson, Fredrik January 2021 (has links)
In recent years, the usage of 3D deep learning techniques has seen a surge, mainly driven by advancements in autonomous driving and medical applications. This thesis investigates the applicability of existing state-of-the-art 3D deep learning network architectures to dense voxel grids from single photon counting 3D LiDAR. This work also examines the choice of loss function as a means of dealing with extreme data imbalance, in order to segment people and vehicles in outdoor forest scenes. Due to data similarities with volumetric medical data, such as computed tomography scans, this thesis investigates whether a model for 3D deep learning used for medical applications, the commonly used 3D U-Net, can be used for photon counting data. The results show that segmentation of people and vehicles is possible in this type of data but that performance depends on the segmentation task, light conditions, and the loss function. For people segmentation the final models are able to predict all targets, but with a significant number of false positives, something that is likely caused by similar LiDAR responses between people and tree trunks. For vehicle detection, the results are more inconsistent and vary greatly between different loss functions as well as the position and orientation of the vehicles. Overall, we consider the 3D U-Net model a successful proof of concept regarding the applicability of 3D deep learning techniques to this kind of data. / Under de senaste åren har användningen av djupinlärningstekniker för 3D sett en kraftig ökning, främst driven av framsteg inom autonoma fordon och medicinska tillämpningar. Denna avhandling undersöker befintliga moderna djupinlärningsnätverk för 3D i täta voxelgriddar från fotonräknande 3D LiDAR för att segmentera människor och fordon i skogsscener. Vidare undersöks valet av målfunktion som ett sätt att hantera extrem dataobalans. På grund av datalikheter med volymetriska medicinska data, såsom datortomografi, kommer denna avhandling att undersöka om en modell för 3D-djupinlärning som används för medicinska applikationer, nämligen 3D U-Net, kan användas för fotonräknande data. Resultaten visar att segmentering av människor och fordon är möjligt men att prestanda varierar avsevärt med segmenteringsuppgiften, ljusförhållanden och målfunktioner. För segmentering av människor kan de slutgiltiga modellerna segmentera alla mål men med en betydande mängd falska utslag, något som sannolikt orsakas av liknande LiDAR-svar mellan människor och trädstammar. För segmentering av fordon är resultaten mer oberäkneliga och varierar kraftigt mellan olika målfunktioner såväl som fordonens position och orientering. Sammantaget anser vi att 3D U-Net-modellen visar på en framgångsrik konceptvalidering när det gäller tillämpning av djupinlärningstekniker för 3D på denna typ av data.
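
As a concrete illustration of the loss-function question raised above, the sketch below shows a soft Dice loss in PyTorch, one common way to cope with extreme foreground/background imbalance in voxel-wise segmentation; the shapes, class setup and library choice are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): a soft Dice loss, one common way to
# handle extreme class imbalance in voxel-wise segmentation with a 3D U-Net.
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """logits, target: (batch, 1, D, H, W); target is a binary voxel mask."""
    probs = torch.sigmoid(logits)
    dims = (2, 3, 4)                          # sum over the spatial (voxel) axes
    intersection = (probs * target).sum(dims)
    union = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()                  # minimise 1 - Dice

# Example: a dense 64x64x64 voxel grid with a very sparse "person" class.
logits = torch.randn(2, 1, 64, 64, 64)
target = (torch.rand(2, 1, 64, 64, 64) > 0.99).float()
loss = soft_dice_loss(logits, target)
```
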
282

Rogue Drone Detection

Raheem, Muiz Olalekan January 2023 (has links)
Rogue drones have become a significant concern in recent years due to their potential to cause harm to people and property and to disrupt critical infrastructure and public safety. As a result, there has been a growing need for effective methods to detect and mitigate the risks posed by these drones. The proposed study addresses this task using a Radio Frequency (RF) based approach, with ensemble Machine Learning (ML) methods as well as Deep Learning (DL) techniques used as classification algorithms. Three levels of classification were defined for the task: drone detection, identification, and characterization based on operation mode. For the three levels, the Deep-Complex Convolutional Neural Network performed best, achieving average accuracies of 99.82%, 94.20%, and 90.25%, respectively.
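
For readers wanting a concrete picture, the sketch below shows one simplified way to set up RF-based drone classification at the three levels described above, treating the I/Q components of a radio snapshot as two real channels of a small 1D CNN; the architecture, class counts and framework are assumptions for illustration and are not the study's Deep-Complex network.

```python
# Simplified sketch (assumed architecture, not the study's Deep-Complex CNN):
# an RF snapshot's I/Q components treated as two real channels of a 1D CNN.
import torch
import torch.nn as nn

class RFClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:  # iq: (batch, 2, samples)
        return self.head(self.features(iq).squeeze(-1))

# Three classification levels from the abstract, as separate models (class counts hypothetical):
detect = RFClassifier(2)         # drone present / absent
identify = RFClassifier(4)       # drone type
characterize = RFClassifier(10)  # operation mode
scores = detect(torch.randn(1, 2, 1024))   # -> (1, 2) class scores
```
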
283

Deep Learning for Earth Observation: improvement of classification methods for land cover mapping : Semantic segmentation of satellite image time series

Carpentier, Benjamin January 2021 (has links)
Satellite Image Time Series (SITS) are becoming available at high spatial, spectral and temporal resolutions across the globe from the latest remote sensing sensors. These series of images can be highly valuable when exploited by classification systems to produce frequently updated and accurate land cover maps. The richness of spectral, spatial and temporal features in SITS is a promising source of data for developing better classification algorithms. However, machine learning methods such as Random Forests (RFs), despite their fruitful application to SITS to produce land cover maps, are structurally unable to properly handle intertwined spatial, spectral and temporal dynamics without breaking the structure of the data. Therefore, the present work proposes a comparative study of various deep learning algorithms from the Convolutional Neural Network (CNN) family and evaluates their performance on SITS classification. They are compared to the processing chain called iota2, developed by CESBIO and based on an RF model. Experiments are carried out in an operational context with sparse annotations from 290 labeled polygons. Fewer than 80 000 pixel time series belonging to 8 land cover classes from a year of Sentinel-2 monthly syntheses are used. Results on a test set of 131 polygons show that CNNs using 3D convolutions in space and time are more accurate than 1D temporal, stacked 2D and RF approaches. The best-performing models are CNNs using spatio-temporal features, namely 3D-CNN, 2D-CNN and SpatioTempCNN, a two-stream model using both 1D and 3D convolutions. / Tidsserier av satellitbilder (SITS) blir tillgängliga med hög rumslig, spektral och tidsmässig upplösning över hela världen med hjälp av de senaste fjärranalyssensorerna. Dessa bildserier kan vara mycket värdefulla när de utnyttjas av klassificeringssystem för att ta fram ofta uppdaterade och exakta kartor över marktäcken. Den stora mängden spektrala, rumsliga och tidsmässiga egenskaper i SITS är en lovande datakälla för utveckling av bättre algoritmer. Metoder för maskininlärning som Random Forests (RF), trots att de har tillämpats på SITS för att ta fram kartor över landtäckning, är strukturellt sett oförmögna att hantera den sammanflätade rumsliga, spektrala och temporala dynamiken utan att bryta sönder datastrukturen. I detta arbete föreslås därför en jämförande studie av olika algoritmer från CNN-familjen (Convolutional Neural Network) och en utvärdering av deras prestanda för SITS-klassificering. De jämförs med behandlingskedjan iota2, som utvecklats av CESBIO och bygger på en RF-modell. Försöken utförs i ett operativt sammanhang med glesa annotationer från 290 märkta polygoner. Färre än 80 000 pixeltidsserier som tillhör 8 marktäckeklasser från ett års månatliga Sentinel-2-synteser används. Resultaten visar att CNNs som använder 3D-faltningar i tid och rum är mer exakta än 1D temporala, staplade 2D- och RF-metoder. De bäst presterande modellerna är CNNs som använder spatiotemporala egenskaper, nämligen 3D-CNN, 2D-CNN och SpatioTempCNN, en modell med två flöden som använder både 1D- och 3D-faltningar.
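
The following sketch illustrates the core idea behind the best-performing family of models, 3D convolutions applied jointly over time and space for pixel time series patches; the band count, patch size and layer choices are assumptions for illustration, not the thesis or iota2 configuration.

```python
# Minimal sketch (assumed layout, not the iota2 chain or the thesis models):
# 3D convolutions over (time, height, width), with spectral bands as channels.
import torch
import torch.nn as nn

class SITS3DCNN(nn.Module):
    def __init__(self, n_bands: int = 10, n_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, time, height, width), e.g. 12 monthly syntheses of 9x9 patches
        return self.net(x)

model = SITS3DCNN()
logits = model(torch.randn(4, 10, 12, 9, 9))   # -> (4, 8) class scores
```
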
284

Development of alternative air filtration materials and methods of analysis

Beckman, Ivan Philip 09 December 2022 (has links) (PDF)
Clean air is a global health concern. Each year more than seven million people across the globe perish from breathing poor quality air. The development of high efficiency particulate air (HEPA) filters represents an effort to mitigate dangerous aerosol hazards at the point of production. The nuclear power industry installs HEPA filters as a final line of containment of hazardous particles. Advancing air filtration technology is paramount to achieving global clean air. An exploration of analytical, experimental, computational, and machine learning models is presented in this dissertation to advance the science of air filtration technology. This dissertation studies, develops, and analyzes alternative air filtration materials and methods of analysis that optimize filtration efficiency and reduce resistance to air flow. Alternative nonwoven filter materials are considered for use in HEPA filtration. A detailed review of natural and synthetic fibers is presented to compare the mechanical, thermal, and chemical properties of fibers against the characteristics desirable for air filtration media. An experimental effort is undertaken to produce and evaluate new nanofibrous air filtration materials through electrospinning. Electrospun and stabilized nanofibrous media are visually analyzed through optical imaging and tested for filtration efficiency and air flow resistance. The single fiber efficiency (SFE) analytical model is applied to air filtration media for the prediction of filtration efficiency and air flow resistance. Digital twin replicas of nonwoven nanofibrous media are created using computer scripting and commercial digital geometry software. Digital twin filters are visually compared to melt-blown and electrospun filters. Scanning electron microscopy images are evaluated using a machine learning model. A convolutional neural network is presented as a method to analyze complex geometry. Digital replication of air filtration media enables coordination among experimental, analytical, machine learning, and computational air filtration models. The value of using synthetic data to train and evaluate computational and machine learning models is demonstrated through prediction of air filtration performance and comparison to analytical results. This dissertation concludes with a discussion of potential opportunities and the future work needed in the continued effort to advance clean air technologies for the mitigation of a global health and safety challenge.
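
To make the single fiber efficiency (SFE) modelling step more concrete, the sketch below applies the classical log-penetration relation from fibrous filtration theory (see e.g. Hinds, Aerosol Technology), which converts a single fiber efficiency into an overall filter efficiency; the parameter values are invented examples and the function is not taken from the dissertation.

```python
# Illustrative sketch of the classical log-penetration relation used with single
# fiber efficiency (SFE) models; the numbers below are made-up examples, not
# values from the dissertation.
import math

def filter_efficiency(sfe: float, solidity: float, thickness_m: float, fiber_diam_m: float) -> float:
    """Overall collection efficiency of a fibrous filter from its single fiber efficiency."""
    exponent = -4.0 * solidity * sfe * thickness_m / (math.pi * fiber_diam_m * (1.0 - solidity))
    return 1.0 - math.exp(exponent)   # penetration P = exp(exponent)

# Example: SFE of 0.05, 5 % solidity, 0.5 mm thick media, 2 micrometre fibers.
print(filter_efficiency(0.05, 0.05, 0.5e-3, 2.0e-6))
```
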
285

Rethinking continual learning approach and study out-of-distribution generalization algorithms

Laleh, Touraj 08 1900 (has links)
L'un des défis des systèmes d'apprentissage automatique actuels est que les paradigmes d'IA standard ne sont pas doués pour transférer (ou exploiter) les connaissances entre les tâches. Alors que de nombreux systèmes ont été entraînés et ont obtenu des performances élevées sur une distribution spécifique d'une tâche, il n'est pas facile d'entraîner des systèmes d'IA qui peuvent bien fonctionner sur un ensemble diversifié de tâches appartenant à différentes distributions. Ce problème a été abordé sous différents angles dans différents domaines, y compris l'apprentissage continu et la généralisation hors distribution. Si un système d'IA est entraîné sur un ensemble de tâches appartenant à différentes distributions, il pourrait oublier les connaissances acquises lors des tâches précédentes. En apprentissage continu, ce processus entraîne un oubli catastrophique, qui est l'un des problèmes fondamentaux de ce domaine. Le premier projet de recherche de cette thèse porte sur la comparaison d'un apprenant chaotique et d'une configuration naïve d'apprentissage continu. L'entraînement d'un modèle de réseau neuronal profond nécessite généralement plusieurs itérations, ou époques, sur l'ensemble de données d'apprentissage, pour mieux estimer les paramètres du modèle. La plupart des approches proposées pour ce problème tentent de compenser les effets des mises à jour des paramètres dans la configuration incrémentielle par lots, dans laquelle le modèle en entraînement visite un grand nombre d'échantillons pendant plusieurs époques. Cependant, il n'est pas réaliste de s'attendre à ce que les données d'entraînement soient toujours fournies au modèle de cette manière. Dans ce chapitre, nous proposons un apprenant de flux chaotique qui imite le comportement chaotique des neurones biologiques et ne met pas à jour les paramètres du réseau. De plus, il peut fonctionner avec moins d'échantillons que les modèles d'apprentissage profond dans les configurations d'apprentissage par flux. Fait intéressant, nos expériences sur différents ensembles de données montrent que l'apprenant de flux chaotique souffre, par sa nature, de moins d'oubli catastrophique qu'un modèle CNN en apprentissage continu. Les modèles d'apprentissage profond ont une faible performance de généralisation hors distribution lorsque la distribution de test est inconnue et différente de celle de l'entraînement. Au cours des dernières années, il y a eu de nombreux projets de recherche visant à comparer les algorithmes hors distribution, y compris les méthodes basées sur la moyenne et sur les scores. Cependant, la plupart des méthodes proposées ne tiennent pas compte du niveau de difficulté des tâches. Le deuxième projet de recherche de cette thèse analyse certaines forces et faiblesses logiques et pratiques des méthodes existantes de comparaison et de classement des algorithmes hors distribution. Nous proposons une nouvelle approche de classement qui définit des ratios de difficulté des tâches afin de comparer les algorithmes de généralisation hors distribution. Nous avons comparé les classements basés sur la moyenne, sur les scores et sur la difficulté pour quatre tâches sélectionnées du benchmark WILDS et cinq algorithmes hors distribution populaires. L'analyse montre d'importants changements dans les ordres de classement par rapport aux approches de classement actuelles. / One of the challenges of current machine learning systems is that standard AI paradigms are not good at transferring (or leveraging) knowledge across tasks.
While many systems have been trained and achieved high performance on a specific distribution of a task, it is not easy to train AI systems that can perform well on a diverse set of tasks that belong to different distributions. This problem has been addressed from different perspectives in different domains, including continual learning and out-of-distribution generalization. If an AI system is trained on a set of tasks belonging to different distributions, it could forget the knowledge it acquired from previous tasks. In continual learning, this process results in catastrophic forgetting, which is one of the core issues of this domain. The first research project in this thesis focuses on the comparison of a chaotic learner and a naive continual learning setup. Training a deep neural network model usually requires multiple iterations, or epochs, over the training data set to better estimate the parameters of the model. Most proposed approaches for this issue try to compensate for the effects of parameter updates in the batch incremental setup, in which the training model visits a large number of samples over several epochs. However, it is not realistic to expect that training data will always be fed to the model in this way. In this chapter, we propose a chaotic stream learner that mimics the chaotic behavior of biological neurons and does not update network parameters. In addition, it can work with fewer samples than deep learning models in stream learning setups. Interestingly, our experiments on different datasets show that the chaotic stream learner suffers less catastrophic forgetting, by its nature, than a CNN model in continual learning. Deep learning models show weak out-of-distribution (OoD) generalization performance when the test distribution is unknown and different from the training distribution. In recent years, there have been many research projects comparing OoD algorithms, including average- and score-based methods. However, most proposed methods do not consider the level of difficulty of tasks. The second research project in this thesis analyzes some logical and practical strengths and drawbacks of existing methods for comparing and ranking OoD algorithms. We propose a novel ranking approach that defines task difficulty ratios to compare OoD generalization algorithms. We compared the average, score-based, and difficulty-based rankings of four selected tasks from the WILDS benchmark and five popular OoD algorithms for the experiment. The analysis shows significant changes in the ranking orders compared with current ranking approaches.
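
Purely as an illustration of the ranking question discussed above, the sketch below contrasts a plain average ranking of OoD algorithms with a difficulty-weighted one; the algorithm names, accuracy numbers and the specific difficulty proxy are hypothetical and do not reproduce the thesis' definition of task difficulty ratios.

```python
# Purely illustrative sketch of a difficulty-weighted ranking of OoD algorithms;
# the thesis' actual definition of task difficulty ratios is not reproduced here.
import numpy as np

# accuracy[i, j]: accuracy of algorithm i on task j (hypothetical numbers).
algorithms = ["ERM", "IRM", "GroupDRO", "CORAL", "Mixup"]
accuracy = np.array([
    [0.71, 0.55, 0.80, 0.62],
    [0.69, 0.58, 0.78, 0.65],
    [0.70, 0.60, 0.75, 0.66],
    [0.72, 0.54, 0.79, 0.61],
    [0.68, 0.57, 0.77, 0.64],
])

# One possible difficulty proxy: the harder the task (lower mean accuracy),
# the more weight it gets in the ranking.
difficulty = 1.0 - accuracy.mean(axis=0)
weights = difficulty / difficulty.sum()

avg_rank = np.argsort(-accuracy.mean(axis=1))                   # plain average ranking
weighted_rank = np.argsort(-(accuracy * weights).sum(axis=1))   # difficulty-weighted ranking

print([algorithms[i] for i in avg_rank])
print([algorithms[i] for i in weighted_rank])
```
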
286

ISAR Imaging Enhancement Without High-Resolution Ground Truth

Enåkander, Moltas January 2023 (has links)
In synthetic aperture radar (SAR) and inverse synthetic aperture radar (ISAR), an imaging radar emits electromagnetic waves of varying frequencies towards a target and the backscattered waves are collected. By either moving the radar antenna or rotating the target and combining the collected waves, a much longer synthetic aperture can be created. These radar measurements can be used to determine the radar cross-section (RCS) of the target and to reconstruct an estimate of the target. However, the reconstructed images will suffer from spectral leakage effects and are limited in resolution. Many methods of enhancing the images exist and some are based on deep learning. Most commonly the deep learning methods rely on high-resolution ground truth data of the scene to train a neural network to enhance the radar images. In this thesis, a method that does not rely on any high-resolution ground truth data is applied to train a convolutional neural network to enhance radar images. The network takes a conventional ISAR image subject to spectral leakage effects as input and outputs an enhanced ISAR image which contains much more defined features. New RCS measurements are created from the enhanced ISAR image and the network is trained to minimise the difference between the original RCS measurements and the new RCS measurements. A sparsity constraint is added to ensure that the proposed enhanced ISAR image is sparse. The synthetic training data consists of scenes containing point scatterers that are either individual or grouped together to form shapes. The scenes are used to create synthetic radar measurements which are then used to reconstruct ISAR images of the scenes. The network is tested using both synthetic data and measurement data from a cylinder and two aeroplane models. The network manages to minimise spectral leakage and increase the resolution of the ISAR images created from both synthetic and measured RCSs, especially on measured data from target models which have similar features to the synthetic training data.  The contributions of this thesis work are firstly a convolutional neural network that enhances ISAR images affected by spectral leakage. The neural network handles complex-valued signals as a single channel and does not perform any rescaling of the input. Secondly, it is shown that it is sufficient to calculate the new RCS for much fewer frequency samples and angular positions and compare those measurements to the corresponding frequency samples and angular positions in the original RCS to train the neural network.
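
The self-supervised objective described above can be summarised schematically as follows: the enhanced image is mapped back to RCS measurements through a forward model and compared with the original measurements, with an L1 term enforcing sparsity. The sketch assumes a generic linear forward operator and PyTorch tensors; it is a conceptual outline, not the thesis code.

```python
# Conceptual sketch of a self-supervised loss of the kind described above:
# the network's enhanced image is pushed back through an (assumed) linear
# forward model to re-create RCS measurements, which are compared with the
# originals, plus an L1 term encouraging a sparse scene. Not the thesis code.
import torch

def self_supervised_loss(enhanced_image: torch.Tensor,
                         measured_rcs: torch.Tensor,
                         forward_model: torch.Tensor,
                         sparsity_weight: float = 1e-3) -> torch.Tensor:
    """enhanced_image: (pixels,) complex; forward_model: (measurements, pixels) complex."""
    predicted_rcs = forward_model @ enhanced_image          # re-created measurements
    data_term = torch.mean(torch.abs(predicted_rcs - measured_rcs) ** 2)
    sparsity_term = torch.mean(torch.abs(enhanced_image))   # L1 sparsity constraint
    return data_term + sparsity_weight * sparsity_term
```
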
287

Deep Learning Methods for Recovering Trading Strategies

Emtell, Erik, Spjuth, Oliver January 2022 (has links)
The aim of this paper is first of all to determine whether deep learning methods can recover trading strategies based on historical price and volume data, with scarcity of real data in mind. The second aim is to evaluate the methods to generate a deep learning blueprint for strategy extraction. Trading strategies can be built on many different types of data, often combined from different areas. In this paper, we focus on trading strategies based solely on historical price and volume data to limit the scope of the problem. Combinations of different deep learning architectures and methods such as transfer and ensemble methods were evaluated. The results clearly show that deep learning models can recover relatively complex trading strategies to some extent. Models leveraging transfer learning outperform other models when data is scarce, and ensemble methods elevate performance in certain regards. / Målet med denna rapport är i första hand att ta reda på om djupinlärningsmetoder kan återskapa handelsstrategier baserat på historiska priser och volymdata, med vetskapen att datan är begränsad. Det andra målet är att utvärdera metoder för att skapa en djupinlärningsmall för att utvinna handelsstrategier. Handelsstrategier kan vara byggda på många olika datatyper, ofta i kombination från olika områden. I denna rapport fokuserar vi på strategier som enbart är baserade på historiska priser och volymdata för att begränsa problemet. Kombinationer av olika djupinlärningsarkitekturer tillsammans med metoder som till exempel överföringsinlärning och ensembleinlärning utvärderades. Resultaten visar tydligt att djupinlärningsmodeller kan återskapa relativt komplexa handelsstrategier. Modeller som utnyttjade överföringsinlärning presterade bättre än andra modeller när datan var begränsad och ensembleinlärning ökade prestandan ytterligare i vissa sammanhang. / Kandidatexjobb i elektroteknik 2022, KTH, Stockholm
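
As a rough sketch of the two ingredients highlighted above, the snippet below freezes a stand-in pre-trained backbone and fine-tunes only a small head, and averages the predictions of several models into an ensemble; the layer sizes, class labels and model names are assumptions for illustration only.

```python
# Generic sketch of the two ideas highlighted above, with made-up model shapes:
# (1) transfer learning: reuse a network pre-trained on plentiful price data and
# fine-tune only its head on the scarce target data; (2) a simple ensemble.
import torch
import torch.nn as nn

pretrained = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))  # stand-in backbone
for p in pretrained.parameters():
    p.requires_grad = False                      # freeze the transferred layers

head = nn.Linear(32, 3)                          # e.g. buy / hold / sell, fine-tuned on scarce data
model = nn.Sequential(pretrained, head)

def ensemble_predict(models, features: torch.Tensor) -> torch.Tensor:
    """Average the class probabilities of several independently trained models."""
    probs = [torch.softmax(m(features), dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)
```
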
288

Deep Learning for Prediction of Falling Blood Pressure During Surgery : Prediction of Falling Blood Pressure

Zandpour, Navid January 2022 (has links)
Perioperative hypotension corresponds to critically low blood pressure events during the pre-, intra- and postoperative periods. It is a common side effect of general anaesthesia and is strongly associated with an increased risk of postoperative complications, such as acute kidney injury, myocardial injury and, in the worst case, death. Early treatment of hypotension, preferably even before onset, is crucial in order to reduce the risk and severity of its associated complications. This work explores methods for predicting the onset of hypotension which could serve as a warning mechanism for clinicians managing the patient’s hemodynamics. More specifically, we present methods using only the arterial blood pressure curve to predict two different definitions of hypotension. The presented methods are based on a Convolutional Neural Network (CNN) trained on data from patients undergoing high-risk surgery. The experimental results show that our network can predict hypotension with 70% sensitivity and 80% specificity 5 minutes before onset. The prediction performance is then quickly reduced for longer prediction times, resulting in 60% sensitivity and 80% specificity 15 minutes before onset. / Perioperativ hypotension motsvarar perioder av kritiskt lågt blodtryck före, under och efter operation. Det är en vanlig bieffekt av generell anestesi och är starkt associerad med ökad risk för postoperativa komplikationer, såsom akut njurskada, myokardskada och i värsta fall dödsfall. Tidig behandling av hypotension, helst innan perioden börjar, är avgörande för att minska risken och allvarlighetsgraden av postoperativa komplikationer. Det här arbetet utforskar metoder för att förutspå perioder av hypotension, vilket skulle kunna användas för att varna vårdpersonal som ansvarar för patientens hemodynamiska övervakning. Mer specifikt så presenteras metoder som endast använder artärblodtryck för att förutspå två olika definitioner av hypotension. Metoderna som presenteras är baserade på ett Convolutional Neural Network (CNN) som tränats på data från patienter som genomgår högriskoperation. De experimentella resultaten visar att vår modell kan förutspå hypotension med 70% sensitivitet och 80% specificitet 5 minuter i förväg. Förmågan att förutspå hypotension avtar sedan snabbt för längre prediktionstider, vilket resulterar i 60% sensitivitet och 80% specificitet 15 minuter i förväg.
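
For reference, the short sketch below shows how sensitivity and specificity, the metrics quoted above, can be computed from a model's scores on arterial-pressure windows at a chosen threshold; the scores and labels are hypothetical and the snippet is not part of the thesis pipeline.

```python
# Small sketch (illustrative, not the thesis pipeline): turning a model's scores
# on arterial-pressure windows into sensitivity and specificity figures.
import numpy as np

def sensitivity_specificity(scores: np.ndarray, labels: np.ndarray, threshold: float):
    """labels: 1 = hypotension within the prediction horizon, 0 = otherwise."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1));  fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

scores = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8])   # hypothetical CNN outputs
labels = np.array([1,   0,   1,   0,   0,   1  ])
sens, spec = sensitivity_specificity(scores, labels, threshold=0.5)
```
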
289

Evaluation of Three Machine Learning Algorithms for the Automatic Classification of EMG Patterns in Gait Disorders

Fricke, Christopher, Alizadeh, Jalal, Zakhary, Nahrin, Woost, Timo B., Bogdan, Martin, Classen, Joseph 27 March 2023 (has links)
Gait disorders are common in neurodegenerative diseases, and distinguishing between seemingly similar kinematic patterns associated with different pathological entities is a challenge even for the experienced clinician. Ultimately, muscle activity underlies the generation of kinematic patterns. Therefore, one possible way to address this problem may be to differentiate gait disorders by analyzing intrinsic features of muscle activation patterns. Here, we examined whether it is possible to differentiate electromyography (EMG) gait patterns of healthy subjects and patients with different gait disorders using machine learning techniques. Nineteen healthy volunteers (9 male, 10 female, age 28.2 ± 6.2 years) and 18 patients with gait disorders (10 male, 8 female, age 66.2 ± 14.7 years) resulting from different neurological diseases walked down a hallway 10 times at a convenient pace while their muscle activity was recorded via surface EMG electrodes attached to 5 muscles of each leg (10 channels in total). Gait disorders were classified as predominantly hypokinetic (n = 12) or ataxic (n = 6) gait by two experienced raters based on video recordings. Three different classification methods (Convolutional Neural Network—CNN, Support Vector Machine—SVM, K-Nearest Neighbors—KNN) were used to automatically classify EMG patterns according to the underlying gait disorder and differentiate patients and healthy participants. Using a leave-one-out approach for training and evaluating the classifiers, the automatic classification of normal and abnormal EMG patterns during gait (2 classes: “healthy” and “patient”) was possible with a high degree of accuracy using CNN (accuracy 91.9%), but not SVM (accuracy 67.6%) or KNN (accuracy 48.7%). For classification of hypokinetic vs. ataxic vs. normal gait (3 classes), best results were again obtained for CNN (accuracy 83.8%) while SVM and KNN performed worse (accuracy SVM 51.4%, KNN 32.4%). These results suggest that machine learning methods are useful for distinguishing individuals with gait disorders from healthy controls and may help classification with respect to the underlying disorder even when classifiers are trained on comparably small cohorts. In our study, CNN achieved higher accuracy than SVM and KNN and may constitute a promising method for further investigation.
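
To illustrate the classical-ML side of such a comparison, the sketch below runs SVM and KNN under a leave-one-subject-out scheme with scikit-learn; the feature matrix, subject counts and labels are randomly generated placeholders, and the CNN branch and real EMG features are not reproduced.

```python
# Sketch of the classical-ML side of such a comparison (hypothetical feature
# matrix): leave-one-subject-out evaluation of SVM and KNN on per-walk EMG features.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(370, 40))          # e.g. 37 subjects x 10 walks, 40 EMG features each
y = rng.integers(0, 2, size=370)        # 0 = healthy, 1 = patient (hypothetical labels)
groups = np.repeat(np.arange(37), 10)   # subject IDs, so no subject spans train and test

logo = LeaveOneGroupOut()
for name, clf in [("SVM", SVC(kernel="rbf")), ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=logo, groups=groups).mean()
    print(f"{name}: {acc:.3f}")
```
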
290

Convolutional Neural Network Optimization Using Genetic Algorithms

Reiling, Anthony J. January 2017 (has links)
No description available.
