461

Models and Representation Learning Mechanisms for Graph Data

Susheel Suresh (14228138) 15 December 2022 (has links)
Graph representation learning (GRL) has been increasingly used to model and understand data from a wide variety of complex systems spanning social, technological, bio-chemical and physical domains. GRL consists of two main components: (1) a parametrized encoder that provides representations of graph data and (2) a learning process to train the encoder parameters. Designing flexible encoders that capture the underlying invariances and characteristics of graph data is crucial to the success of GRL. On the other hand, the learning process drives the quality of the encoder representations, and developing principled learning mechanisms is vital for a number of growing applications in self-supervised, transfer and federated learning settings. To this end, we propose a suite of models and learning algorithms for GRL which form the two main thrusts of this dissertation.

In Thrust I, we propose two novel encoders which build upon a widely popular GRL encoder class called graph neural networks (GNNs). First, we empirically study the prediction performance of current GNN-based encoders when applied to graphs with heterogeneous node mixing patterns, using our proposed notion of local assortativity. We find that GNN performance in node prediction tasks strongly correlates with our local assortativity metric, thereby exposing a limitation of these encoders. We propose to transform the input graph into a computation graph with proximity and structural information as distinct types of edges, and then propose a novel GNN-based encoder that operates on this computation graph and adaptively chooses between structure and proximity information. Empirically, adopting our transformation and encoder framework leads to improved node classification performance compared to baselines on real-world graphs that exhibit diverse mixing. Second, we study the trade-off between expressivity and efficiency of GNNs when applied to temporal graphs for the task of link ranking. We develop an encoder that incorporates a labeling approach designed to allow efficient joint inference over the candidate set, while provably boosting expressivity. We also propose to optimize a list-wise loss for improved ranking. With extensive evaluation on real-world temporal graphs, we demonstrate its improved performance and efficiency compared to baselines.

In Thrust II, we propose two principled encoder learning mechanisms for challenging and realistic graph data settings. First, we consider a scenario where only limited or even no labeled data is available for GRL. Recent research has converged on graph contrastive learning (GCL), where GNNs are trained to maximize the correspondence between representations of the same graph in its different augmented forms. However, we find that GNNs trained by traditional GCL often risk capturing redundant graph features, and thus may be brittle and provide sub-par performance in downstream tasks. We then propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training by optimizing adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL by comparing with state-of-the-art GCL methods and achieve performance gains in semi-supervised, unsupervised and transfer learning settings using benchmark chemical and biological molecule datasets.

Second, we consider a scenario where graph data is siloed across clients for GRL. We focus on two unique challenges encountered when applying distributed training to GRL: (i) client task heterogeneity and (ii) label scarcity. We propose a novel learning framework called federated self-supervised graph learning (FedSGL), which first utilizes a self-supervised objective to train GNNs in a federated fashion across clients; each client then fine-tunes the obtained GNNs based on its local task and available labels. Our framework enables the federated GNN model to extract patterns from the common feature (attribute and graph topology) space without the need for labels or being biased by heterogeneous local tasks. An extensive empirical study of FedSGL on both node and graph classification tasks yields fruitful insights into how the level of feature/task heterogeneity, the adopted federated algorithm and the level of label scarcity affect the clients' performance on their tasks.
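To make the adversarial objective concrete, a minimal sketch of an AD-GCL-style training step is shown below, alternating between a Gumbel-sigmoid edge-dropping augmenter and the encoder. The `encoder` and `augmenter` modules, their call signatures, and the batch handling are illustrative assumptions, not the dissertation's implementation:

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, tau=0.2):
        # Normalized-temperature contrastive loss between two views of a batch.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau
        labels = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, labels)

    def gumbel_edge_mask(edge_logits, tau=1.0):
        # Relaxed Bernoulli sample per edge: a differentiable edge-dropping mask.
        u = torch.rand_like(edge_logits).clamp(1e-6, 1 - 1e-6)
        return torch.sigmoid((edge_logits + torch.log(u) - torch.log(1 - u)) / tau)

    def adgcl_step(encoder, augmenter, graphs, enc_opt, aug_opt):
        # Augmenter step: make the augmented view *disagree* with the original.
        mask = gumbel_edge_mask(augmenter(graphs))
        aug_loss = -info_nce(encoder(graphs), encoder(graphs, edge_weight=mask))
        aug_opt.zero_grad(); aug_loss.backward(); aug_opt.step()
        # Encoder step: stay invariant to whatever the augmenter drops.
        mask = gumbel_edge_mask(augmenter(graphs)).detach()
        enc_loss = info_nce(encoder(graphs), encoder(graphs, edge_weight=mask))
        enc_opt.zero_grad(); enc_loss.backward(); enc_opt.step()
        return enc_loss.item()

The sign flip on the contrastive loss is the core idea: the augmenter learns to drop the edges the encoder relies on, so the encoder is pushed to keep only information that survives aggressive augmentation.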
462

XAI-assisted Radio Resource Management: Feature selection and SHAP enhancement

Sibuet Ruiz, Nicolás January 2022 (has links)
With the fast development of radio technologies, wireless systems have become increasingly complex. This complexity, accompanied by an increase in the number of connections, translates into more parameters to analyse and more decisions to take at each instant. AI comes into play by automating these processes, particularly with Deep Learning techniques, which often show the best accuracy. However, the high performance of these methods also comes with the drawback of behaving like a black box from a human's point of view. eXplainable AI (XAI) therefore serves as a technique to better understand the decision process of these algorithms. This thesis proposes an XAI framework to be used on Reinforcement Learning agents, particularly within the use case of antenna resource adaptation for network energy reduction. The framework puts a special emphasis on model adaptation/reduction, therefore focusing on feature importance techniques. The proposed framework presents a pre-model block using Concrete Autoencoders for feature reduction and a post-model block using self-supervised learning to estimate feature importance. Both of these can be used alone or in combination with DeepSHAP, in order to mitigate some of this popular method's drawbacks. The explanations provided by the pipeline prove useful for reducing model complexity without loss of accuracy and for understanding how the AI model uses its input features.
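As an illustration of the feature-reduction idea, a minimal Concrete-selection layer in the style of Concrete Autoencoders is sketched below; layer sizes and the temperature value are assumptions, not the thesis's configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConcreteSelector(nn.Module):
        # Differentiable feature-selection layer: each of the n_selected
        # outputs learns a (relaxed) one-hot choice over the input features.
        def __init__(self, n_features, n_selected, temperature=10.0):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(n_selected, n_features))
            self.temperature = temperature  # annealed towards ~0.1 during training

        def forward(self, x):
            if self.training:
                # Gumbel-Softmax: sample a soft selection per output feature.
                gumbel = -torch.log(-torch.log(
                    torch.rand_like(self.logits) + 1e-20) + 1e-20)
                weights = F.softmax((self.logits + gumbel) / self.temperature, dim=-1)
            else:
                # At test time, pick the single most likely input feature.
                weights = F.one_hot(self.logits.argmax(dim=-1),
                                    self.logits.shape[-1]).float()
            return x @ weights.t()  # (batch, n_selected)

Paired with a small decoder and a reconstruction loss, the layer learns which subset of input features suffices to reconstruct the rest; annealing the temperature makes the soft selection converge to discrete feature picks.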
463

Reinforcement Learning for Hydrobatic AUVs

Woźniak, Grzegorz January 2022 (has links)
This master thesis focuses on developing a Reinforcement Learning (RL) controller to successfully perform hydrobatic maneuvers on an Autonomous Underwater Vehicle (AUV). This work also aims to analyze the robustness of the RL controller and to provide a comparison between RL algorithms and Proportional Integral Derivative (PID) control. Training of the algorithms is initially conducted in a NumPy simulation in Python. We show how to model the equations of motion (EOM) of the AUV and how to use them to train the RL controllers. We use the stable-baselines3 RL framework and create a training environment with OpenAI Gym. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm offers good performance in the simulation. The following maneuvers are studied: trim control, waypoint following, and an inverted pendulum. We test the maneuvers both in the NumPy simulation and in the Stonefish simulator. We also test the robustness of the RL trim controller by simulating noise in the state feedback. Lastly, we run the RL trim controller on real AUV hardware called SAM. We show that the RL algorithm trained in the NumPy simulation can achieve performance similar to the PID controller in the Stonefish simulator. We generate a policy that can perform the trim control and inverted pendulum maneuvers in the NumPy simulation, and we show that we can generate a robust policy that executes other types of maneuvers by providing a parameterized cost function to the RL algorithm. We discuss the results of every maneuver we perform with the SAM AUV and discuss the advantages and disadvantages of this control method applied to underwater robotics. We conclude that RL can be used to create policies that perform hydrobatic maneuvers, and this data-driven approach can be applied in the future to more complex problems in underwater robotics.
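A minimal sketch of how such a TD3 agent could be set up with stable-baselines3 and a Gym environment is given below; AuvTrimEnv is a hypothetical stand-in for the thesis's NumPy simulation, and the state/action dimensions, reward and dynamics are placeholders:

    import gym
    import numpy as np
    from stable_baselines3 import TD3
    from stable_baselines3.common.noise import NormalActionNoise

    class AuvTrimEnv(gym.Env):
        # Hypothetical stand-in for the NumPy AUV simulation described above.
        def __init__(self):
            self.observation_space = gym.spaces.Box(
                -np.inf, np.inf, shape=(12,), dtype=np.float32)
            self.action_space = gym.spaces.Box(
                -1.0, 1.0, shape=(2,), dtype=np.float32)

        def reset(self):
            self.state = np.zeros(12, dtype=np.float32)
            return self.state.copy()

        def step(self, action):
            # Integrate the AUV equations of motion here; the reward would
            # penalize, e.g., pitch error for the trim-control task.
            reward, done = 0.0, False
            return self.state.copy(), reward, done, {}

    env = AuvTrimEnv()
    noise = NormalActionNoise(mean=np.zeros(2), sigma=0.1 * np.ones(2))
    model = TD3("MlpPolicy", env, action_noise=noise, verbose=1)
    model.learn(total_timesteps=100_000)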
464

Detecting Security Patches in Java OSS Projects Using NLP

Stefanoni, Andrea January 2022 (has links)
The use of Open Source Software is becoming more and more popular, but it comes with the risk of importing vulnerabilities into private codebases. Security patches, which provide fixes for detected vulnerabilities, are vital in protecting against cyber attacks, so being able to apply all security patches as soon as they are released is key. Even though there is a public database of vulnerability fixes, the majority of fixes remain undisclosed to the public; we therefore propose a Machine Learning algorithm using NLP to detect security patches in Java Open Source Software. To train the model we preprocessed and extracted patches from the commits in two databases: one provided by Debricked and a public one released by Ponta et al. [57]. Two experiments were conducted, one performing binary classification and the other attempting a finer granularity by classifying the macro-type of the vulnerability. The proposed models leverage the structure of the input to obtain a better patch representation. They are based on RNNs, Transformers and CodeBERT [22], with the best-performing model being the Transformer, which surprisingly outperformed CodeBERT. The results show that it is possible to classify security patches, and that using more relevant pre-training techniques or a tree-based representation of the code might improve performance further.
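As a sketch of the binary-classification setup, the snippet below loads CodeBERT [22] for two-class sequence classification over a commit diff; the example diff is invented for illustration, and the thesis's actual preprocessing and structured input representation are not reproduced here:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Hypothetical example diff; real training data comes from the commit databases.
    diff = "- if (user != null) {\n+ if (user != null && user.isAuthorized()) {"

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2)  # security patch vs. other

    inputs = tokenizer(diff, truncation=True, max_length=512, return_tensors="pt")
    logits = model(**inputs).logits
    # The model must be fine-tuned with cross-entropy on labeled commits
    # before these logits are meaningful.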
465

PREDICTION OF MULTI-PHASE LIVER CT VOLUMES USING DEEP NEURAL NETWORK

Afroza Haque (17544888) 04 December 2023 (has links)
Progress in deep learning methodologies has transformed the landscape of medical image analysis, opening fresh pathways for precise and effective diagnostics. Currently, multi-phase liver CT scans follow a four-stage process, commencing with an initial scan carried out before the administration of intravenous (IV) contrast-enhancing material; three additional scans are performed following the contrast injection. The primary objective of this research is to automate the analysis and prediction of 50% of liver CT scans. It concentrates on discerning patterns of intensity change during the second, third, and fourth phases relative to the initial phase. The thesis comprises two key sections. The first section employs the non-contrast phase (first scan), late hepatic arterial phase (second scan), and portal venous phase (third scan) to predict the delayed phase (fourth scan). In the second section, the non-contrast phase and late hepatic arterial phase are utilized to predict both the portal venous and delayed phases. The study evaluates the performance of two deep learning models, U-Net and U²-Net. The process involves preprocessing steps, such as subtraction and normalization, to compute contrast-difference images, followed by post-processing techniques to generate the predicted 2D CT scans; post-processing applies the same techniques as preprocessing, but in reverse order. Four fundamental evaluation metrics are employed for assessment: Mean Absolute Error (MAE), Signal-to-Reconstruction Error Ratio (SRE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM). Based on these metrics, U²-Net performed better than U-Net for the prediction of both the portal venous (third) and delayed (fourth) phases: U²-Net exhibited superior MAE and PSNR results for the predicted third and fourth scans, while U-Net showed slightly better SRE and SSIM performance. On the other hand, for the exclusive prediction of the fourth scan, U-Net outperforms U²-Net in all four evaluation metrics. This implementation shows promising results that would eliminate the need for additional CT scans and reduce patients' exposure to harmful radiation; predicting 50% of liver CT volumes would cut that exposure by half. The proposed method is not limited to liver CT scans and can be applied to various other multi-phase medical imaging techniques, including multi-phase CT angiography, multi-phase renal CT, contrast-enhanced breast MRI, and more.
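A minimal sketch of the subtraction/normalization preprocessing and the evaluation step might look as follows; the [0, 1] scaling is an assumption, and the SRE metric is omitted here for brevity:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def contrast_difference(phase, non_contrast):
        # Subtract the pre-contrast scan and rescale to [0, 1], mirroring the
        # preprocessing described above (the value range is an assumption).
        diff = phase.astype(np.float32) - non_contrast.astype(np.float32)
        return (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)

    def evaluate(pred, target):
        # Three of the four metrics used in the thesis, on normalized slices.
        return {
            "MAE": float(np.abs(pred - target).mean()),
            "PSNR": peak_signal_noise_ratio(target, pred, data_range=1.0),
            "SSIM": structural_similarity(target, pred, data_range=1.0),
        }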
466

Deep Image Processing with Spatial Adaptation and Boosted Efficiency & Supervision for Accurate Human Keypoint Detection and Movement Dynamics Tracking

Chao Yang Dai (14709547) 31 May 2023 (has links)
This thesis aims to design and develop a spatial adaptation approach based on spatial transformers to improve the accuracy of human keypoint recognition models. We have studied different model types and design choices to gain an accuracy increase over models without spatial transformers, and analyzed how spatial transformers increase the accuracy of predictions. A neural network called Widenet has been leveraged as a specialized network for providing the parameters of the spatial transformer. Further, we have evaluated methods to reduce the model parameters, as well as strategies to enhance the learning supervision, to further improve the performance of the model. Our experiments and results show that the proposed deep learning framework detects human keypoints effectively compared with the baseline methods. We have also reduced the model size without significantly impacting performance, and the enhanced supervision has improved performance further. This study is expected to greatly advance deep learning for human keypoint detection and movement dynamics tracking.
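A minimal spatial transformer module of the kind described above can be sketched in PyTorch as follows; the small localization network stands in for the thesis's Widenet, whose architecture is not reproduced here:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformer(nn.Module):
        # A localization net predicts an affine transform that re-samples the
        # input feature map, letting the model spatially adapt its input.
        def __init__(self, channels):
            super().__init__()
            self.loc = nn.Sequential(
                nn.Conv2d(channels, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(8, 6),  # 6 parameters of a 2x3 affine matrix
            )
            # Initialize to the identity transform so training starts stable.
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)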
467

Robust recognition and exploratory analysis of crystal structures using machine learning

Leitherer, Andreas 04 July 2022 (has links)
In materials science, artificial-intelligence tools are driving a paradigm shift towards big-data-centric research. Large computational databases with millions of entries and high-resolution experiments such as electron microscopy contain a large and growing amount of information. To leverage this under-utilized, yet very valuable, data, automatic analytical methods need to be developed. The classification of the crystal structure of a material is essential for its characterization. The available data is structurally diverse but often defective and incomplete; a suitable method should therefore be robust with respect to sources of inaccuracy while being able to treat multiple systems, and available methods do not fulfill both criteria at the same time. In this work, we introduce ARISE, a Bayesian-deep-learning based framework that can treat more than 100 structural classes in robust fashion, without any predefined threshold. The selection of structural classes, which can be easily extended on demand, encompasses a wide range of materials, in particular not only bulk but also two- and one-dimensional systems. For the local study of large, polycrystalline samples, we extend ARISE by introducing so-called strided pattern matching. While being trained on ideal structures only, ARISE correctly characterizes strongly perturbed single- and polycrystalline systems, from both synthetic and experimental sources. The probabilistic nature of the Bayesian-deep-learning model allows principled uncertainty estimates to be obtained, which are found to correlate with the crystalline order of metallic nanoparticles in electron-tomography experiments. Applying unsupervised learning to the internal neural-network representations reveals grain boundaries and (unapparent) structural regions sharing easily interpretable geometrical properties. This work enables the hitherto hindered analysis of noisy atomic structural data.
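As an illustration of how a Bayesian deep-learning classifier yields such uncertainty estimates, the sketch below uses Monte Carlo dropout, one common approximation; it is a generic example, not the ARISE architecture:

    import torch
    import torch.nn as nn

    class MCDropoutClassifier(nn.Module):
        # Dropout stays active at inference, so repeated forward passes act
        # as samples from an approximate posterior over predictions.
        def __init__(self, n_in, n_classes, p=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_in, 256), nn.ReLU(), nn.Dropout(p),
                nn.Linear(256, n_classes),
            )

        def predict(self, x, n_samples=50):
            self.train()  # keep dropout on to draw posterior samples
            with torch.no_grad():
                probs = torch.stack(
                    [self.net(x).softmax(-1) for _ in range(n_samples)])
            mean = probs.mean(0)
            # Mutual information = predictive entropy - expected entropy,
            # a standard uncertainty estimate in Bayesian deep learning.
            entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
            expected = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
            return mean, entropy - expected

High mutual information flags inputs (e.g., heavily perturbed or out-of-distribution structures) on which the classifier's prediction should not be trusted.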
468

AI/ML Development for RAN Applications : Deep Learning in Log Event Prediction

Sun, Yuxin January 2023 (has links)
Since many log tracing applications and diagnostic commands are now available on base station nodes, event logs can easily be collected, parsed and structured for network performance analysis. In order to improve the In Service Performance of customer networks, a sequential machine learning model can be trained, tested, and deployed on each node to learn from past events and predict future crashes or failures. This thesis project focuses on the evaluation and analysis of the effectiveness of deep learning models in predicting log events. It explores the application of a stacked long short-term memory (LSTM) based model in capturing temporal dependencies and patterns within log event data. In addition, it investigates the probability distribution of the next event given the logs and estimates the event trigger time to predict future node restart events. This thesis project aims to improve node availability time in Ericsson base stations and contribute to further applications of deep learning techniques in log event prediction. A framework with two main phases is utilized to analyze and predict the occurrence of restart events based on the sequence of events. In the first phase, we perform natural language processing (NLP) on the log content to obtain log keys, and then identify, among the node's event sequences, the sequences that lead to a restart event. In the second phase, we analyze the event sequences that resulted in a restart and predict how many minutes in the future the restart event will occur. Experiment results show that our framework achieves no less than 73% accuracy on restart prediction and more than 1.5 minutes of lead time before a restart. Moreover, our framework also performs well for non-restart events.
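A minimal stacked-LSTM next-event model of the kind evaluated here can be sketched as follows; the vocabulary and layer sizes are illustrative:

    import torch
    import torch.nn as nn

    class NextEventLSTM(nn.Module):
        # Stacked LSTM over sequences of log keys; outputs a probability
        # distribution over the next event type.
        def __init__(self, n_event_types, embed_dim=64, hidden=128, layers=2):
            super().__init__()
            self.embed = nn.Embedding(n_event_types, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, num_layers=layers,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_event_types)

        def forward(self, event_ids):           # (batch, seq_len) of log-key ids
            h, _ = self.lstm(self.embed(event_ids))
            return self.head(h[:, -1])          # logits for the next event

    # Trained with cross-entropy against the observed next log key; the
    # softmax over the logits gives the next-event probability distribution.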
469

Self-supervised Learning for Efficient Object Detection

Berta, Benjamin István January 2021 (has links)
Self-supervised learning has become a prominent approach to pre-training Convolutional Neural Networks for computer vision; these methods can achieve state-of-the-art representation learning with unlabeled datasets. In this thesis, we apply self-supervised learning to the object detection problem. Previous methods have used large networks that are not suitable for embedded applications, so our goal was to train lightweight networks that can reach the accuracy of supervised learning. We used MoCo as a baseline for pre-training a ResNet-18 encoder and fine-tuned it on the COCO object detection task using a RetinaNet object detector. We evaluated our method with the COCO evaluation metric, adding several extensions to the baseline method. Our results show that lightweight networks can be trained by self-supervised learning and reach the accuracy of supervised pre-training.
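For reference, the core of MoCo-style pre-training is the InfoNCE loss over a queue of negative keys, sketched below; tensor shapes and the momentum value are the usual defaults, not necessarily those used in the thesis:

    import torch
    import torch.nn.functional as F

    def moco_loss(q, k, queue, temperature=0.07):
        # q: queries from the online encoder, k: positive keys from the
        # momentum encoder, queue: (queue_size, dim) of past negative keys.
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        l_pos = (q * k).sum(1, keepdim=True)   # (batch, 1) positive logits
        l_neg = q @ queue.t()                  # (batch, queue_size) negatives
        logits = torch.cat([l_pos, l_neg], dim=1) / temperature
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)  # positive sits at index 0

    # After each step the momentum encoder is updated as
    #   theta_k = m * theta_k + (1 - m) * theta_q, with m ~ 0.999,
    # and the batch of keys k is pushed onto the queue.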
470

Physics-Informed Deep Learning for System Identification of Autonomous Underwater Vehicles : A Lagrangian Neural Network Approach

Mirzai, Badi January 2021 (has links)
In this thesis, we explore Lagrangian Neural Networks (LNNs) for system identification of Autonomous Underwater Vehicles (AUVs) with 6 degrees of freedom. One of the main challenges of AUVs is that they have limited wireless communication and navigation capabilities under water. AUVs operate under strict and uncertain conditions, where they need to be able to navigate and perform tasks in unknown ocean environments with limited and noisy sensor data. A crucial requirement for localization and adaptive control of AUVs is an accurate and reliable model of the system's nonlinear dynamics that takes the dynamic environment of the ocean into account. Most such dynamics models do not incorporate data. Collecting data for AUVs is difficult, but necessary in order to give the model's parameters the flexibility that the dynamic ocean environment demands. Yet, traditional system identification methods are still dominant today, despite the recent breakthroughs in deep learning. Therefore, in this thesis we aim for a data-driven approach that embeds laws from physics in order to learn the state-space model of an AUV; more precisely, we explore the LNN framework for higher-dimensional systems. Furthermore, we extend the LNN to account for non-conservative forces acting upon the system, such as damping and control inputs. The networks are trained on simulated data from a second-order ordinary differential equation describing an AUV. The trained model is evaluated by integrating trajectories from different initial states and comparing them to the true dynamics. The results yielded a model capable of predicting the accelerations of the state-space model, but one that struggled to learn the direction of the forward movement over time.
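The heart of the LNN approach is differentiating a learned scalar Lagrangian to recover accelerations via the Euler-Lagrange equations, d/dt(∂L/∂q̇) − ∂L/∂q = F, with F carrying the non-conservative forces mentioned above. A minimal sketch, with illustrative network sizes and a single-state interface rather than the thesis's setup:

    import torch
    import torch.nn as nn

    class LNN(nn.Module):
        # Learns a scalar Lagrangian L(q, qd); accelerations follow from
        # (d2L/dqd2) qdd = F + dL/dq - (d2L/dq dqd) qd.
        def __init__(self, n_dof):
            super().__init__()
            self.n = n_dof
            self.lagrangian = nn.Sequential(
                nn.Linear(2 * n_dof, 128), nn.Softplus(),
                nn.Linear(128, 128), nn.Softplus(),
                nn.Linear(128, 1),
            )

        def forward(self, q, qd, force):
            # q, qd, force: tensors of shape (n_dof,) for a single state.
            x = torch.cat([q, qd]).requires_grad_(True)
            L = self.lagrangian(x).squeeze()
            g = torch.autograd.grad(L, x, create_graph=True)[0]
            dL_dq, dL_dqd = g[:self.n], g[self.n:]
            # Rows of the Hessian of L taken with respect to qd.
            H = torch.stack([
                torch.autograd.grad(dL_dqd[i], x, create_graph=True)[0]
                for i in range(self.n)])
            d2L_dqd2 = H[:, self.n:]     # mass-matrix-like term
            d2L_dq_dqd = H[:, :self.n]   # mixed second derivatives
            rhs = dL_dq + force - d2L_dq_dqd @ qd
            return torch.linalg.solve(d2L_dqd2, rhs)  # predicted qdd

Training would minimize the error between these predicted accelerations and those observed along the simulated trajectories.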
