1

Model Predictive Control of a Tricopter / Modellprediktiv reglering av en tricopter

Barsk, Karl-Johan January 2012 (has links)
In this master thesis, a real-time control system that stabilizes the rotational rates of a tricopter has been studied. The tricopter is a rotorcraft with three rotors. It has been modelled and identified using system identification algorithms. The model has been used in a Kalman filter to estimate the state of the system and for the design of a model-based controller. The control approach used in this thesis is a model predictive controller, a multi-variable controller that computes the optimal control signal by solving a quadratic optimization problem subject to a linear model of the system and its physical limitations. Two different types of algorithms that solve the MPC problem have been studied: explicit MPC and the fast gradient method. Explicit MPC is a pre-computed solution to the problem, while the fast gradient method is an online solution. The algorithms have been simulated together with the Kalman filter and were implemented on the microcontroller of the tricopter.
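
For reference, the following is a minimal sketch of the kind of projected fast gradient (Nesterov-type) iteration commonly used to solve a box-constrained MPC quadratic program online. The Hessian H, linear term g, and input limits are illustrative placeholders, not the identified tricopter model or the controller actually implemented in the thesis.

```python
import numpy as np

def fast_gradient_mpc(H, g, lb, ub, iters=50):
    """Projected fast gradient method for the box-constrained MPC QP:
    minimize 0.5*u'Hu + g'u  subject to  lb <= u <= ub,
    with H symmetric positive definite."""
    eigs = np.linalg.eigvalsh(H)
    L, mu = eigs[-1], eigs[0]                 # Lipschitz constant and strong convexity modulus
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    u = np.clip(-g / np.diag(H), lb, ub)      # cheap feasible starting point
    y = u.copy()
    for _ in range(iters):
        grad = H @ y + g
        u_next = np.clip(y - grad / L, lb, ub)   # gradient step followed by projection onto the box
        y = u_next + beta * (u_next - u)         # Nesterov momentum with constant factor
        u = u_next
    return u

# Illustrative 2-input example (placeholder data, not the tricopter model)
H = np.array([[2.0, 0.3], [0.3, 1.5]])
g = np.array([-1.0, 0.5])
u_opt = fast_gradient_mpc(H, g, lb=np.array([-1.0, -1.0]), ub=np.array([1.0, 1.0]))
```

The constant momentum factor assumes a strongly convex cost, which holds whenever H is positive definite.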
2

Analysis of Flow Prolongation Using Graph Neural Network in FIFO Multiplexing System / Analys av Flödesförlängning Med Hjälp av Graph Neural Network i FIFO-Multiplexering System

Wang, Weiran January 2023 (has links)
Network Calculus views a network system as a queuing framework and provides a series of mathematical functions for finding an upper bound on an end-to-end delay. It is crucial for the design of networks and applications with a hard delay guarantee, such as the emerging Time Sensitive Network. Even though several approaches in Network Calculus can be used directly to find bounds on the worst-case delay, these bounds are usually not tight, and making them tight is a hard problem due to the extremely intensive computing requirements; the problem has been proven to be NP-hard. One newly introduced solution to tighten the delay bound is so-called Flow Prolongation. It extends the paths of cross flows to new sink servers, which naturally increases the worst-case delay but can at the same time decrease the delay bound. The most straightforward and rigorous way to find the optimal Flow Prolongation combinations is exhaustive search. However, this approach does not scale with the network size, so a machine learning model, a Graph Neural Network (GNN), has been introduced to predict the optimal Flow Prolongation combinations and mitigate the scalability issue. Earlier research has also found that machine learning models consistently misclassify adversarial examples. In this thesis, the Fast Gradient Sign Method (FGSM) is used to benchmark how adversarial attacks influence the delay bound achieved by the Flow Prolongation method. The attack slightly modifies the input network features based on their gradients. To achieve this, we first learned to use NetCal DNC, a free and open-source software package, to calculate Pay Multiplexing Only Once (PMOO), one of the Network Calculus methods for delay bound calculation. We then reproduced the GNN model based on PMOO and achieved an accuracy of 65%. Finally, FGSM was applied to a newly created dataset with a large number of servers and flows. Our results demonstrate that with changes of at most 14% to the input network features, the accuracy of the GNN drastically decreases to an average of 9.45%, and prominent examples are found whose delay bounds are significantly loosened by the GNN Flow Prolongation prediction after the FGSM attack.
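
As background for the attack discussed above, here is a minimal sketch of the FGSM perturbation for a generic differentiable model in PyTorch; the model, loss function, labels and epsilon are placeholders rather than the thesis's GNN or network-feature inputs.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon):
    """Fast Gradient Sign Method: move the input x in the direction of the
    sign of the loss gradient, with each element perturbed by at most epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()                                   # populates x_adv.grad
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # one-step, sign-based perturbation
    return x_adv.detach()
```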
3

Robust Neural Receiver in Wireless Communication : Defense against Adversarial Attacks

Nicklasson Cedbro, Alice January 2023 (has links)
In the field of wireless communication systems, the interest in machine learning has increased in recent years. Adversarial machine learning covers attack and defense methods for machine learning components. It is a topic that has been thoroughly studied in computer vision and natural language processing, but not to the same extent in wireless communication. In this thesis, a Fast Gradient Sign Method (FGSM) attack on a neural receiver is studied. Furthermore, the thesis investigates whether it is possible to make a neural receiver robust against these attacks. The study is conducted using the Python library Sionna, which is used for research on, for example, 5G, 6G and machine learning in wireless communication. The effect of an FGSM attack is evaluated and mitigated with different adversarial training models. The training data of the models is either augmented with adversarial samples, or original samples are replaced with adversarial ones. Furthermore, the power distribution and range of the adversarial samples included in the training are varied. The thesis concludes that an FGSM attack decreases the performance of a neural receiver and needs less power than a barrage jamming attack to achieve the same performance loss. A neural receiver can be made more robust against an FGSM attack when the training data of the model is augmented with adversarial samples that are concentrated on a specific attack power range and whose power is normally distributed. A neural receiver is also shown to be more robust against a barrage jamming attack than conventional methods without defenses.
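
The augmentation scheme described above can be sketched as follows. This is a hedged illustration assuming a PyTorch classifier-style model and a normally distributed attack power; it is not the actual Sionna neural receiver or the thesis's training configuration.

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps_mean=0.1, eps_std=0.03):
    """One training step in which the batch is augmented with FGSM samples whose
    attack power (epsilon) is drawn from a normal distribution. Model, data shapes
    and epsilon values are placeholders."""
    # Craft adversarial copies of the batch (this backward pass also touches the
    # parameter gradients, which are cleared again before the real training step).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    eps = (torch.randn(x.shape[0], *[1] * (x.dim() - 1)) * eps_std + eps_mean).clamp(min=0.0)
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on the original batch augmented with its adversarial counterpart
    optimizer.zero_grad()
    loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```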
4

Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration

Heinrich, André 27 March 2013 (has links) (PDF)
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application in the field of machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual one coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of some sequences of proximal points closely related to the dual iterates. Furthermore, we show that the solution will indeed converge to the optimal solution of the primal for arbitrarily small accuracy. Finally, the support vector regression task is obtained to arise as a particular case of the general optimization problem and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
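
For orientation, the primal-dual pairing the abstract refers to can be written in the standard Fenchel-Rockafellar form below; the thesis works with a more general problem, so this is only the textbook special case.

```latex
% Fenchel-Rockafellar primal-dual pair (textbook special case)
\begin{align*}
(P)\qquad & \inf_{x \in \mathbb{R}^n} \; f(x) + g(Ax), \\
(D)\qquad & \sup_{y \in \mathbb{R}^m} \; -f^{*}(-A^{\top} y) - g^{*}(y).
\end{align*}
% Weak duality v(D) <= v(P) always holds; strong duality and dual attainment
% require a suitable constraint qualification (e.g. a Slater-type condition).
```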
5

Robustness and optimization in anti-windup control

Alli-Oke, Razak Olusegun January 2014 (has links)
This thesis is broadly concerned with online-optimizing anti-windup control. These are control structures that implement some online-optimization routines to compensate for the windup effects in constrained control systems. The first part of this thesis examines a general framework for analyzing robust preservation in anti-windup control systems. This framework - the robust Kalman conjecture - is defined for the robust Lur’e problem. This part of the thesis verifies this conjecture for first-order plants perturbed by various norm-bounded unstructured uncertainties. Integral quadratic constraint theory is exploited to classify the appropriate stability multipliers required for verification in these cases. The remaining part of the thesis focusses on accelerated gradient methods. In particular, tight complexity certificates can be obtained for the Nesterov gradient method, which makes it attractive for the implementation of online-optimizing anti-windup control. This part of the thesis proposes an algorithm that extends the classical Nesterov gradient method by using available secant information. Numerical results demonstrating the efficiency of the proposed algorithm are analysed with the aid of performance profiles. As the objective function becomes more ill-conditioned, the proposed algorithm becomes significantly more efficient than the classical Nesterov gradient method. The improved performance bodes well for online-optimizing anti-windup control, since ill-conditioning is commonplace in constrained control systems. In addition, this thesis explores another subcategory of accelerated gradient methods known as Barzilai-Borwein gradient methods. Here, two algorithms that modify the Barzilai-Borwein gradient method are proposed. Global convergence of the proposed algorithms for all convex functions is established by using discrete Lyapunov theorems.
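
Below is a minimal sketch of the two gradient schemes named above, the classical Nesterov iteration and a Barzilai-Borwein step-size rule, for a generic smooth convex objective. The secant-based extension proposed in the thesis is not reproduced here, and the gradient oracle and starting point are assumed inputs.

```python
import numpy as np

def nesterov_gradient(grad, x0, L, iters=100):
    """Classical Nesterov accelerated gradient for a convex function
    with L-Lipschitz gradient (constant step 1/L)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L
        t_next = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_next + (t - 1) / t_next * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

def barzilai_borwein(grad, x0, iters=100, alpha0=1e-3):
    """Barzilai-Borwein gradient method: the step size comes from a secant
    condition on the previous iterate and gradient (BB1 rule)."""
    x_prev, g_prev = x0.copy(), grad(x0)
    x = x_prev - alpha0 * g_prev
    for _ in range(iters):
        g = grad(x)
        s, d = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ d) if s @ d > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```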
6

Generation and Detection of Adversarial Attacks for Reinforcement Learning Policies

Drotz, Axel, Hector, Markus January 2021 (has links)
In this project we investigate the susceptibility of reinforcement learning (RL) algorithms to adversarial attacks. Adversarial attacks have been proven to be very effective at reducing the performance of deep learning classifiers and have recently also been shown to reduce the performance of RL agents. The goal of this project is to evaluate adversarial attacks on agents trained using deep reinforcement learning (DRL), as well as to investigate how to detect these types of attacks. We first use DRL to solve two environments from OpenAI's Gym module, namely CartPole and LunarLander, using DQN and DDPG (DRL techniques). We then evaluate the performance of the attacks, and finally we train neural networks to detect attacks. The attacks were successful at reducing performance in both the LunarLander and CartPole environments. The attack detector was very successful at detecting attacks in the CartPole environment, but did not perform quite as well on LunarLander. We hypothesize that continuous action space environments may pose a greater difficulty for attack detectors when identifying potential adversarial attacks. / Bachelor's thesis in electrical engineering 2021, KTH, Stockholm
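
A sketch of the kind of evaluation described above is given below: a trained policy is run in a Gym environment while each observation is perturbed by an FGSM-style attack on the policy network. The policy network, the environment name, epsilon, and the pre-0.26 Gym step/reset API are assumptions for illustration, not the project's actual DQN/DDPG agents or detector.

```python
import gym
import torch

def evaluate_under_attack(policy, env_name="CartPole-v0", epsilon=0.05, episodes=10):
    """Run a trained policy (a torch module mapping observations to per-action
    scores) while perturbing each observation with an FGSM-style attack that
    pushes the observation away from the currently preferred action."""
    env = gym.make(env_name)                      # pre-0.26 Gym API assumed
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            x = torch.tensor(obs, dtype=torch.float32, requires_grad=True)
            loss = -policy(x).max()               # decrease the top action score
            loss.backward()
            x_adv = (x + epsilon * x.grad.sign()).detach().numpy()
            action = policy(torch.tensor(x_adv, dtype=torch.float32)).argmax().item()
            obs, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)            # average return under attack
```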
7

Methods for ℓp/TVp Regularized Optimization and Their Applications in Sparse Signal Processing

Yan, Jie 14 November 2014 (has links)
Exploiting signal sparsity has recently received considerable attention in a variety of areas including signal and image processing, compressive sensing, machine learning and so on. Many of these applications involve optimization models that are regularized by certain sparsity-promoting metrics. The two most popular regularizers are based on the ℓ1 norm, which approximates the sparsity of vectorized signals, and the total variation (TV) norm, which serves as a measure of the gradient sparsity of an image. Nevertheless, the ℓ1 and TV terms are merely two representative measures of sparsity. To explore the matter of sparsity further, in this thesis we investigate relaxations of the regularizers to nonconvex terms such as ℓp and TVp "norms" with 0 ≤ p < 1. The contributions of the thesis are two-fold. First, several methods to approach globally optimal solutions of related nonconvex problems for improved signal/image reconstruction quality are proposed. Most algorithms studied in the thesis fall into the category of iterative reweighting schemes, in which nonconvex problems are reduced to a series of convex sub-problems. In this regard, the second main contribution of this thesis has to do with complexity improvement of the ℓ1/TV-regularized methodology, for which accelerated algorithms are developed. Along with these investigations, new techniques are proposed to address practical implementation issues. These include the development of an ℓp-related solver that is easily parallelizable, and a matrix-based analysis that facilitates implementation for TV-related optimizations. Computer simulations are presented to demonstrate the merits of the proposed models and algorithms as well as their applications to solving general linear inverse problems in the areas of signal and image denoising, signal sparse representation, compressive sensing, and compressive imaging. / Graduate
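
A minimal sketch of the iterative reweighting idea mentioned above follows: the ℓp-regularized least-squares problem is approached through a sequence of weighted ℓ1 sub-problems, each solved here by a simple proximal-gradient (ISTA) loop. The operator A, data b, and parameter values are placeholders, not the thesis's algorithms.

```python
import numpy as np

def irl1_lp(A, b, lam=0.1, p=0.5, outer=10, inner=200, eps=1e-3):
    """Iteratively reweighted l1 for the nonconvex problem
    min 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p  with 0 < p < 1.
    Each outer step solves a weighted-l1 sub-problem by ISTA."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the data term
    for _ in range(outer):
        w = p * (np.abs(x) + eps) ** (p - 1)      # reweighting from the current iterate
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted soft-threshold
    return x
```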
8

Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration

Heinrich, André 21 March 2013 (has links)
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application in the field of machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual one coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of some sequences of proximal points closely related to the dual iterates. Furthermore, we show that the solution will indeed converge to the optimal solution of the primal for arbitrarily small accuracy. Finally, the support vector regression task is obtained to arise as a particular case of the general optimization problem and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
