21

Overcoming local optima in control and optimization of cooperative multi-agent systems

Welikala, Shirantha 15 May 2021 (has links)
A cooperative multi-agent system is a collection of interacting agents deployed in a mission space where each agent is allowed to control its local state so that the fleet of agents collectively optimizes a common global objective. While optimization problems associated with multi-agent systems intend to determine the fixed set of globally optimal agent states, control problems aim to obtain the set of globally optimal agent controls. Associated non-convexities in these problems result in multiple local optima. This dissertation explores systematic techniques that can be deployed to either escape or avoid poor local optima while in search of provably better (still local) optima. First, for multi-agent optimization problems with iterative gradient-based solutions, a distributed approach to escape local optima is proposed based on the concept of boosting functions. These functions systematically transform the vanishing gradient components at a local optimum into a set of boosted non-zero gradient components, which proves more effective than methods that randomly perturb the gradient components. A novel variable step size adjustment scheme is also proposed to establish the convergence of this distributed boosting process. The developed boosting concepts are successfully applied to the class of coverage problems. Second, as a means of avoiding convergence to poor local optima in multi-agent optimization, the use of greedy algorithms in generating effective initial conditions is explored. Such greedy methods are computationally cheap and can often exploit submodularity properties of the problem to provide performance bound guarantees for the obtained solutions. For the class of submodular maximization problems, two new performance bounds are proposed and their effectiveness is illustrated using the class of coverage problems. Third, a class of multi-agent control problems termed Persistent Monitoring on Networks (PMN) is considered, where a team of agents traverses a set of nodes (targets) interconnected according to a network topology with the aim of minimizing a measure of overall node state. For this class of problems, a gradient-based parametric control solution developed in prior work relies heavily on the initial selection of its parameters, which often leads to poor local optima. To overcome this initialization challenge, the PMN system's asymptotic behavior is analyzed, and an off-line greedy algorithm is proposed to systematically generate an effective set of initial parameters. Finally, for the same class of PMN problems, a computationally efficient distributed on-line Event-Driven Receding Horizon Control (RHC) solution is proposed as an alternative. This RHC solution is parameter-free, as it automatically optimizes its planning horizon length, and gradient-free, as it uses explicitly derived solutions for each RHC problem invoked at each agent upon each event of interest. Hence, unlike the gradient-based parametric control solutions, the proposed RHC solution does not force the agents to converge to one particular behavior that is likely to be a poor local optimum. Instead, it keeps the agents actively searching for the optimum behavior. In each of these four parts of the thesis, an interactive simulation platform is developed (and made available online) to generate extensive numerical examples that highlight the respective contributions made compared to the state of the art.
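The boosting idea can be made concrete with a small sketch. The following is a generic, hypothetical Python illustration (the objective, the boosting transformation `boost`, and all parameters are invented here, not taken from the dissertation): ordinary gradient ascent runs until the gradient nearly vanishes, at which point a boosted non-zero direction is substituted and a diminishing step size keeps the overall process convergent.

```python
import numpy as np

def objective(x):
    # Toy multi-modal objective with several local maxima (purely illustrative).
    return float(np.sum(np.sin(3 * x) + 0.3 * x - 0.05 * x ** 2))

def gradient(x):
    return 3 * np.cos(3 * x) + 0.3 - 0.1 * x

def boost(x, strength=2.0):
    # Hypothetical boosting transformation: at a local maximum the true gradient
    # vanishes, so replace it with a systematically chosen non-zero direction
    # that pushes the iterate out of the current basin of attraction.
    return strength * np.sign(np.cos(3 * x + 0.5) + 1e-12)

def boosted_gradient_ascent(x0, iters=500, tol=1e-4):
    x = np.array(x0, dtype=float)
    best_x, best_f = x.copy(), objective(x)
    for k in range(iters):
        g = gradient(x)
        if np.linalg.norm(g) < tol:      # stuck at a (possibly poor) local optimum
            g = boost(x)                  # escape using a boosted gradient
        x = x + g / (1.0 + 0.05 * k)      # diminishing step size aids convergence
        f = objective(x)
        if f > best_f:                    # remember the best configuration visited
            best_x, best_f = x.copy(), f
    return best_x, best_f

x_star, f_star = boosted_gradient_ascent([-2.0, 1.5, 0.2])
print(f"best objective value found: {f_star:.4f}")
```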
22

Fast Tracking ADMM for Distributed Optimization and Convergence under Time-Varying Networks

Shreyansh Rakeshkuma Shethia (10716096) 06 May 2021 (has links)
Due to advances in wireless communication, there has been an increase in the use of multi-agent systems to complete a given task. In various applications, multi-agent systems are required to solve an underlying optimization problem to obtain the best possible solution within a feasible region. Solving such multi-agent optimization problems in a distributed framework is preferable over centralized frameworks, as the former ensures scalability, robustness, and security. The distributed optimization problem becomes more challenging when the decision variables of the individual agents are coupled. In this thesis, a distributed optimization problem with coupled constraints is considered, where a network of agents aims to cooperatively minimize the sum of their local objective functions, subject to individual constraints. This problem setup is relevant to many practical applications like formation flying, sensor fusion, smart grids, etc. For practical scenarios where agents can compute their local optimal solutions efficiently and fewer assumptions on the objective functions are required, Alternating Direction Method of Multipliers (ADMM)-based approaches are preferred over gradient-based approaches. For such constraint-coupled problems, several distributed ADMM algorithms exist that guarantee convergence to optimality, but they do not provide a complete analysis of the rate of convergence. Thus, the primary goal of this work is to improve upon the convergence rate of the existing state-of-the-art Tracking-ADMM (TADMM) algorithm for solving the above distributed optimization problem. Moreover, the current analysis in the literature does not discuss convergence in the case of a time-varying communication network. The first part of the thesis focuses on improving the convergence rate of the Tracking-ADMM algorithm so that the above distributed optimization problem can be solved more efficiently. To this end, an upper bound on the convergence rate of the TADMM algorithm is derived in terms of the weight matrix of the network. To achieve faster convergence, the optimal weight matrix is computed using a semi-definite programming (SDP) formulation. The improved convergence rate of this Fast-TADMM (F-TADMM) is demonstrated on a simple yet illustrative coupled-constraint optimization problem. Then, the applicability of F-TADMM is demonstrated on the problem of distributed optimal control for trajectory generation of aircraft in formation flight. In the second part of the thesis, the convergence analysis of TADMM is extended to a time-varying communication network. The modified algorithm is named Time-Varying Tracking ADMM (TV-TADMM). Formal guarantees on asymptotic convergence are provided with the help of a control-system analysis of a dynamical system using Lyapunov-like theory. The convergence of TV-TADMM is demonstrated on a simple yet illustrative coupled-constraint optimization problem with switching topology and is compared with the fixed-topology setting.
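As background for the ADMM-based approach described above, here is a minimal consensus-ADMM sketch in Python. It is not the Tracking-ADMM or F-TADMM algorithm of the thesis; it only illustrates the general ADMM pattern of alternating local primal updates, a coordination step, and dual updates on a toy problem where agents must agree on a common scalar.

```python
import numpy as np

# Toy problem: N agents cooperatively minimize sum_i 0.5 * (x - a_i)^2
# subject to all agents agreeing on a common x (the optimum is mean(a)).
np.random.seed(0)
a = np.random.randn(5)        # each agent's private data a_i
rho = 1.0                     # ADMM penalty parameter
x = np.zeros_like(a)          # local primal variables x_i
z = 0.0                       # consensus variable
u = np.zeros_like(a)          # scaled dual variables

for _ in range(100):
    # Local x-update: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Coordination step: the consensus variable is the average of x_i + u_i.
    z = np.mean(x + u)
    # Dual update: accumulate the consensus violation.
    u = u + x - z

print("consensus value:", round(float(z), 4), " optimum:", round(float(a.mean()), 4))
```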
23

Estimation et optimisation distribuée dans les réseaux asynchrones / Distributed estimation and optimization in asynchronous networks

Iutzeler, Franck 06 December 2013 (has links)
This thesis addresses the distributed estimation and optimization of a global value of interest over a network using only local and asynchronous (sometimes wireless) communications. Motivated by many different applications ranging from cloud computing to wireless sensor networks via machine learning, we design new algorithms and theoretically study three problems of very different natures: the propagation of the maximal initial value, the estimation of the average of the initial values, and finally distributed optimization.
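The first two problems studied in the thesis (max propagation and distributed averaging) can be illustrated with a generic asynchronous pairwise-gossip sketch; the update rules below are textbook versions and not the specific algorithms designed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
values = rng.uniform(0.0, 10.0, n)     # initial values held by the nodes
max_est = values.copy()                # each node's running estimate of the max
avg_est = values.copy()                # each node's running estimate of the average
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring network

for _ in range(2000):
    i = int(rng.integers(n))           # a random node wakes up asynchronously...
    j = int(rng.choice(neighbors[i]))  # ...and contacts one random neighbor
    # Max propagation: both nodes keep the larger of their two estimates.
    max_est[i] = max_est[j] = max(max_est[i], max_est[j])
    # Random pairwise averaging: both nodes move to their mean (sum-preserving).
    avg_est[i] = avg_est[j] = 0.5 * (avg_est[i] + avg_est[j])

print("true max:", values.max().round(3), " estimates:", max_est.round(3))
print("true avg:", values.mean().round(3), " estimates:", avg_est.round(3))
```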
24

Contributions aux méthodes de calibration robuste en radioastronomie / Contributions to robust calibration methods in radio astronomy

Ollier, Virginie 05 July 2018 (has links)
Accurate calibration is of critical importance for new advanced interferometric systems in radio astronomy in order to recover high-resolution images with no distortions. This process consists in correcting for all environmental and instrumental effects which corrupt the observations. Most state-of-the-art calibration approaches assume a Gaussian noise model and operate mostly in an iterative manner for a mono-frequency scenario. However, in practice, the classical Gaussian noise assumption is not valid, as radio frequency interference affects the measurements and multiple unknown weak sources appear within the wide field-of-view. Furthermore, considering one frequency bin at a time with a single centralized agent processing all data leads to suboptimality and computational limitations. The goal of this thesis is to explore the robustness of calibration algorithms w.r.t. the presence of outliers in a multi-frequency scenario. To this end, we propose the use of an appropriate noise model, namely the so-called compound-Gaussian, which encompasses a broad range of different heavy-tailed distributions. To combine limited computational complexity and quality of calibration, we design an iterative calibration algorithm based on the maximum likelihood estimator under the compound-Gaussian model. In addition, a computationally efficient way to handle multiple sub-frequency bands is to apply distributed and decentralized strategies. Thus, the global operational load is distributed over a network of computational agents, and calibration amounts to solving a global constrained problem thanks to available variation models or by assuming smoothness across frequency.
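To give a flavor of maximum-likelihood estimation under a compound-Gaussian model, the following is a generic iteratively reweighted least-squares sketch (a textbook-style fixed-point iteration on a toy linear model, not the calibration algorithm developed in the thesis); unknown per-sample textures lead to residual-dependent weights that down-weight outliers.

```python
import numpy as np

# Toy linear model y = H @ theta + sqrt(tau) * noise with unknown, heavy-tailed
# per-sample textures tau (a simple compound-Gaussian setting).
rng = np.random.default_rng(2)
n, p = 200, 3
H = rng.standard_normal((n, p))
theta_true = np.array([1.0, -2.0, 0.5])
tau = rng.gamma(shape=0.5, scale=2.0, size=n)        # heavy-tailed textures
y = H @ theta_true + np.sqrt(tau) * rng.standard_normal(n)

theta = np.linalg.lstsq(H, y, rcond=None)[0]         # plain least-squares start
for _ in range(50):
    r2 = (y - H @ theta) ** 2                        # squared residuals
    w = 1.0 / np.maximum(r2, 1e-6)                   # texture estimates -> weights
    Hw = H * w[:, None]
    theta = np.linalg.solve(H.T @ Hw, Hw.T @ y)      # weighted least-squares refit

print("robust estimate:", theta.round(3), " true:", theta_true)
```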
25

Coded Computation for Speeding up Distributed Machine Learning

Wang, Sinong 11 July 2019 (has links)
No description available.
26

Control perspective on distributed optimization

Farkhooi, Sam January 2023 (has links)
In the intersection between machine learning, artificial intelligence and mathematical computation lies optimization, a powerful tool that enables us to solve a variety of large-scale problems. The purpose of this work is to explore optimization in the distributed setting. We then touch on factors that contribute to a faster and more stable algorithm for solving a distributed optimization problem. The main factor we look into is how we can integrate control.
27

Cooperative Control And Advanced Management Of Distributed Generators In A Smart Grid

Maknouninejad, Ali 01 January 2013 (has links)
Smart grid is more than just smart meters. Future smart grids are expected to include a high penetration of distributed generators (DGs), most of which will consist of renewable energy sources, such as solar or wind energy. It is believed that the high penetration of DGs will result in the reduction of power losses, voltage profile improvement, meeting future load demand, and optimizing the use of non-conventional energy sources. However, more serious problems will arise if a decent control mechanism is not exploited. An improperly managed high photovoltaic (PV) penetration may cause voltage profile disturbance, conflict with conventional network protection devices, interfere with transformer tap changers, and, as a result, cause network instability. Indeed, it is feasible to organize DGs in a microgrid structure which will be connected to the main grid through a point of common coupling (PCC). Microgrids are natural innovation zones for the smart grid because of their scalability and flexibility. A proper organization and control of the interaction between the microgrid and the smart grid is a challenge. Cooperative control makes it possible to organize different agents in a networked system to act as a group and realize the designated objectives. Cooperative control has already been applied to autonomous vehicles, and this work investigates its application to controlling the DGs in a microgrid. The microgrid power objectives are set by a higher-level control, and the application of cooperative control makes it possible for the DGs to utilize a low-bandwidth communication network and realize the objectives. Initially, the basics of the application of DG cooperative control are formulated. This includes organizing all the DGs of a microgrid to satisfy an active and a reactive power objective. Then, the cooperative control is further developed by the introduction of clustering DGs into several groups to satisfy multiple power objectives. Next, cooperative distributed optimization is introduced to optimally dispatch the reactive power of the DGs to realize a unified microgrid voltage profile and minimize the losses. This distributed optimization is a gradient-based technique, and it is shown that when the communication is down, it reduces to a form of droop. However, this gradient-based droop exhibits a superior performance in the transient response by eliminating the overshoots caused by the conventional droop. Meanwhile, the interaction between each microgrid and the main grid can be formulated as a Stackelberg game. The main grid, as the leader, by offering a proper energy price to the microgrid, minimizes its cost and secures the power. This not only optimizes the economic interests of both sides, the microgrids and the main grid, but also yields an improved power flow and shaves the peak power. As such, a smart grid may treat microgrids as individually dispatchable loads or generators.
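As a rough, hypothetical illustration of cooperative control over a low-bandwidth network (a generic consensus sketch with invented numbers, not the controllers or the distributed optimization developed in the dissertation), the DGs below exchange utilization ratios only with neighbors so that the group converges to a common ratio that meets a microgrid-level reactive-power objective.

```python
import numpy as np

q_max = np.array([10.0, 8.0, 12.0, 6.0, 9.0])   # DG reactive-power capacities (kvar)
Q_ref = 20.0                                     # microgrid reactive-power objective
ratio = np.full(len(q_max), 0.5)                 # initial utilization ratios
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)     # sparse ring communication graph
leader = 0                                       # DG that measures the tracking error

for _ in range(2000):
    # Consensus step over the sparse network: move toward neighbors' average ratio.
    neigh_avg = (A @ ratio) / A.sum(axis=1)
    ratio += 0.3 * (neigh_avg - ratio)
    # The leader integrates the error of the total dispatched reactive power.
    ratio[leader] += 0.1 * (Q_ref - ratio @ q_max) / q_max.sum()

q = ratio * q_max
print("dispatched Q:", q.round(2), " total:", q.sum().round(2), " target:", Q_ref)
```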
28

Secure and efficient federated learning

Li, Xingyu 12 May 2023 (has links) (PDF)
In the past 10 years, the growth of machine learning technology has been significant, largely due to the availability of large datasets for training. However, gathering a sufficient amount of data on a central server can be challenging. Additionally, with the rise of mobile networking and the large amounts of data generated by IoT devices, privacy and security issues have become a concern, resulting in government regulations such as GDPR, HIPAA, CCPA, and ADPPA. Under these circumstances, traditional centralized machine learning methods face a problem in that sensitive data must be kept locally for privacy reasons, making it difficult to achieve the desired learning outcomes. Federated learning (FL) offers a solution by allowing a global shared model to be trained by exchanging locally computed optima instead of sharing the actual data. Despite its success as a natural solution for IoT machine learning implementation, FL still faces challenges with regard to security and performance. These include high communication costs between IoT devices and the central server; the potential for sensitive information leakage and reduced model precision due to the aggregation process in the distributed IoT network; and performance concerns caused by the heterogeneity of data and devices in the network. In this dissertation, I present practical and effective techniques with strong theoretical support to address these challenges. To optimize communication resources, I introduce a new multi-server FL framework called MS-FedAvg. To enhance security, I propose a robust defense algorithm called LoMar. To address data heterogeneity, I present FedLGA, and for device heterogeneity, I propose FedSAM.
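For context on the federated learning setting described above, here is a minimal FedAvg-style sketch (standard federated averaging on a toy linear-regression task, not the MS-FedAvg, LoMar, FedLGA, or FedSAM methods proposed in the dissertation): clients train locally on private data, and the server only aggregates model updates, weighted by local dataset size.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sgd(w, X, y, epochs=5, lr=0.1):
    # A few epochs of local gradient descent on the client's private data.
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

d, num_clients = 4, 10
w_true = rng.standard_normal(d)
clients = []
for _ in range(num_clients):
    n = int(rng.integers(20, 100))                       # heterogeneous dataset sizes
    X = rng.standard_normal((n, d))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(n)))

w_global = np.zeros(d)
for _ in range(20):                                      # communication rounds
    updates, sizes = [], []
    for X, y in clients:                                 # local training; data never leaves the client
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    # Server aggregation: dataset-size-weighted average of the client models.
    w_global = np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

print("error vs. true model:", float(np.linalg.norm(w_global - w_true)))
```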
29

Price-Based Distributed Optimization in Large-Scale Networked Systems

HomChaudhuri, Baisravan 12 September 2013 (has links)
No description available.
30

Behavior-based model predictive control for networked multi-agent systems

Droge, Greg Nathanael 22 May 2014 (has links)
We present a motion control framework which allows a group of robots to work together to decide upon their motions by minimizing a collective cost, without any central computing component or any one agent performing a large portion of the computation. When developing distributed control algorithms, care must be taken to respect the limited computational capacity of each agent as well as the information and communication constraints of the network. To address these issues, we develop a distributed, behavior-based model predictive control (MPC) framework which alleviates the computational difficulties present in many distributed MPC frameworks while respecting the communication and information constraints of the network. In developing the multi-agent control framework, we make three contributions. First, we develop a distributed optimization technique which respects the dynamic communication constraints of the network, converges to a collective minimum of the cost, and has transients suitable for robot motion control. Second, we develop a behavior-based MPC framework to control the motion of a single agent and apply the framework to robot navigation. The third contribution is to combine the concepts of distributed optimization and behavior-based MPC to develop the aforementioned multi-agent behavior-based MPC algorithm suitable for multi-robot motion control.
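The combination of distributed optimization and receding-horizon control can be sketched generically as follows (a toy illustration with invented costs and parameters, not the behavior-based MPC framework of the thesis): at each control period the robots run a few gradient iterations on a collective cost that couples only neighboring robots, then each applies a bounded move and the process repeats.

```python
import numpy as np

goals = np.array([[5.0, 0.0], [5.0, 2.0], [5.0, 4.0]])   # per-robot goal positions
pos = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])     # current robot positions
d_des, lr, max_step = 2.0, 0.1, 0.3                      # desired spacing, step size, actuation limit

def collective_grad(wp):
    # Gradient of the collective cost: reach your own goal + keep the desired
    # spacing with line-graph neighbors. Each robot only needs its neighbors' plans.
    grad = 2.0 * (wp - goals)
    for i in range(len(wp)):
        for j in (i - 1, i + 1):
            if 0 <= j < len(wp):
                diff = wp[i] - wp[j]
                dist = np.linalg.norm(diff) + 1e-9
                grad[i] += 2.0 * (dist - d_des) * diff / dist
    return grad

for _ in range(40):                     # receding-horizon control periods
    wp = pos.copy()                     # warm-start the plan at the current positions
    for _ in range(15):                 # a few distributed optimization iterations
        wp = wp - lr * collective_grad(wp)
    pos = pos + np.clip(wp - pos, -max_step, max_step)   # apply only a bounded first move

print("final positions:\n", pos.round(2))
```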
