1 |
System of Systems Based Decision-Making for Power Systems Operation. Kargarian Marvasti, Amin. 13 December 2014.
A modern power system is composed of many individual entities collaborating with each other to operate the entire system in a secure and economic manner. These entities may have different owners and operators with their own operating rules and policies, which complicates decision-making in the system. In this work, a system of systems (SoS) engineering framework is presented for optimally operating modern power systems. The proposed SoS framework defines each entity as an independent system with its own regulations, and the communication and information exchange between the systems are discussed. Since the independent systems operate within one interconnected network, the operating condition of one may impact the operating conditions of the others. Based on the characteristics of the independent systems and the connections between them, an optimization problem is formulated for each one. To solve these optimization problems and operate the entire SoS-based power system optimally, a decentralized decision-making algorithm is developed. Using this algorithm, only a limited amount of information is exchanged among the systems, so the operators of independent systems do not need to share all of their information, which may be commercially sensitive, with each other. In addition, applying chance-constrained stochastic programming, the impact of uncertain variables, such as renewable generation and load demand, is modeled in the proposed SoS-based decision-making algorithm. The algorithm is applied to find the optimal and secure operating point of an active distribution grid (ADG). This SoS framework models the distribution company (DISCO) and microgrids (MGs) as independent systems with the right to operate according to their own rules and policies, and it coordinates the operating conditions of the DISCO and MGs.
The proposed decision-making algorithm is also applied to solve the security-constrained unit commitment problem incorporating distributed generation (DG) units located in ADGs. The independent system operator (ISO) and DISCO are modeled as self-governing systems, and the competition and collaboration between them are explained according to the SoS framework.
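The limited-information coordination described above can be illustrated with a toy consensus-ADMM sketch. Everything here is hypothetical and not from the thesis: a DISCO and a microgrid, each with a private quadratic cost over a shared tie-line flow, solve their own subproblems privately and exchange only the shared variable and its local dual offset.

```python
# Toy sketch (assumed costs, not the thesis's model): two self-governing
# systems agree on a shared tie-line flow t via consensus ADMM.
# Each system's private cost is a_i * (t - d_i)^2.

def local_update(a, d, z, u, rho):
    # Closed-form minimizer of a*(x - d)^2 + (rho/2)*(x - z + u)^2
    return (2 * a * d + rho * (z - u)) / (2 * a + rho)

def coordinate(a1, d1, a2, d2, rho=1.0, iters=500):
    z, u1, u2 = 0.0, 0.0, 0.0
    for _ in range(iters):
        x1 = local_update(a1, d1, z, u1, rho)  # DISCO solves privately
        x2 = local_update(a2, d2, z, u2, rho)  # MG solves privately
        z = 0.5 * (x1 + u1 + x2 + u2)          # only x_i + u_i is shared
        u1 += x1 - z                           # local dual updates
        u2 += x2 - z
    return z

tie_line = coordinate(a1=1.0, d1=2.0, a2=3.0, d2=6.0)
```

At the fixed point the shared flow minimizes the sum of both private costs, here the weighted average (1*2 + 3*6) / 4 = 5, even though neither operator ever sees the other's cost function.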
|
2 |
New and Provable Results for Network Inference Problems and Multi-agent Optimization Algorithms. January 2017.
Our ability to understand networks is important to many applications, from the analysis and modeling of biological networks to the analysis of social networks. Unveiling network dynamics allows us to make predictions and decisions. Moreover, network dynamics models have inspired new ideas for computational methods involving multi-agent cooperation, offering effective solutions for optimization tasks. This dissertation presents new theoretical results on network inference and multi-agent optimization, split into two parts.
The first part deals with modeling and identification of network dynamics. I study two types of network dynamics arising from social and gene networks. Based on these dynamics, the proposed network identification method works like a 'network RADAR': interaction strengths between agents are inferred by injecting a 'signal' into the network and observing the resultant reverberation. In social networks, this is accomplished by stubborn agents whose opinions do not change throughout a discussion. In gene networks, genes are suppressed to create desired perturbations, and the steady states under these perturbations are characterized. In contrast to the common assumption of full-rank input, I adopt the weaker assumption of low-rank input, which better models empirical network data. Importantly, a network is proven to be identifiable from low-rank data whose rank grows in proportion to the network's sparsity. The proposed method is applied to synthetic and empirical data and is shown to outperform prior work.
The second part is concerned with algorithms on networks. I develop three consensus-based algorithms for multi-agent optimization. The first is a decentralized Frank-Wolfe (DeFW) algorithm. The main advantage of DeFW lies in its projection-free nature: the costly projection step of traditional algorithms is replaced by a low-cost linear optimization step. I prove the convergence rates of DeFW for convex and non-convex problems. I also develop two consensus-based alternating optimization algorithms, one for least-squares problems and one for non-convex problems. These algorithms exploit the problem structure for faster convergence, and their efficacy is demonstrated by numerical simulations.
I conclude this dissertation by describing future research directions. (Doctoral Dissertation, Electrical Engineering, 2017)
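The projection-free idea behind DeFW can be sketched on a toy problem. This is a simplified illustration, not the dissertation's algorithm: gradient tracking is idealized here as exact gradient averaging over a complete mixing graph, and the local costs and constraint set are invented.

```python
# Sketch of a decentralized Frank-Wolfe step: n agents minimize the
# average of local quadratics f_i(x) = 0.5*||x - c_i||^2 over the L1
# ball, using a cheap linear minimization oracle (LMO) instead of a
# projection. Problem data is illustrative.

def lmo_l1(grad, radius=1.0):
    # argmin_{||s||_1 <= radius} <grad, s>: a signed vertex of the ball
    i = max(range(len(grad)), key=lambda j: abs(grad[j]))
    s = [0.0] * len(grad)
    s[i] = -radius if grad[i] > 0 else radius
    return s

def defw(centers, iters=2000):
    n, d = len(centers), len(centers[0])
    x = [[0.0] * d for _ in range(n)]          # one iterate per agent
    for t in range(iters):
        # consensus averaging of iterates (complete graph for simplicity)
        avg = [sum(xi[k] for xi in x) / n for k in range(d)]
        # idealized gradient tracking: exact average of local gradients
        grad = [sum(avg[k] - c[k] for c in centers) / n for k in range(d)]
        s = lmo_l1(grad)
        gamma = 2.0 / (t + 2)                  # open-loop FW step size
        for i in range(n):
            x[i] = [(1 - gamma) * avg[k] + gamma * s[k] for k in range(d)]
    return [sum(xi[k] for xi in x) / n for k in range(d)]

centers = [[0.9, 0.0], [0.0, 0.6], [0.0, 0.0]]
x_avg = defw(centers)
```

Since the mean of the centers, (0.3, 0.2), lies inside the L1 ball, the network average should approach it at the usual O(1/t) Frank-Wolfe rate.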
|
3 |
Adaptation des méthodes d’apprentissage aux U-statistiques / Adapting machine learning methods to U-statistics. Colin, Igor. 24 November 2016.
With the increasing availability of large amounts of data, computational complexity has become a keystone of many machine learning algorithms. Stochastic optimization algorithms and distributed/decentralized methods have been widely studied over the last decade and provide increased scalability for optimizing an empirical risk that is separable in the data sample. Yet, in a wide range of statistical learning problems, the risk is accurately estimated by U-statistics, i.e., functionals of the training data with low variance that take the form of averages over d-tuples. We first tackle the problem of sampling for empirical risk minimization. We show that empirical risks can be replaced by drastically simpler Monte Carlo estimates based on O(n) terms only, usually referred to as incomplete U-statistics, without damaging the learning rate. We establish uniform deviation results, and numerical examples show that such an approach surpasses more naive subsampling techniques. We then focus on decentralized estimation, where the data sample is distributed over a connected network. We introduce new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds with explicit data- and network-dependent terms. Finally, we deal with the decentralized optimization of functions that depend on pairs of observations. Similarly to the estimation case, we introduce a method based on concurrent local updates and data propagation. Our theoretical analysis reveals that the proposed algorithms preserve the convergence rate of centralized dual averaging up to an additive bias term.
Our simulations illustrate the practical interest of our approach.
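The incomplete U-statistic idea is easy to sketch: replace the average of a kernel over all O(n^2) pairs with an average over O(n) uniformly sampled pairs. The data and kernel below are toy choices for illustration only.

```python
import random

def complete_u(data, h):
    # Order-2 U-statistic: average of h over all distinct pairs
    n = len(data)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(h(data[i], data[j]) for i, j in pairs) / len(pairs)

def incomplete_u(data, h, num_terms, seed=0):
    # Monte Carlo surrogate: average h over num_terms random pairs
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(num_terms):
        i, j = rng.sample(range(n), 2)
        total += h(data[i], data[j])
    return total / num_terms

data = [i / 200 for i in range(200)]        # toy sample on [0, 1)
h = lambda x, y: abs(x - y)                 # e.g. Gini mean difference
full = complete_u(data, h)                  # averages ~20,000 pairs
cheap = incomplete_u(data, h, num_terms=1000)  # only 5n terms
```

For this grid the complete statistic is close to 1/3 (the mean absolute difference of two uniform variables), and the incomplete version matches it to within sampling noise at a fraction of the cost.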
|
4 |
Efficient Decentralized Learning Methods for Deep Neural Networks. Sai Aparna Aketi (18258529). 26 March 2024.
Decentralized learning is the key to training deep neural networks (DNNs) over large distributed datasets generated at different devices and locations, without the need for a central server. It enables next-generation applications that require DNNs to continuously interact with and learn from their environment. The practical implementation of decentralized algorithms brings its own set of challenges. In particular, these algorithms should be (a) compatible with time-varying graph structures, (b) compute- and communication-efficient, and (c) resilient to heterogeneous data distributions. The objective of this thesis is to enable efficient decentralized learning of deep neural networks while addressing these challenges. Towards this, first, a communication-efficient decentralized algorithm (Sparse-Push) that supports directed and time-varying graphs with error-compensated communication compression is proposed. Second, low-precision decentralized training that reduces memory requirements and computational complexity is proposed; here, we design "Range-EvoNorm" as a normalization-activation layer better suited for low-precision decentralized training. Finally, addressing the problem of data heterogeneity, three advancements are proposed: Neighborhood Gradient Mean (NGM), Global Update Tracking (GUT), and Cross-feature Contrastive Loss (CCL). NGM utilizes extra communication rounds to obtain cross-agent gradient information, whereas GUT tracks global update information with no communication overhead, improving performance on heterogeneous data. CCL explores the orthogonal direction of a data-free knowledge distillation approach to handle heterogeneous data in decentralized setups. All the algorithms are evaluated on computer vision tasks using standard image-classification datasets.
We conclude this dissertation by presenting a summary of the proposed decentralized methods and their trade-offs for heterogeneous data distributions. Overall, the methods proposed in this thesis address critical limitations of training deep neural networks in a decentralized setup and advance the state of the art in this domain.
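Serverless training of this kind rests on peer-to-peer (gossip) averaging with a doubly stochastic mixing matrix. A minimal scalar sketch on a ring of four agents, with made-up values standing in for model parameters:

```python
# One gossip round: each agent replaces its value with a weighted
# average of its neighbors' values. W must be doubly stochastic so the
# global mean is preserved and reached in the limit.

def gossip_round(values, weights):
    n = len(values)
    return [sum(weights[i][j] * values[j] for j in range(n))
            for i in range(n)]

# Ring of 4 agents with uniform weights 1/3 on self and both neighbors
W = [[1 / 3 if j in (i, (i - 1) % 4, (i + 1) % 4) else 0.0
      for j in range(4)] for i in range(4)]

vals = [4.0, 0.0, 2.0, 6.0]   # toy local parameters (scalars here)
for _ in range(60):
    vals = gossip_round(vals, W)
```

Every agent converges geometrically to the global mean 3.0, at a rate set by the second-largest eigenvalue modulus of W (1/3 for this ring), which is the mechanism that communication compression and update tracking build on.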
|
5 |
Optimization and resource management in wireless sensor networks. Roseveare, Nicholas. January 1900.
Doctor of Philosophy / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / In recent years, there has been a rapid expansion in the development and use of low-power, low-cost wireless modules with sensing, computing, and communication functionality. A wireless sensor network (WSN) is a group of these devices networked together wirelessly. Wireless sensor networks have found widespread application in infrastructure, environmental, and human health monitoring, surveillance, and disaster management. While there are many interesting problems within the WSN framework, we address the challenge of energy availability in a WSN tasked with a cooperative objective. We develop approximation algorithms and conduct an analysis of concave utility maximization in resource-constrained systems. Our analysis motivates a unique algorithm, which we apply to resource management in WSNs. We also investigate energy harvesting as a way of improving system lifetime. We then analyze the effect of using these limited and stochastically available communication resources on the convergence of decentralized optimization techniques. The main contributions of this research are: (1) new optimization formulations that explicitly consider the energy states of a WSN executing a cooperative task; (2) several analytical insights regarding the distributed optimization of resource-constrained systems; (3) a varied set of algorithmic solutions, some novel to this work and others based on extensions of existing techniques; and (4) an analysis of the effect of using stochastic resources (e.g., energy harvesting) on the performance of decentralized optimization methods. Throughout this work, we apply our developments to distributed estimation and rate maximization. The simulation results obtained help verify algorithm performance. This research provides valuable intuition concerning the trade-offs between energy conservation and system performance in WSNs.
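Concave utility maximization under a resource budget, of the kind studied here, often reduces to a water-filling solution. A hedged sketch for sum-log-rate maximization follows; the channel gains and energy budget are invented for illustration and are not the thesis's data.

```python
# Maximize sum(log(1 + g_i * p_i)) subject to sum(p_i) = budget, p_i >= 0.
# KKT conditions give p_i = max(0, mu - 1/g_i) for a water level mu,
# which we find by bisection on the monotone budget constraint.

def water_fill(gains, budget, iters=100):
    lo, hi = 0.0, budget + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)                     # candidate water level
        p = [max(0.0, mu - 1.0 / g) for g in gains]
        if sum(p) > budget:
            hi = mu                              # level too high
        else:
            lo = mu                              # level too low
    return p

alloc = water_fill([2.0, 1.0, 0.5], budget=3.0)
```

Better channels (larger g_i) receive more of the budget; here the water level settles at 13/6, giving the strongest link 5/3 units of power and the weakest only 1/6.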
|
6 |
Game theory and Optimization Methods for Decentralized Electric Systems / Méthodes d'Optimisation et de Théorie des Jeux Appliquées aux Systèmes Électriques Décentralisés. Jacquot, Paulin. 05 December 2019.
In the context of the smart grid and the transition to decentralized electric systems, we address the problem of managing distributed electric consumption flexibilities. We develop different methods based on distributed optimization and game-theoretic approaches. We start by adopting the point of view of a centralized operator in charge of the management of flexibilities for several agents.
We provide a distributed and privacy-preserving algorithm to compute consumption profiles for agents that are optimal for the operator. In the proposed method, the individual constraints, as well as the individual consumption profile of each agent, are never revealed to the operator or the other agents. Then, in a second model, we adopt a more decentralized vision and consider a game-theoretic framework for the management of consumption flexibilities. This approach enables us, in particular, to take into account the strategic behavior of consumers. Individual objectives are determined by dynamic billing mechanisms, motivated by the modeling of congestion effects occurring in time periods that receive a high electricity load from consumers. A relevant class of games in this framework is given by atomic splittable congestion games. We obtain several theoretical results on Nash equilibria for this class of games, and we quantify the efficiency of those equilibria by providing bounds on the price of anarchy. We address the question of the decentralized computation of equilibria in this context by studying the conditions and rates of convergence of the best-response and projected-gradient algorithms. In practice, an operator may deal with a very large number of players, and evaluating the equilibria of a congestion game in this case is difficult. To address this issue, we give approximation results on the equilibria in congestion and aggregative games with a very large number of players, in the presence of coupling constraints. These results, obtained in the framework of variational inequalities and under some monotonicity conditions, can be used to compute an approximate equilibrium as the solution of a small-dimension problem. In line with the idea of modeling large populations, we consider nonatomic congestion games with coupling constraints and an infinity of heterogeneous players: these games arise when the characteristics of a population are described by a parametric density function. Under monotonicity hypotheses, we prove that Wardrop equilibria of such games, given as solutions of an infinite-dimensional variational inequality, can be approximated by symmetric Wardrop equilibria of auxiliary games, solutions of low-dimension variational inequalities. Again, those results can be the basis of tractable methods to compute an approximate Wardrop equilibrium in a nonatomic infinite-type congestion game. Last, we consider a game model for the study of decentralized peer-to-peer energy exchanges within a community of consumers with renewable production sources. We study the generalized equilibria of this game, which characterize the possible energy trades and associated individual consumptions. We compare the equilibria with the centralized solution minimizing the social cost, and evaluate the efficiency of equilibria through the price of anarchy.
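Best-response dynamics in an atomic splittable congestion game can be sketched on a toy instance: two consumers each split one unit of demand over two time periods whose price grows linearly with total load, with a fixed price offset on the second period. The demands and offsets are illustrative, not from the thesis.

```python
# Each player's cost is sum_t x_t * (L_t + offset_t), where L_t is the
# total load in period t. Given the other player's split, the best
# response has a closed form from the stationarity condition.

def best_response(d, other, offsets):
    # argmin over (x1, d - x1) of x1*(x1 + y1 + o1) + x2*(x2 + y2 + o2)
    x1 = (2 * d + (other[1] + offsets[1]) - (other[0] + offsets[0])) / 4
    x1 = min(max(x1, 0.0), d)      # clip to the feasible interval
    return [x1, d - x1]

offsets = [0.0, 1.0]               # period 2 is intrinsically pricier
p1, p2 = [0.5, 0.5], [0.5, 0.5]    # start from an even split
for _ in range(50):                # alternating best responses
    p1 = best_response(1.0, p2, offsets)
    p2 = best_response(1.0, p1, offsets)
```

The iteration is a contraction here and converges to the symmetric Nash equilibrium in which each player puts 2/3 of its demand on the cheaper period, equalizing marginal costs across periods.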
|
7 |
Coordination of reactive power scheduling in a multi-area power system operated by independent utilities. Phulpin, Yannick. 13 October 2009.
This thesis addresses the problem of reactive power scheduling in a power system with several areas controlled by independent transmission system operators (TSOs). To design a fair method for optimizing the control settings in the interconnected multi-TSO system, two types of schemes are developed.
First, a centralized multi-TSO optimization scheme is introduced, and it is shown that this scheme has some properties of fairness in the economic sense.
Second, the problem is addressed through a decentralized optimization scheme with no information exchange between the TSOs. In this framework, each TSO assumes an external network equivalent in place of its neighboring TSOs and optimizes the objective function corresponding to its own control area, regardless of the impact that its choice may have on the other TSOs.
The thesis presents simulation results obtained with the IEEE 39-bus and IEEE 118-bus systems partitioned among three TSOs. It also presents some results for a UCTE-like 4141-bus system with seven TSOs. The decentralized control scheme is applied to both time-invariant and time-varying power systems. Nearly optimal performance is obtained in those contexts.
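The no-information-exchange scheme can be caricatured as a fixed-point (Gauss-Seidel) iteration: each TSO repeatedly re-optimizes a private objective against the last boundary state it observes from its neighbor. The quadratic costs and coupling coefficient below are invented for illustration and are not the thesis's power-flow model.

```python
# Two-area caricature: area i minimizes (x_i - a_i)^2 + c * x_i * x_j,
# treating the neighbor's setting x_j as a fixed external equivalent.
# For |c| < 2 the sweep is a contraction and converges to a fixed point.

def local_optimum(a, neighbor_setting, c):
    # argmin over x of (x - a)^2 + c * x * neighbor_setting
    return a - 0.5 * c * neighbor_setting

def gauss_seidel(a1, a2, c=0.5, sweeps=50):
    x1, x2 = 0.0, 0.0
    for _ in range(sweeps):
        x1 = local_optimum(a1, x2, c)  # TSO 1 observes only x2
        x2 = local_optimum(a2, x1, c)  # TSO 2 observes only x1
    return x1, x2

x1, x2 = gauss_seidel(a1=1.0, a2=2.0)
```

The limit (8/15, 28/15) is a fixed point of the mutual best responses, not the joint optimum; the gap between the two is exactly the selfishness effect the thesis quantifies against its centralized fair scheme.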
|
8 |
Analyses and Scalable Algorithms for Byzantine-Resilient Distributed Optimization. Kananart Kuwaranancharoen (16480956). 03 July 2023.
The advent of advanced communication technologies has given rise to large-scale networks of interconnected agents that must cooperate to accomplish various tasks, such as distributed message routing, formation control, robust statistical inference, and spectrum access coordination. These tasks can be formulated as distributed optimization problems, which require agents to agree on a parameter minimizing the average of their local cost functions while communicating only with their neighbors. However, distributed optimization algorithms are typically susceptible to malicious (or "Byzantine") agents that do not follow the algorithm. This thesis offers analyses and algorithms for such scenarios. Since a malicious agent's contribution can be modeled as an unknown function with some fundamental properties, we begin in the first two parts by analyzing the region containing the potential minimizers of a sum of functions. Specifically, we explicitly characterize the boundary of this region for the sum of two unknown functions with certain properties. In the third part, we develop resilient algorithms that allow correctly functioning agents to converge to a region containing the true minimizer, under the assumption that each regular agent's function is convex. Finally, we present a general algorithmic framework that includes most state-of-the-art resilient algorithms. Under a strong convexity assumption, we derive a geometric rate of convergence of all regular agents to a ball around the optimal solution (whose size we characterize) for some algorithms within the framework.
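A standard building block for such resilient algorithms, used here as a generic illustration rather than the thesis's specific aggregation rule, is the coordinate-wise trimmed mean: before averaging, each coordinate discards its f largest and f smallest received values, so up to f Byzantine agents cannot drag the aggregate arbitrarily far.

```python
# Coordinate-wise trimmed mean of a list of vectors: per coordinate,
# drop the f largest and f smallest values, then average the rest.

def trimmed_mean(vectors, f):
    d = len(vectors[0])
    out = []
    for k in range(d):
        vals = sorted(v[k] for v in vectors)
        kept = vals[f:len(vals) - f]       # survives up to f outliers
        out.append(sum(kept) / len(kept))
    return out

# Five agents' gradients; the last one is Byzantine and sends garbage
grads = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.1, 2.1], [1e6, -1e6]]
robust = trimmed_mean(grads, f=1)
```

With f=1 the extreme values (including both Byzantine coordinates) are discarded, and the aggregate stays close to the honest agents' gradients, whereas a plain mean would be dominated by the 1e6 entries.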
|
9 |
Decentralized Algorithms for Wasserstein Barycenters. Dvinskikh, Darina. 29 October 2021.
In this thesis, we consider the Wasserstein barycenter problem for discrete probability measures, as well as the population Wasserstein barycenter problem given by a Fréchet mean, from the computational and statistical sides. The statistical focus is estimating the number of sampled measures needed to calculate an approximation of a Fréchet mean (barycenter) of probability distributions with a given precision. For empirical risk minimization approaches, the question of regularization is also studied, along with the proposal of a new regularization that yields better complexity bounds than quadratic regularization. The computational focus is developing decentralized algorithms for calculating Wasserstein barycenters: dual algorithms and saddle-point algorithms. The motivation for dual approaches is the closed forms of the dual formulation of entropy-regularized Wasserstein distances and their derivatives, whereas the primal formulation has a closed-form expression only in some cases, e.g., for Gaussian measures. Moreover, the dual oracle, which returns the gradient of the dual representation of the entropy-regularized Wasserstein distance, can be computed more cheaply than the primal oracle, which returns the gradient of the (entropy-regularized) Wasserstein distance. The number of dual oracle calls is also smaller, namely the square root of the number of primal oracle calls. Furthermore, in contrast to the primal objective, the dual objective has a Lipschitz-continuous gradient due to the strong convexity of regularized Wasserstein distances. Finally, we study a saddle-point formulation of the (non-regularized) Wasserstein barycenter problem, which leads to a bilinear saddle-point problem. This approach also allows us to obtain optimal complexity bounds, and it can easily be implemented in a decentralized manner.
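The entropic machinery behind the dual approach can be illustrated with the classical iterative Bregman projection (IBP) scheme for entropy-regularized barycenters. This is a generic centralized sketch, not the dissertation's decentralized algorithm; the grid, cost matrix, and input measures are toy choices.

```python
import math

def ibp_barycenter(measures, cost, eps=1.0, iters=1000):
    # Iterative Bregman projections for the entropy-regularized
    # barycenter of discrete measures with equal weights.
    n = len(cost)
    K = [[math.exp(-cost[i][j] / eps) for j in range(n)] for i in range(n)]
    v = [[1.0] * n for _ in measures]      # one scaling vector per measure
    b = [1.0 / n] * n
    for _ in range(iters):
        u = []
        for p, vi in zip(measures, v):
            Kv = [sum(K[i][j] * vi[j] for j in range(n)) for i in range(n)]
            u.append([p[i] / Kv[i] for i in range(n)])
        Ktu = [[sum(K[i][j] * ui[i] for i in range(n)) for j in range(n)]
               for ui in u]
        # barycenter update: geometric mean across measures
        b = [math.exp(sum(math.log(Ktu[m][j]) for m in range(len(u)))
                      / len(u)) for j in range(n)]
        v = [[b[j] / Ktu[m][j] for j in range(n)] for m in range(len(u))]
    return b

grid = [0.0, 1.0, 2.0]
cost = [[(x - y) ** 2 for y in grid] for x in grid]
p1, p2 = [0.8, 0.1, 0.1], [0.1, 0.1, 0.8]   # mirror-image toy measures
bary = ibp_barycenter([p1, p2], cost)
```

Since the two inputs are mirror images of each other, the computed barycenter is symmetric about the middle grid point and sums to one at convergence; each iteration touches only matrix-vector products with K, which is the cheap dual oracle structure the thesis exploits.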
|