1

Scalable User Assignment in Power Grids: A Data Driven Approach

Li, Shijian 08 December 2017
"The fast pace of global urbanization is drastically changing the population distributions over the world, which leads to significant changes in geographical population densities. Such changes in turn alter the underlying geographical power demand over time, and drive power substations to become over-supplied (demand ≪ capacity) or under-supplied (demand ≈ capacity). In this work, we make the first attempt to investigate the problem of power substation/user assignment by analyzing large scale power grid data. We develop a Scalable Power User Assignment (SPUA) framework, that takes large-scale spatial power user/substation distribution data and temporal user power consumption data as input, and assigns users to substations, in a manner that minimizes the maximum substation utilization among all substations. To evaluate the performance of SPUA framework, we conduct evaluations on real power consumption data and user/substation location data collected from Xinjiang Province in China for 35 days in 2015. The evaluation results demonstrate that our SPUA framework can achieve a 20%–65% reduction on the maximum substation utilization, and 2 to 3.7 times reduction on total transmission loss over other baseline methods."
2

Algorithm Development for Large-Scale Multiple Antenna Wireless Systems in Cloud Computing Environment

Chao, Wen-Yuen 31 July 2012
The volume of data we must process keeps growing, and with it the demand for computing time and power. One way to cope is to divide the data into several small blocks and process those blocks on several computers, which requires a tool for coordinating the decomposed subproblems. The alternating direction method of multipliers (ADMM) is a powerful algorithm for this purpose and has been widely used in distributed optimization: with ADMM, a large global optimization problem can be decomposed into several small local optimization problems. ADMM has been used in several recent distributed systems, such as cloud systems and distributed antenna systems. In this thesis, we apply ADMM to a distributed antenna system. For the uplink setting, we develop a distributed demodulation algorithm in which multiple base stations collaborate for data detection. For the downlink setting, we develop a distributed beamforming design algorithm in which multiple base stations collaborate to form beams that mitigate inter-cell interference. Finally, simulations are conducted to verify the efficiency of our designs.
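As a concrete reference point for the kind of decomposition described above, here is a minimal textbook consensus-ADMM sketch for distributed least squares: each node i holds local data (A_i, b_i) (standing in for one base station's measurements), solves a small local subproblem, and an averaging step plays the role of inter-station coordination. This is a generic scheme, not the thesis's demodulation or beamforming algorithm; all parameter values are illustrative.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=100):
    n = A_list[0].shape[1]
    z = np.zeros(n)                           # shared (consensus) estimate
    x = [np.zeros(n) for _ in A_list]         # per-node local estimates
    u = [np.zeros(n) for _ in A_list]         # per-node scaled dual variables
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            # Local x-update: argmin (1/2)||A x - b||^2 + (rho/2)||x - z + u_i||^2
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                   A.T @ b + rho * (z - u[i]))
        z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)  # coordination
        for i in range(len(x)):
            u[i] += x[i] - z                  # dual ascent on the consensus gap
    return z
```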
3

Optimisation distribuée dans les grands systèmes interconnectés avec ADMM / Distributed optimization in large interconnected systems using ADMM

Abboud, Azary 12 January 2016
This thesis focuses on the construction of distributed algorithms for optimizing resource production in large interconnected systems, in particular power grids and 5G cellular networks. For power grids, we consider the OPF (Optimal Power Flow) problem, in which one seeks to manage and optimize the production of electrical energy in a distributed manner. We focus on a linearized version of the problem, DC-OPF (Direct-Current Optimal Power Flow). This optimization problem is convex; the aim is to minimize the cost of energy generation while respecting transmission line limits and power flow constraints. For 5G cellular networks, we formulate a caching problem. We aim to offload usage of the backhaul links connecting the small base stations (SBSs) to the central scheduler (CS). The SBSs are equipped with limited storage capacity and seek the optimal way to store files so as to reduce a cost on backhaul usage and on file sharing with other SBSs. The approach adopted in this thesis is to apply the ADMM (Alternating Direction Method of Multipliers), an iterative optimization method, to a suitably reformulated optimization problem that covers both the DC-OPF and the caching problem. We prove the convergence of the method when applied node by node in a fully distributed manner, and also when the network is divided into multiple areas, which may or may not overlap. Furthermore, in the multi-area setting, we show that applying ADMM by a single randomly chosen area at a time also converges to the optimal solution of the problem.
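For intuition, DC-OPF in its centralized form is a small constrained program: linear generation costs, generator bounds, a supply/demand balance, and line-flow limits. Below is a toy, entirely made-up instance solved centrally with an LP solver; the thesis instead solves this class of problem distributively with ADMM, and the single line-limit row here is an illustrative placeholder for real network flow constraints.

```python
from scipy.optimize import linprog

cost = [20.0, 35.0, 28.0]                  # $/MWh for three generators
bounds = [(10, 100), (0, 80), (5, 60)]     # generator output limits in MW
total_demand = 150.0                       # MW; the DC model ignores losses
res = linprog(
    c=cost,
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_demand],  # generation = demand
    A_ub=[[0.3, -0.2, 0.1]], b_ub=[40.0],         # one illustrative line limit
    bounds=bounds,
)
print(res.x, res.fun)                      # dispatch per generator, total cost
```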
4

Approche parcimonieuse pour l’imagerie 3D haute résolution de surface équivalente radar. / Sparse approach for high resolution 3D radar cross section imaging.

Benoudiba-Campanini, Thomas 13 July 2018
The RCS (Radar Cross Section) is a quantity that characterizes the scattering power of a target exposed to an electromagnetic field. Its analysis and control are important in many applications. 3D imaging is a suitable tool to locate and characterize in three dimensions the main contributors to the RCS. However, this is a non-invertible Fourier synthesis problem, because the number of unknowns is larger than the number of data. Conventional methods such as the Polar Format Algorithm, which consists of data reformatting with zero-padding followed by a fast inverse Fourier transform, provide results of limited quality. In this work, we propose a new high-resolution method, named SPRITE (SParse Radar Imaging TEchnique), which considerably increases the quality of the estimated RCS maps. It relies on regularizing the problem with prior information of sparsity and support. The solution is then defined as the minimizer of a penalized and constrained criterion, and the optimization is carried out by an adaptation of the primal-dual ADMM (Alternating Direction Method of Multipliers) algorithm whose computations reduce to efficient fast Fourier transforms. Finally, the method is evaluated on both simulated and real data. Compared to the conventional method, the resolution is drastically increased, and the resulting 3D images are a particularly suitable tool for RCS analysis and control.
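The quadratic-solve / soft-thresholding alternation at the heart of such sparse-regularization methods is easy to state. Below is a generic ADMM sketch for l1-regularized least squares; it is not the SPRITE code, which additionally enforces a support constraint and replaces the dense linear solve with FFTs, and all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor once; in an FFT-structured problem this solve becomes cheap.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        rhs = A.T @ y + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic x-update
        z = soft_threshold(x + u, lam / rho)               # sparsity prox
        u += x - z                                         # dual update
    return z
```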
5

Efficient inference and learning in graphical models for multi-organ shape segmentation / Inférence efficace et apprentissage des modèles graphiques pour la segmentation des formes multi-organes

Boussaid, Haithem 08 January 2015
This thesis explores the use of discriminatively trained deformable contour models (DCMs) for shape-based segmentation in medical images. We make contributions on two fronts: the learning problem, where the model is trained from a set of annotated images, and the inference problem, whose aim is to segment an image given a model. We demonstrate the merit of our techniques on a large X-ray image segmentation benchmark, where we obtain systematic improvements in accuracy and speedups over the current state of the art. For learning, we formulate training the DCM scoring function as large-margin structured prediction and construct a training objective that aims at giving the highest score to the ground-truth contour configuration. We incorporate a loss function adapted to DCM-based structured prediction. In particular, we consider training with the Mean Contour Distance (MCD) performance measure; using this loss function during training amounts to scoring each candidate contour according to its Mean Contour Distance to the ground-truth configuration. Training DCMs using structured prediction with the standard zero-one loss already outperforms the current state-of-the-art method [Seghers et al. 2007] on the considered medical benchmark [Shiraishi et al. 2000, van Ginneken et al. 2006]. We demonstrate that training with the MCD structured loss further improves over the zero-one loss results by a statistically significant amount. For inference, we propose efficient solvers adapted to combinatorial problems with discretized spatial variables. Our contributions are threefold. First, we consider inference for loopy graphical models, making no assumption about the underlying graph topology. We use an efficient decomposition-coordination algorithm to solve the resulting optimization problem: we decompose the model's graph into a set of open, chain-structured graphs and employ the Alternating Direction Method of Multipliers (ADMM) to fix the potential inconsistencies of the individual solutions. Even though ADMM is an approximate inference scheme, we show empirically that our implementation delivers the exact solution for the considered examples. Second, we accelerate optimization of chain-structured graphical models by using the Hierarchical A* search algorithm of [Felzenszwalb & Mcallester 2007] coupled with the pruning techniques developed in [Kokkinos 2011a]. We achieve an order-of-magnitude average speedup over the state-of-the-art technique based on Dynamic Programming (DP) coupled with Generalized Distance Transforms (GDTs) [Felzenszwalb & Huttenlocher 2004]. Third, we incorporate the Hierarchical A* algorithm into the ADMM scheme to guarantee efficient optimization of the underlying chain-structured subproblems. The resulting algorithm is naturally adapted to solving the loss-augmented inference problem in structured prediction learning, and hence is used during both training and inference. In Appendix A, we consider the case of 3D data and develop an efficient method to find the mode of a 3D kernel density distribution. Our algorithm has guaranteed convergence to the global optimum and scales logarithmically in the volume size by recursively subdividing the search space. We use this method to rapidly initialize 3D brain tumor segmentation, where we demonstrate substantial acceleration with respect to a standard mean-shift implementation. In Appendix B, we describe in more detail our extension of the Hierarchical A* search algorithm of [Felzenszwalb & Mcallester 2007] to inference on chain-structured graphs.
6

Recalage/Fusion d'images multimodales à l'aide de graphes d'ordres supérieurs / Registration/Fusion of multimodal images using higher order graphs

Fécamp, Vivien 12 January 2016
The main objective of this thesis is the exploration of higher-order Markov Random Fields for image registration, specifically to encode knowledge of global transformations, such as rigid transformations, into the graph structure. Our main framework applies to 2D-2D or 3D-3D registration and uses a hierarchical grid-based Markov Random Field model where the hidden variables are the displacement vectors of the control points of the grid. We first present the construction of a graph that performs linear registration, meaning that we can perform affine, rigid, or similarity registration with the same graph while changing only one potential; this keeps the registration flexible. Our framework is thus modular with respect to the sought transformation, and the choice of metric is likewise left to the user without modifying how the algorithm operates. Inference is performed with Dual Decomposition, which can handle the higher-order hyperedges of the graph and guarantees that the global optimum of the function is reached provided the slaves agree. A similar structure is also used to perform 2D-3D registration. Second, we fuse the former graph with another structure able to perform deformable registration. The resulting graph is more complex, and to obtain a good solution within reasonable time we use another optimization method, the Alternating Direction Method of Multipliers (ADMM), which improves on Dual Decomposition by speeding up its convergence. This framework solves affine and deformable registration simultaneously, which removes the potential bias created by the standard approach of registering affinely first and deformably second.
7

Distributed Model Predictive Operation Control of Interconnected Microgrids

Forel, Alexandre January 2017
The upward trend in renewable energy deployment in recent years brings new challenges to the development of electrical networks. Interconnected microgrids appear as a novel bottom-up approach to the production and integration of renewable energy. Using model predictive control (MPC), the energy management of several interconnected microgrids is investigated. An optimisation problem is formulated and distributed onto the individual units using the alternating direction method of multipliers (ADMM). The microgrids cooperate to reach a global optimum using neighbour-to-neighbour communications. The benefits of using distributed operation control for microgrids are analysed and a control architecture is proposed. Two algorithms are implemented to solve the optimisation problem, and their advantages and differences are compared.
8

Solution of Large-scale Structured Optimization Problems with Schur-complement and Augmented Lagrangian Decomposition Methods

Jose S Rodriguez (6760907) 02 August 2019
In this dissertation we develop numerical algorithms and software tools to facilitate parallel solution of nonlinear programming (NLP) problems. In particular, we address large-scale, block-structured problems with an intrinsically decomposable configuration. These problems arise in a great number of engineering applications, including parameter estimation, optimal control, network optimization, and stochastic programming. Their structure can be leveraged by optimization solvers to accelerate solutions and overcome memory limitations, and we propose variants of two classes of optimization algorithms: augmented Lagrangian (AL) schemes and Schur-complement interior-point methods.

The convergence properties of augmented Lagrangian decomposition schemes like the alternating direction method of multipliers (ADMM) and progressive hedging (PH) are well established for convex optimization, but convergence guarantees in non-convex settings are still poorly understood. In practice, however, ADMM and PH often perform satisfactorily on complex non-convex NLPs. In this work, we study connections between the method of multipliers (MM), ADMM, and PH to derive benchmarking metrics that explain why PH and ADMM work in practice. We illustrate the concepts using challenging dynamic optimization problems. Our exposition seeks to establish more formalism in benchmarking ADMM, PH, and AL schemes and to motivate algorithmic improvements.

The effectiveness of nonlinear interior-point solvers for large-scale problems relies heavily on the solution of the underlying linear algebra systems. The Schur-complement decomposition is very effective for parallelizing the solution of linear systems with modest coupling, but for systems with a large number of coupling variables it does not scale favorably. We implement an approach that uses a Krylov solver (GMRES) preconditioned with ADMM to solve the block-structured linear systems that arise in the interior-point method, and we show that this ADMM-GMRES approach overcomes the well-known scalability issues of the Schur decomposition.

One important drawback of decomposition approaches like ADMM and PH is their convergence rate. Unlike Schur-complement interior-point algorithms, which converge superlinearly, augmented Lagrangian approaches typically exhibit linear or sublinear rates. We exploit connections between ADMM and the Schur-complement decomposition to derive an accelerated version of ADMM. Specifically, we study the effectiveness of a Newton-Raphson step for computing multiplier estimates in augmented Lagrangian methods. Using two-stage stochastic programming problems, we demonstrate that our multiplier update achieves convergence in fewer iterations for MM on general nonlinear problems. In the case of ADMM, the Newton update significantly reduces the number of subproblem solves for convex quadratic programs (QPs). Moreover, we show that Newton multiplier updates make the method robust to the selection of the penalty parameter.

Traditionally, state-of-the-art optimization solvers are implemented in low-level programming languages. In our experience, developing decomposition algorithms in these frameworks is challenging: they present a steep learning curve and can slow the development and testing of new numerical algorithms. To mitigate these challenges, we developed PyNumero, a new open-source framework implemented in Python and C++. The package seeks to facilitate the development of algorithms for large-scale optimization within a high-level programming environment while minimizing the computational burden of using Python. The efficiency of PyNumero is illustrated by implementing algorithms for problems arising in stochastic programming and optimal control, with timing results presented for both serial and parallel implementations. Our computational studies demonstrate that, with the appropriate balance between compiled code and Python, efficient implementations of optimization algorithms are achievable in these high-level languages.
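To make the augmented Lagrangian vocabulary above concrete, here is a textbook method-of-multipliers loop for an equality-constrained quadratic program, with the standard first-order multiplier update lam += rho*(Ax - b). The dissertation's accelerated variant replaces precisely this update with a Newton-type step; the sketch below is only the baseline scheme, and the problem data conventions are made up for illustration.

```python
import numpy as np

def method_of_multipliers(Q, c, A, b, rho=10.0, iters=50):
    # Minimize f(x) = 0.5 x^T Q x + c^T x  subject to  A x = b.
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        # x-update: minimize the augmented Lagrangian; closed form since f
        # is quadratic: (Q + rho A^T A) x = -c - A^T lam + rho A^T b.
        H = Q + rho * A.T @ A
        g = -c - A.T @ lam + rho * A.T @ b
        x = np.linalg.solve(H, g)
        lam += rho * (A @ x - b)   # first-order multiplier (dual) update
    return x, lam

# Example: minimize 0.5||x||^2 s.t. x1 + x2 = 1  ->  x = (0.5, 0.5).
x, lam = method_of_multipliers(np.eye(2), np.zeros(2),
                               np.array([[1.0, 1.0]]), np.array([1.0]))
```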
9

Accelerating Convergence of Large-scale Optimization Algorithms

Ghadimi, Euhanna January 2015
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems.

In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified its performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions; however, its convergence properties on more general convex cost functions were not known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz-continuous gradients. The second contribution is to tailor the Heavy-ball method to network optimization problems, in which a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms the tailor-made state-of-the-art algorithms.

In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted large interest from several engineering communities but, despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for ADMM applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and the constraint matrices affect the convergence times, and by exploiting this information we develop a preconditioning technique that accelerates performance even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of ADMM to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.
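For reference, Polyak's Heavy-ball iteration discussed above adds a momentum term beta*(x_k - x_{k-1}) to gradient descent. A minimal sketch follows, using the classical parameter choices for a strongly convex quadratic with curvature bounds mu and L; these are the textbook tunings, not the network-optimal step-sizes derived in the thesis, and the example problem is made up.

```python
import numpy as np

def heavy_ball(grad, x0, mu, L, iters=200):
    # Classical tunings for mu-strongly convex, L-smooth quadratics.
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2              # step size
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        # Gradient step plus momentum from the previous iterate.
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return x

# Example: minimize 0.5 x^T Q x with eigenvalues of Q in [mu, L].
Q = np.diag([1.0, 10.0, 100.0])
sol = heavy_ball(lambda x: Q @ x, np.ones(3), mu=1.0, L=100.0)
```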
10

Efficient Workload and Resource Management in Datacenters

Xu, Hong 13 August 2013
This dissertation focuses on developing algorithms and systems to improve the efficiency of operating mega-datacenters with hundreds of thousands of servers. In particular, it addresses two challenges: first, how to distribute workload among a set of datacenters geographically deployed across a wide area; second, how to manage datacenter server resources using virtualization technology. In the first part, we consider the workload management problem in geo-distributed datacenters. We first present a novel distributed workload management algorithm that jointly considers request mapping, which determines how to direct user requests to an appropriate datacenter for processing, and response routing, which decides how to select a path among the set of ISP links of a datacenter to route the response packets back to a user. We then study key aspects of cost and workload in geo-distributed datacenters that have not been fully understood before. Through extensive empirical studies of climate data and cooling systems, we make a case for temperature-aware workload management, where the geographical diversity of temperature and its impact on cooling energy efficiency can be used to reduce overall cooling energy. Moreover, we advocate holistic workload management for both interactive and batch jobs, where the delay-tolerant, elastic nature of batch jobs can be exploited to further reduce energy cost. A consistent 15% to 20% cooling energy reduction and a 5% to 20% overall cost reduction are observed in extensive trace-driven simulations. In the second part of the thesis, we consider the resource management problem in virtualized datacenters. We design Anchor, a scalable and flexible architecture that efficiently supports a variety of resource management policies, and implement a prototype on a small in-house datacenter with 20 servers. Experimental results and trace-driven simulations show that Anchor is effective in realizing various resource management policies, and that its simple algorithms can solve virtual machine allocation with thousands of VMs and servers in just ten seconds.
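As a toy illustration of the temperature-aware request-mapping idea (not the thesis system), the sketch below routes a total request load across geo-distributed datacenters, weighting each site by an energy price inflated by a cooling overhead that grows with local outside temperature. The crude linear overhead model and all numbers are made-up assumptions.

```python
from scipy.optimize import linprog

price = [30.0, 45.0, 38.0]                    # $/MWh at each datacenter
temp = [5.0, 25.0, 15.0]                      # outside temperature in Celsius
pue = [1.1 + 0.02 * t for t in temp]          # crude cooling-overhead model
cost = [p * e for p, e in zip(price, pue)]    # effective $ per unit of load
capacity = [60.0, 80.0, 70.0]                 # max load each site can serve
total_load = 150.0
res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_load],
              bounds=[(0, cap) for cap in capacity])
print(res.x)                                  # load share per datacenter
```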
