261.
Load Balancing of Multi-physics Simulation by Multi-criteria Graph Partitioning
Barat, Remi, 18 December 2017
Multiphysics simulations couple several computation phases. When they are run in parallel on distributed-memory architectures, minimizing the simulation time requires in most cases balancing the workload across computation units, for each computation phase. Moreover, the data distribution must minimize the communication it induces. This problem can be modeled as a multi-criteria graph partitioning problem. We associate with each vertex of the graph a vector of weights, whose components, called “criteria”, model the workload of the vertex for each computation phase. The edges between vertices indicate data dependencies, and can be given a weight representing the communication volume transferred between the two vertices. The goal is to find a partition of the vertices that both balances the weights of each part for each criterion and minimizes the “edgecut”, that is, the sum of the weights of the edges cut by the partition. The maximum allowed imbalance is prescribed by the user, and we search for a partition that minimizes the edgecut among all the partitions whose imbalance for each criterion is smaller than this tolerance. Since this problem is NP-hard in the general case, this thesis aims at devising and implementing heuristics that compute such partitions efficiently. Indeed, existing tools often return partitions whose imbalance exceeds the prescribed tolerance. Our study of the solution space, that is, the set of all partitions respecting the balance constraints, reveals that, in practice, this space is extremely large. Moreover, we prove in the mono-criterion case that a bound on the normalized vertex weights guarantees that the solution space is non-empty and connected. Based on these theoretical results, we propose improvements to the multilevel algorithm. Existing tools implement many variations of this algorithm; by studying their source code, we highlight these variations and their consequences in light of our analysis of the solution space. Furthermore, we define and implement two initial partitioning algorithms that focus on reaching a feasible solution: starting from a potentially imbalanced partition, they successively move vertices from one part to another. The first algorithm performs any move that reduces the imbalance, while the second performs, at each step, the move that reduces the imbalance the most (see the sketch after this abstract). We present an original data structure that optimizes the choice of the vertex to move, and leads to partitions whose imbalance is smaller, on average, than that of existing methods. We describe the experimentation framework, named Crack, that we implemented in order to compare the various algorithms at stake. This comparison is performed by partitioning a set of instances including an industrial test case and several fictitious cases.
We define a method for generating realistic weight distributions corresponding to “Particle-in-Cell”-like simulations. Our results demonstrate the necessity of constraining the vertex weights during the coarsening phase of the multilevel algorithm. Moreover, we show the impact of the vertex ordering, which should depend on the graph topology, on the efficiency of the “Heavy-Edge” matching scheme. The various algorithms that we consider are implemented in an open-source graph partitioning software called Scotch. In our experiments, Scotch and Crack returned a balanced partition for each execution, whereas MeTiS, currently the most widely used partitioning tool, fails a large fraction of the time. Additionally, the edgecut of the solutions returned by Scotch and Crack is equivalent to or better than the edgecut of the solutions returned by MeTiS.
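As an illustration of the second initial-partitioning strategy described above, the sketch below repeatedly applies the single vertex move that most reduces the worst per-criterion overload. This is a minimal Python sketch under assumed inputs (`weights[v]` as the criterion-weight vector of vertex `v`, `targets[p][c]` as the target load of part `p` for criterion `c`), not the thesis' implementation; in particular, the optimized data structure for choosing the vertex to move is not reproduced here.

```python
# Minimal sketch (not the thesis' code): greedy repair of a k-way
# partition under multi-criteria balance constraints, always taking
# the move that reduces the imbalance the most.

def imbalance(loads, targets):
    """Worst relative overload over all parts and criteria."""
    return max(
        (load[c] - targets[p][c]) / targets[p][c]
        for p, load in enumerate(loads)
        for c in range(len(targets[p]))
    )

def greedy_rebalance(weights, part, k, targets, max_moves=10_000):
    n_crit = len(weights[0])
    loads = [[0.0] * n_crit for _ in range(k)]
    for v, p in enumerate(part):
        for c in range(n_crit):
            loads[p][c] += weights[v][c]
    for _ in range(max_moves):
        cur = imbalance(loads, targets)
        best = None  # (new_imbalance, vertex, target_part)
        for v, p in enumerate(part):
            for q in range(k):
                if q == p:
                    continue
                # Tentatively move v from part p to part q.
                for c in range(n_crit):
                    loads[p][c] -= weights[v][c]
                    loads[q][c] += weights[v][c]
                new = imbalance(loads, targets)
                if best is None or new < best[0]:
                    best = (new, v, q)
                # Undo the tentative move.
                for c in range(n_crit):
                    loads[p][c] += weights[v][c]
                    loads[q][c] -= weights[v][c]
        if best is None or best[0] >= cur:
            return part  # no move strictly reduces the imbalance
        _, v, q = best
        p = part[v]
        for c in range(n_crit):
            loads[p][c] -= weights[v][c]
            loads[q][c] += weights[v][c]
        part[v] = q
    return part
```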
262.
Train Re-scheduling: A Massively Parallel Approach Using CUDA
Petersson, Anton, January 2015
Context. Train re-scheduling during disturbances is a time-consuming task. Modified schedules need to be found so that trains can meet in suitable locations and delays are minimized. Domino effects are difficult to manage. Commercial optimization software has been shown to find optimal solutions, but modified schedules need to be found quickly. Therefore, greedy depth-first algorithms have been designed to find solutions within a limited time-frame. Modern GPUs have a high computational capacity, and have become easier to use for computations unrelated to computer graphics with the development of technologies such as CUDA and OpenCL. Objectives. We explore the feasibility of running a re-scheduling algorithm developed specifically for this problem on a GPU using the CUDA toolkit. The main objective is to find a way of exploiting the computational capacity of modern GPUs to find better re-scheduling solutions within a limited time-frame. Methods. We develop and adapt a sequential algorithm for use on a GPU and run multiple experiments using 16 disturbance scenarios on the single-tracked iron ore line in northern Sweden. Results. Our implementation succeeds in finding re-scheduling solutions without conflicts for all 16 scenarios. The algorithm visits on average 7 times more nodes per time unit than the sequential CPU algorithm when branching at depth 50, and 4 times more when branching at depth 200. Conclusions. The computational performance of our parallel algorithm is promising, but the approach is not complete. Our experiments only show that multiple solution branches can be explored quickly in parallel, not how to construct a high-level algorithm that systematically searches for better schedules within a certain time limit; further research is needed for that. We also find that multiple threads explore redundant solutions in our approach.
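The core idea, scoring many branches of a greedy depth-first search concurrently, can be sketched independently of CUDA. The toy Python code below is our illustration, not the thesis' GPU implementation; `expand` and `score` are hypothetical placeholders for generating candidate partial schedules at a branching point and evaluating them (e.g., by total delay).

```python
# Toy sketch of parallel branch evaluation (not the thesis' CUDA code).
# A branching point at depth d yields several candidate partial
# schedules; each is scored independently, so the work parallelizes.
from multiprocessing import Pool

def expand(schedule, depth):
    """Hypothetical: candidate partial schedules at this depth."""
    return [schedule + [(depth, choice)] for choice in range(3)]

def score(schedule):
    """Hypothetical: smaller is better (e.g., total delay)."""
    return sum(choice for _, choice in schedule)

def best_branch(schedule, depth):
    candidates = expand(schedule, depth)
    with Pool() as pool:                      # one worker per core;
        scores = pool.map(score, candidates)  # on a GPU: one thread
    return candidates[min(range(len(scores)), key=scores.__getitem__)]

if __name__ == "__main__":
    print(best_branch([], depth=50))
```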
263.
Models and methods for Traffic Engineering problems with single-path routing
Barros Joyce Moniz, Martim, 06 October 2016
Traffic Engineering (TE) uses methods and models from a variety of mathematical fields, such as statistics and optimization, to improve the performance of telecommunication networks. In this thesis, we study TE problems dealing with networks that impose single-path routing. As the name implies, in this type of routing, the traffic flow of each "commodity" cannot be split on its path between its origin and destination. Given its low cost, single-path routing is widely used in today's data centers, where thousands of servers perform computations or host Internet services. One common case of single-path routing is the one enforced by the Spanning Tree Protocol (STP) in switched Ethernet networks. The STP requires the network to keep its activated links loop-free, while maintaining the other, redundant links ready for back-up in case of link failure. The Multiple Spanning Tree Protocol (MSTP) extends the STP by installing multiple virtual networks compliant with the STP over a single physical topology. The MSTP is therefore greatly beneficial to network service providers, as it allows for a more efficient use of the existing resources. Network design problems dealing with the MSTP are generally highly combinatorial and very hard to solve. As such, the TE literature mainly suggests heuristic methods, which can quickly produce reasonable designs. Notwithstanding, owing to the scarcity of lower bounds on the optimum values of such problems, little is known about the quality of the solutions provided by these heuristics. In this sense, we propose mathematical programming models and methods that can provide optimal designs for these networks, or at the very least obtain valid lower bounds. With the goal of avoiding congestion in the network, we focus on two problems that deal with the following load-balancing objectives: the minimization of the worst-case link utilization (a compact formulation of this objective is sketched below), and the minimization of flow costs given by piecewise linear functions that penalize heavily loaded links. The study of both problems yielded relevant by-products: the first is the study of an MSTP network design problem where we minimize the total load, and the second is the study of a fundamental unsplittable multicommodity flow problem with piecewise linear costs. For all the considered problems, we provide complexity studies, extensive polyhedral studies to compare the proposed formulations, and a wide array of computational experiments to evaluate the performance of the proposed models and methods.
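As a reference point for the first objective, a textbook-style formulation of minimizing the worst-case link utilization under unsplittable routing reads as follows. This is a generic sketch, not the exact model studied in the thesis: the binary variable x^k_a routes commodity k (with demand d_k, origin s_k and destination t_k) on arc a, c_a is the capacity of arc a, and u is the worst-case utilization being minimized.

```latex
\begin{align*}
\min\;& u\\
\text{s.t.}\;& \sum_{a \in \delta^+(v)} x^k_a - \sum_{a \in \delta^-(v)} x^k_a =
  \begin{cases} 1 & v = s_k\\ -1 & v = t_k\\ 0 & \text{otherwise} \end{cases}
  && \forall k,\ \forall v,\\
& \sum_{k} d_k\, x^k_a \le u\, c_a && \forall a,\\
& x^k_a \in \{0,1\}, \quad u \ge 0.
\end{align*}
```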
264.
MODELS AND METHODS IN GENOME WIDE ASSOCIATION STUDIES
Porretta, Luciano, 26 January 2018
The interdisciplinary field of systems biology has evolved rapidly over the last few years, with different disciplines contributing to the development of both its experimental and theoretical branches. Although computational biology has been an active area of computer science for more than two decades, it is only in the past few years that optimization models have been increasingly developed and analyzed by researchers whose primary background is Operations Research (OR). This dissertation aims at contributing to the field of computational biology by applying mathematical programming to certain problems in molecular biology. Specifically, we address three problems in the domain of Genome Wide Association Studies: (i) the Pure Parsimony Haplotyping Under Uncertain Data problem, which consists in finding the minimum number of haplotypes necessary to explain a given set of genotypes containing possible reading errors; (ii) the Parsimonious Loss Of Heterozygosity problem, which consists in partitioning suspected polymorphisms from a set of individuals into a minimum number of deletion areas; and (iii) the Multiple Individuals Polymorphic Alu Insertion Recognition problem, which consists in finding the set of locations in the genome where Alu sequences are inserted in some individual(s). All three are NP-hard combinatorial optimization problems. We therefore analyse their combinatorial structure and propose an exact solution approach for each of them. The proposed models are efficient, accurate, compact, polynomial-sized, and usable in all those cases for which the parsimony criterion is well suited for estimation.
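To make the parsimony criterion of problem (i) concrete, the toy Python sketch below solves the error-free version by brute force: a genotype site is 0 or 1 when homozygous and 2 when heterozygous, and a pair of haplotypes explains a genotype if they match it on homozygous sites and differ on heterozygous ones. The uncertain-data aspect handled in the dissertation is not modeled here, and the enumeration is exponential, so this is for tiny instances only.

```python
# Toy brute-force sketch of pure parsimony haplotyping (error-free
# case only; not the dissertation's exact model).
from itertools import combinations, product

def explains(h1, h2, g):
    """(h1, h2) resolves genotype g: equal on homozygous sites (0/1),
    different on heterozygous sites (2)."""
    return all((s == 2 and a != b) or (s != 2 and a == b == s)
               for a, b, s in zip(h1, h2, g))

def min_haplotypes(genotypes):
    m = len(genotypes[0])
    candidates = list(product((0, 1), repeat=m))
    for size in range(1, len(candidates) + 1):
        for hs in combinations(candidates, size):
            if all(any(explains(h1, h2, g) for h1 in hs for h2 in hs)
                   for g in genotypes):
                return hs  # smallest explaining set found
    return None

# Two genotypes over three sites; 2 marks a heterozygous site.
print(min_haplotypes([(2, 0, 2), (1, 0, 2)]))
```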
265.
Solution Methods for Multi-Objective Robust Combinatorial Optimization
Thom, Lisa, 19 April 2018
No description available.
266.
Optimization for industrial vehicle routing problems: towards an energy transition
Benantar, Abdelaziz, 01 December 2017
The thesis focuses on the study of real road transportation and distribution problems. It concerns in particular the optimization of two distinct vehicle routing problems: the distribution of petroleum products and the transfer of containers. The first problem, modeled as an application of the multi-compartment vehicle routing problem with time windows (MCVRP-TW), is solved using a tabu search method. The same method is then adapted to two related variants: one introduces additional constraints related to the loading of petroleum products into the compartments, while the other includes the concept of adjusting the requested quantities. Moreover, in the context of an energy transition, we address the problem of transferring containers with electric trucks in the industrial port zone of Le Havre. The optimization involves two levels: a strategic level for dimensioning the electrical infrastructure, and an operational level for constructing the vehicle routes. Only the strategic level was tackled, within a research project, through a coupling of optimization and simulation.
267.
Optimal path, design and improvement of networks with distance constraint
Nakache, Elie, 01 July 2016
In this thesis, we investigate several combinatorial optimization problems and characterize their computational complexity and approximability by providing polynomial reductions and exact or approximation algorithms. In particular, we study the problem of finding, in a vertex-labeled directed acyclic graph, a path collecting a maximum number of distinct labels. We prove that no polynomial-time constant-factor approximation algorithm exists for this problem. Furthermore, we describe a scheme that produces, for any ε > 0, a polynomial-time algorithm that computes a solution collecting O(OPT^(1−ε)) labels (a brute-force baseline for small instances is sketched below). Then, we study several variants of the minimum-cost spanning tree problem that take distance and betweenness constraints into account. We prove that most of these variants can be solved in polynomial time via a reduction to the weighted matroid intersection problem. For another variant, we give a factor-2 approximation algorithm and prove that no polynomial-time algorithm achieves a better constant factor. Finally, we study a network improvement problem from a cost-sharing perspective. We establish that the cost function corresponding to this problem is submodular, and use this result to derive a cost-sharing mechanism having several good properties.
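For the label-collecting problem, the toy Python sketch below gives the brute-force baseline mentioned above: it enumerates all paths of a small vertex-labeled DAG and keeps the largest set of distinct labels seen. This exhaustive search is exponential in general, which is consistent with the hardness results; it is our illustration, not the thesis' O(OPT^(1−ε)) scheme.

```python
# Exhaustive search for the max-distinct-labels path in a tiny
# vertex-labeled DAG (exponential; for illustration only).
def max_label_path(succ, label):
    """succ: vertex -> list of successors; label: vertex -> label."""
    best = set()

    def dfs(v, seen):
        nonlocal best
        seen = seen | {label[v]}
        if len(seen) > len(best):
            best = seen
        for w in succ.get(v, []):
            dfs(w, seen)

    for v in succ:          # try every vertex as a starting point
        dfs(v, set())
    return best

succ = {1: [2, 3], 2: [4], 3: [4], 4: []}
label = {1: "a", 2: "b", 3: "a", 4: "c"}
print(max_label_path(succ, label))  # {'a', 'b', 'c'} via 1 -> 2 -> 4
```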
268.
An optimization model using the Assignment Problem to manage the location of parts: Master Thesis at the engine assembly at Scania CV AB
Lundquist, Josefin & O'Hara, Linnéa, January 2017
A key challenge for manufacturing companies is to store parts in an efficient way at the lowest cost possible. As the demand for differentiated products increases, together with the fact that old products are not phased out at the same pace, the need to use storage space as dynamically as possible becomes vital. Scania's engine assembly manufactures engines for various automotive vehicles and marine & industry applications. The variation in engine range in Scania's offering leads to the need to hold a vast, and increasing, assortment of parts in production. As a consequence, this puts more pressure on the logistics and furnishing within the engine assembly. This master thesis aims to facilitate the process of assigning parts' storage locations in the most profitable manner through an optimization model, the Location Model, in Excel VBA. Together with the model, suggestions for work methods are presented. By implementing the Location Model at Scania's engine assembly, location changes are recommended for 4.98% of all kept parts, resulting in cost savings, for the chosen 30-day period. These location changes yield a cost saving of 6.73% of the total logistic costs for the same time period.
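The underlying model is the classical Assignment Problem: each part gets one storage location at minimum total cost. The thesis' Excel VBA Location Model is not reproduced here; instead, a minimal Python sketch with an assumed cost matrix (rows are parts, columns are candidate locations, entries are the logistic cost over the planning period) shows the optimization step using SciPy's Hungarian-style solver.

```python
# Minimal Assignment Problem sketch (illustration, not the thesis'
# Excel VBA Location Model). cost[i][j] is the assumed cost of
# storing part i at location j over the planning period.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [4.0, 2.0, 8.0],   # part 0
    [4.0, 3.0, 7.0],   # part 1
    [3.0, 1.0, 6.0],   # part 2
])

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one match
for part, loc in zip(rows, cols):
    print(f"part {part} -> location {loc}")
print("total cost:", cost[rows, cols].sum())
```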
269.
COMBINING METAHEURISTICS WITH MP SOLVERS, WITH APPLICATIONS TO THE GENERALIZED ASSIGNMENT PROBLEM (GAP)
Rocha, Daniel Amaral de Medeiros, 08 March 2010
Methods that combine strategies usually found in metaheuristic algorithms with techniques for solving mixed-integer programming (MIP) problems have shown great results over the past few years. This work proposes two new algorithms along these lines: one is based on the Path Relinking (PR) metaheuristic, while the other is a simple algorithm that post-processes the solutions found by the MIP solver. Both algorithms use a new neighborhood structure, called the ellipsoidal neighborhood, which strongly resembles the relinking step of PR algorithms and which, in this work, is generalized and extended to multiple solutions (a sketch of such a neighborhood follows this abstract). The generalized assignment problem (GAP) is used for the computational experiments. Also tested are a pure MIP solver (ILOG CPLEX version 11) and a branch-and-price algorithm that uses the RINS and guided dives heuristics. The tested algorithms are compared among themselves and with GAP-specific heuristics. The results are satisfactory and show that the ellipsoidal neighborhoods can frequently improve the solutions found by the MIP solver, even finding the best solution for some instances.
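The abstract does not spell out the definition of the ellipsoidal neighborhood, so the following is a hedged reconstruction rather than the thesis' exact definition. In the spirit of local branching, a neighborhood around several reference solutions x̄¹, …, x̄ᵐ of a 0-1 program can be written as a single linear constraint bounding the total Hamming distance to the references by a radius k:

```latex
\sum_{i=1}^{m} \Delta(x, \bar{x}^{i})
\;=\; \sum_{i=1}^{m} \Bigl( \sum_{j:\, \bar{x}^{i}_{j}=0} x_j
\;+\; \sum_{j:\, \bar{x}^{i}_{j}=1} (1 - x_j) \Bigr) \;\le\; k
```

Asking the MIP solver to optimize over this region generalizes the relinking path between two solutions to a region stretched around many of them.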
270.
ON THE SIMULTANEOUS MINIMIZATION OF WORST TESTING COST AND EXPECTED TESTING COST WITH DECISION TREES
Saettler, Aline Medeiros, 25 January 2017
The problem of minimizing the cost of evaluating a discrete function by sequentially reading its variables arises in several applications, among them automatic diagnosis design and active learning. In this problem, each variable of the function is associated with a cost that we have to pay in order to check its value. In addition, there may exist a probability distribution associated with the points where the function is defined. Most of the work in the area has focused either on the minimization of the maximum cost or on the minimization of the expected cost spent to evaluate the function. In this dissertation, we show how to obtain an O(log n) approximation with respect to worst-case minimization (the best possible approximation under the assumption that P ≠ NP). We also show a polynomial-time procedure for evaluating a function that simultaneously optimizes both the worst and the expected costs.
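To make the two objectives concrete, the hedged Python sketch below (our illustration, not code from the dissertation) computes both costs for a fixed decision tree whose internal nodes test variables with known costs: the worst testing cost is the maximum total cost along a root-to-leaf path, and the expected testing cost charges each test with the probability mass of the inputs that reach it.

```python
# Worst-case vs. expected testing cost of a fixed decision tree.
# A node is either ("leaf", prob_mass) or ("test", cost, left, right).

def prob_mass(tree):
    if tree[0] == "leaf":
        return tree[1]
    return prob_mass(tree[2]) + prob_mass(tree[3])

def worst_cost(tree):
    if tree[0] == "leaf":
        return 0.0
    return tree[1] + max(worst_cost(tree[2]), worst_cost(tree[3]))

def expected_cost(tree):
    if tree[0] == "leaf":
        return 0.0
    # Pay this test's cost for every input that reaches this node.
    return (tree[1] * prob_mass(tree)
            + expected_cost(tree[2]) + expected_cost(tree[3]))

tree = ("test", 2.0,
        ("leaf", 0.5),
        ("test", 3.0, ("leaf", 0.25), ("leaf", 0.25)))
print(worst_cost(tree), expected_cost(tree))  # 5.0 3.5
```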