1 |
Résolution exacte de problèmes de couverture par arborescences sous contraintes de capacité / Exact methods for solving covering problems with trees subject to capacity constraints. Guillot, Jérémy. 18 December 2018 (has links)
In this document, we study two districting problems and propose several exact methods, based on Dantzig-Wolfe decomposition and column generation, to solve them. We propose two models that differ in how they express the objective of obtaining compact districts. For each model, we compare exact approaches based either on compact formulations or on extended formulations obtained through Dantzig-Wolfe decomposition. The first type of model defines the objective function in the fashion of a p-median problem. For this type of model, the solution methods focus on accelerating the convergence of the column generation algorithm by designing constraint aggregation techniques that reduce degeneracy in the simplex algorithm. Numerical experiments show that the proposed constraint aggregation method does reduce the proportion of degenerate iterations; however, this is not enough to speed up the branch-and-price algorithm, and the choice between the compact and the extended formulation depends on the type of instances solved. The second type of model formulates the objective in a way quite similar to that of p-centre problems. Such an objective makes the column generation subproblems harder to solve, so the emphasis is on designing branch-and-bound and dynamic programming algorithms to solve them efficiently. Experiments show that the branch-and-price algorithm outperforms the approaches based on a compact formulation of the problem.
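For reference, the p-median objective that the first family of models mimics has the following textbook form (generic notation, not the thesis's own districting model): with distances $d_{ij}$ between basic units $i$ and candidate centres $j$, assignment variables $x_{ij}$ and centre-selection variables $y_{j}$,

$$\min \sum_{i}\sum_{j} d_{ij} x_{ij} \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\;\forall i, \qquad x_{ij} \le y_{j} \;\;\forall i,j, \qquad \sum_{j} y_{j} = p, \qquad x_{ij}, y_{j} \in \{0,1\}.$$

Dantzig-Wolfe decomposition reformulates such a model in terms of whole districts (columns), and column generation adds a column to the restricted master problem only when its reduced cost is negative.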
|
2 |
Empirical Analysis of Algorithms for Block-Angular Linear Programs. Dang, Jiarui. January 2007 (has links)
This thesis aims to study the theoretical complexity and empirical performance of decomposition algorithms. We focus on linear programs with a block-angular structure. Decomposition algorithms used to be the only way to solve large-scale specially structured problems, in terms of memory limits and CPU time. However, with the advances in computer technology over the past few decades, many large-scale problems can now be solved simply by using general-purpose LP software, without exploiting the problems' inner structure. A question arises naturally: should we solve a structured problem with decomposition, or solve it directly as a whole? We try to understand how a problem's characteristics influence its computational performance, and we compare the relative efficiency of algorithms with and without decomposition. Two comparisons are conducted in our research: first, the Dantzig-Wolfe decomposition method (DW) versus the simplex method (simplex); second, the analytic center cutting plane method (ACCPM) versus the interior point method (IPM). These comparisons cover the two main solution approaches in linear programming: simplex-based algorithms and IPM-based algorithms. Motivated by our observations of ACCPM and DW decomposition, we devise a hybrid algorithm combining ACCPM and DW, which are the counterparts of IPM and simplex in the decomposition framework, to take advantage of both: the fast convergence of IPM-based methods and the accuracy of simplex-based algorithms. A large set of 316 instances is used in our experiments, so that problems of different dimensions with primal or dual block-angular structures are covered to test our conclusions.
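As a minimal illustration of the block-angular shape the thesis studies, the sketch below builds a tiny LP with two independent blocks plus one coupling row and solves it "as a whole" with a general-purpose solver, the non-decomposition alternative discussed in the abstract; the data are invented for illustration only.

import numpy as np
from scipy.optimize import linprog

# Toy primal block-angular LP: two independent blocks, each with its own
# constraint row, plus one coupling row tying both blocks to a shared resource.
c = np.array([-1.0, -2.0, -1.5, -1.0])       # maximize profit -> negate for linprog

A_blocks = np.array([
    [1.0, 1.0, 0.0, 0.0],                    # block 1: x11 + x12    <= 4
    [0.0, 0.0, 1.0, 2.0],                    # block 2: x21 + 2 x22  <= 6
])
b_blocks = np.array([4.0, 6.0])

A_link = np.array([[1.0, 1.0, 1.0, 1.0]])    # coupling row: total activity <= 7
b_link = np.array([7.0])

# "Solve as a whole": hand the full constraint matrix to a general-purpose LP solver.
res = linprog(c,
              A_ub=np.vstack([A_blocks, A_link]),
              b_ub=np.concatenate([b_blocks, b_link]),
              bounds=[(0, None)] * 4,
              method="highs")
print(res.x, -res.fun)                       # optimal plan and profit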
|
3 |
Geração de colunas para o problema de dimensionamento de lotes de produção com limitações de capacidade / Column generation heuristics for capacitated lotsizing problem. Baldo, Tamara Angélica. 29 May 2009 (has links)
The Capacitated Lot Sizing Problem (CLSP) consists in determining a production plan that meets all demands while respecting the capacity limits, at the lowest possible cost, that is, minimizing production, inventory and setup costs. Since even finding a feasible solution to the CLSP with setup times is NP-complete, large instances have been solved by heuristic methods. In this dissertation, we use Dantzig-Wolfe decomposition and column generation to compute good lower bounds for the CLSP with setup times and costs, exploring two decomposition strategies, by items and by time periods. To obtain integer solutions (upper bounds), we apply Lagrangian-based heuristics whose initial solutions come from the column generation. Computational experiments on randomly generated instances, with parameter ranges suggested in the literature, indicate that the column generation algorithms yield high-quality lower bounds and the heuristics yield good upper bounds. These bounds are useful in exact solution methods, such as branch-and-price algorithms.
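For reference, a textbook CLSP formulation with setup times has the following shape (generic notation, not necessarily the exact model of the dissertation): with production $x_{it}$, inventory $I_{it}$ and setup $y_{it}$ for item $i$ in period $t$, demands $d_{it}$, unit processing and setup times $a_{i}, st_{i}$, capacities $C_{t}$ and costs $c_{it}, h_{it}, s_{it}$,

$$\min \sum_{i,t}\left(c_{it} x_{it} + h_{it} I_{it} + s_{it} y_{it}\right) \quad \text{s.t.} \quad I_{i,t-1} + x_{it} - I_{it} = d_{it}, \qquad \sum_{i}\left(a_{i} x_{it} + st_{i} y_{it}\right) \le C_{t}, \qquad x_{it} \le M\, y_{it}, \qquad x_{it}, I_{it} \ge 0, \; y_{it} \in \{0,1\}.$$

In the item decomposition, each subproblem is a single-item uncapacitated lot-sizing problem and the capacity rows stay in the master; in the period decomposition the roles are reversed, with the inventory-balance rows linking the single-period subproblems.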
|
4 |
Decomposition of Variational Inequalities with Applications to Nash-Cournot Models in Time of Use Electricity Markets. Celebi, Emre. January 2011 (has links)
This thesis proposes equilibrium models that link the wholesale and retail electricity markets and allow the differing time scales of producers' responses (e.g., hourly) and consumers' responses (e.g., monthly) to changing prices to be reconciled. Electricity market equilibrium models with a time-of-use (TOU) pricing scheme are formulated as large-scale variational inequality (VI) problems, a unified and concise approach for modeling the equilibrium. Demand response is dynamic in these models through a dependence on lagged demand. Different market structures are examined within this context. With an illustrative example, the welfare gains and losses from implementing a TOU pricing scheme in place of a single pricing scheme are analyzed, and an approximation of the welfare change is presented for this analysis. Moreover, the break-up of a large supplier into smaller parts is investigated.
For the illustrative examples presented in the dissertation, overall welfare gains for consumers and lower prices, closer to the levels of perfect competition, can be realized when the retail pricing scheme is changed from single pricing to TOU pricing. These models can be useful policy tools for regulatory bodies: i) to forecast future retail prices (TOU or single prices), ii) to examine the market power exerted by suppliers, and iii) to measure welfare gains or losses under different retail pricing schemes (e.g., single versus TOU pricing).
With the inclusion of linearized DC network constraints, the problem size grows considerably. A Dantzig-Wolfe (DW) decomposition algorithm for VI problems is used to alleviate the computational burden; it also facilitates model management and maintenance. A modification of the DW decomposition algorithm and an approximation of the DW master problem significantly reduce the computational effort required to find the equilibrium. These algorithms are applied to a two-region energy model for Canada and a realistic Ontario electricity test system. In addition to the empirical analysis, theoretical results on the convergence properties of the master problem approximation are presented for DW decomposition of VI problems.
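For reference, the generic variational inequality that such equilibrium models instantiate is the following (standard definition, independent of the specific market model above): given a closed convex set $K \subseteq \mathbb{R}^{n}$ and a mapping $F : K \to \mathbb{R}^{n}$, find $x^{*} \in K$ such that

$$F(x^{*})^{\top}(x - x^{*}) \ge 0 \qquad \text{for all } x \in K.$$

When $F$ is the gradient of a convex function, this is the first-order optimality condition of a convex program; Nash-Cournot equilibria are obtained by stacking each player's optimality conditions into a single mapping $F$, which is what makes the VI a unified format for these market models.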
|
5 |
Optimization Methods for Distribution Systems: Market Design and Resiliency Enhancement. Bedoya Ceballos, Juan Carlos. 05 August 2020 (has links)
The increasing penetration of proactive agents in distribution systems (DS) has opened new possibilities to make the grid more resilient and to increase the participation of responsive loads (RL) and non-conventional generation resources. On the resiliency side, plug-in hybrid electric vehicles (PHEV), energy storage systems (ESS), microgrids (MG), and distributed energy resources (DER) can be leveraged to restore critical load when the utility system is unavailable for extended periods of time. Critical load restoration is a key factor in achieving a resilient distribution system. On the other hand, existing DERs and responsive loads can be coordinated in a market environment to contribute to efficient electricity consumption and fair electricity tariffs, incentivizing proactive agents' participation in the distribution system.
Resiliency and market applications for distribution systems are highly complex decision-making problems that can be addressed using modern optimization techniques. The complexity of these problems arises from non-linear relations, integer decision variables, scalability, and asynchronous information. On the resiliency side, existing models include optimization approaches that consider the system's available information but neglect the asynchrony of data arrival. As a consequence, these models can lead to underutilization of critical resources during system restoration. They can also become computationally intractable for large-scale systems. In the market design problem, existing approaches are based on centralized or computationally distributed schemes that are not only limited by hardware requirements but also restrictive for the active participation of market agents.
In this context, the work of this dissertation results in major contributions regarding new optimization algorithms for market design and resiliency improvement in distribution systems. On the DS market side, two novel contributions are presented: 1) a computationally distributed coordination framework based on bilateral transactions in which social welfare is maximized, and 2) a fully decentralized transactive framework in which power suppliers, in a simultaneous auction environment, bid strategically using a Markowitz portfolio optimization approach (a generic form of this bidding model is sketched after this abstract). On the resiliency side, this research proposes a system restoration approach that takes into account uncertain devices and the associated asynchronous information, by means of a two-module optimization model based on binary programming and three-phase unbalanced optimal power flow. Furthermore, a Reinforcement Learning (RL) method combined with a Monte Carlo tree search algorithm is proposed to address the scalability problem for resiliency enhancement. / Doctor of Philosophy / Distribution systems (DS) are evolving from traditional centralized, fossil-fuel generation resources to networks with large-scale deployment of responsive loads and distributed energy resources. Optimization-based decision-making methods to improve resiliency and coordinate DS participants are required. The prohibitive costs of extended power outages call for efficient mechanisms that avoid interrupting service to critical load during catastrophic events, and coordination mechanisms for the various generation resources and proactive loads are greatly needed.
Existing optimization-based approaches either neglect the asynchronous nature of information arrival or are computationally intractable for large-scale systems. The work of this dissertation results in major contributions regarding new optimization methods for market design, coordination of DS participants, and improvement of DS resiliency. Four contributions toward the application of optimization approaches to DS are made: 1) a distributed optimization algorithm based on decomposition and best-approximation techniques to maximize social welfare in a market environment, 2) a simultaneous auction mechanism and portfolio optimization method in a fully decentralized market framework, 3) binary programming and nonlinear unbalanced power flow, considering asynchronous information, to enhance resiliency in a DS, and 4) a reinforcement learning method together with an efficient search algorithm to support large-scale resiliency improvement models incorporating asynchronous information.
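For context, the Markowitz (mean-variance) portfolio model named in the market contribution above has, in its generic textbook form, the following shape (this is the standard formulation, not necessarily the exact bidding model used in the dissertation): with expected returns $\mu$, covariance matrix $\Sigma$ and risk-aversion parameter $\gamma \ge 0$, choose portfolio weights $w$ by

$$\max_{w} \;\; \mu^{\top} w - \gamma\, w^{\top} \Sigma w \qquad \text{s.t.} \quad \mathbf{1}^{\top} w = 1, \;\; w \ge 0.$$

In a bidding setting such as the one described, the "assets" would correspond to the supplier's alternative sale options, trading expected profit against its variance.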
|
6 |
Origin Destination Problem for Traffic Control. Fransholm, Elin; Hallberg, Alexander. January 2024 (has links)
A typical problem in traffic control is steering vehicles with different origins and destinations over a network. In this report this scenario is formulated as a multi-commodity network flow problem, a linear programming problem whose objective is to transport different commodities from their respective sources to their sinks through a network at minimum cost, while respecting the capacity constraints of the roads. The dynamic network flow formulation of the problem is also presented, extending the network over time to incorporate the temporal dimension. Different algorithms for solving the multi-commodity network flow problem are examined. First, the simplex method, more precisely its revised version, is considered; then the Dantzig-Wolfe decomposition is illustrated, an optimization algorithm that exploits specific block structures in the constraints. These methods are applied using state-of-the-art linear programming solvers and evaluated with a simulation based on the road network in central Stockholm. The results show that both methods can solve the traffic flow problem, with limitations given by the specifics of the solvers and by the space and time discretization of the problem. In particular, the revised simplex algorithm proves to be the faster method.
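In generic notation (not necessarily the report's own), the multi-commodity network flow LP on a directed graph $G=(N,A)$ with commodities $k \in K$, arc costs $c_{ij}^{k}$, arc capacities $u_{ij}$ and node balances $b_{i}^{k}$ reads

$$\min \sum_{k \in K}\sum_{(i,j)\in A} c_{ij}^{k} x_{ij}^{k} \quad \text{s.t.} \quad \sum_{j:(i,j)\in A} x_{ij}^{k} - \sum_{j:(j,i)\in A} x_{ji}^{k} = b_{i}^{k} \;\; \forall i \in N,\, k \in K, \qquad \sum_{k \in K} x_{ij}^{k} \le u_{ij} \;\; \forall (i,j)\in A, \qquad x \ge 0.$$

The per-commodity flow-conservation rows form independent blocks, while the shared arc-capacity rows couple the commodities; this is precisely the block structure that Dantzig-Wolfe decomposition exploits, with one min-cost-flow (or shortest-path) pricing subproblem per commodity.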
|
7 |
Abordagens de otimização para o problema de alocação dinâmica de veículos no contexto de transporte rodoviário de carga no Brasil / Optimization approaches for the dynamic vehicle allocation problem in the context of road freight transportation in Brazil. Alvarez Cruz, Cesar Dario. 10 March 2017 (has links)
This work addresses the Dynamic Vehicle Allocation Problem (DVAP) in the context of Brazilian road freight transportation. The problem consists of allocating empty vehicles to different terminals so as to meet the forecast demand for freight transport over a finite, multi-period planning horizon while maximizing the profit from the completed services. Decisions of this type arise in full-truckload freight services and in the inter-terminal transfer operations of consolidated freight services. Since the models resulting from the real-life problems faced by logistics operators are too large to be solved by exact methods within acceptable computational times, heuristic methods have been used to obtain good solutions without optimality guarantees. In this context, the objective of this work is to contribute solution methods that provide optimality guarantees, or good approximate solutions accompanied by optimality or quality certificates, for large-scale problems in reasonable computational times. The proposed methods are based on Lagrangian relaxation, using subgradient optimization, and on Dantzig-Wolfe decomposition with column generation, coupled with Lagrangian and feasibility heuristics. Computational experiments on randomly generated instances and on real-world instances from a Brazilian freight carrier are presented and analyzed for both approaches; the Dantzig-Wolfe-based approach in particular shows great potential for large-scale problems.
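In generic form (not the thesis's specific DVAP model), Lagrangian relaxation moves a set of complicating constraints $Ax \le b$ of a minimization problem $\min\{c^{\top}x : Ax \le b,\ x \in X\}$ into the objective, yielding the dual function

$$L(\lambda) = \min_{x \in X}\; c^{\top}x + \lambda^{\top}(Ax - b), \qquad \lambda \ge 0,$$

which is a lower bound on the optimal value for every $\lambda \ge 0$. Subgradient optimization tightens the bound iteratively: if $x^{k}$ solves the inner minimization at $\lambda^{k}$, then $g^{k} = Ax^{k} - b$ is a subgradient of $L$ at $\lambda^{k}$, and the multipliers are updated by the projected step $\lambda^{k+1} = \max\{0,\ \lambda^{k} + \alpha_{k} g^{k}\}$ for a suitable step size $\alpha_{k}$; a Lagrangian heuristic then repairs the subproblem solution $x^{k}$ into a feasible one to obtain an upper bound.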
|