11

Synthèse de réseaux à composantes connexes unicycliques / On the design of networks with unicyclic connected components

Hadji, Makhlouf 24 September 2009 (has links)
This thesis belongs to the field of combinatorial optimization. It uses the polyhedral approach to solve combinatorial problems arising in telecommunications networks. We introduce and study the problem of designing networks whose connected components are unicyclic. After recalling that the problem is easy to solve in the absence of further constraints, we study new variants that incorporate additional technical constraints. We start with a constraint on the size of the cycles, imposing a lower bound p on the number of vertices in every cycle; the problem then becomes NP-hard. We propose valid inequalities for this girth-constrained variant and show, under precise conditions, that they define facets. Several polynomial-time separation algorithms for these inequalities are proposed, implemented within a cutting-plane algorithm, and evaluated numerically. We then focus on a Steiner-type problem, which consists in partitioning the network into unicyclic components while requiring certain given vertices to lie on cycles. We show that this problem is easy in the sense of computational complexity by giving a polynomial-time algorithm and an exact extended formulation, and we present a partial description of the convex hull of the incidence vectors of such networks. Separation of the corresponding inequalities is also studied; in particular, we propose a generalization of the Padberg-Rao algorithm to separate blossom inequalities. Further technical constraints are then taken into account: degree constraints, a bound on the number of connected components, and requirements that some given vertices lie in the same component or in different components. Finally, we carry out a spectral study of two specific classes of unicyclic graphs.
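As a point of reference for why the unconstrained problem is easy, one common way to write the selection of a minimum-cost subgraph whose connected components are all unicyclic is as choosing a base of the bicircular matroid of the graph; the integer formulation below is a sketch of that viewpoint, with illustrative edge costs c_e, and is not taken from the thesis itself.

```latex
% Sketch: x_e = 1 if edge e is selected; E(S) denotes the edges with both
% endpoints in S. The integer feasible points are exactly the spanning
% subgraphs in which every connected component contains exactly one cycle.
\begin{align*}
\min\ & \sum_{e \in E} c_e\, x_e \\
\text{s.t.}\ & \sum_{e \in E(S)} x_e \le |S| && \forall\, \emptyset \ne S \subseteq V, \\
& \sum_{e \in E} x_e = |V|, \\
& x_e \in \{0,1\} && \forall\, e \in E.
\end{align*}
```

The packing inequalities prevent any vertex set from inducing more selected edges than vertices, so no component can carry two cycles, while the cardinality constraint forces each component to carry at least one; the girth variant studied in the thesis additionally imposes a lower bound on the length of every selected cycle, which is what destroys this structure and makes the problem NP-hard.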
12

Tight Discrete Formulations to Enhance Solvability with Applications to Production, Telecommunications, and Air Transportation Problems

Smith, J. Cole 20 April 2000 (has links)
In formulating discrete optimization problems, it is not only important to have a correct mathematical model, but also to have a well-structured model that can be solved effectively. Two important characteristics of a general integer or mixed-integer program are its size (the number of constraints and variables in the problem), and its strength or tightness (a measure of how well it approximates the convex hull of feasible solutions). In designing model formulations, it is critical to ensure a proper balance between compactness of the representation and the tightness of its linear relaxation, in order to enhance its solvability. In this dissertation, we consider these issues pertaining to the modeling of mixed-integer 0-1 programming problems in general, as well as in the context of several specific real-world applications, including a telecommunications network design problem and an airspace management problem. We first consider the Reformulation-Linearization Technique (RLT) of Sherali and Adams and explore the generation of reduced first-level representations for mixed-integer 0-1 programs that tend to retain the strength of the full first-level linear programming relaxation. The motivation for this study is provided by the computational success of the first-level RLT representation (in full or partial form) experienced by several researchers working on various classes of problems. We show that there exists a first-level representation having only about half the RLT constraints that yields the same lower bound value via its relaxation. Accordingly, we attempt to predict a priori the form of this representation and identify many special cases for which this prediction is accurate. However, using various counter-examples, we show that this prediction, as well as several variants of it, is not accurate in general, even for the case of a single binary variable. Since the full first-level relaxation produces the convex hull representation for the case of a single binary variable, we investigate whether this is the case with respect to the reduced first-level relaxation as well, and show similarly that it holds true only for some special cases. Empirical results on the prediction capability of the reduced, versus the full, first-level representation demonstrate a high level of prediction accuracy on a set of random as well as practical, standard test problems. Next, we focus on a useful modeling concept that is frequently ignored while formulating discrete optimization problems. Very often, there exists a natural symmetry inherent in the problem itself that, if propagated to the model, can hopelessly mire a branch-and-bound solver by burdening it with exploring and eliminating such alternative symmetric solutions. We discuss three applications where such a symmetry arises. For each case, we identify the indistinguishable objects in the model that create the problem symmetry, and show how imposing certain decision hierarchies within the model significantly enhances its solvability. These hierarchies render an otherwise virtually intractable formulation computationally viable using commercial software. For the first application, we consider the problem of minimizing the maximum dosage of noise to which workers are exposed while working on a set of machines. We next examine the problem of minimizing the cost of acquiring and utilizing machines designed to cool large facilities or buildings, subject to minimum operational requirements. For each of these applications, we generate realistic test beds of problems. 
The decision hierarchies allow all previously intractable problems to be solved relatively quickly, and dramatically decrease the required computational time for all other problems. For the third application, we investigate a network design problem arising in the context of deploying synchronous optical networks (SONET) using a unidirectional path switched ring architecture, a standard of transmission using optical fiber technology. Given several rings of this type, the problem is to find an assignment of nodes to (possibly multiple) rings, and to determine what portion of the demand traffic between node pairs spanned by each ring should be allocated to that ring. The constraints require that the demand traffic between each node pair should be satisfiable given the ring capacities, and that no more than a specified maximum number of nodes should be assigned to each ring. The objective is to minimize the total number of node-to-ring assignments, and hence the capital investment in add-drop multiplexer equipment. We formulate the problem as a mixed-integer programming model, and propose several alternative modeling techniques designed to improve the mathematical representation of this problem. We then develop various classes of valid inequalities for the problem along with suitable separation procedures for tightening the representation of the model, and accordingly prescribe an algorithmic approach that coordinates tailored routines with a commercial solver (CPLEX). We also propose a heuristic procedure which enhances the solvability of the problem and provides bounds within 5-13% of the optimal solution. Promising computational results that exhibit the viability of the overall approach and that lend insights into various modeling and algorithmic constructs are presented. Following this, we turn our attention to the modeling and analysis of several issues related to airspace management. Currently, commercial aircraft are routed along certain defined airspace corridors, where safe minimum separation distances between aircraft may be routinely enforced. However, this mode of operation does not fully utilize the available airspace resources, and may prove to be inadequate under future National Airspace (NAS) scenarios involving new concepts such as Free-Flight. This difficulty is further compounded by the projected significant increase in commercial air traffic. (Free-Flight is a paradigm of aircraft operations which permits the selection of more cost-effective routes for flights, rather than simple traversals between designated way-points, from various origins to different destinations.) We begin our study of Air Traffic Management (ATM) by first developing an Airspace Sector Occupancy Model (AOM) that identifies the occupancies of flights within three-dimensional (possibly nonconvex) regions of space called sectors. The proposed iterative procedure effectively traces each flight's progress through the nonconvex sector modules which comprise the sectors. Next, we develop an Aircraft Encounter Model (AEM), which uses the information obtained from AOM to efficiently characterize the number and nature of blind conflicts (i.e., conflicts under no avoidance or resolution maneuvers) resulting from a selected mix of flight-plans. 
Besides identifying the existence of a conflict, AEM also provides useful information on the severity of the conflict, and its geometry, such as the faces across which an intruder enters and exits the protective shell or envelope of another aircraft, the duration of intrusion, its relative heading, and the point of closest approach. For purposes of evaluation and assessment, we also develop an aggregate metric that provides an overall assessment of the conflicts in terms of their individual severity and resolution difficulty. We apply these models to real data provided by the Federal Aviation Administration (FAA) for evaluating several Free-Flight scenarios under wind-optimized and cruise-climb conditions. We digress at this point to consider a more general collision detection problem that frequently arises in the field of robotics. Given a set of bodies with their initial positions and trajectories, we wish to identify the first collision that occurs between any two bodies, or to determine that none exists. For the case of bodies having linear trajectories, we construct a convex hull representation of the integer programming model of Selim and Almohamad, and exhibit the relative effectiveness of solving this problem via the resultant linear program. We also extend this analysis to model a situation in which bodies move along piecewise linear trajectories, possibly rotating at the end of each linear translation. For this case, we again compare an integer programming approach with its linear programming convex hull representation, and exhibit the relative effectiveness of solving a sequence of problems based on applying the latter construct to each time segment. Returning to Air Traffic Management, another future difficulty in airspace resource utilization stems from a projected increase in commercial space traffic, due to the advent of Reusable Launch Vehicle (RLV) technology. Currently, each shuttle launch cordons off a large region of Special Use Airspace (SUA) in which no commercial aircraft are permitted to enter for the specified duration. Of concern to airspace planners is the expense of routinely disrupting air traffic, resulting in circuitous diversions and delays, while enforcing such SUA restrictions. To provide a tool for tactical and planning purposes in such a context within the framework of a coordinated decision making process between the FAA and commercial airlines, we develop an Airspace Planning Model (APM). Given a set of flights for a particular time horizon, along with (possibly several) alternative flight-plans for each flight that are based on delays and diversions due to special-use airspace (SUA) restrictions prompted by launches at spaceports or weather considerations, this model prescribes a set of flight-plans to be implemented. The model formulation seeks to minimize a delay and fuel cost based objective function, subject to the constraints that each flight is assigned one of the designated flight-plans, and that the resulting set of flight-plans satisfies certain specified workload, safety, and equity criteria. These requirements ensure that the workload for air-traffic controllers in each sector is held under a permissible limit, that any potential conflicts which may occur are routinely resolvable, and that the various airlines involved derive equitable levels of benefits from the overall implemented schedule. 
In order to solve the resulting 0-1 mixed-integer programming problem more effectively using commercial software (CPLEX-MIP), we explore the use of various facetial cutting planes and reformulation techniques designed to more closely approximate the convex hull of feasible solutions to the problem. We also prescribe a heuristic procedure which is demonstrated to provide solutions to the problem that are either optimal or are within 0.01% of optimality. Computational results are reported on several scenarios based on actual flight data obtained from the Federal Aviation Administration (FAA) in order to demonstrate the efficacy of the proposed approach for air traffic management (ATM) purposes. In addition to the evaluation of these various models, we exhibit the usefulness of this airspace planning model as a strategic planning tool for the FAA by exploring the sensitivity of the solution provided by the model to changes both in the radius of the SUA formulated around the spaceport, and in the duration of the launch-window during which the SUA is activated. / Ph. D.
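To make the SONET ring-assignment description above concrete, here is a small, hedged sketch of one natural mixed-integer formulation in that spirit, written with the PuLP modeling library. The data (nodes, rings, demands), the ring capacity C, the node limit R, and all variable names are illustrative assumptions; the dissertation's actual model further employs alternative reformulations, valid inequalities with separation procedures, and tailored routines coordinated with CPLEX.

```python
# Sketch of a SONET UPSR node-to-ring assignment model: assign nodes to rings,
# split each node-pair demand across rings containing both endpoints, respect
# ring capacity and a per-ring node limit, and minimize node-to-ring assignments.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

nodes = ["a", "b", "c", "d"]
rings = [0, 1]
demand = {("a", "b"): 3, ("a", "c"): 2, ("b", "d"): 4}   # traffic between node pairs
C, R = 8, 3                                              # ring capacity, max nodes per ring

prob = LpProblem("sonet_ring_assignment", LpMinimize)
x = {(i, k): LpVariable(f"x_{i}_{k}", cat=LpBinary) for i in nodes for k in rings}
f = {(p, k): LpVariable(f"f_{p[0]}_{p[1]}_{k}", lowBound=0) for p in demand for k in rings}

prob += lpSum(x.values())                                # minimize node-to-ring assignments
for p, d in demand.items():
    prob += lpSum(f[p, k] for k in rings) == d           # route all of the pair's demand
    for k in rings:
        prob += f[p, k] <= d * x[p[0], k]                # a ring carries (i, j) traffic only
        prob += f[p, k] <= d * x[p[1], k]                # if both endpoints sit on that ring
for k in rings:
    prob += lpSum(f[p, k] for p in demand) <= C          # ring capacity
    prob += lpSum(x[i, k] for i in nodes) <= R           # node limit per ring

prob.solve()
```

The linking constraints are what tie capital cost (assignments) to routing feasibility: a ring can only carry a pair's traffic once both endpoints are paid for on that ring, so the solver trades extra assignments against capacity and the node limit.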
13

Investigating Robustness, Public Transport Optimization, and their Interface / Mathematical Models and Solution Algorithms

Pätzold, Julius 28 June 2019 (has links)
No description available.
14

Risk optimization with p-order conic constraints

Soberanis, Policarpio Antonio 01 December 2009 (has links)
My dissertation considers the solution of linear programming problems with p-order conic constraints, which arise in a class of stochastic optimization models whose risk objective or constraints involve higher moments of loss distributions. The proposed approach is based on constructing polyhedral approximations of p-order cones, thereby approximating the nonlinear convex p-order conic programming problems by linear programming models. It is shown that the resulting LP problems possess a special structure that makes them amenable to efficient decomposition techniques. The developed algorithms are tested on a portfolio optimization problem with higher-moment coherent risk measures, which reduces to a p-order conic programming problem. Case studies on real financial data demonstrate that the proposed computational techniques compare favorably against a number of benchmark methods, including second-order conic programming methods.
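As a hedged illustration of the polyhedral-approximation idea described above (an outer, cutting-plane view of it rather than the dissertation's specific construction), the sketch below generates a supporting hyperplane for a violated p-order cone constraint ||x||_p <= t. It relies on the fact that a'x <= ||x||_p whenever the dual norm ||a||_q <= 1 with 1/p + 1/q = 1, so a'x <= t is a valid linear cut; the sample point, the tolerance, and p itself are illustrative choices.

```python
# Gradient-style cut for the p-order cone constraint ||x||_p <= t.
import numpy as np

def p_cone_cut(xbar, p):
    """Return a with ||a||_q = 1 (q dual to p) such that a @ xbar = ||xbar||_p."""
    norm = np.linalg.norm(xbar, ord=p)
    if norm == 0.0:
        return np.zeros_like(xbar)
    return np.sign(xbar) * (np.abs(xbar) / norm) ** (p - 1)

# If a candidate LP solution (xbar, tbar) violates ||xbar||_p <= tbar, the
# linear inequality  a @ x <= t  cuts it off and can be added to the LP.
xbar, tbar, p = np.array([1.0, -2.0, 0.5]), 1.0, 3
a = p_cone_cut(xbar, p)
violated = a @ xbar > tbar + 1e-9    # equivalent to ||xbar||_p > tbar
```

Whether such cuts are generated on the fly or assembled up front into a fixed polyhedral approximation, the nonlinear cone is replaced by linear inequalities, which is what yields the specially structured LPs that the abstract says are amenable to decomposition.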
15

指數基金追蹤模型的最佳化 / A Tracking Model for Index Fund Portfolio Optimization

白惠琦 Unknown Date (has links)
An index fund is an investment tool that tracks a stock-market index, so that investors are exposed to market risk only; its attraction lies in low investment risk and low administrative expenses. Approaches to index fund construction can be classified as full replication, stratification, sampling, and optimization. In this thesis, we construct an index fund via a goal programming model, which belongs to the optimization approach. Because the number of shares held in each stock is an integer variable and 0-1 variables control the number of stocks included, the model is a mixed-integer linear program, and the exact optimal solution cannot be obtained when the model becomes large. We therefore study the structure of the model and derive a family of valid inequalities that shrink the solution space, use them in a cutting-plane method that adds the necessary inequalities, and then develop an efficient heuristic based on the dual properties of the model. Finally, the model and algorithm are applied to tracking the Taiwan Stock Exchange Capitalization Weighted Stock Index, and the empirical study shows the efficiency of the algorithm.
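For concreteness, the following is a minimal goal-programming sketch in the spirit of the abstract: integer share counts, 0-1 variables that cap the number of stocks held, and deviation variables that penalize the gap between the money invested in each stock and its index-weighted target. The stock data, the budget, and the exact form of the goal constraints are illustrative assumptions, not the thesis's actual model, which is built for the Taiwan index and solved with the tailored cutting-plane and heuristic machinery described above.

```python
# Goal-programming index-tracking sketch: integer holdings, a cap on the number
# of stocks, and minimization of total deviation from index-weighted targets.
from pulp import LpBinary, LpInteger, LpMinimize, LpProblem, LpVariable, lpSum

stocks = ["s1", "s2", "s3", "s4"]
price = {"s1": 50.0, "s2": 20.0, "s3": 80.0, "s4": 10.0}
index_weight = {"s1": 0.4, "s2": 0.2, "s3": 0.3, "s4": 0.1}
budget, max_names = 10_000.0, 3

prob = LpProblem("index_tracking", LpMinimize)
n = {s: LpVariable(f"n_{s}", lowBound=0, cat=LpInteger) for s in stocks}   # shares held
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in stocks}                # stock included?
dp = {s: LpVariable(f"dp_{s}", lowBound=0) for s in stocks}                # over-target deviation
dn = {s: LpVariable(f"dn_{s}", lowBound=0) for s in stocks}                # under-target deviation

prob += lpSum(dp[s] + dn[s] for s in stocks)                               # total deviation (goal)
for s in stocks:
    # money in stock s minus its index-weighted target splits into +/- deviations
    prob += price[s] * n[s] - budget * index_weight[s] == dp[s] - dn[s]
    prob += price[s] * n[s] <= budget * y[s]                               # holding implies inclusion
prob += lpSum(price[s] * n[s] for s in stocks) <= budget                   # budget
prob += lpSum(y.values()) <= max_names                                     # limit number of stocks

prob.solve()
```

The two features the abstract singles out, integer holdings plus a 0-1 cap on the number of stocks, are exactly what make the model a mixed-integer program and motivate the valid inequalities and dual-based heuristic.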
16

Towards the Solution of Large-Scale and Stochastic Traffic Network Design Problems

Hellman, Fredrik January 2010 (has links)
This thesis investigates the second-best toll pricing and capacity expansion problems when stated as mathematical programs with equilibrium constraints (MPEC). Three main questions are raised: first, whether conventional descent methods give sufficiently good solutions, or whether global solution methods are to be preferred; second, how the performance of the considered solution methods scales with network size; third, how a discretized stochastic mathematical program with equilibrium constraints (SMPEC) formulation of a stochastic network design problem can be practically solved. An attempt to answer these questions is made through a series of numerical experiments. The traffic system is modeled using Wardrop's principle for user behavior and separable cost functions of BPR and TU71 type. Elastic demand is also considered for some problem instances. Two previously developed solution approaches are considered: implicit programming and a cutting constraint algorithm. For the implicit programming approach, several methods, both local and global, are applied, and for the traffic assignment problem an implementation of the disaggregate simplicial decomposition (DSD) method is used. Regarding the first question, concerning local versus global methods, our results do not give a clear answer. The results from numerical experiments with both approaches on networks of different sizes show that the implicit programming approach has the potential to solve large-scale problems, while the cutting constraint algorithm scales worse with network size. For the stochastic extension of the network design problem as well, the numerical experiments indicate that implicit programming is a good approach. Further, a number of theorems providing sufficient conditions for strong regularity of the traffic assignment solution mapping for OD connectors and BPR cost functions are given.
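As a brief aside on the modeling ingredients named above, the sketch below shows the standard BPR link travel-time function and the corresponding term of the Beckmann objective, whose minimizer over feasible flows is the Wardrop user equilibrium for separable link costs. The parameter defaults (alpha = 0.15, beta = 4) are the customary BPR values and are assumed here purely for illustration; the thesis also uses TU71-type cost functions, which are not sketched.

```python
# Standard BPR link cost and its integral (one term of the Beckmann objective).

def bpr_time(flow, free_flow_time, capacity, alpha=0.15, beta=4.0):
    """BPR link travel time: t(v) = t0 * (1 + alpha * (v / c)**beta)."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

def beckmann_term(flow, free_flow_time, capacity, alpha=0.15, beta=4.0):
    """Integral of the BPR function from 0 to flow."""
    return free_flow_time * (flow + alpha * capacity / (beta + 1.0) * (flow / capacity) ** (beta + 1.0))
```

Summing beckmann_term over all links gives the convex objective whose minimizer is the user-equilibrium assignment that the DSD implementation computes at the lower level of the MPEC.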
17

On the Efficient Solution of Variational Inequalities; Complexity and Computational Efficiency

Perakis, Georgia, Zaretsky, M. (Marina) 01 1900 (has links)
In this paper we combine ideas from cutting plane and interior point methods in order to solve variational inequality problems efficiently. In particular, we introduce a general framework that incorporates nonlinear as well as linear "smarter" cuts. These cuts utilize second-order information on the problem through the use of a gap function. We establish convergence as well as complexity results for this framework. Moreover, in order to devise more practical methods, we consider an affine scaling method as it applies to symmetric, monotone variational inequality problems and demonstrate its convergence. Finally, in order to further improve the computational efficiency of the methods in this paper, we combine the cutting plane approach with the affine scaling approach.
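For readers unfamiliar with the terms used above, the variational inequality problem and the standard (primal) gap function can be written as follows; whether this is exactly the gap function employed in the paper is not specified by the abstract.

```latex
% Variational inequality VI(F, K) over a closed convex set K:
\text{find } x^{*} \in K \ \text{such that}\ F(x^{*})^{\top}(x - x^{*}) \ge 0 \quad \forall\, x \in K.
% Primal gap function:
g(x) \;=\; \sup_{y \in K} F(x)^{\top}(x - y), \qquad g(x) \ge 0 \ \text{for } x \in K, \quad
g(x^{*}) = 0 \iff x^{*} \text{ solves } \mathrm{VI}(F, K).
```

Cuts that exploit curvature (second-order) information about such a gap function are what the framework uses to separate the current iterate from the solution set, alongside the affine scaling steps.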
18

Empirical Analysis of Algorithms for Block-Angular Linear Programs

Dang, Jiarui January 2007 (has links)
This thesis studies the theoretical complexity and empirical performance of decomposition algorithms. We focus on linear programs with a block-angular structure. Decomposition algorithms used to be the only way to solve large-scale specially structured problems, owing to memory and CPU-time limits. However, with the advances in computer technology over the past few decades, many large-scale problems can now be solved simply by using general-purpose LP software, without exploiting the problems' inner structures. A question arises naturally: should we solve a structured problem with decomposition, or solve it directly as a whole? We try to understand how a problem's characteristics influence its computational performance, and compare the relative efficiency of algorithms with and without decomposition. Two comparisons are conducted in our research: first, the Dantzig-Wolfe decomposition method (DW) versus the simplex method; second, the analytic center cutting plane method (ACCPM) versus the interior point method (IPM). These comparisons correspond to the two main solution approaches in linear programming: simplex-based algorithms and IPM-based algorithms. Motivated by our observations of ACCPM and DW decomposition, which are the counterparts of IPM and simplex within the decomposition framework, we devise a hybrid algorithm combining ACCPM and DW to take advantage of both: the quick convergence of IPM-based methods and the accuracy of simplex-based algorithms. A large set of 316 instances is used in our experiments, so that differently dimensioned problems with primal or dual block-angular structures are covered to test our conclusions.
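To fix ideas about the structure being exploited, the textbook block-angular LP and its Dantzig-Wolfe master problem can be sketched as follows, assuming bounded blocks so that each block's feasible region is the convex hull of its extreme points x_k^j; this is the generic form, not the thesis's specific test instances.

```latex
% Block-angular LP: linking rows couple otherwise independent blocks k.
\min \sum_{k} c_k^{\top} x_k
\quad \text{s.t.} \quad \sum_{k} A_k x_k = b, \qquad B_k x_k = b_k, \ \ x_k \ge 0 \ \ \forall k.

% Dantzig-Wolfe restricted master over extreme points x_k^j of each block:
\min \sum_{k} \sum_{j} \big(c_k^{\top} x_k^{j}\big) \lambda_{kj}
\quad \text{s.t.} \quad \sum_{k} \sum_{j} \big(A_k x_k^{j}\big) \lambda_{kj} = b, \qquad
\sum_{j} \lambda_{kj} = 1, \ \ \lambda_{kj} \ge 0 \ \ \forall k, j.
```

At each iteration the master's duals on the linking rows price the independent block subproblems, which is precisely where the decomposition-versus-direct-solve trade-off examined in the thesis arises; within this column-generation loop, ACCPM and DW play the roles that IPM and simplex play for the undecomposed LP.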
