1 |
ZI Round, a MIP Rounding Heuristic / Wallace, Chris, 01 October 2010 (has links)
We introduce a new pure integer rounding heuristic, ZI Round, and compare it to Simple Rounding, a recent, extremely fast pure integer rounding heuristic introduced in the non-commercial code SCIP. ZI Round attempts to round each fractional variable while using row slacks to maintain primal feasibility. We use the MIPLIB 2003 library as the test set. The average time per instance in our runs for both Simple Rounding and ZI Round was 0.8 milliseconds, but ZI Round found more feasible solutions with the same or better objective value. The average time to solve the LP relaxation per instance was 2.2 seconds, so these two rounding heuristics are several orders of magnitude faster than heuristics that must call the LP solver, including diving heuristics. We also show that ZI Round performs well on a set covering class and a railway crew scheduling class.
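The slack-guarded rounding idea described in the abstract can be sketched in a few lines. This is a minimal illustration for a system Ax <= b, assuming an LP-relaxation solution x is given; the published ZI Round heuristic also handles variable bounds and more careful tie-breaking, which are omitted here.

```python
def zi_round(A, b, x, eps=1e-9):
    """Round fractional entries of x while keeping A x <= b feasible."""
    m, n = len(A), len(x)
    x = list(x)
    for j in range(n):
        frac = x[j] - int(x[j])
        if frac < eps or frac > 1 - eps:
            x[j] = round(x[j])          # already (numerically) integral
            continue
        # Current slack of every row at the working point x.
        slack = [b[i] - sum(A[i][k] * x[k] for k in range(n)) for i in range(m)]
        # Largest feasible move up (resp. down) for variable j.
        up = min((slack[i] / A[i][j] for i in range(m) if A[i][j] > 0),
                 default=float('inf'))
        down = min((-slack[i] / A[i][j] for i in range(m) if A[i][j] < 0),
                   default=float('inf'))
        if up >= 1 - frac - eps:        # enough slack to round up
            x[j] = int(x[j]) + 1
        elif down >= frac - eps:        # enough slack to round down
            x[j] = int(x[j])
    return x
```

For example, with the single row x1 + x2 <= 3 and LP point (1.4, 1.2), this sketch rounds the first variable down and the second up, returning the feasible point (1, 2).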
|
2 |
Mathematical Models and Solution Approach for Staff Scheduling with Cross-Training at Call Centers / Kilincli Taskiran, Gamze, 31 August 2015 (has links)
No description available.
|
3 |
Stochastic Programming Approaches to Multi-product Inventory Management Problems with Substitution / Zhang, Jie, 29 October 2019 (has links)
The presence of substitution among multiple similar products plays an important role in inventory management. It has been observed in the literature that incorporating the impact of substitution among products can substantially improve the profit and reduce the understock or overstock risk. This thesis focuses on exploring and exploiting the impact of substitution on inventory management problems by theoretically analyzing mathematical models and developing efficient solution approaches.
To that end, we address four problems. In the first problem, we study different pricing strategies and the role of substitution for new and remanufactured products.
Our work presents a two-stage model for an original equipment manufacturer (OEM) in this regard. A closed-form one-to-one mapping of product designs onto the optimal product strategies is developed, which provides useful information for the retailer.
Our second problem is a multi-product newsvendor problem with customer-driven demand substitution. We completely characterize the optimal order policy when the demand is known and reformulate this nonconvex problem as a binary quadratic program. When the demand is stochastic, we formulate the problem as a two-stage stochastic program with mixed integer recourse, derive several necessary optimality conditions, prove the submodularity of the profit function, develop polynomial-time approximation algorithms, and show their performance guarantees. Our numerical investigation demonstrates the effectiveness of the proposed algorithms and, furthermore, reveals several useful findings and managerial insights.
In the third problem, we study a robust multi-product newsvendor model with substitution (R-MNMS), where both demand and substitution rates are uncertain and lie in a cardinality-constrained uncertainty set. We show that computing the worst-case total profit for given order quantities is, in general, NP-hard, and we therefore address three special cases for which we provide closed-form solutions.
In practice, placing an order might incur a fixed cost. Motivated by this fact, our fourth problem extends the R-MNMS to incorporate a fixed cost (the R-MNMSF), and we develop efficient approaches for its solution. In particular, we propose an exact branch-and-cut algorithm to solve small- and medium-sized instances of the R-MNMSF, and for large-scale instances we develop an approximation algorithm. We further study the effects of the fixed cost and show how to tune the parameters of the uncertainty set. / Doctor of Philosophy / In a multi-product supply chain, substitution arises when a customer's first-choice product is out of stock and he or she turns to another similar product. It has been shown in the literature that product substitution reduces the assortment size and thus brings in more profit.
However, how to quantitatively study and analyze substitution effects has not been fully addressed in the literature. This thesis fills that gap by developing and analyzing the profit model, thereby helping the retailer make judicious decisions that maximize profit.
In our first problem, we consider substitution between new products and remanufactured products. We provide closed-form solutions, and a mapping that can help the retailer in choosing optimal prices and end-of-life options given a certain product design.
In our second problem, we study a multi-product newsvendor model with substitution. We first show that, when the probability distribution of customer demand is known, we can tightly approximate the proposed model as a stochastic integer program under discrete support. Next, we provide effective solution approaches for the multi-product newsvendor model with substitution.
In practice, there is typically limited information available on customer demand or substitution rates; therefore, for our third problem, we study a robust model with a cardinality-constrained uncertainty set to account for stochastic demand and substitution rates. We give closed-form solutions for three special cases: (1) there are only two products, (2) there is no substitution among different products, and (3) the budget of uncertainty equals the number of products.
Finally, similar to many inventory management problems, we include a fixed cost in the robust model and develop efficient approaches for its solution. The numerical study demonstrates the effectiveness of the proposed methods and the robustness of our model. We further illustrate the effects of the fixed cost and how to tune the parameters of the uncertainty set.
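To make the substitution effect concrete, here is a minimal Monte Carlo sketch of a two-product newsvendor with one-way, customer-driven substitution. The prices, cost, substitution rate, and demand distribution are invented for illustration, not taken from the thesis:

```python
import random

def expected_profit(q1, q2, price=10.0, cost=4.0, sub_rate=0.5,
                    demand=lambda: (random.randint(0, 20), random.randint(0, 20)),
                    n_samples=20000, seed=0):
    """Monte Carlo estimate of expected profit for order quantities (q1, q2).

    A fraction `sub_rate` of unmet demand for product 1 is willing to buy
    product 2 if it is still in stock (one-way substitution).
    """
    random.seed(seed)
    total = 0.0
    for _ in range(n_samples):
        d1, d2 = demand()
        sold1 = min(q1, d1)
        spill = sub_rate * (d1 - sold1)   # unmet demand for 1 that shifts to 2
        sold2 = min(q2, d2 + spill)
        total += price * (sold1 + sold2) - cost * (q1 + q2)
    return total / n_samples
```

Evaluating this estimate over a grid of (q1, q2) shows how substitution lets a retailer hold less total stock for the same expected profit.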
|
4 |
Simultaneously lifting multiple sets in binary knapsack integer programs / Kubik, Lauren Ashley, January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are mathematical models that can provide organizations with the ability to optimally meet their goals through appropriate utilization and allocation of available resources. Unfortunately, IPs are NP-complete in the strong sense, and many integer programs cannot be solved in practice.
Introduced by Gomory, lifting is a technique that takes a valid inequality and strengthens it. Lifting can result in facet defining inequalities, which are the theoretically strongest inequalities; because of this, lifting techniques are commonly used in commercial IP software to reduce the time required to solve an IP.
This thesis introduces two new algorithms for exact simultaneous up lifting of multiple sets into binary knapsack problems, and it introduces sequential simultaneous lifting. The Dynamic Programming Multiple Lifting Set Algorithm (DPMLSA) is a pseudopolynomial-time algorithm bounded by O(nb) effort that can exactly uplift an arbitrary number of sets. The Three Set Simultaneous Lifting Algorithm (TSSLA) is a polynomial-time algorithm bounded by O(n^2) that can exactly simultaneously up lift three sets. The simultaneously lifted inequalities generated by the DPMLSA and the TSSLA can be facet defining, and neither algorithm requires starting with a minimal cover.
A brief computational study shows that the DPMLSA is fast, requiring an average of only 0.070 seconds. The study also shows that these sequential simultaneously lifted inequalities are useful, as the solution time decreased by an overall average of 18.4%. Therefore, implementing the DPMLSA should be beneficial for large IPs.
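Since the algorithms above operate on cover inequalities, a small reference point may help: the sketch below builds a minimal cover for a binary knapsack constraint and computes an exact up-lifting coefficient by plain enumeration. The thesis's DPMLSA replaces this exponential enumeration with an O(nb) dynamic program and lifts whole sets simultaneously; this sketch only illustrates the underlying single-variable lifting problem.

```python
from itertools import combinations

def minimal_cover(a, b):
    """Build a cover of the knapsack sum(a[i]*x[i]) <= b (greedy, largest
    weights first), then drop items whose removal keeps the cover property."""
    cover, total = [], 0
    for i in sorted(range(len(a)), key=lambda i: -a[i]):
        cover.append(i)
        total += a[i]
        if total > b:
            break
    for i in list(cover):
        if total - a[i] > b:
            cover.remove(i)
            total -= a[i]
    return sorted(cover)

def lift_coefficient(a, b, cover, j):
    """Exact up-lifting coefficient of x_j for the cover inequality
    sum_{i in cover} x_i <= |cover| - 1, by brute-force enumeration."""
    best = 0
    for r in range(len(cover) + 1):
        for S in combinations(cover, r):
            if sum(a[i] for i in S) <= b - a[j]:
                best = max(best, r)
    return len(cover) - 1 - best
```

For the knapsack 5x1 + 4x2 + 3x3 + 2x4 <= 6, the cover {x1, x2} gives x1 + x2 <= 1, and lifting x3 yields the stronger valid inequality x1 + x2 + x3 <= 1.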
|
5 |
A Case Study of Network Design for Middle East Water Distribution / Bullene, Rachel, 28 May 2010 (has links)
The Middle Eastern region encompassing Israel, Jordan, and the Palestinian Territories (West Bank and Gaza) is an arid region with fast-growing populations. Adequate and equitable access to water for all the people of the region is crucial to the future of Middle East peace. However, the current water distribution system not only fails to provide an adequate and equitable allocation of water, but also results in adverse impacts on the environment. This project involves building a mathematical model to aid decision-makers in designing an optimal water distribution network. A new method for incorporating uncertainty in optimization, based on Bayesian simulation of posterior predictive distributions, is used to represent uncertainty in demands and costs. The output of the model is a most-probable least-cost modification to the existing water distribution infrastructure. Additionally, the model output includes the probability that a network component (a new desalination plant, pipe, or canal) is part of a least-cost installation.
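The final output described above, the probability that a component belongs to a least-cost installation, can be illustrated with a stand-in simulation: sample demands, pick the cheapest feasible design for each sample, and record inclusion frequencies. The designs, capacities, costs, and the demand distribution here are hypothetical, and the thesis draws its samples from Bayesian posterior predictive distributions rather than a fixed Gaussian:

```python
import random

def component_probabilities(designs, capacity, cost, n_samples=10000, seed=1):
    """Fraction of sampled demand scenarios in which each design is the
    cheapest feasible (capacity >= demand) choice."""
    random.seed(seed)
    wins = {d: 0 for d in designs}
    for _ in range(n_samples):
        demand = random.gauss(100, 15)   # stand-in for posterior predictive draws
        feasible = [d for d in designs if capacity[d] >= demand]
        if feasible:
            wins[min(feasible, key=lambda d: cost[d])] += 1
    return {d: wins[d] / n_samples for d in designs}
```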
|
6 |
A mixed-integer model for optimal grid-scale energy storage allocation / Harris, Chioke Bem, 03 January 2011 (has links)
To meet ambitious upcoming state renewable portfolio standards (RPSs), to respond to customer demand for “green” electricity choices, and to move toward more renewable, domestic, and clean sources of energy, many utilities and power producers are accelerating deployment of wind, solar photovoltaic, and solar thermal generating facilities. These sources of electricity, particularly wind power, are highly variable and difficult to forecast. To manage this variability, utilities can increase the availability of fossil-fuel backup generation, but this approach eliminates some of the emissions benefits associated with renewable energy. Alternatively, energy storage could provide the needed ancillary services for renewables. Energy storage could also support other operational needs of utilities, providing greater system resiliency, zero-emission ancillary services for other generators, faster response than current backup generation, and lower marginal costs than some fossil-fueled alternatives. These benefits might justify the high capital cost of energy storage. Quantitative analysis of the role energy storage can play in improving economic dispatch, however, is limited. To examine the potential benefits of energy storage availability, a generalized unit commitment model of thermal generating units and energy storage facilities is developed. The initial study focuses on the city of Austin, Texas. While Austin Energy’s proximity to and collaborative partnerships with The University of Texas at Austin facilitated collaboration, its ambitious goal to produce 30-35% of its power from renewable sources by 2020, as well as its continued leadership in smart grid technology implementation, makes it an excellent initial test case. The model developed here is sufficiently flexible that it can be used to study other utilities or coherent regions.
Results from the energy storage deployment scenarios studied here show that if all costs are ignored, large quantities of seasonal storage are preferred, enabling plentiful wind generation stored during winter months to be dispatched during high-cost peak periods in the summer. Such an arrangement can yield as much as $94 million in yearly operational cost savings, but might cost hundreds of billions of dollars to implement. Conversely, yearly cost reductions of $40 million can be achieved with one CAES facility and a small fleet of electrochemical storage devices. These results indicate that small quantities of storage could have significant operational benefit, as they manage only the highest-cost hours of the year, avoiding the most expensive generators while improving utilization of renewable generation throughout the year. Further study using a modified unit commitment model can help narrow the performance requirements of storage, clarify optimal storage portfolios, and determine the optimal siting of this storage within the grid.
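The intuition in the last paragraph, that a small amount of storage captures most of the value because only the highest-cost hours matter, can be illustrated with a deliberately crude price-arbitrage bound. The round-trip efficiency and all numbers are illustrative assumptions, not Austin Energy data:

```python
def arbitrage_value(prices, energy_cap, power, efficiency=0.8):
    """Pair the cheapest charge hours with the costliest discharge hours,
    limited by energy capacity. Ignores chronological order and state of
    charge, so the result is an optimistic bound, not a real dispatch."""
    hours = int(energy_cap / power)            # full-power hours of storage
    cheap = sorted(prices)[:hours]
    dear = sorted(prices, reverse=True)[:hours]
    return sum(power * (efficiency * hi - lo)
               for lo, hi in zip(cheap, dear) if efficiency * hi > lo)
```

Even this crude bound shows sharply diminishing returns as `energy_cap` grows, since each additional hour of storage trades a less extreme price pair; a proper unit commitment model adds the intertemporal and network constraints this sketch omits.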
|
7 |
Probabilistic covering problems / Qiu, Feng, 25 February 2013 (has links)
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that a set of constraints involving random coefficients with known distributions hold with at least a specified minimum probability. A covering constraint is a greater-than-or-equal-to linear inequality on non-negative variables with non-negative coefficients. A variety of applications in uncertain settings, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, involve probabilistic covering constraints.
In the first part of this dissertation we consider probabilistic covering linear programs. Using the sample average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques significantly improve solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut solution times from as much as six days to under four hours on some instances. We also develop valid inequalities arising from two subsets of the constraints in the original formulation. When incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010 that cannot be solved by existing approaches.
In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of the set k-cover problem, which seeks a minimum-cost collection of subsets whose union covers each element of the set at least k times. In the stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimum requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model involves an exponential number of auxiliary variables and is hence impractical, we use two linear-programming-based bounds on the probability that at least k events occur, obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations of the ambiguous k-cover set. We derive new valid inequalities that strengthen the linear-programming-based lower bounds. Numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multilinear terms in the approximations and develop mixed-integer linear programming formulations. We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness.
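The ambiguous k-cover model above deliberately avoids assuming independence among the Bernoulli coverage events. As a reference point for the bounds it discusses, under independence the probability of k-coverage can be computed exactly with a standard dynamic program over the success-count distribution, which is useful for sanity-checking any distribution-free bound:

```python
def prob_at_least_k(p, k):
    """Exact P(at least k of n independent Bernoulli events occur)."""
    dist = [1.0]                         # dist[j] = P(exactly j successes so far)
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * (1 - pi)       # this event fails
            new[j + 1] += q * pi         # this event occurs
        dist = new
    return sum(dist[k:])
```

For two events of probability 0.5 each, at-least-1 coverage has probability 0.75; any valid distribution-free bound must bracket such independent-case values.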
To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that until now have been studied only under a data-independence assumption, and we also introduce new applications that can be modeled with the probabilistic k-cover framework.
|
8 |
Optimal randomized and non-randomized procedures for multinomial selection problems / Tollefson, Eric Sander, 20 March 2012 (has links)
Multinomial selection problem procedures are ranking and selection techniques that aim to select the best (most probable) alternative based upon a sequence of multinomial observations. The classical formulation of the procedure design problem is to find a decision rule for terminating sampling. The decision rule should minimize the expected number of observations taken while achieving a specified indifference zone requirement on the prior probability of making a correct selection when the alternative configurations are in a particular subset of the probability space called the preference zone. We study the constrained version of the design problem in which there is a given maximum number of allowed observations.
Numerous procedures have been proposed over the past 50 years, all of them suboptimal. In this thesis, we find via linear programming the optimal selection procedure for any given probability configuration. The optimal procedure turns out to be necessarily randomized in many cases. We also find via mixed integer programming the optimal non-randomized procedure. We demonstrate the performance of the methodology on a number of examples.
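To make the setting concrete, here is a sketch of one simple, suboptimal sequential rule of the kind such optimal procedures are benchmarked against: sample until some alternative leads all others by a fixed margin (or the observation budget runs out), then select the leader. The margin and cap are illustrative choices, not parameters from the thesis:

```python
import random

def p_correct_selection(probs, lead=3, max_obs=100, n_trials=5000, seed=2):
    """Monte Carlo estimate of P(correct selection) for a lead-based
    stopping rule on multinomial observations with cell probabilities
    `probs`; the most probable alternative counts as correct."""
    random.seed(seed)
    alts = range(len(probs))
    best = max(alts, key=lambda i: probs[i])
    correct = 0
    for _ in range(n_trials):
        counts = [0] * len(probs)
        for _ in range(max_obs):
            # One multinomial observation.
            counts[random.choices(alts, weights=probs)[0]] += 1
            s = sorted(counts)
            if s[-1] - s[-2] >= lead:    # leader is `lead` ahead of all others
                break
        correct += max(alts, key=lambda i: counts[i]) == best
    return correct / n_trials
```

Tabulating this estimate against the expected number of observations, over a grid of rule parameters, is exactly the kind of trade-off the thesis optimizes exactly via linear and mixed integer programming.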
We then reformulate the mathematical programs to make them more efficient to implement, thereby significantly expanding the range of computationally feasible problems. We prove that there exists an optimal policy which has at most one randomized decision point and we develop a procedure for finding such a policy. We also extend our formulation to replicate existing procedures.
Next, we show that there is very little difference between the relative performances of the optimal randomized and non-randomized procedures. Additionally, we compare existing procedures using the optimal procedure as a benchmark, and produce updated tables for a number of those procedures.
Then, we develop a methodology that guarantees the optimal randomized and non-randomized procedures for a broad class of variable observation cost functions -- the first of its kind. We examine procedure performance under a variety of cost functions, demonstrating that incorrect assumptions regarding marginal observation costs may lead to increased total costs.
Finally, we investigate and challenge key assumptions concerning the indifference zone parameter and the conditional probability of correct selection, revealing some interesting implications.
|
9 |
New approaches to integer programming / Chandrasekaran, Karthekeyan, 28 June 2012 (has links)
Integer Programming (IP) is a powerful and widely-used formulation for combinatorial problems. The study of IP over the past several decades has led to fascinating theoretical developments, and has improved our ability to solve discrete optimization problems arising in practice. This thesis makes progress on algorithmic solutions for IP by building on combinatorial, geometric and Linear Programming (LP) approaches.
We use a combinatorial approach to give an approximation algorithm for the feedback vertex set problem (FVS) in a recently developed Implicit Hitting Set framework. Our algorithm is a simple online algorithm which finds a nearly optimal FVS in random graphs. We also propose a planted model for FVS and show that an optimal hitting set for a polynomial number of subsets is sufficient to recover the planted subset.
Next, we present an unexplored geometric connection between integer feasibility and the classical notion of discrepancy of matrices. We exploit this connection to show a phase transition from infeasibility to feasibility in random IP instances. A recent algorithm for small discrepancy solutions leads to an efficient algorithm to find an integer point for random IP instances that are feasible with high probability.
Finally, we give a provably efficient implementation of a cutting-plane algorithm for perfect matchings. In our algorithm, cuts separating the current optimum are easy to derive, while a small LP is solved to identify the cuts that are to be retained for later iterations. Our result gives a rigorous theoretical explanation for the practical efficiency of the cutting-plane approach for perfect matchings that is evident from implementations.
In summary, this thesis contributes to new models and connections, new algorithms and rigorous analysis of well-known approaches for IP.
|
10 |
Aplicação do modelo de roteamento de veículos no planejamento da colheita florestal / Application of the vehicle routing model to forest harvest planning / Cezana, Diego Piva, 22 February 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The aim of this study was to evaluate the feasibility of using the vehicle routing model in forest harvest planning. To this end, an example problem was proposed involving ten stands to be harvested, two harvest teams, two modes of wood transportation, and four possible activities to be performed in the stands (first or second thinning, clear cutting for sawmill timber, or clear cutting for pulpwood), with different objectives in order to evaluate the efficiency of the model under different planning scenarios. The problem was designed with eleven constraints and an objective function minimizing total costs. With the available database, it was possible to design a forest planning problem representative of reality and to develop an effective optimization model that produced suitable results in all evaluated scenarios.
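At toy scale, the routing decision described in the abstract (which harvest team serves which stands, and in what order) can be solved by brute force. The instance and distance matrix below are hypothetical; instances of the size studied in the thesis require an integer programming model like the one it develops:

```python
from itertools import permutations, product

def best_routes(dist, n_stands, n_teams=2):
    """Brute-force a tiny harvest-routing instance.

    Each stand (1..n_stands) is assigned to one team; each team leaves the
    depot (node 0) and visits its stands in the cheapest order. Returns the
    minimum total travel cost and the routes. Exponential: toy sizes only.
    """
    stands = list(range(1, n_stands + 1))
    best_cost, best_plan = float('inf'), None
    for assign in product(range(n_teams), repeat=n_stands):
        cost, plan = 0.0, []
        for t in range(n_teams):
            mine = [s for s, a in zip(stands, assign) if a == t]
            # Cheapest visiting order for this team's stands, from the depot.
            route_cost, route = min(
                (sum(dist[u][v] for u, v in zip((0,) + p, p)), p)
                for p in permutations(mine))
            cost += route_cost
            plan.append(route)
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan
```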
|