51 |
The therapist scheduling problem for patients with fixed appointment times. Wang, Huan, Master of Science in Engineering, 27 February 2012
This report presents a series of models that can be used to find weekly schedules for therapists who provide ongoing treatment to patients scattered around a geographical region. In all cases, the patients’ appointment times and visit days are known prior to the beginning of the planning horizon. Variations in the model include single vs. multiple home bases, homogeneous vs. heterogeneous therapists, lunch break requirements, and a nonlinear cost structure for mileage reimbursement and overtime. The single home base and homogeneous therapist cases proved to be easy to solve and so were not investigated. This left two cases of interest: the first includes only lunch breaks while the second adds overtime and mileage reimbursement. In all, 40 randomly generated data sets were solved that consisted of either 15 or 20 therapists and between roughly 300 and 540 visits over five days. For each instance, we were able to obtain the minimum cost of providing home healthcare services for both models using CPLEX 12.2. The results showed that CPU time increases more rapidly than total cost as the total number of visits grows. In general, data sets with therapists who have different starting and ending locations are more difficult to solve than those whose therapists have the same home base.
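As a rough illustration of the assignment decision at the heart of these models, the sketch below (Python with the PuLP library, using made-up data) assigns fixed-time visits to therapists while forbidding overlapping visits for the same therapist and charging an assumed per-visit travel cost from each therapist's home base. It is a simplification for illustration only, not the report's actual formulation, which also handles lunch breaks, overtime and nonlinear mileage reimbursement.

    # Minimal sketch with hypothetical data: assign fixed-time visits to therapists.
    import pulp

    therapists = ["t1", "t2"]
    visits = {"v1": (9.0, 10.0), "v2": (9.5, 10.5), "v3": (11.0, 12.0)}  # fixed start/end hours
    travel = {("t1", "v1"): 5, ("t1", "v2"): 9, ("t1", "v3"): 4,         # assumed mileage costs
              ("t2", "v1"): 7, ("t2", "v2"): 3, ("t2", "v3"): 8}

    prob = pulp.LpProblem("therapist_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (therapists, visits), cat="Binary")

    # Objective: total travel cost of the assignment.
    prob += pulp.lpSum(travel[t, v] * x[t][v] for t in therapists for v in visits)

    # Every visit is covered by exactly one therapist.
    for v in visits:
        prob += pulp.lpSum(x[t][v] for t in therapists) == 1

    # A therapist cannot take two visits whose fixed appointment times overlap.
    for t in therapists:
        for v1 in visits:
            for v2 in visits:
                if v1 < v2 and visits[v1][0] < visits[v2][1] and visits[v2][0] < visits[v1][1]:
                    prob += x[t][v1] + x[t][v2] <= 1

    prob.solve()
    print({(t, v): x[t][v].value() for t in therapists for v in visits})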
|
52 |
Petroleum refinery scheduling with consideration for uncertainty. Hamisu, Aminu Alhaji, 07 1900
Scheduling refinery operations promises a big cut in logistics costs, maximizes efficiency, organizes the allocation of material and resources, and ensures that production meets the targets set by the planning team. Obtaining accurate and reliable schedules for execution in refinery plants under different scenarios has been a serious challenge. This research was undertaken with the aim of developing robust methodologies and solution procedures to address refinery scheduling problems with uncertainties in process parameters.
The research goal was achieved by first developing a methodology for short-term crude oil unloading and transfer, as an extension to a scheduling model reported by Lee et al. (1996). The extended model considers real-life technical issues not captured in the original model and was shown, through case studies, to be more reliable. Uncertainties due to disruptive events and low inventory at the end of the scheduling horizon were addressed. With the extended model, the crude oil scheduling problem was formulated under a receding horizon control framework to address demand uncertainty. This work proposed a strategy called the fixed end horizon, whose performance was investigated and found to be better than that of an existing approach.
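The receding-horizon idea described above can be sketched as a simple re-optimisation loop. The toy Python example below is an assumption-laden stand-in: the "schedule" is a trivial order-up-to rule rather than a crude oil MILP, and all numbers are invented. It only illustrates the fixed-end-horizon mechanics of re-solving over a window that shrinks toward a fixed final period, committing only the first period's decisions, and updating the state with the realised demand.

    # Toy receding-horizon loop with a fixed end horizon (all data invented).
    def solve_window(inventory, start, end, forecast):
        # Placeholder "schedule": order enough to cover the forecast of each
        # remaining period from current inventory (stand-in for a real MILP solve).
        plan, inv = [], inventory
        for t in range(start, end):
            order = max(forecast[t] - inv, 0.0)
            plan.append(order)
            inv = inv + order - forecast[t]
        return plan

    forecast = [10.0, 12.0, 9.0, 11.0]        # forecast demand per period
    actual = [11.0, 10.0, 13.0, 9.0]          # realised demand (differs from forecast)
    horizon, inventory, implemented = len(forecast), 5.0, []

    for t in range(horizon):
        plan = solve_window(inventory, t, horizon, forecast)  # end of horizon stays fixed
        order = plan[0]                                       # commit only period t's decision
        implemented.append(order)
        inventory = inventory + order - actual[t]             # update with realised demand

    print(implemented, inventory)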
In the main refinery production area, a novel scheduling model was developed. A large-scale refinery problem was used as a case study to test the model, with the scheduling horizon discretized into a number of time periods of variable length. An equivalent formulation with equal interval lengths was also presented and compared with the variable-length formulation. The results obtained clearly show the advantage of using variable timing. A methodology under a self-optimizing control (SOC) framework was then developed to address uncertainty in problems involving mixed-integer formulations. Through a case study and scenarios, the approach has proven to be efficient in dealing with uncertainty in crude oil composition.
|
53 |
Stochastic Programming Approaches for the Placement of Gas Detectors in Process Facilities. Legg, Sean W., 16 December 2013
The release of flammable and toxic chemicals in petrochemical facilities is a major concern when designing modern process safety systems. While the proper selection of the necessary types of gas detectors is important, appropriate placement of these detectors is required in order to have a well-functioning gas detection system. However, the uncertainty in leak locations, gas composition, process and weather conditions, and process geometries must all be considered when attempting to determine the appropriate number and placement of the gas detectors. Because traditional approaches are typically based on heuristics, there is a need to develop more rigorous optimization-based approaches to this problem. This work presents several mixed-integer programming formulations to address this need.
First, a general mixed-integer linear programming formulation is presented. This formulation takes advantage of precomputed computational fluid dynamics (CFD) simulations to determine a gas detector placement that minimizes the expected detection time across all scenarios. An extension to this formulation considers the overall coverage of the facility in order to improve the detector placement when too few scenarios are available. Additionally, a formulation based on the Conditional Value-at-Risk (CVaR) is presented. This formulation provides some control over the shape of the tail of the detection-time distribution, not only minimizing the expected detection time across all scenarios but also improving the tail behavior.
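A minimal sketch of this kind of scenario-based placement MILP is shown below in Python/PuLP with invented data; it minimizes the probability-weighted detection time subject to a detector budget, and omits details of the full formulation such as the treatment of scenarios that no placed detector can see.

    # Sketch of a scenario-based detector placement MILP (hypothetical data):
    # choose at most N detector locations to minimise expected detection time.
    import pulp

    scenarios = ["s1", "s2", "s3"]               # leak scenarios (e.g. from CFD runs)
    locations = ["a", "b", "c", "d"]             # candidate detector locations
    prob_s = {"s1": 0.5, "s2": 0.3, "s3": 0.2}   # assumed scenario probabilities
    t_detect = {("s1", "a"): 10, ("s1", "b"): 40, ("s1", "c"): 25, ("s1", "d"): 60,
                ("s2", "a"): 55, ("s2", "b"): 15, ("s2", "c"): 30, ("s2", "d"): 20,
                ("s3", "a"): 35, ("s3", "b"): 50, ("s3", "c"): 12, ("s3", "d"): 45}
    N = 2                                        # detector budget

    m = pulp.LpProblem("detector_placement", pulp.LpMinimize)
    place = pulp.LpVariable.dicts("place", locations, cat="Binary")
    first = pulp.LpVariable.dicts("first", (scenarios, locations), cat="Binary")

    # Objective: expected (probability-weighted) detection time over all scenarios.
    m += pulp.lpSum(prob_s[s] * t_detect[s, j] * first[s][j]
                    for s in scenarios for j in locations)

    m += pulp.lpSum(place[j] for j in locations) <= N
    for s in scenarios:
        m += pulp.lpSum(first[s][j] for j in locations) == 1   # one "first detector" per scenario
        for j in locations:
            m += first[s][j] <= place[j]                       # only placed detectors can detect

    m.solve()
    print([j for j in locations if place[j].value() > 0.5])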
In addition to the improved formulations, procedures are introduced to determine confidence in the generated placement and to decide whether enough scenarios have been used in determining it. First, a procedure is introduced to analyze the performance of the proposed gas detector placement in the face of “unforeseen” scenarios, that is, scenarios that were not necessarily included in the original formulation. A second procedure determines a confidence interval on the optimality gap between a placement generated from a sample of scenarios and its estimated performance over the entire uncertainty space. Finally, a method is proposed for determining whether enough scenarios have been used and how much additional benefit can be expected from adding more scenarios to the optimization.
Results are presented for each of the formulations and methods using three data sets from an actual process facility. The use of TEVA-SPOT, an off-the-shelf EPA toolkit for the placement of detectors in municipal water networks, is also explored. Because this toolkit was not designed for placing gas detectors, some adaptation of the files is necessary, and the procedure for doing so is presented.
|
54 |
Economic Dispatch using Advanced Dynamic Thermal Rating. Milad, Khaki. Unknown Date
No description available.
|
55 |
Facility Location and Transportation in Two Free Trade Zones. Matuk, Tiffany Amber, 06 November 2014
In any supply chain, the location of facilities and the routing of material are important decisions that account for a significant portion of costs and can lower a corporation's overall profits. These choices become even more important when dealing with a global supply chain whose players span multiple countries and continents. International factors, such as tax rates and transfer prices, must be carefully considered, and the advantages of timely delivery versus cost-effective transportation must be carefully weighed to ensure that customer demands are met at the best possible price.
We examine an international supply chain with plants, distribution centers (DCs), and customers in the North American Free Trade Agreement (NAFTA) and the European Union (EU) regions. The company in question manufactures two sub-assemblies at its plant in Mexico, and then assembles them into a final product at DCs in North America and Europe. To better serve its European customers, the company wishes to locate a new plant in the EU, as well as determine the modes of transportation used to distribute products between nodes, while maximizing overall profit.
The problem is formulated as a mixed integer linear program and is solved in two stages using a Strategic Model (SM) and an Operational Model (OM). In SM, each time period represents one month and we determine the optimal facility locations over a 12-month time horizon. With transportation lead times expressed in days, we can be certain that demand will be fulfilled within a single period, and for this reason, lead times are not considered in SM. At the operational level, however, each time period represents one day, and so lead times must be included as they will affect the choice of mode for a given route. The location results from SM are used as input for OM, which then gives the optimal modal and routing decisions for the network.
A number of cases are tested to determine how the optimal network is affected by changes in fixed and variable costs of facilities, transfer prices charged by plants to DCs, and the differing tax rates of each country.
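For illustration only, the sketch below shows the skeleton of a strategic-level facility-location MILP of the kind solved by SM, with hypothetical candidate EU plant sites, costs and demands. It minimises cost rather than maximising after-tax profit, and transfer prices, taxes, transport modes and lead times from the actual models are deliberately left out.

    # Rough sketch of a strategic facility-location MILP (all data hypothetical):
    # open candidate EU plant sites and route product to customers at minimum cost.
    import pulp

    sites = ["DE", "PL", "ES"]                      # candidate EU plant locations
    customers = ["c1", "c2"]
    fixed_cost = {"DE": 900, "PL": 600, "ES": 700}  # assumed annualised fixed costs
    ship_cost = {("DE", "c1"): 4, ("DE", "c2"): 6,
                 ("PL", "c1"): 5, ("PL", "c2"): 3,
                 ("ES", "c1"): 8, ("ES", "c2"): 7}
    demand = {"c1": 120, "c2": 150}
    capacity = 400

    m = pulp.LpProblem("eu_plant_location", pulp.LpMinimize)
    open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
    flow = pulp.LpVariable.dicts("flow", (sites, customers), lowBound=0)

    # Objective: fixed opening costs plus transportation costs.
    m += (pulp.lpSum(fixed_cost[s] * open_site[s] for s in sites)
          + pulp.lpSum(ship_cost[s, c] * flow[s][c] for s in sites for c in customers))

    for c in customers:
        m += pulp.lpSum(flow[s][c] for s in sites) == demand[c]          # meet demand
    for s in sites:
        m += pulp.lpSum(flow[s][c] for c in customers) <= capacity * open_site[s]

    m.solve()
    print([s for s in sites if open_site[s].value() > 0.5])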
|
56 |
Optimization Models and Algorithms for Workforce Scheduling with Uncertain Demand. Dhaliwal, Gurjot, January 2012
A workforce plan states the number of workers required at any point in time. Efficient workforce plans can help companies achieve their organizational goals while keeping costs low. In an increasingly globalized labour market, companies need a competitive edge over their competitors, and one way to gain it is by lowering costs. Labour costs are among the most significant costs companies face, and efficient workforce plans can provide a competitive edge by finding low-cost options to meet customer demand.
This thesis studies the problem of determining the required number of workers when there are two categories of workers: workers in the first category are trained to work on only one type of task (Specialized Workers), whereas workers in the second category are trained to work on all tasks (Flexible Workers). This thesis makes the following three main contributions.
First, it addresses the problem under both deterministic and stochastic demand. Two different models are proposed for the deterministic demand case. To study the effects of uncertain demand, techniques from Robust Optimization and Robust Mathematical Programming were used.
The thesis also investigates methods to solve large instances of this problem; some of the instances we considered have more than 600,000 variables and constraints. Because most of the variables are integer and the objective function is nonlinear, a commercial solver was not able to solve the problem within one day. We initially tried Lagrangian relaxation and outer approximation techniques, but these approaches were not successful: although effective on small problems, they could not generate a bound within the run-time limit for the large data set. A number of heuristics based on projection techniques were therefore proposed.
Finally, this thesis develops a genetic algorithm to solve large instances of the problem. For the tested population, the genetic algorithm delivered results within 2-3% of the optimal solution.
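A toy two-period version of the specialized/flexible staffing trade-off can be written as a small integer program. The Python/PuLP sketch below uses assumed costs and demands and is only meant to illustrate the structure, not the thesis's multi-period deterministic or robust models.

    # Toy two-period staffing model (assumed data): workers are hired for the whole
    # horizon; specialised workers serve one task, flexible workers are re-allocated
    # each period, which is what makes them worth their higher cost.
    import pulp

    tasks = ["assembly", "packing"]
    periods = [1, 2]
    demand = {("assembly", 1): 14, ("assembly", 2): 6,
              ("packing", 1): 5, ("packing", 2): 12}
    cost_spec = {"assembly": 100, "packing": 90}   # cost per specialised worker
    cost_flex = 130                                # flexible workers cost more

    m = pulp.LpProblem("workforce_mix", pulp.LpMinimize)
    spec = pulp.LpVariable.dicts("spec", tasks, lowBound=0, cat="Integer")
    flex = pulp.LpVariable("flex", lowBound=0, cat="Integer")
    alloc = pulp.LpVariable.dicts("alloc", (tasks, periods), lowBound=0)

    m += pulp.lpSum(cost_spec[t] * spec[t] for t in tasks) + cost_flex * flex
    for p in periods:
        for t in tasks:
            m += spec[t] + alloc[t][p] >= demand[t, p]          # cover demand each period
        m += pulp.lpSum(alloc[t][p] for t in tasks) <= flex     # shared flexible pool

    m.solve()
    print({t: spec[t].value() for t in tasks}, flex.value())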
|
57 |
Machine Learning Methods for Annual Influenza Vaccine Update. Tang, Lin, 26 April 2013
Influenza is a public health problem that causes serious illness and deaths all over the world. Vaccination has been shown to be the most effective means of preventing infection. The primary components of an influenza vaccine are weakened virus strains. Vaccination triggers the immune system to develop antibodies against strains whose viral surface glycoprotein hemagglutinin (HA) is similar to that of the vaccine strains. However, the influenza vaccine must be updated annually since the antigenic structure of HA is constantly mutating.
The hemagglutination inhibition (HI) assay is a laboratory procedure frequently applied to evaluate the antigenic relationships of influenza viruses. It enables the World Health Organization (WHO) to recommend appropriate updates to the strains that will most likely be protective against the circulating influenza strains. However, the HI assay is labour-intensive and time-consuming since it requires several controls for standardization. We use two machine-learning methods, an Artificial Neural Network (ANN) and logistic regression, together with a mixed-integer optimization model, to predict antigenic variants. The ANN generalizes the input data to patterns inherent in the data and then uses these patterns to make predictions. The logistic regression model identifies and selects the amino acid positions that contribute most significantly to antigenic difference; its output is used to predict antigenic variants based on the predicted probability. The mixed-integer optimization model is formulated to find hyperplanes that enable binary classification. The performance of our models is evaluated by cross-validation.
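As a hedged illustration of the logistic-regression component, the sketch below trains a regularized logistic regression on synthetic 0/1 amino-acid-difference features and reads off the positions it retains. The data, the L1 penalty and the label rule are assumptions made for this example, not the thesis's actual data or variable-selection procedure.

    # Hedged sketch: logistic regression on synthetic amino-acid difference features
    # to predict whether a pair of strains is an antigenic variant.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Each row: 0/1 indicators of amino-acid differences at selected HA positions for
    # one strain pair; y = 1 if the pair is antigenically distinct (from HI data).
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 50))           # toy stand-in for real sequence data
    y = (X[:, :5].sum(axis=1) >= 2).astype(int)      # toy label rule, not a biological claim

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    scores = cross_val_score(clf, X, y, cv=5)        # cross-validation, as in the thesis
    print(scores.mean())

    clf.fit(X, y)
    influential = np.flatnonzero(clf.coef_[0] != 0)  # positions retained by the L1 penalty
    print(influential)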
|
58 |
Single-row mixed-integer programs: theory and computations. Fukasawa, Ricardo, 02 July 2008
Single-row mixed-integer programming (MIP) problems have been studied thoroughly from many different perspectives over the years. While not many practical applications can be modeled as a single-row MIP, their importance lies in the fact that they are simple, natural and very useful relaxations of generic MIPs. This thesis focuses on such MIPs and presents theoretical and computational advances in their study.
Chapter 1 presents an introduction to single-row MIPs, a historical overview of results, and a motivation for why we study them. It also contains a brief review of the topics studied in this thesis, as well as our contributions to them.
In Chapter 2, we introduce a generalization of a very important structured single-row MIP: Gomory's master cyclic group polyhedron (MCGP). We show a structural result for the generalization, characterizing all of its facet-defining inequalities. This structural result allows us to develop relationships with the MCGP, extend it to the mixed-integer case, and show how it can be used to generate new valid inequalities for MIPs.
Chapter 3 presents research on an algorithmic view of how to maximally lift continuous and integer variables. Connections to tilting and fractional programming are also presented. Even though lifting is not particular to single-row MIPs, we envision that the techniques presented will generally be applied to easily solvable MIP relaxations such as single-row MIPs; in fact, Chapter 4 uses the lifting algorithm presented here.
Chapter 4 presents an extension to the work of Goycoolea (2006) which attempts to evaluate the effectiveness of Mixed Integer Rounding (MIR) and Gomory mixed-integer (GMI) inequalities. By extending his work, natural benchmarks arise, against which any class of cuts derived from single-row MIPs can be compared.
Finally, Chapter 5 is dedicated to dealing with an important computational problem when developing any computer code for solving MIPs, namely the problem of numerical accuracy. This problem arises due to the intrinsic arithmetic errors in computer floating-point arithmetic. We propose a simple approach to deal with this issue in the context of generating MIR/GMI inequalities.
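For reference, the basic mixed-integer rounding inequality from which MIR and GMI cuts are derived can be stated as follows; this is a standard textbook form, not a result specific to this thesis.

    % For the set {(x, s) : x + s >= b, x integer, s >= 0} with f = b - floor(b) > 0,
    % the basic MIR inequality is
    \[
        x + \frac{s}{f} \;\ge\; \lceil b \rceil .
    \]
    % Applying this to a simplex tableau row, with variables split according to the
    % fractional parts of their coefficients, yields the GMI cut; generating such cuts
    % safely in floating point requires care with how f and the coefficients are rounded.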
|
59 |
Draw Control in Block Caving Using Mixed Integer Linear Programming. David Rahal. Unknown Date
Draw management is a critical part of the successful recovery of mineral reserves by cave mining. This thesis presents a draw control model that indirectly increases resource value by controlling production based on geotechnical constraints. The mixed integer linear programming (MILP) model is formulated as a goal programming model that includes seven general constraint types. These constraints model the mining system and drive the operation towards the dual strategic targets of total monthly production tonnage and cave shape. This approach increases value by ensuring that reserves are not lost due to poor draw practice. The model also allows any number of processing plants to feed from multiple sources (caves, stockpiles, and dumps). The ability to blend material allows the model to be included in strategic-level studies that target corporate objectives while emphasising production control within each cave.
There are three main production control constraints in the MILP. The first of these, the draw maturity rules, is designed to balance drawpoint production with cave propagation rates. The maturity rules are modelled using disjunctive constraints and regulate production based on drawpoint depletion. Drawpoint production increases from 100 mm/d to 404 mm/d once the drawpoint reaches 6.5% depletion. Draw can continue at this maximum rate until drawpoint ramp-down begins at 93.5% depletion, and the maximum draw rate decreases to 100 mm/d at drawpoint closure in the three maturity rule systems included in the thesis. The maturity rule constraints combine with the second constraint, the minimum draw rate constraint, to limit production based on the difference between the actual and ideal drawpoint depletion: drawpoints which lag behind their ideal depletion are restricted by the maturity rules, while those that exceed the ideal depletion are forced to mine at their minimum rate to ensure that cave porosity is maintained. The third production control constraint, relative draw rate (RDR), prohibits isolated draw by ensuring that extraction is uniform across the cave. It does this by controlling the relative draw difference between adjacent drawpoints. It is apparent in this thesis that production from a drawpoint can have an indirect effect on remote drawpoints because the relative draw rate constraints pass from one neighbour to the next within the cave. Tightening the RDR constraint increases production variation during cave ramp-up. This variation occurs because the maturity rules dictate that new drawpoints must produce at a lower draw rate than mature drawpoints. As a result, newly opened drawpoints limit production from the mature drawpoints within their region of the cave (not just their immediate neighbours).
The MILP is also used to quantify production changes caused by varying geotechnical constraints, limiting haulage capacity, and reversing the mining direction. It has been shown that tightening the RDR constraint decreases total cave production; the ramp-up duration also increased by eighteen months compared to the control RDR scenario. Tighter relative draw also made it difficult to maintain cave shape during ramp-up; however, once ramp-up was complete, the tighter control produced a better depletion surface. The trial with limited haulage capacity identified bottlenecks in the materials handling system. The main bottlenecks occur in the production drives with the greatest tonnage associated with their drawpoints. There also appears to be an average haulage capacity threshold for the extraction drives of 2000 tonnes per drawpoint; only one drive with a capacity below this threshold achieves its target production in each period. Reversing the cave advance to initiate in the South-East shows the greatest potential for achieving the total production and cave shape targets. The greater number of drawpoints available early in the schedule provides more production capacity, and this ability to distribute production over a greater number of drawpoints reduces the total production lag during ramp-up.
In addition to its role in feasibility studies, the MILP is well suited for use as a production guidance tool. It has been shown in three case studies that the model can be used to evaluate production performance and to establish long-term production targets. The first of the studies shows the analysis of historical production data by comparison to the MILP-optimised schedule. The second shows that the model produces an optimised production plan irrespective of the current cave state. The final case study emulates the draw control cycle used by the Premier Diamond Mine: the series of optimised production schedules mirrors the life-of-mine schedule generated at the start of the iterative process. The results illustrate how the MILP can be used by a draw control engineer to analyse production data and to develop long-term production targets both before and after a cave is brought into full production.
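As an illustration of how a maturity rule of this type can be written as a disjunctive (big-M) constraint, a simplified version of the ramp-up part is sketched below; the notation is invented for this example and is not the thesis's actual constraint set.

    % z_{p,t} in {0,1} allows drawpoint p to move from the 100 mm/d ramp-up rate
    % to the 404 mm/d maximum only once its cumulative depletion reaches 6.5%:
    \[
        d_{p,t} \;\le\; 100 + (404 - 100)\, z_{p,t}, \qquad
        D_{p,t} \;\ge\; 0.065\, z_{p,t}, \qquad
        z_{p,t} \in \{0, 1\},
    \]
    % where d_{p,t} is the draw rate (mm/d) and D_{p,t} the depleted fraction of
    % drawpoint p at period t; the ramp-down beginning at 93.5% depletion would add
    % a symmetric pair of constraints.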
|
60 |
Gestion robuste de la production électrique à horizon court terme / Robust modelization of short term power generation problem. Ben Salem, Sinda, 11 March 2011
In a competitive electricity market, EDF has adapted its generation management tools to allow optimal management of its portfolio, particularly over the daily and intra-daily horizons, which are the last levers for optimised production management. The closer the optimisation horizon gets to real time, the more the decisions taken at earlier instants become structuring, or even limiting, in terms of available actions. Today these decisions are taken without accounting for the random nature of some of the model inputs. Indeed, for short-term decisions, the fineness and complexity of the models, already present in the deterministic case, have often been an obstacle to work on models that take uncertainty into account. To guard against these hazards, optimisation techniques under uncertainty are the subject of this thesis. We propose a robust unit-commitment model that accounts for uncertainty in power demand. To this end, we construct an uncertainty set that allows a fine description of the randomness in power-demand forecasts; the choice of functional and statistical indicators allows this set to be written as an uncertainty polyhedron. The robust approach takes into account the notion of adjustment cost in the face of uncertainty: the model aims to minimise production costs together with the worst-case costs induced by uncertainty. These adjustment costs can describe different operational contexts, and the robust model is applied to two business contexts with an adjustment cost computed appropriately for each. Finally, to our knowledge, the present research is among the first in the field of optimised short-term electricity generation management that takes uncertainty into account, and its results may open the way to new approaches to the problem. / Robust Optimization is an approach typically offered as a counterpoint to Stochastic Programming for dealing with uncertainty, especially because it does not require precise information on the stochastic distributions of the data. In the present work, we address the challenging unit-commitment problem for French daily electricity production under demand uncertainty. Our contributions concern both uncertainty modelling and an original robust formulation of the unit-commitment problem. We work with a polyhedral set to describe demand uncertainty, constructed using statistical tools and operational indicators. In terms of modelling, we propose robust solutions that minimize production costs and the worst-case adjustment costs incurred once uncertainty is observed. We study robust solutions under two different operational contexts. Encouraging results for convex unit-commitment problems under uncertainty are thus obtained, along with interesting research topics for future work.
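In schematic form, the robust model described above minimises the commitment cost plus the worst-case adjustment cost over the polyhedral demand-uncertainty set; the notation below is generic, chosen for this illustration rather than taken from the thesis.

    \[
        \min_{x \in X} \; c^{\top} x \;+\; \max_{d \in \mathcal{D}} \; \min_{y \in Y(x, d)} q^{\top} y,
    \]
    % x: here-and-now commitment/production plan with cost vector c;
    % \mathcal{D}: polyhedral uncertainty set for the power-demand forecast;
    % y: recourse (adjustment) decisions with cost q, feasible for plan x once demand d is observed.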
|