161 |
Developing Optimization Techniques for Logistical Tendering Using Reverse Combinatorial Auctions. Kiser, Jennifer. 01 August 2018.
In business-to-business logistical sourcing events, companies regularly use a bidding process known as tendering to procure transportation services from third-party providers. Usually in the form of an auction involving a single buyer and one or more sellers, the buyer must decide which suppliers to partner with and how to distribute the transportation lanes and volume among them; this is equivalent to solving the optimization problem commonly referred to as the Winner Determination Problem. To account for the complexities inherent in the procurement problem, such as a supplier's network, economies of scope, and the inclusion of business rules and preferences on behalf of the buyer, we present the development of a mixed-integer linear program that models the reverse combinatorial auction for logistical tenders.
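At its core, the winner determination problem described above is a set-partitioning style MILP. The sketch below is a minimal illustration of that structure, written with the PuLP modeling library; the carriers, lanes, bundle bids, and the single business rule are hypothetical, and the thesis's network and economies-of-scope considerations are not modeled here.

```python
# Minimal winner determination problem (WDP) for a reverse combinatorial
# auction: choose package bids so every lane is covered exactly once at
# least cost. All data are illustrative only.
import pulp

lanes = ["L1", "L2", "L3"]
# Each bid: (carrier, set of lanes covered, all-or-nothing price)
bids = [
    ("CarrierA", {"L1"}, 120),
    ("CarrierA", {"L1", "L2"}, 200),
    ("CarrierB", {"L2", "L3"}, 210),
    ("CarrierC", {"L3"}, 95),
]

prob = pulp.LpProblem("winner_determination", pulp.LpMinimize)
x = [pulp.LpVariable(f"accept_{i}", cat="Binary") for i in range(len(bids))]

# Objective: total procurement cost of the accepted package bids.
prob += pulp.lpSum(price * x[i] for i, (_, _, price) in enumerate(bids))

# Each lane must be awarded through exactly one accepted bid.
for lane in lanes:
    prob += pulp.lpSum(x[i] for i, (_, cover, _) in enumerate(bids) if lane in cover) == 1

# Example business rule: use at most two distinct carriers.
carriers = sorted({b[0] for b in bids})
y = {c: pulp.LpVariable(f"use_{c}", cat="Binary") for c in carriers}
for i, (c, _, _) in enumerate(bids):
    prob += x[i] <= y[c]
prob += pulp.lpSum(y.values()) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([bids[i] for i in range(len(bids)) if x[i].value() == 1])
```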
162 |
Multi-stage Stochastic Capacity Expansion: Models and Algorithms. Taghavi, Majid. 11 1900.
In this dissertation, we study several stochastic capacity expansion models in the presence of permanent, spot-market, and contract capacity acquisition. Using a scenario tree approach to handle the data uncertainty of the problems, we develop multi-stage stochastic integer programming formulations for these models. First, we study multi-period single-resource stochastic capacity expansion problems, where different sources of capacity are available to the decision maker. We develop efficient algorithms that can solve these models to optimality in polynomial time. Second, we study multi-period stochastic network capacity expansion problems with different sources of capacity. The proposed models are NP-hard multi-stage stochastic integer programs, and we develop an efficient, asymptotically convergent approximation algorithm to solve them. Third, we consider decomposition algorithms to solve the proposed multi-stage stochastic network capacity expansion problem. We propose an enhanced Benders' decomposition algorithm to solve the problem, and a Benders' decomposition-based heuristic algorithm to find tight bounds for it. Finally, we extend the stochastic network capacity expansion model by imposing a budget restriction on the permanent capacity acquisition cost. We design a Lagrangian relaxation algorithm to solve the model, including heuristic methods to find tight upper bounds for it. / Thesis / Doctor of Philosophy (PhD)
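As a hedged illustration of the scenario-tree modeling style referred to above (the notation is ours, not the thesis's), a single-resource version with only permanent and spot capacity can be written over the node set $\mathcal{N}$ of the tree, where $p_n$ is the probability of node $n$, $P(n)$ the set of nodes on the path from the root to and including $n$, $d_n$ the demand at $n$, and $a_n$, $b_n$ the unit costs of permanent and spot capacity:

```latex
\begin{align*}
\min\ & \sum_{n \in \mathcal{N}} p_n \left( a_n x_n + b_n s_n \right) \\
\text{s.t.}\ & \sum_{m \in P(n)} x_m + s_n \ge d_n && \forall n \in \mathcal{N} \\
& x_n \in \mathbb{Z}_+,\ s_n \ge 0 && \forall n \in \mathcal{N}
\end{align*}
```

Permanent capacity acquired at an ancestor node remains available along the path (hence the sum over $P(n)$), while spot capacity is bought for a single period; contract capacity would add a third, intermediate-duration term in the same node-based fashion.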
163 |
Operation of Networked Microgrids in the Electrical Distribution System. Zhang, Fan. 13 September 2016.
No description available.
164 |
Dynamic Probabilistic Lot-Sizing with Service Level Constraints. Goel, Saumya. 27 July 2011.
No description available.
165 |
Novel Stochastic Programming Formulations for Assemble-to-Order Systems. Liang, Hongfeng. January 2017.
We study a periodic review assemble-to-order (ATO) system introduced by Akcay and Xu (2004), which jointly optimizes the base stock levels and the component allocation under an independent base stock policy and a first-come-first-served allocation rule. The formulation is non-smooth and thus theoretically and computationally challenging. In their computational experiments, Akcay and Xu (2004) modified the right-hand side of the inventory availability constraints by substituting linear functions for piecewise-linear ones. This modification may have a significant impact at low budget levels. The optimal solutions obtained via the original formulation, i.e., the formulation without the modification, include zero base stock levels for some components and thus indicate a bias against component commonality. We study the impact of component commonality on periodic review ATO systems. We show that lowering component commonality may yield a higher type-II service level. The lower degree of component commonality is achieved by separating inventories of the same component for different products. We substantiate this property via computational and theoretical approaches. We show that for low budget levels the use of separate inventories of the same component for different products can achieve a higher reward than shared inventories. Finally, considering a simple ATO system with one component shared by two products, we characterize the budget ranges over which either separate inventories or a shared component inventory (i.e., component commonality) is beneficial. / Thesis / Doctor of Philosophy (PhD)
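The mechanism behind this commonality result can be illustrated with a small Monte Carlo sketch (our own toy, not the thesis's experiments): two products each require one product-specific component and one unit of a common component; under a first-come-first-served commitment rule, available components are claimed by arriving orders even when the order cannot be completed, so a shared common stock can be wasted on orders that are missing their specific component, whereas dedicated stocks are not. All stock levels, demand rates, and the commitment rule itself are simplifying assumptions.

```python
# Toy comparison of shared vs. separate common-component inventory for a
# two-product ATO system under FCFS commitment. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
reps = 20_000
spec_stock = 2        # per-period base stock of each product-specific component
common_total = 4      # per-period budget of the common component C

def simulate(separate_c: bool) -> float:
    served = demand = 0
    for _ in range(reps):
        d1, d2 = rng.poisson(3.0, size=2)
        demand += d1 + d2
        # Random arrival sequence of orders (1 = product 1, 2 = product 2).
        orders = rng.permutation(np.array([1] * d1 + [2] * d2, dtype=int))
        stock = {"A": spec_stock, "B": spec_stock}
        if separate_c:
            stock["C1"] = common_total // 2
            stock["C2"] = common_total - common_total // 2
        else:
            stock["C"] = common_total
        for o in orders:
            if separate_c:
                needs = ["A", "C1"] if o == 1 else ["B", "C2"]
            else:
                needs = ["A", "C"] if o == 1 else ["B", "C"]
            # FCFS commitment: claim whatever required components are
            # available; the order counts as served only if it got all of them.
            got = [stock[k] > 0 for k in needs]
            for k, g in zip(needs, got):
                stock[k] -= int(g)
            served += int(all(got))
    return served / demand

print("type-II service, shared common stock  :", round(simulate(False), 3))
print("type-II service, separate common stock:", round(simulate(True), 3))
```

With these deliberately tight budgets, the separate-stock configuration typically reports the higher type-II service level, in line with the low-budget behavior described above.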
166 |
Quantitative Decision Models for Humanitarian Logistics. Falasca, Mauro. 21 September 2009.
Humanitarian relief and aid organizations all over the world implement efforts aimed at recovering from disasters, reducing poverty, and promoting human rights. The purpose of this dissertation is to develop a series of quantitative decision models to help address some of the challenges faced in humanitarian logistics.
The first study discusses the development of a spreadsheet-based multicriteria scheduling model for a small development aid organization in a South American developing country. Development aid organizations plan and execute efforts that are primarily directed towards promoting human welfare. Because these organizations rely heavily on the use of volunteers to carry out their social mission, it is important that they manage their volunteer workforce efficiently. In this study, we demonstrate not only how the proposed model helps to reduce the number of unfilled shifts and to decrease total scheduling costs, but also how it helps to better satisfy the volunteers’ scheduling preferences, thus supporting long-term retention and effectiveness of the workforce.
The purpose of the second study is to develop a decision model to assist in the management of humanitarian relief volunteers. One of the challenges faced by humanitarian organizations is that few decision technologies fit their needs, and it has also been pointed out that those organizations experience coordination difficulties with the volunteers willing to help. Even though employee workforce management models have been the topic of extensive research over the past decades, no work has focused on the problem of managing humanitarian relief volunteers. In this study, we discuss a series of principles from the field of volunteer management and develop a multicriteria optimization model to assist in the assignment of both individual volunteers and volunteer groups to tasks. We present illustrative examples and analyze two complementary solution methodologies that incorporate the decision maker's preferences and knowledge and allow him/her to trade off conflicting objectives.
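A minimal sketch of the multicriteria flavor of such an assignment model is given below, again using PuLP with invented volunteers, tasks, preference data, and weights; the actual model in the study handles volunteer groups, skills, and richer preference structures that are not represented here.

```python
# Weighted-sum sketch of a multicriteria volunteer-to-task assignment:
# criterion 1 penalizes uncovered task requirements, criterion 2 penalizes
# assignments a volunteer dislikes. Data and weights are illustrative only.
import pulp

volunteers = ["v1", "v2", "v3"]
tasks = {"shelter": 2, "supplies": 1}               # required headcount per task
dislikes = {("v1", "supplies"), ("v3", "shelter")}  # preference violations
w_uncovered, w_dislike = 10, 1                      # decision-maker weights

prob = pulp.LpProblem("volunteer_assignment", pulp.LpMinimize)
x = {(v, t): pulp.LpVariable(f"x_{v}_{t}", cat="Binary")
     for v in volunteers for t in tasks}
short = {t: pulp.LpVariable(f"short_{t}", lowBound=0) for t in tasks}

# Each volunteer takes at most one task.
for v in volunteers:
    prob += pulp.lpSum(x[v, t] for t in tasks) <= 1
# Task coverage with a shortfall variable for unmet requirements.
for t, req in tasks.items():
    prob += pulp.lpSum(x[v, t] for v in volunteers) + short[t] >= req

prob += (w_uncovered * pulp.lpSum(short.values())
         + w_dislike * pulp.lpSum(x[v, t] for (v, t) in dislikes))
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: v.value() for k, v in x.items() if v.value() == 1})
```

The weighted-sum form is only one way to trade off the criteria; interactive or goal-programming variants would reuse the same binaries and shortfall variables.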
The third study discusses the development of a decision model for the procurement of goods in humanitarian efforts. Despite the prevalence of procurement expenditures in humanitarian efforts, procurement in humanitarian contexts has only been discussed in a qualitative manner in the literature. In our paper, we introduce a two-stage decision model with recourse to improve the procurement of goods in humanitarian relief supply chains and present an illustrative example. Conclusions, limitations, and directions for future research are also discussed. / Ph. D.
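For intuition, a deterministic-equivalent sketch of a two-stage procurement model with recourse is shown below; the scenario set, costs, and the single-commodity structure are illustrative assumptions, not the formulation from the paper.

```python
# Two-stage procurement with recourse (deterministic equivalent over a small
# scenario set): pre-position relief goods now at a low unit cost, and buy
# any shortfall after demand is revealed at a higher emergency cost.
# Costs, demands, and probabilities are illustrative only.
import pulp

pre_cost, emergency_cost = 4.0, 9.0
scenarios = {"mild": (0.5, 100), "moderate": (0.3, 180), "severe": (0.2, 300)}

prob = pulp.LpProblem("relief_procurement", pulp.LpMinimize)
q = pulp.LpVariable("prepositioned", lowBound=0)                 # first stage
y = {s: pulp.LpVariable(f"emergency_{s}", lowBound=0) for s in scenarios}

# Demand must be met in every scenario (recourse purchase covers the gap).
for s, (_, demand) in scenarios.items():
    prob += q + y[s] >= demand

prob += pre_cost * q + pulp.lpSum(p * emergency_cost * y[s]
                                  for s, (p, _) in scenarios.items())
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("pre-position:", q.value(), {s: y[s].value() for s in scenarios})
```

The first-stage quantity hedges across scenarios, while the scenario-indexed recourse variables absorb whatever demand remains once uncertainty is revealed.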
167 |
Analysis of the Benefits of Resource Flexibility, Considering Different Flexibility Structures. Hong, Seong-Jong. 28 May 2004.
We study the benefits of resource flexibility, considering two different flexibility structures. First, we want to understand the impact of the firm's pricing strategy on its resource investment decision, considering a partially flexible resource. Second, we study the benefits of a strategic approach to resource flexibility, considering a flexibility structure that has not been studied in the previous literature.
First, we study the capacity investment decision faced by a firm that offers two products/services and that is a price-setter for both products/services. The products offered by the firm are of varying levels (complexities), such that the resources that can be used to produce the higher level product can also be used to produce the lower level one. Although the firm needs to make its capacity investment decision under high demand uncertainty, it can utilize this limited (downward) resource flexibility, in addition to pricing, to more effectively match its supply with demand. Sample applications include a service company, whose technicians are of different capabilities, such that a higher level technician can perform all tasks performed by a lower level technician; a firm that owns a main plant, satisfying both end-product and intermediate-product demand, and a subsidiary, satisfying the intermediate-product demand only. We formulate this decision problem as a two-stage stochastic programming problem with recourse, and characterize the structural properties of the firm's optimal resource investment strategy when resource flexibility and pricing flexibility are considered in the investment decision.
We show that the firm's optimal resource investment strategy follows a threshold policy. This structure allows us to understand the impact of coordinated decision-making, when the resource flexibility is taken into account in the investment decision, on the firm's optimal investment strategy, and establish the conditions under which the firm invests in the flexible resource. We also study the impact of demand correlation on the firm's optimal resource investment strategy, and show that it may be optimal for the firm to invest in both flexible and dedicated resources when product demand patterns are perfectly positively correlated. Our results offer managerial principles and insights on the firm's optimal resource investment strategy as well as extend the newsvendor problem with pricing, by allowing for multiple resources (suppliers), multiple products, and resource pooling.
Secondly, we study the benefits of a delayed decision making strategy under demand uncertainty, considering a system that satisfies two demand streams with two capacitated and flexible resources. Resource flexibility allows the firm to delay its resource allocation decision to a time when partial information on demands is obtained and demand uncertainty is reduced. We characterize the structure of the firm's optimal delayed resource allocation strategy. This characterization allows us to study how the revenue benefits of the delayed resource allocation strategy depend on demand and capacity parameters, and the length of the selling season. Our study shows that the revenue benefits of this strategy can be significant, especially when demand rates of the different types are close, while resource capacities are much different. Based on our analysis, we provide guidelines on the utilization of such strategies.
Finally, we incorporate the uncertainty in demand parameters into our models and study the effectiveness of several delayed capacity allocation mechanisms that utilize the resource flexibility. In particular, we consider that demand forecasts are uncertain at the start of the selling season and are updated using a Bayesian framework as early demand figures are observed. We propose several heuristic capacity allocation policies that are easy to implement, as well as a heuristic procedure that relies on a stochastic dynamic programming formulation, and we perform a numerical study. Our study determines the conditions under which each policy is effective. / Ph. D.
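The Bayesian updating step can be sketched compactly; the Gamma-Poisson conjugate pair below is our own illustrative choice for how early demand observations might revise the forecast that feeds the delayed allocation decision.

```python
# Sketch of a Bayesian forecast update before re-allocating capacity: a Gamma
# prior on the uncertain Poisson demand rate is updated with demand observed
# early in the season. All numbers are illustrative.
import numpy as np

alpha, beta = 6.0, 2.0                 # Gamma prior: mean rate = alpha/beta = 3/day
early_demand = np.array([5, 7, 6])     # observed daily demand so far

# Gamma-Poisson conjugacy: posterior parameters after n observations.
alpha_post = alpha + early_demand.sum()
beta_post = beta + len(early_demand)

prior_mean = alpha / beta
post_mean = alpha_post / beta_post
print(f"prior mean rate {prior_mean:.2f} -> posterior mean rate {post_mean:.2f}")

# The updated rate feeds the delayed allocation decision, e.g. expected
# remaining-season demand over the final `days_left` days.
days_left = 20
print("expected remaining demand:", days_left * post_mean)
```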
168 |
Optimizing loblolly pine management with stochastic dynamic programming. Häring, Thomas W. 02 October 2007.
This study examines the effects of unpredictable price fluctuations and possible catastrophic losses on the optimal site preparation intensity of unthinned loblolly pine plantations under the assumption of risk aversion. It concentrates exclusively on financial motives and does not take non-market values and portfolio considerations into account. The results should be interpreted with these limitations in mind.
Two approaches are taken to compare site preparation intensities: a quasideterministic approach, where expected cash flows are discounted with risk-adjusted discount rates, and a stochastic approach, where probability functions of cash flows are used to maximize expected utility from net present values. The stochastic approach is further divided into non-adaptive scenarios and adaptive scenarios, where the investor can gather additional price information during the life of a stand to optimize the harvest decision. The adaptive management problem is solved with stochastic dynamic programming. For each possible harvest age, an optimal reservation price below which the forest landowner should not sell the stumpage is calculated.
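A stripped-down version of the reservation-price calculation is sketched below: backward induction over candidate harvest ages with an i.i.d. discretized price distribution, a constant discount rate, and a risk-neutral owner. These simplifications, and all numbers, are ours; the study itself maximizes expected utility for a risk-averse owner.

```python
# Backward-induction sketch of reservation stumpage prices: at each stand age
# the owner harvests if the offered price exceeds the discounted,
# catastrophe-adjusted value of waiting. Numbers are illustrative only.
import numpy as np

ages = np.arange(20, 41)                  # candidate harvest ages (years)
volume = 2.0 + 0.25 * (ages - 20)         # merchantable volume index by age
prices = np.array([20.0, 30.0, 40.0])     # stumpage price levels ($/unit)
p_price = np.array([0.3, 0.4, 0.3])       # i.i.d. price probabilities
delta = 1 / 1.05                          # yearly discount factor
p_loss = 0.01                             # yearly catastrophic-loss probability

reservation = np.zeros(len(ages))
continuation = 0.0                        # value of waiting past the last age
for i in range(len(ages) - 1, -1, -1):    # backward over ages
    # Harvest now at price p yields p * volume[i]; waiting yields
    # `continuation` (already discounted to age i).
    reservation[i] = continuation / volume[i]
    value_now = np.maximum(prices * volume[i], continuation)
    expected_value = p_price @ value_now  # expectation over this year's price
    # Value of the standing timber one year earlier: survive and discount.
    continuation = delta * (1 - p_loss) * expected_value

for age, r in zip(ages, reservation):
    print(f"age {age}: reservation price {r:.1f}")
```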
The study shows that the use of a single risk-adjusted discount rate is generally inadequate to compare different management intensities. The stochastic approaches reveal that the optimal management intensity depends on the degree of risk aversion, with increasing risk aversion leading to a lower intensity level. Given the possibility of catastrophic losses, the adoption of a feedback harvesting policy strengthens the already dominant influence of risk aversion and does not generally lead to an increase in management intensity.
The study's results suggest that even if the landowner is managing the forest solely for financial reasons, some of the reluctance to invest in intensive forestry may not indicate a lack of interest or information but simply an economic reaction to risk, especially in regions with a high potential danger of catastrophic losses. / Ph. D.
169 |
Comparative Statics Analysis of Some Operations Management Problems. Zeng, Xin. 19 September 2012.
We propose a novel analytic approach for the comparative statics analysis of two operations management problems: the capacity investment decision and the influenza (flu) vaccine composition decision. Our approach involves exploiting the properties of the underlying mathematical models and linking those properties to the concept of stochastic order relationships. The use of stochastic orders allows us to establish our main results without restriction to a specific distribution. A major strength of our approach is that it is "scalable," i.e., it applies to the capacity investment decision problem with any number of non-independent (i.e., demand- or resource-sharing) products and resources, and to the influenza vaccine composition problem with any number of candidate strains, without a corresponding increase in computational effort. This is unlike the current approaches commonly used in the operations management literature, which typically involve a parametric analysis followed by the use of the implicit function theorem. Providing a rigorous framework for comparative statics analysis, which can be applied to other problems that are not amenable to traditional parametric analysis, is our main contribution.
We demonstrate this approach on two problems: (1) Capacity investment decision, and (2) influenza vaccine composition decision. A comparative statics analysis is integral to the study of these problems, as it allows answers to important questions such as, "does the firm acquire more or less of the different resources available as demand uncertainty increases? does the firm benefit from an increase in demand uncertainty? how does the vaccine composition change as the yield uncertainty increases?" Using our proposed approach, we establish comparative statics results on how the newsvendor's expected profit and optimal capacity decision change with demand risk and demand dependence in multi-product multi-resource newsvendor networks; and how the societal vaccination benefit, the manufacturer's profit, and the vaccine output change with the risk of random yield of strains. / Ph. D.
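The first of these questions can be probed numerically with a single-product newsvendor sketch: demand risk is increased through a mean-preserving spread (normal demand with growing standard deviation), and the optimal capacity and expected profit are recomputed. The distributional choice and parameters are illustrative only; the dissertation's stochastic-order results do not rely on normality.

```python
# Numerical illustration of a comparative-statics question from the abstract:
# how do the newsvendor's optimal capacity and expected profit move as demand
# risk increases? Single product for simplicity; parameters are illustrative.
import numpy as np
from scipy import stats

price, cost = 10.0, 6.0
critical_ratio = (price - cost) / price          # 0.4
mu = 100.0

for sigma in (5.0, 15.0, 30.0):
    demand = stats.norm(mu, sigma)
    q_star = demand.ppf(critical_ratio)          # optimal capacity / order size
    # Expected profit via Monte Carlo at the optimal capacity.
    d = demand.rvs(size=200_000, random_state=0)
    profit = price * np.minimum(d, q_star) - cost * q_star
    print(f"sigma={sigma:5.1f}  q*={q_star:6.1f}  E[profit]={profit.mean():7.1f}")
```

With the thin margin used here, the firm acquires less capacity and earns less as uncertainty grows; with a critical ratio above one half the capacity effect reverses, which is exactly the kind of distinction the stochastic-order analysis formalizes.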
170 |
Stochastic Scheduling for a Network of MEMS Job Shops. Varadarajan, Amrusha. 31 January 2007.
This work is motivated by the pressing need for operational control in the fabrication of microelectromechanical systems, or MEMS. MEMS are miniature three-dimensional integrated electromechanical systems with the ability to absorb information from the environment, process this information and suitably react to it. These devices offer tremendous advantages owing to their small size, low power consumption, low mass and high functionality, which make them very attractive in applications with stringent demands on weight, functionality and cost. While the system's "brain" (device electronics) is fabricated using traditional IC technology, the micromechanical components necessitate very intricate and sophisticated processing of silicon or other suitable substrates. A dearth of fabrication facilities with micromachining capabilities and a lengthy gestation period from design to mass fabrication and commercial acceptance of the product in the market are factors most often implicated in hampering the growth of MEMS. These devices are highly application-specific with low production volumes, and the few fabs that do possess micromachining capabilities are unable to offer a complete array of fabrication processes to cater to the needs of the MEMS R&D community. A distributed fabrication network has, therefore, emerged to serve the evolving needs of this high-investment, low-volume MEMS industry. Under this environment, a central facility coordinates between a network of fabrication centers (Network of MEMS job shops -- NMJS) containing micromachining capabilities. These fabrication centers include commercial, academic and government fabs, which make their services available to the ordinary customer. Wafers are shipped from one facility to another until all processing requirements are met. The lengthy and intricate process sequences that need to be performed over a network of capital-intensive facilities are complicated by dynamic job arrivals, stochastic processing times, sequence-dependent setups and travel between fabs. Unless the production of these novel devices is carefully optimized, the benefits of distributed fabrication could be completely overshadowed by lengthy lead times, chaotic routings and costly processing. Our goal, therefore, is to develop and validate an approach for optimal routing (assignment) and sequencing of MEMS devices in a network of stochastic job shops with the objective of minimizing the sum of completion times and the cost incurred, given a set of fabs, machines and an expected product mix.
In view of our goal, we begin by modeling the stochastic NMJS problem as a two-stage stochastic program with recourse, where the first-stage variables are binary and the second-stage variables are continuous. The key decision variables are binary and pertain to the assignment of jobs to machines and their sequencing for processing on the machines. The assignment variables essentially fix the route of a job as it travels through the network because these variables specify the machine on which each job-operation must be performed out of several candidate machines. Once the assignment is decided upon, sequencing of job-operations on each machine follows. The assignment and sequencing must be such that they offer the best solution (in terms of the objective) possible in light of all the processing time scenarios that can be realized. We present two approaches for solving the stochastic NMJS problem. The first approach is based on the L-shaped method (credited to van Slyke and Wets, 1969). Since the NMJS problem lacks relatively complete recourse, the first-stage solution can be infeasible to the second-stage problem in that it may either violate the reentrant flow conditions or create a deadlock. In order to alleviate these infeasibilities, we develop feasibility cuts which, when appended to the master problem, eliminate the infeasible solution. Alternatively, we also develop constraints to explicitly address these infeasibilities directly within the master problem. We show how a deadlock involving 2 or 3 machines arises if and only if a certain relationship between operations and a certain sequence amongst them exists. We generalize this argument to the case of m machines, which forms the basis for our deadlock prevention constraints. Computational results at the end of Chapter 3 compare the relative merits of a model which relies solely on feasibility cuts with models that incorporate reentrant flow and deadlock prevention constraints within the master problem. Experimental evidence reveals that the latter offers appreciable time savings over the former. Moreover, in a majority of instances we see that models that carry deadlock prevention constraints in addition to the reentrant flow constraints provide on-par or better performance than those that solely carry reentrant flow constraints.
Next, we develop an optimality cut which, when appended to the master problem, helps eliminate the suboptimal master solution. We also present alternative optimality and feasibility cuts obtained by modifying the disjunctive constraints in the subproblem so as to eliminate the big H terms in it. Although any large positive number can be used as the value of H, a conservative estimate may improve computational performance. In light of this, we develop a conservative upper bound for operation completion times and use it as the value of H. Test instances have been generated using a problem generator written in Java. We present computational results to evaluate the impact of a conservative estimate for big H on run time, analyze the effect of the different optimality cuts, and demonstrate the performance of the multicut method (Wets, 1981), which differs from the L-shaped method in that the number of optimality cuts it appends is equal to the number of scenarios in each iteration. Experimentation indicates that Model 2, which uses the standard optimality cut in conjunction with the conservative estimate for big H, almost always outperforms Model 1, which also uses the standard optimality cut but uses a fixed value of 1000 for big H. Model 3, which employs the alternative optimality cut with the conservative estimate for big H, requires the fewest iterations to converge to the optimum but also incurs the largest premium in computational time. This is because the alternative optimality cut adds to the complexity of the problem in that it appends additional variables and constraints to the master as well as the subproblems. In the case of Model 4 (multicut method), the segregated optimality cuts accurately reflect the shape of the recourse function, resulting in fewer overall iterations, but the large number of these cuts accumulates over the iterations, making the master problem sluggish, and so this model exhibits variable performance across the datasets. These experiments reveal that a compact master problem and a conservative estimate for big H positively impact the run time performance of a model. Finally, we develop a framework for a branch-and-bound scheme within which the L-shaped method, as applied to the NMJS problem, can be incorporated so as to further enhance its performance.
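To make the cut-generation loop concrete, the toy below runs the L-shaped method on a two-stage problem with simple recourse, where the scenario subproblems have closed-form duals; it is only a structural sketch (relatively complete recourse holds, so no feasibility cuts arise) and does not attempt the NMJS assignment, sequencing, reentrant-flow, or deadlock machinery.

```python
# Toy L-shaped (Benders) loop on a two-stage problem with simple recourse:
# choose capacity x now at unit cost c; in each scenario s, any shortfall
# below demand d_s is covered at unit cost q. Data are illustrative only.
import pulp

c, q = 3.0, 8.0
scenarios = [(0.3, 50.0), (0.5, 100.0), (0.2, 160.0)]   # (probability, demand)

master = pulp.LpProblem("master", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
theta = pulp.LpVariable("theta", lowBound=0)   # recourse cost is nonnegative
master += c * x + theta

for it in range(50):
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    x_hat, theta_hat = x.value(), theta.value()

    # Scenario subproblems: Q_s(x) = min{ q*y : y >= d_s - x, y >= 0 }.
    # The dual multiplier is q if d_s > x and 0 otherwise, giving both the
    # recourse value and a subgradient for the optimality cut.
    expected_q, cut_expr = 0.0, 0.0
    for p, d in scenarios:
        pi = q if d > x_hat else 0.0
        expected_q += p * q * max(d - x_hat, 0.0)
        cut_expr += p * pi * (d - x)           # affine in the master variable x

    if theta_hat >= expected_q - 1e-6:         # master approximation is exact
        break
    master += theta >= cut_expr                # append the optimality cut

print("iterations:", it + 1, "capacity:", x_hat, "total cost:", c * x_hat + expected_q)
```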
Our second approach for solving the stochastic NMJS problem relies on the tight LP relaxation observed for the deterministic equivalent of the model. We, first, solve the LP relaxation of the deterministic equivalent problem, and then, fix certain binary assignment variables that take on a value of either a 0 or a 1 in the relaxation. Based on this fixing of certain assignment variables, additional logical constraints have been developed that lead to the fixing of some of the sequencing variables too. Experimental results, comparing the performance of the above LP heuristic procedure with CPLEX over the generated test instances, illustrate the effectiveness of the heuristic procedure. For the largest problems (5 jobs, 10 operations/job, 12 machines, 7 workcenters, 7 scenarios) solved in this experiment, an average savings of as much as 4154 seconds and 1188 seconds was recorded in a comparison with Models 1 and 2, respectively. Both of these models solve the deterministic equivalent of the stochastic NMJS problem but differ in that Model 1 uses a big H value of 1000 whereas Model 2 uses the conservative upper bound for big H developed in this work. The maximum optimality gap observed for the LP heuristic over all the data instances solved was 1.35%. The LP heuristic, therefore, offers a powerful alternative to solving these problems to near-optimality with a very low computational burden. We also present results pertaining to the value of the stochastic solution for various data instances. The observed savings of up to 8.8% over the mean value approach underscores the importance of using a solution that is robust over all scenarios versus a solution that approximates the randomness through expected values.
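The variable-fixing idea behind the LP heuristic can be sketched generically: solve the LP relaxation, freeze any binary that is already integral, and re-solve the reduced mixed-integer model. The toy assignment model below is our own stand-in for the NMJS formulation and omits the logical sequencing-variable fixings described above.

```python
# Generic sketch of an LP-relaxation fixing heuristic: relax, fix binaries
# that come out at 0 or 1, then re-solve the reduced MIP. Toy data only.
import pulp

def build(cat):
    prob = pulp.LpProblem("toy", pulp.LpMinimize)
    x = {(j, m): pulp.LpVariable(f"x_{j}_{m}", lowBound=0, upBound=1, cat=cat)
         for j in range(4) for m in range(2)}
    cost = {(j, m): 1 + ((j + m) % 3) for j, m in x}
    prob += pulp.lpSum(cost[k] * x[k] for k in x)
    for j in range(4):                        # every job assigned to one machine
        prob += pulp.lpSum(x[j, m] for m in range(2)) == 1
    for m in range(2):                        # machine capacity: at most 2 jobs
        prob += pulp.lpSum(x[j, m] for j in range(4)) <= 2
    return prob, x

relaxed, x_lp = build("Continuous")
relaxed.solve(pulp.PULP_CBC_CMD(msg=False))

mip, x_mip = build("Binary")
for key, var in x_lp.items():                 # fix variables integral in the LP
    if var.value() is not None and abs(var.value() - round(var.value())) < 1e-6:
        x_mip[key].lowBound = x_mip[key].upBound = round(var.value())
mip.solve(pulp.PULP_CBC_CMD(msg=False))
print("heuristic objective:", pulp.value(mip.objective))
```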
We, next, present a dynamic stochastic scheduling approach (DSSP) for the NMJS problem. The premise behind this undertaking is that in a real-life implementation that is faithful to the two-stage procedure, assignment (routing) and sequencing decisions will be made for all the operations of all the jobs at the outset and these will be followed through regardless of the actual processing times realized for individual operations. However, it may be possible to refine this procedure if information on actual processing time realizations for completed operations could be utilized so that assignment and sequencing decisions for impending operations are adjusted based on the evolving scenario (which may be very different from the scenarios modeled) while still hedging against future uncertainty. In the DSSP approach, the stochastic programming model for the NMJS problem is solved at each decision point using the LP heuristic in a rolling horizon fashion while incorporating constraints that model existing conditions in the shop floor and the actual processing times realized for the operations that have been completed.
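In the same spirit, the fragment below sketches a drastically simplified dynamic stochastic scheduling loop: a single machine, a sum-of-completion-times objective, and a handful of joint processing-time scenarios whose weights are revised from realized durations before each new sequencing decision. The scenario data, the likelihood-based reweighting, and the shortest-expected-processing-time rule are all our own illustrative choices rather than the DSSP algorithm itself.

```python
# Minimal rolling-horizon scheduling sketch: re-weight processing-time
# scenarios after each completed job, then pick the next job by its
# expected processing time under the updated weights. Toy data only.
import numpy as np

rng = np.random.default_rng(3)
# scenarios[s][j] = processing time of job j under scenario s
scenarios = np.array([[3.0, 7.0, 4.0, 9.0],
                      [6.0, 2.0, 8.0, 3.0],
                      [4.0, 5.0, 5.0, 6.0]])
prior = np.array([0.3, 0.4, 0.3])
true_s = rng.choice(len(prior), p=prior)     # scenario that nature actually draws

weights = prior.copy()
remaining = list(range(scenarios.shape[1]))
clock, total_completion = 0.0, 0.0
while remaining:
    # Decision point: choose the remaining job with the smallest expected
    # processing time under the current scenario weights (SEPT rule).
    expected = weights @ scenarios[:, remaining]
    job = remaining[int(np.argmin(expected))]
    remaining.remove(job)

    # The job is processed and its actual duration is revealed (noisy).
    actual = scenarios[true_s, job] + rng.normal(0.0, 0.3)
    clock += actual
    total_completion += clock

    # Re-weight scenarios by how well they explain the observation
    # (Gaussian likelihood around each scenario's nominal time).
    like = np.exp(-0.5 * ((actual - scenarios[:, job]) / 0.3) ** 2)
    weights = weights * like
    weights /= weights.sum()

print("realized sum of completion times:", round(total_completion, 2))
print("posterior scenario weights:", np.round(weights, 3))
```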
The implementation of the DSSP algorithm is illustrated through an example problem. The results of the DSSP approach as applied to two large problem instances are presented. The performance of the DSSP approach is evaluated on three fronts: first, by using the LP heuristic at each decision point; second, by using an optimal algorithm at each decision point; and third, against the two-stage stochastic programming approach. Results from the experimentation indicate that the DSSP approach using the LP heuristic at each decision point generates assignment and sequencing decisions that are superior to those of the two-stage stochastic programming approach and provides solutions that are near-optimal with a very low computational burden. For the first instance involving 40 operations, 12 machines and 3 processing time scenarios, the DSSP approach using the LP heuristic yields the same solution as the optimal algorithm with a total time savings of 71.4% and also improves upon the two-stage stochastic programming solution by 1.7%. In the second instance, the DSSP approach using the LP heuristic yields a solution with an optimality gap of 1.77% and a total time savings of 98% over the optimal algorithm. In this case, the DSSP approach with the LP heuristic improves upon the two-stage stochastic programming solution by 6.38%. We conclude by presenting a framework for the DSSP approach that extends the basic DSSP algorithm to accommodate jobs whose arrival times may not be known in advance. / Ph. D.