1

DISTRIBUTION SYSTEM OPTIMIZATION WITH INTEGRATED DISTRIBUTED GENERATION

Ibrahim, Sarmad Khaleel 01 January 2018 (has links)
In this dissertation, several volt-var optimization methods have been proposed to improve the expected performance of the distribution system using distributed renewable energy sources and conventional volt-var control equipment: photovoltaic inverter reactive power control for chance-constrained distribution system performance optimization, integrated distribution system optimization using a chance-constrained formulation, integrated control of distribution system equipment and distributed generation inverters, and coordination of PV inverters and voltage regulators considering generation correlation and voltage quality constraints for loss minimization.

Distributed generation sources (DGs) have important benefits, including the use of renewable resources, increased customer participation, and decreased losses. However, as the penetration level of DGs increases, so do the technical challenges of integrating these resources into the power system. One such challenge is the rapid variation of voltages along distribution feeders in response to DG output fluctuations; traditional volt-var control equipment and inverter-based DG can be used to address this challenge. The proposed methods aim to achieve an optimal expected performance with respect to the figure of merit of interest to the distribution system operator while maintaining appropriate system voltage magnitudes and considering the uncertainty of DG power injections.

The first method optimizes only the reactive power output of DGs to improve system performance (e.g., operating profit) and compensate for variations in active power injection while maintaining appropriate system voltage magnitudes and considering the uncertainty of DG power injections over the interval of interest. The second method proposes an integrated volt-var control based on a control action computed ahead of time, finding the optimal voltage regulator tap settings and inverter reactive control parameters to improve the expected system performance (e.g., operating profit) while keeping voltages across the system within specified ranges and considering the uncertainty of DG power injections over the interval of interest. In the third method, an integrated control strategy is formulated for the coordinated control of both distribution system equipment and inverter-based DG. This strategy combines the use of inverter reactive power capability with the operation of voltage regulators to improve the expected value of the desired figure of merit (e.g., system losses) while maintaining appropriate system voltage magnitudes. The fourth method proposes a coordinated control strategy for voltage and reactive power control equipment to improve the expected system performance (e.g., system losses and voltage profiles) while considering the spatial correlation among the DGs and keeping voltage magnitudes within permissible limits, by formulating chance constraints on the voltage magnitude and considering the uncertainty of PV power injections over the interval of interest.

The proposed methods require infrequent communication with the distribution system operator and base their decisions on short-term forecasts (the first and second methods) or long-term forecasts (the third and fourth methods). They achieve the best set of control actions for all voltage and reactive power control equipment to improve the expected value of the figure of merit proposed in this dissertation without violating any of the operating constraints.
The proposed methods are validated using the IEEE 123-node radial distribution test feeder.
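As a rough illustration of the common thread in these four methods, each imposes a voltage-magnitude chance constraint of the following generic form while optimizing the expected figure of merit (the notation here is assumed for illustration, not taken from the dissertation):

```latex
% Keep every bus voltage within limits with probability at least 1 - epsilon,
% given uncertain DG injections p-tilde and control settings u (taps,
% inverter reactive setpoints).
\[
\Pr\left( V^{\min} \le V_i(\tilde{p}, u) \le V^{\max} \right) \ge 1 - \epsilon,
\qquad \forall i \in \mathcal{N}.
\]
```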
2

Probabilistic security management for power system operations with large amounts of wind power

Hamon, Camille January 2015 (has links)
Power systems are critical infrastructures for society. They are therefore planned and operated to provide reliable electricity delivery. The set of tools and methods for doing so is gathered under security management; these tools are designed to ensure that all operating constraints are fulfilled at all times. During the past decade, rising awareness of issues such as climate change, depletion of fossil fuels, and energy security has triggered large investments in wind power. The limited predictability of wind power, in the form of forecast errors, poses a number of challenges for integrating wind power in power systems. This limited predictability increases the uncertainty already existing in power systems in the form of random occurrences of contingencies and load forecast errors. It is widely acknowledged that this added uncertainty due to wind power and other variable renewable energy sources will require new tools for security management as the penetration levels of these energy sources become significant.

In this thesis, a set of tools for security management under uncertainty is developed. The key novelty in the proposed tools is that they build upon probabilistic descriptions, in terms of distribution functions, of the uncertainty. By considering the distribution functions of the uncertainty, the proposed tools can consider all possible future operating conditions captured in the probabilistic forecasts, as well as the likelihood of these operating conditions. By contrast, today's tools are based on the deterministic N-1 criterion, which considers only one future operating condition and disregards its likelihood.

Given a list of contingencies selected by the system operator and probabilistic forecasts for the load and wind power, an operating risk is defined in this thesis as the sum of the probabilities of the pre- and post-contingency violations of the operating constraints, weighted by the probability of occurrence of the contingencies. For security assessment, this thesis proposes efficient Monte Carlo methods to estimate the operating risk. Importance sampling is used to substantially reduce the computational time. In addition, sample-free analytical approximations are developed to quickly estimate the operating risk. For security enhancement, the analytical approximations are further embedded in an optimization problem that aims at obtaining the cheapest generation re-dispatch that ensures that the operating risk remains below a certain threshold. The proposed tools build upon approximations, developed in this thesis, of the stable feasible domain where all operating constraints are fulfilled.
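The operating risk defined above can be transcribed symbolically (the notation is ours, not the thesis's):

```latex
% Operating risk: constraint-violation probabilities in the pre-contingency
% state (c = 0) and in each post-contingency state c from the operator's
% list C, weighted by the occurrence probability pi_c.
\[
R = \sum_{c \in \{0\} \cup \mathcal{C}} \pi_c \,
    \Pr\big( \text{operating-constraint violation} \mid c \big).
\]
```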
3

Optimization and Decision Making under Uncertainty for Distributed Generation Technologies

Marino, Carlos Antonio 09 December 2016 (has links)
This dissertation studies two important models in the field of distributed generation technologies for providing resiliency to the electric power distribution system. In the first part of the dissertation, we study the impact of assessing a Combined Cooling Heating and Power (CCHP) system on the optimization and management of an on-site energy system under stochastic settings. We propose a scalable stochastic decision model for large-scale microgrid operation, formulated as a two-stage stochastic linear program. To solve the model efficiently for larger instances, enhanced algorithmic strategies for Benders decomposition are introduced. Observations are made for different capacities of the power grid, dynamic pricing mechanisms with various levels of uncertainty, and sizes of power generation units. In the second part of the dissertation, we study a mathematical model that designs a microgrid (MG) integrating conventional fuel-based generating (FBG) units, renewable sources of energy, distributed energy storage (DES) units, and electricity demand response. Curtailment of renewable generation during MG operation affects the expected long-term revenues and increases greenhouse gas emissions. Considering the variability of renewable resources, researchers should pay more attention to scalable stochastic models for MGs with multiple nodes. This study bridges that research gap by developing a scalable chance-constrained two-stage stochastic program that ensures a significant portion of the renewable power output at each operating hour is utilized. Finally, some managerial insights are drawn about the operational performance of the CCHP system and the microgrid.
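The renewable-utilization guarantee described above might be written, in generic notation (our assumption, not the dissertation's own formulation), as an hourly chance constraint:

```latex
% In each operating hour t, utilize at least a fraction alpha of the random
% available renewable output, with probability at least 1 - epsilon.
\[
\Pr\left( p^{\mathrm{used}}_t \ge \alpha \, \tilde{p}^{\mathrm{avail}}_t \right)
\ge 1 - \epsilon, \qquad \forall t.
\]
```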
4

Measuring the efficiency of two stage network processes: a satisficing DEA approach

Mehdizadeh, S., Amirteimoori, A., Vincent, Charles, Behzadi, M.H., Kordrostami, S. 24 March 2020 (has links)
Regular Network Data Envelopment Analysis (NDEA) models deal with evaluating the performance of a set of decision-making units (DMUs) with a two-stage construction in the context of a deterministic data set. In the real world, however, observations may display stochastic behavior. To the best of our knowledge, despite the existing research on different data types, studies on two-stage processes with stochastic data are still very limited. This paper proposes a two-stage network DEA model with stochastic data. The stochastic two-stage network DEA model is formulated based on the satisficing DEA models of chance-constrained programming and leader-follower concepts. Using the properties of the probability distribution and under the assumption of a single random factor in the data, the probabilistic form of the model is transformed into its equivalent deterministic linear programming model. In addition, the relationship between the two stages, as the leader and the follower respectively, at different confidence levels and under different aspiration levels, is discussed. The proposed model is further applied to a real case concerning 16 commercial banks in China in order to confirm the applicability of the proposed approach at different confidence levels and under different aspiration levels.
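In satisficing DEA, the efficiency requirement is met probabilistically rather than deterministically. Ignoring the two-stage structure for brevity, a generic form (our notation, for illustration only) is:

```latex
% DMU_o attains its aspiration level beta_0 with probability at least the
% confidence level 1 - alpha; u and v are output and input weights, and
% tildes mark the stochastic data.
\[
\Pr\left( \frac{u^{\mathsf{T}} \tilde{y}_o}{v^{\mathsf{T}} \tilde{x}_o}
\ge \beta_0 \right) \ge 1 - \alpha.
\]
```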
5

Coping Uncertainty in Wireless Network Optimization

Li, Shaoran 24 October 2022 (has links)
Network optimization plays an important role in 5G/next-G networks and requires knowledge of network parameters (e.g., channel state information). The majority of existing works assume that all network parameters are either given a priori or can be accurately estimated. However, in many practical scenarios, some parameters are uncertain at the time of allocating resources and can only be modeled by random variables; further, we have only limited knowledge of those uncertain parameters. For instance, channel gains are not exactly known due to channel estimation errors, network delay, limited feedback, and a lack of cooperation (between networks). Therefore, a practical solution to network optimization must address such uncertainty inside wireless networks.

There are three approaches to addressing such network uncertainty: stochastic programming, worst-case optimization, and chance-constrained programming (CCP). Among the three, CCP has some unique benefits. Stochastic programming explicitly requires full distribution knowledge, which is usually unavailable in practice. In comparison, CCP can work with various settings of available knowledge, such as first- and second-order statistics, symmetry properties, or limited data samples; CCP is therefore more flexible in handling different network settings, which is important for problems in 5G/next-G networks. Worst-case optimization assumes upper or lower bounds (i.e., worst cases) for the uncertain parameters and is known to be conservative due to its focus on extreme cases. In contrast, CCP allows occasional and controllable violations of some constraints and thus offers much better resource utilization than worst-case optimization. The only drawback of CCP is that it may lead to intractability due to its probabilistic formulation and limited knowledge of the underlying random variables. To date, CCP has not been well utilized in the wireless communication and networking community. The goal of this dissertation is to extend the state of the art of CCP techniques and address a number of challenging network optimization problems.

This dissertation is correspondingly organized into two parts. In the first part, we assume the uncertain parameters are known only by their mean and covariance (without distribution knowledge). We assume these statistics are stationary (i.e., time-invariant for a sufficiently long time) and thus can be accurately estimated; in this setting, we introduce a novel reformulation technique based on the mean and covariance to derive a solution. In the second part, we assume these statistics are time-varying and thus cannot be accurately estimated; in this setting, we employ limited data samples collected in a small time window and use them to derive a solution.

For the first part, we investigate four research problems based on the mean and covariance of the uncertain parameters:

- In the first problem, we study how to maximize spectrum efficiency in underlay coexistence. The interference from all secondary users to each primary user must be kept below a given threshold, but there is much uncertainty about the channel gains between the primary users and the secondary users due to a lack of cooperation between them. We formulate probabilistic interference constraints using CCP for the primary users. For tractability, we introduce a novel and powerful reformulation technique called Exact Conic Reformulation (ECR). With limited knowledge of mean and covariance, ECR offers an equivalent reformulation of the intractable chance constraints as tractable deterministic constraints, without relaxation errors. After reformulation, we apply linearization techniques to the mixed-integer non-linear problem to reduce its computational complexity. We show that our proposed approach achieves near-optimal performance and stands as a performance benchmark for the underlay coexistence problem.

- To find a solution for the same underlay coexistence problem that can be used in the real world, we need to find a solution in "real time". The real-time requirement here refers to finding a solution within 125 us (the minimum time slot for small cells in 5G). Our proposed solution has three steps. First, it employs ECR to reformulate the original CCP into a deterministic optimization problem. Then it decomposes the problem and narrows the search space down to a smaller but promising one. By random sampling inside the promising search space and through local search, our proposed solution meets the 125 us requirement in 5G while achieving 90% optimality on average.

- We further apply CCP, predicated on the reformulation technique ECR, to two other problems:
  * We study the problem of power control in concurrent transmissions. Our objective is to maximize energy efficiency for all transmitter-receiver pairs with capacity requirements. This problem is challenging due to mutual interference among different transmitter-receiver pairs and the uncertain channel gain between any transmitter and receiver. We formulate a CCP and reformulate it into a deterministic problem using ECR. We then employ Geometric Programming (GP) with a tight approximation to derive a near-optimal solution.
  * We study task offloading in Mobile Edge Computing (MEC), where the number of processing cycles of a task is unknown until completion. The goal is to minimize the energy consumption of the users while meeting probabilistic deadlines for the tasks. We formulate the probabilistic deadlines as chance constraints and then use ECR to reformulate them as deterministic constraints. We propose a solution that consists of periodic scheduling and schedule updates to choose the offloaded tasks and task-to-processor assignments at the base station.

In the second part, we investigate two research problems based on limited data samples of the uncertain parameters:

- We study MU-MIMO beamforming based on Channel State Information (CSI). The goal is to derive a beamforming solution---minimizing power consumption at the BS while meeting the probabilistic data rate requirements of the users---using very limited CSI data samples. For our CCP formulation, we explore the idea of a Wasserstein ambiguity set to quantify the distance between the true (but unknown) distribution and the empirical distribution based on the limited data samples. Our proposed solution---Data-Driven Beamforming (D^2BF)---reformulates the CCP into a non-convex deterministic optimization problem based on the properties of the Wasserstein ambiguity set. D^2BF then employs a novel convex approximation of the non-convex deterministic problem, which can be directly solved by commercial solvers.

- For a solution to MU-MIMO beamforming to be useful in the real world, it must meet the "real-time" requirement. Here, the real-time requirement refers to 1 ms, which is one transmission time interval (TTI) under 5G numerology 0. We present ReDBeam---a Real-time Data-driven Beamforming solution for the MU-MIMO beamforming problem (minimizing power consumption while offering probabilistic data rate guarantees to the users) with limited CSI data samples. ReDBeam is a parallel algorithm, purposefully designed to take advantage of the vast parallel processing capability offered by GPUs. ReDBeam generates a large number of initial solutions from a promising search space and then refines each solution by local search. We show that ReDBeam meets the 1 ms real-time requirement on a commercial GPU and is orders of magnitude faster than other state-of-the-art algorithms for the same problem.

/ Doctor of Philosophy / Network optimization plays an important role in 5G/next-G networks. In a wireless network optimization problem, we typically want to maximize or minimize an objective function under a set of performance or resource constraints. Knowledge of network parameters is typically required in these problems. The majority of existing works assume that all network parameters are either given a priori or can be accurately estimated. However, in many practical scenarios, some parameters are uncertain in nature and cannot be accurately estimated beforehand. This dissertation addresses uncertainty in wireless network optimization using chance-constrained programming (CCP). CCP can work with limited knowledge of uncertain parameters, such as statistics or data samples, instead of full distribution information. In a CCP formulation, violations of certain target performance or requirement thresholds are expressed as probabilistic constraints, and the frequency of such violations is bounded through a risk parameter. By changing this risk level, CCP offers a unique trade-off between the guaranteed threshold-violation probabilities and the achieved objective value. The only drawback of CCP is that it may lead to intractability due to its probabilistic formulation and limited knowledge of the underlying random variables. The goal of this dissertation is to extend the state of the art of CCP techniques to address a number of challenging network optimization problems. This dissertation is organized into two parts. In the first part, the mean and covariance of the uncertain parameters are assumed to be stationary and thus can be accurately estimated. Our main contribution is a novel reformulation technique for CCP called Exact Conic Reformulation (ECR). Based on knowledge of mean and covariance, ECR offers an equivalent reformulation of the intractable chance constraints as tractable deterministic constraints, without relaxation errors. We apply CCP, predicated on ECR, to address three problems: (i) scheduling and power control in underlay coexistence; (ii) power control in concurrent transmissions; and (iii) task offloading in Mobile Edge Computing (MEC). For the first problem, we further address the "real-time" requirement and propose a solution that meets the stringent timing requirement. In the second part, when the uncertain parameters are non-stationary and their statistics cannot be accurately estimated, we propose to employ limited data samples collected over a small window and use them to develop a solution. To demonstrate the efficacy of this approach, we investigate the MU-MIMO beamforming problem that minimizes the power consumption of the base station while providing probabilistic guarantees on users' data rates. We further address the timing requirement for such a solution in practice and present a real-time data-driven beamforming solution for MU-MIMO.
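ECR itself is developed in the dissertation; for intuition, the classical mean-covariance treatment of a single linear chance constraint (a standard distributionally robust result, shown here as an illustration rather than as ECR) replaces the probabilistic constraint with a deterministic second-order-cone constraint:

```latex
% If the random coefficient vector a has mean mu and covariance Sigma, then
% requiring Pr(a^T x <= b) >= 1 - eps for every distribution with these
% moments is equivalent to:
\[
\mu^{\mathsf{T}} x
+ \sqrt{\frac{1-\epsilon}{\epsilon}} \, \big\| \Sigma^{1/2} x \big\|_2
\;\le\; b.
\]
```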
6

An evidential answer for the capacitated vehicle routing problem with uncertain demands

Helal, Nathalie 20 December 2017 (has links)
The capacitated vehicle routing problem is an important combinatorial optimisation problem. Its objective is to find a set of routes of minimum cost such that a fleet of vehicles, initially located at a depot, services the deterministic demands of a set of customers while respecting the capacity limits of the vehicles. Still, in many real-life applications, we are faced with uncertainty on customer demands. Most of the research that has handled this situation assumed that customer demands are random variables. In this thesis, we propose to represent uncertainty on customer demands using evidence theory - an alternative uncertainty theory. To tackle the resulting optimisation problem, we extend classical stochastic programming modelling approaches. Specifically, we propose two models for this problem. The first model is an extension of the chance-constrained programming approach, which imposes minimum bounds on the belief and plausibility that the sum of the demands on each route respects the vehicle capacity. The second model extends the stochastic programming with recourse approach: it represents the uncertainty on the possible recourses (corrective actions) on each route by a belief function and defines the cost of a route as its classical cost (without recourse) plus the worst expected cost of its recourses. Some properties of these two models are studied. A simulated annealing algorithm is adapted to solve both models and is experimentally tested.
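The first (chance-constrained-style) model's capacity requirement can be sketched in belief-function notation (our symbols, assumed for illustration):

```latex
% Each route r must respect the vehicle capacity Q with sufficient
% credibility: lower bounds on both the belief and the plausibility of the
% capacity event, with d_i-tilde the uncertain demand of customer i.
\[
\mathrm{Bel}\Big( \textstyle\sum_{i \in r} \tilde{d}_i \le Q \Big)
\ge \beta_{\mathrm{Bel}},
\qquad
\mathrm{Pl}\Big( \textstyle\sum_{i \in r} \tilde{d}_i \le Q \Big)
\ge \beta_{\mathrm{Pl}}.
\]
```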
7

Employees Provident Fund (EPF) Malaysia : generic models for asset and liability management under uncertainty

Sheikh Hussin, Siti Aida January 2012 (has links)
We describe the Employees Provident Fund (EPF) Malaysia. We explain Defined Contribution and Defined Benefit pension funds and examine their similarities and differences. We also briefly discuss and compare EPF schemes in four Commonwealth countries. A family of stochastic programming models is developed for the Employees Provident Fund Malaysia. This is a family of ex-ante decision models whose main aim is to manage, that is, balance, assets and liabilities. The decision models comprise Expected Value Linear Programming, Two-Stage Stochastic Programming with recourse, Chance-Constrained Programming, and Integrated Chance Constraints Programming. For the last three decision models we use scenario generators which capture the uncertainties of asset returns, salary contributions, and lump-sum liability payments. These scenario generation models for assets and liabilities were developed and calibrated using historical data. The resulting decisions are evaluated with in-sample analysis using typical risk-adjusted performance measures, and out-of-sample testing is carried out with a larger set of generated scenarios. The benefits of two-stage stochastic programming over deterministic approaches for asset allocation, as well as the amount of borrowing needed for each pre-specified growth dividend, are demonstrated. The contributions of this thesis are i) an insightful overview of the EPF; ii) construction of scenarios for asset returns and liabilities with different values of the growth dividend, combining a Markov population model with a salary growth model and retirement payments; iii) construction and analysis of generic ex-ante decision models taking into consideration uncertain asset returns and uncertain liabilities; and iv) testing and performance evaluation of these decisions in an ex-post setting.
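The two probabilistic model types in this family differ in what they bound. Generically (notation assumed), a chance constraint limits the probability of an asset shortfall relative to liabilities, while an integrated chance constraint limits its expected size:

```latex
% Chance constraint: a shortfall of assets A_t below random liabilities
% L_t-tilde occurs with probability at most epsilon.
\[
\Pr\left( \tilde{L}_t > A_t \right) \le \epsilon
\]
% Integrated chance constraint: the expected shortfall is at most beta.
\[
\mathbb{E}\left[ (\tilde{L}_t - A_t)^{+} \right] \le \beta
\]
```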
8

Optimal dispatch of uncertain energy resources

Amini, Mahraz 01 January 2019 (has links)
The future of the electric grid requires advanced control technologies to reliably integrate high levels of renewable generation and of residential and small-commercial distributed energy resources (DERs). Flexible loads are known to be a vital component of future power systems, with the potential to boost overall system efficiency. Recent work has expanded the role of flexible and controllable energy resources, such as energy storage and dispatchable demand, to regulate power imbalances and stabilize grid frequency. This has led DER aggregators to develop concepts such as the virtual energy storage system (VESS). VESSs aggregate flexible loads and energy resources and dispatch them akin to a grid-scale battery to provide flexibility to the system operator. Since the level of flexibility from aggregated DERs is uncertain and time-varying, dispatching VESSs can be challenging. To optimally dispatch uncertain, energy-constrained reserves, model predictive control (MPC) offers a viable tool to develop an appropriate trade-off between closed-loop performance and robustness of the dispatch. To improve system operation, flexible VESSs can be formulated probabilistically and realized with chance-constrained MPC. The large-scale deployment of flexible loads also needs to carefully consider the existing regulation schemes in power systems, i.e., generator droop control. In this work, we first investigate the complex nature of system-wide frequency stability under time delays in the actuation of dispatchable loads. We then study the robustness and performance trade-offs in receding-horizon control with uncertain energy resources. The uncertainty studied herein is associated with estimating the capacity and state of charge of an aggregation of DERs. Treating uncertain flexible resources in markets leads to maximizing capacity bids or control authority, which in turn leads to dynamic capacity saturation (DCS) of flexible resources. We show that there exists a sensitive trade-off between robustness of the optimized dispatch and closed-loop system performance, and that sacrificing some robustness in the dispatch of the uncertain energy capacity can significantly improve system performance. We propose and formulate a risk-based chance-constrained MPC (RB-CC-MPC) to co-optimize the operational risk of prematurely saturating the virtual energy storage system against deviating generators from their scheduled set-points. On a fast, minutely timescale, the RB-CC-MPC coordinates energy-constrained virtual resources to minimize unscheduled participation of ramp-rate-limited generators in balancing variability from renewable generation, while taking grid conditions into account. We show that under the proposed method it is possible to improve the performance of the controller over conventional distributionally robust methods by more than 20%. Moreover, a hardware-in-the-loop (HIL) simulation of a cyber-physical system consisting of packetized energy management (PEM) enabled DERs, flexible VESSs, and a transmission grid is developed in this work. A predictive, energy-constrained dispatch of aggregated PEM-enabled DERs is formulated, implemented, and validated on the HIL cyber-physical platform. The experimental results demonstrate that existing control schemes, such as AGC, dispatch VESSs without regard to their energy state, which leads to unexpected capacity saturation.
By accounting for the energy states of VESSs, MPC can optimally dispatch conventional generators and VESSs to overcome disturbances while avoiding undesired capacity saturation. The results show improved dynamics with MPC over conventional AGC and droop control for a system with energy-constrained resources.
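A minimal sketch of one chance-constrained MPC dispatch step in the spirit described above. The Gaussian capacity model, the illustrative variable names, and the use of the cvxpy solver are all our assumptions, not the dissertation's code:

```python
import numpy as np
import cvxpy as cp

# One receding-horizon step: split a forecast power imbalance between a
# ramp-limited generator and a VESS whose usable energy capacity is uncertain.
H = 12                                  # horizon length (minutely steps)
dt = 1.0 / 60.0                         # step length in hours
imbalance = np.random.default_rng(1).normal(0.0, 0.5, H)  # MW to absorb
e0 = 2.0                                # current VESS energy state (MWh)
cap_mean, cap_std = 5.0, 0.8            # estimated capacity and its std (MWh)
eps = 0.05                              # allowed saturation probability
kappa = 1.645                           # Gaussian z-score for 1 - eps

p_gen = cp.Variable(H)                  # generator deviation (MW)
p_vess = cp.Variable(H)                 # VESS dispatch (MW)
energy = e0 + dt * cp.cumsum(p_vess)    # VESS energy trajectory (MWh)

constraints = [
    p_gen + p_vess == imbalance,        # cover the imbalance at every step
    cp.abs(cp.diff(p_gen)) <= 0.1,      # generator ramp-rate limit (MW/step)
    energy >= 0.0,
    # Chance-style back-off: keep the energy state kappa standard deviations
    # below the capacity estimate so P(saturation) <= eps under the Gaussian
    # assumption.
    energy <= cap_mean - kappa * cap_std,
]
# Penalize unscheduled generator participation, echoing the RB-CC-MPC goal.
problem = cp.Problem(cp.Minimize(cp.sum_squares(p_gen)), constraints)
problem.solve()
print("generator deviations (MW):", np.round(p_gen.value, 3))
```

Shrinking kappa trades a higher saturation risk for less generator movement, which is the robustness-versus-performance trade-off the abstract describes.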
9

Probabilistic covering problems

Qiu, Feng 25 February 2013 (has links)
This dissertation studies optimization problems that involve probabilistic covering constraints. A probabilistic constraint requires that the probability that a set of constraints involving random coefficients with known distributions holds meets a minimum requirement. A covering constraint involves a linear inequality on non-negative variables with a greater-than-or-equal-to sign and non-negative coefficients. A variety of applications in uncertain settings, such as set cover problems, node/edge cover problems, crew scheduling, production planning, facility location, and machine learning, involve probabilistic covering constraints. In the first part of this dissertation we consider probabilistic covering linear programs. Using the sample average approximation (SAA) framework, a probabilistic covering linear program can be approximated by a covering k-violation linear program (CKVLP), a deterministic covering linear program in which at most k constraints are allowed to be violated. We show that CKVLP is strongly NP-hard. Then, to improve the performance of standard mixed-integer programming (MIP) based schemes for CKVLP, we (i) introduce and analyze a coefficient strengthening scheme, (ii) adapt and analyze an existing cutting plane technique, and (iii) present a branching technique. Through computational experiments, we empirically verify that these techniques are significantly effective in improving solution times over the CPLEX MIP solver. In particular, we observe that the proposed schemes can cut solution times from as much as six days to under four hours in some instances. We also develop valid inequalities arising from two subsets of the constraints in the original formulation. When incorporating them with a modified coefficient strengthening procedure, we are able to solve a difficult probabilistic portfolio optimization instance listed in MIPLIB 2010, which cannot be solved by existing approaches. In the second part of this dissertation we study a class of probabilistic 0-1 covering problems, namely probabilistic k-cover problems. A probabilistic k-cover problem is a stochastic version of the set k-cover problem, which seeks a collection of subsets with minimal cost whose union covers each element in the set at least k times. In the stochastic setting, the coefficients of the covering constraints are modeled as Bernoulli random variables, and the probabilistic constraint imposes a minimal requirement on the probability of k-coverage. To account for the absence of full distributional information, we define a general ambiguous k-cover set, which is "distributionally robust." Using a classical linear program (the Boolean LP) to compute the probability of events, we develop an exact deterministic reformulation of this ambiguous k-cover problem. However, since the Boolean model contains an exponential number of auxiliary variables and is hence not useful in practice, we use two linear-programming-based bounds on the probability that at least k events occur, obtained by aggregating the variables and constraints of the Boolean model, to develop tractable deterministic approximations of the ambiguous k-cover set. We derive new valid inequalities that can be used to strengthen the linear-programming-based lower bounds. Numerical results show that these new inequalities significantly improve the probability bounds. To use standard MIP solvers, we linearize the multi-linear terms in the approximations and develop mixed-integer linear programming formulations.
We conduct computational experiments to demonstrate the quality of the deterministic reformulations in terms of cost effectiveness and solution robustness. To demonstrate the usefulness of the modeling technique developed for probabilistic k-cover problems, we formulate a number of problems that have until now only been studied under a data-independence assumption, and we also introduce a new application that can be modeled using the probabilistic k-cover model.
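The CKVLP obtained from the SAA step admits a compact MIP statement. For covering rows with non-negative right-hand sides, the covering structure even avoids big-M terms; a generic form (our notation, for illustration) is:

```latex
% Covering k-violation LP: satisfy all but at most k of the n sampled
% covering rows; z_i = 1 marks row i as allowed to be violated, which
% makes the row vacuous since a_i >= 0 and x >= 0.
\[
\min\; c^{\mathsf{T}} x
\quad \text{s.t.} \quad
a_i^{\mathsf{T}} x \ge b_i (1 - z_i) \;\; (i = 1, \dots, n),
\quad \sum_{i=1}^{n} z_i \le k,
\quad x \ge 0, \;\; z \in \{0,1\}^n.
\]
```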
10

Chance-constrained Optimization Models for Agricultural Seed Development and Selection

January 2019 (has links)
Breeding seeds to include desirable traits (increased yield, drought/temperature resistance, etc.) is a growing and important method of establishing food security. However, besides breeder intuition, few decision-making tools exist that can provide breeders with credible evidence for deciding which seeds to progress to further stages of development. This thesis creates a chance-constrained knapsack optimization model, which a breeder can use to make better decisions about seed progression and to reduce the level of risk in their selections. The model's objective is to select seed varieties out of a larger pool and maximize the average yield of the "knapsack" subject to meeting a risk criterion. Two models are created for different cases. The first is a risk-reduction model, which seeks to reduce the risk of a poor yield while still maximizing total yield. The second model considers the possibility of adverse environmental effects and seeks to mitigate their negative impact on total yield. In practice, breeders can use these models to better quantify uncertainty in selecting seed varieties. / Dissertation/Thesis / Masters Thesis Industrial Engineering 2019
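A toy sketch of the risk-reduction variant. The independent Gaussian yield model, the mean-minus-kappa-sigma surrogate for the chance constraint, and all numbers are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np
from itertools import combinations

# Chance-constrained seed "knapsack": pick m of n varieties to maximize mean
# total yield while P(total yield >= floor) >= 1 - eps. Under independent
# Gaussian yields this reduces to: mean - kappa * std >= floor.
rng = np.random.default_rng(7)
n, m = 8, 3
mean = rng.uniform(40.0, 60.0, n)       # per-variety mean yield (bu/acre)
std = rng.uniform(2.0, 10.0, n)         # per-variety yield std dev
floor, kappa = 130.0, 1.645             # yield floor; z-score for eps = 0.05

best, best_mean = None, -np.inf
for subset in combinations(range(n), m):      # exhaustive search; n is small
    idx = list(subset)
    tot_mean = mean[idx].sum()
    tot_std = np.sqrt((std[idx] ** 2).sum())  # independence assumption
    if tot_mean - kappa * tot_std >= floor and tot_mean > best_mean:
        best, best_mean = idx, tot_mean

if best is None:
    print("no subset meets the risk floor")
else:
    print("varieties:", best, "expected total yield:", round(best_mean, 1))
```

Raising kappa (a stricter risk criterion) steers the selection toward low-variance varieties at the cost of expected yield, which is exactly the trade-off the thesis asks breeders to quantify.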
