1 |
DISTRIBUTION SYSTEM OPTIMIZATION WITH INTEGRATED DISTRIBUTED GENERATION
Ibrahim, Sarmad Khaleel, 01 January 2018
In this dissertation, several volt-var optimization methods have been proposed to improve the expected performance of the distribution system using distributed renewable energy sources and conventional volt-var control equipment: photovoltaic inverter reactive power control for chance-constrained distribution system performance optimization, integrated distribution system optimization using a chance-constrained formulation, integrated control of distribution system equipment and distributed generation inverters, and coordination of PV inverters and voltage regulators considering generation correlation and voltage quality constraints for loss minimization. Distributed generation sources (DGs) have important benefits, including the use of renewable resources, increased customer participation, and decreased losses. However, as the penetration level of DGs increases, so do the technical challenges of integrating these resources into the power system. One such challenge is the rapid variation of voltages along distribution feeders in response to DG output fluctuations; traditional volt-var control equipment and inverter-based DG can be used to address this challenge.
These methods aim to achieve an optimal expected performance with respect to the figure of merit of interest to the distribution system operator while maintaining appropriate system voltage magnitudes and considering the uncertainty of DG power injections. The first method optimizes only the reactive power output of DGs to improve system performance (e.g., operating profit) and compensate for variations in active power injection, while maintaining appropriate system voltage magnitudes and considering the uncertainty of DG power injections over the interval of interest. The second method proposes an integrated volt-var control that computes control actions ahead of time, finding the optimal voltage-regulator tap settings and inverter reactive control parameters that improve the expected system performance (e.g., operating profit) while keeping voltages across the system within specified ranges and considering the uncertainty of DG power injections over the interval of interest. In the third method, an integrated control strategy is formulated for the coordinated control of both distribution system equipment and inverter-based DG; this strategy combines the use of inverter reactive power capability with the operation of voltage regulators to improve the expected value of the desired figure of merit (e.g., system losses) while maintaining appropriate system voltage magnitudes. The fourth method proposes a coordinated control strategy for voltage and reactive power control equipment that improves the expected system performance (e.g., system losses and voltage profiles) while considering the spatial correlation among the DGs and keeping voltage magnitudes within permissible limits; it formulates chance constraints on the voltage magnitudes and considers the uncertainty of PV power injections over the interval of interest.
The proposed methods require infrequent communication with the distribution system operator and base their decisions on short-term forecasts (i.e., the first and second methods) and long-term forecasts (i.e., the third and fourth methods). The proposed methods achieve the best set of control actions for all voltage and reactive power control equipment to improve the expected value of the figure of merit proposed in this dissertation without violating any of the operating constraints. The proposed methods are validated using the IEEE 123-node radial distribution test feeder.
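As a minimal illustration of the chance-constraint machinery these methods rely on (not the dissertation's actual model), suppose a linearized voltage magnitude V = a^T p + b with Gaussian DG injections p ~ N(mu, Sigma); then Pr(V <= Vmax) >= 1 - eps has a closed-form deterministic equivalent. The sensitivities and statistics below are invented.

import numpy as np
from scipy.stats import norm

# Minimal sketch (assumed linearized model, invented data), not the dissertation's
# formulation: deterministic equivalent of Pr(a^T p + b <= Vmax) >= 1 - eps
# when the DG injections p ~ N(mu, Sigma).
a = np.array([0.020, 0.015, 0.010])   # assumed voltage sensitivities to DG injections
mu = np.array([0.8, 0.6, 1.0])        # mean DG injections (p.u.), invented
Sigma = np.diag([0.04, 0.03, 0.05])   # covariance of the injections, invented
b, Vmax, eps = 1.00, 1.05, 0.05       # base voltage, limit, allowed violation probability

mean_V = a @ mu + b
std_V = float(np.sqrt(a @ Sigma @ a))
# Pr(V <= Vmax) >= 1 - eps   <=>   mean_V + Phi^{-1}(1 - eps) * std_V <= Vmax
ok = mean_V + norm.ppf(1 - eps) * std_V <= Vmax
print(f"V ~ N({mean_V:.4f}, {std_V:.4f}^2); chance constraint satisfied: {ok}")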
|
2 |
Coping Uncertainty in Wireless Network Optimization
Li, Shaoran, 24 October 2022
Network optimization plays an important role in 5G/next-G networks, and it requires knowledge of network parameters (e.g., channel state information). The majority of existing works assume that all network parameters are either given a priori or can be accurately estimated. However, in many practical scenarios, some parameters are uncertain at the time of allocating resources and can only be modeled by random variables. Further, we only have limited knowledge of those uncertain parameters. For instance, channel gains are not exactly known due to channel estimation errors, network delay, limited feedback, and a lack of cooperation (between networks). Therefore, a practical solution to network optimization must address such uncertainty inside wireless networks.
There are three approaches to address such network uncertainty: stochastic programming, worst-case optimization, and chance-constrained programming (CCP). Among the three, CCP has some unique benefits compared to the other two approaches. Stochastic programming explicitly requires full distribution knowledge, which is usually unavailable in practice. In comparison, CCP can work with various settings of available knowledge, such as first- and second-order statistics, symmetry properties, or limited data samples. Therefore, CCP is more flexible in handling different network settings, which is important for addressing problems in 5G/next-G networks. Further, worst-case optimization assumes upper or lower bounds (i.e., worst cases) for the uncertain parameters and is known to be conservative due to its focus on extreme cases. In contrast, CCP allows occasional and controllable violations for some constraints and thus offers much better resource utilization than worst-case optimization. The only drawback of CCP is that it may lead to intractability due to its probabilistic formulation and limited knowledge of the underlying random variables.
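For reference, a chance-constrained program has the generic textbook form

\[
\min_{x \in \mathcal{X}} \; f(x) \quad \text{s.t.} \quad \Pr\{\, g(x,\xi) \le 0 \,\} \ \ge\ 1 - \varepsilon,
\]

where \xi collects the uncertain parameters and the risk level \varepsilon \in (0,1) bounds how often the constraint may be violated.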
To date, CCP has not been well utilized in the wireless communication and networking community. The goal of this dissertation is to extend the state of the art of CCP techniques and address a number of challenging network optimization problems. This dissertation is correspondingly organized into two parts. In the first part, we assume the uncertain parameters are known only by their mean and covariance (without distribution knowledge). We assume these statistics are stationary (i.e., time-invariant for a sufficiently long time) and thus can be accurately estimated. In this setting, we introduce a novel reformulation technique based on the mean and covariance to derive a solution. In the second part, we assume these statistics are time-varying and thus cannot be accurately estimated. In this setting, we employ limited data samples that are collected in a small time window and use them to derive a solution.
For the first part, we investigate four research problems based on the mean and covariance of the uncertain parameters:
- In the first problem, we study how to maximize spectrum efficiency in underlay coexistence. The interference from all secondary users to each primary user must be kept below a given threshold. However, there is much uncertainty about the channel gains between the primary users and the secondary users due to a lack of cooperation between them. We formulate probabilistic interference constraints using CCP for the primary users. For tractability, we introduce a novel and powerful reformulation technique called Exact Conic Reformulation (ECR). With limited knowledge of mean and covariance, ECR offers an equivalent reformulation of the intractable chance constraints as tractable deterministic constraints, without relaxation errors (a generic sketch of this style of mean-covariance reformulation appears after this list). After reformulation, we apply linearization techniques to the mixed-integer nonlinear problem to reduce the computational complexity. We show that our proposed approach achieves near-optimal performance and stands as a performance benchmark for the underlay coexistence problem.
- To find a solution for the same underlay coexistence problem that can be used in the real world, we need to find a solution in "real time". The real-time requirement here refers to finding a solution in 125 μs (the minimum time slot for small cells in 5G). Our proposed solution has three steps. First, it employs ECR to reformulate the original CCP into a deterministic optimization problem. Then it decomposes the problem and narrows the search space down to a smaller but promising one. By random sampling inside the promising search space and through local search, our proposed solution can meet the 125 μs requirement in 5G while achieving 90% optimality on average.
- We further apply CCP, predicated on the reformulation technique ECR, to two other problems.
* We study the problem of power control in concurrent transmissions. Our objective is to maximize energy efficiency for all transmitter-receiver pairs with capacity requirements. This problem is challenging due to mutual interference among different transmitter-receiver pairs and the uncertain channel gain between any transmitter and receiver. We formulate a CCP and reformulate it into a deterministic problem using ECR. Then we employ Geometric Programming (GP) with a tight approximation to derive a near-optimal solution.
* We study task offloading in Mobile Edge Computing (MEC) where the number of processing cycles of a task is unknown until completion. The goal is to minimize the energy consumption of the users while meeting probabilistic deadlines for the tasks. We formulate the probabilistic deadlines into chance constraints and then use ECR to reformulate them into deterministic constraints. We propose a solution that consists of periodic scheduling and schedule updates to choose the offloaded tasks and task-to-processor assignments at the base station.
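A generic sketch of the mean-covariance style of reformulation referenced above, using the classical Cantelli (one-sided Chebyshev) bound: for any distribution of the uncertain gains with mean mu and covariance Sigma, Pr(gains^T x <= b) >= 1 - eps holds whenever mu^T x + sqrt((1 - eps)/eps) * ||Sigma^(1/2) x|| <= b, a second-order cone constraint. This particular bound is conservative, whereas ECR is an exact reformulation whose details are in the dissertation; all data below are invented.

import numpy as np
import cvxpy as cp

# Conservative mean-covariance reformulation via the Cantelli bound (invented data;
# the dissertation's ECR is exact and is not reproduced here).
rng = np.random.default_rng(0)
n, eps = 4, 0.1
mu = rng.uniform(0.5, 1.0, n)          # mean channel gains to the primary user (invented)
S = rng.uniform(-0.1, 0.1, (n, n))
Sigma = S @ S.T + 0.01 * np.eye(n)     # gain covariance (invented, positive definite)
L = np.linalg.cholesky(Sigma)
I_max = 2.0                            # interference threshold at the primary user (invented)

x = cp.Variable(n, nonneg=True)        # transmit powers of the secondary users
kappa = np.sqrt((1 - eps) / eps)
# This single conic constraint enforces the chance constraint for ALL gain
# distributions sharing this mean and covariance:
chance = mu @ x + kappa * cp.norm(L.T @ x, 2) <= I_max
prob = cp.Problem(cp.Maximize(cp.sum(x)), [chance, x <= 1.0])
prob.solve()
print("secondary-user powers:", np.round(x.value, 3))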
In the second part, we investigate two research problems based on limited data samples of the uncertain parameters:
- We study MU-MIMO beamforming based on Channel State Information (CSI). The goal is to derive a beamforming solution that minimizes power consumption at the BS while meeting the probabilistic data rate requirements of the users, using very limited CSI data samples. For our CCP formulation, we explore the idea of a Wasserstein ambiguity set to quantify the distance between the true (but unknown) distribution and the empirical distribution based on the limited data samples. Our proposed solution, Data-Driven Beamforming (D^2BF), reformulates the CCP into a non-convex deterministic optimization problem based on the properties of the Wasserstein ambiguity set. Then D^2BF employs a novel convex approximation to the non-convex deterministic problem, which can be directly solved by commercial solvers.
- For a solution to the MU-MIMO beamforming problem to be useful in the real world, it must meet the "real-time" requirement. Here, the real-time requirement refers to 1 ms, which is one transmission time interval (TTI) under 5G numerology 0. We present ReDBeam, a Real-time Data-driven Beamforming solution for the MU-MIMO beamforming problem (minimizing power consumption while offering probabilistic data rate guarantees to the users) with limited CSI data samples. ReDBeam is a parallel algorithm and is purposefully designed to take advantage of the vast parallel processing capability offered by GPUs. ReDBeam generates a large number of initial solutions from a promising search space and then refines each solution by a local search. We show that ReDBeam meets the 1 ms real-time requirement on a commercial GPU and is orders of magnitude faster than other state-of-the-art algorithms for the same problem.

Doctor of Philosophy (general audience abstract): Network optimization plays an important role in 5G/next-G networks. In a wireless network optimization problem, we typically want to maximize or minimize an objective function under a set of performance or resource constraints. Knowledge of network parameters is typically required in these problems. The majority of existing works assume that all network parameters are either given a priori or can be accurately estimated. However, in many practical scenarios, some parameters are uncertain in nature and cannot be accurately estimated beforehand.
This dissertation addresses uncertainty in wireless network optimizations using chance-constrained programming (CCP). CCP can work with limited knowledge of uncertain parameters such as statistics or data samples, instead of full distribution information. In a CCP formulation, violations of certain target performance or requirement thresholds are expressed as probabilistic constraints and the frequency of such violations is bounded through a risk parameter. By changing this risk level, CCP offers a unique trade-off between the guaranteed threshold violation probabilities and the achieved objective value. The only drawback of CCP is that it may lead to intractability due to its probabilistic formulation and limited knowledge of the underlying random variables.
The goal of this dissertation is to extend the state of the art of CCP techniques to address a number of challenging network optimization problems. This dissertation is organized into two parts. In the first part, the mean and covariance of the uncertain parameters are assumed to be stationary and thus can be accurately estimated. Our main contribution is a novel reformulation technique for CCP called Exact Conic Reformulation (ECR). Based on knowledge of mean and covariance, ECR offers an equivalent reformulation of the intractable chance constraints as tractable deterministic constraints, without relaxation errors. We apply CCP, predicated on ECR, to address three problems: (i) scheduling and power control in underlay coexistence; (ii) power control in concurrent transmissions; and (iii) task offloading in Mobile Edge Computing (MEC). For the first problem, we further address the "real-time" requirement and propose a solution that meets the stringent timing requirement.
In the second part, when the uncertain parameters are non-stationary and their statistics cannot be accurately estimated, we propose to employ limited data samples that are collected over a small window and use them to develop a solution. To demonstrate the efficacy of this approach, we investigate the MU-MIMO beamforming problem that minimizes the power consumption of the base station while providing probabilistic guarantees to users' data rates. We further address the timing requirement for such a solution in practice, and present a real-time data-driven beamforming solution for MU-MIMO.
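As a minimal illustration of working directly from samples (the dissertation's D^2BF and ReDBeam use Wasserstein ambiguity sets and are not reproduced here), the classical scenario approximation of a chance constraint simply enforces the constraint on every collected sample. The toy single-link power-control sketch below assumes made-up channel samples and a made-up rate target.

import numpy as np
import cvxpy as cp

# Scenario approximation from limited samples (invented data); a deliberately simple
# stand-in, NOT the Wasserstein-based D^2BF/ReDBeam of the dissertation.
rng = np.random.default_rng(1)
K = 20                                        # limited number of CSI samples
g = rng.gamma(shape=2.0, scale=0.5, size=K)   # sampled channel power gains (invented)
rate_min, noise = 1.0, 0.1                    # rate target (bit/s/Hz) and noise power (invented)

p = cp.Variable(nonneg=True)                  # transmit power
# Enforce log2(1 + g_k * p / noise) >= rate_min on every sample; linear in p:
constraints = [g_k * p / noise >= 2 ** rate_min - 1 for g_k in g]
prob = cp.Problem(cp.Minimize(p), constraints)
prob.solve()
print(f"minimum power meeting the rate on all {K} samples: {p.value:.4f}")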
|
3 |
An evidential answer for the capacitated vehicle routing problem with uncertain demands
Helal, Nathalie, 20 December 2017
The capacitated vehicle routing problem is an important combinatorial optimisation problem. Its objective is to find a set of routes of minimum cost such that a fleet of vehicles, initially located at a depot, services the deterministic demands of a set of customers while respecting the capacity limits of the vehicles. Still, in many real-life applications, we are faced with uncertainty on customer demands. Most of the research papers that handled this situation assumed that customer demands are random variables. In this thesis, we propose to represent uncertainty on customer demands using evidence theory, an alternative uncertainty theory. To tackle the resulting optimisation problem, we extend classical stochastic programming modelling approaches. Specifically, we propose two models for this problem. The first model is an extension of the chance-constrained programming approach, which imposes certain minimum bounds on the belief and plausibility that the sum of the demands on each route respects the vehicle capacity. The second model extends the stochastic programming with recourse approach: it represents by a belief function, for each route, the uncertainty on its recourses (corrective actions), and defines the cost of a route as its classical cost (without recourse) plus the worst expected cost of its recourses. Some properties of these two models are studied. A simulated annealing algorithm is adapted to solve both models and is experimentally tested.
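A tiny numeric illustration of the belief/plausibility bounds used in the first model (mass values invented): mass is assigned to intervals of possible total route demand, and the event of interest is that the total demand respects the vehicle capacity Q.

# Belief and plausibility that a route's total demand respects capacity Q,
# computed from an invented mass function over demand intervals.
Q = 100
masses = {(60, 80): 0.5, (70, 110): 0.3, (90, 130): 0.2}   # focal interval -> mass

bel = sum(m for (lo, hi), m in masses.items() if hi <= Q)  # focal sets inside the event
pl = sum(m for (lo, hi), m in masses.items() if lo <= Q)   # focal sets meeting the event
print(f"Bel(demand <= Q) = {bel:.2f}, Pl(demand <= Q) = {pl:.2f}")
# The first model then imposes minimum bounds of the form Bel >= alpha and Pl >= beta.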
|
4 |
Robust and stochastic MPC of uncertain-parameter systems
Fleming, James, January 2016
Constraint handling is difficult in model predictive control (MPC) of linear differential inclusions (LDIs) and linear parameter varying (LPV) systems. The designer is faced with a choice between conservative bounds that may give poor performance and accurate ones that require heavy online computation. This thesis presents a framework to achieve a more flexible trade-off between these two extremes by using a state tube, a sequence of parametrised polyhedra that is guaranteed to contain the future state. To define controllers using a tube, one must ensure that the polyhedra are a subset of the region defined by constraints. Necessary and sufficient conditions for these subset relations follow from duality theory, and it is possible to apply these conditions to constrain predicted system states and inputs with only a little conservatism. This leads to a general method of MPC design for uncertain-parameter systems. The resulting controllers have strong theoretical properties, can be implemented using standard algorithms and outperform existing techniques. Crucially, the online optimisation used in the controller is a convex problem with a number of constraints and variables that increases only linearly with the length of the prediction horizon. This holds true for both LDI and LPV systems. For the latter it is possible to optimise over a class of gain-scheduled control policies to improve performance, with a similar linear increase in problem size. The framework extends to stochastic LDIs with chance constraints, for which there are efficient suboptimal methods using online sampling. Sample approximations of chance constraint-admissible sets are generally not positively invariant, which motivates the novel concept of 'sample-admissible' sets with this property to ensure recursive feasibility when using sampling methods. The thesis concludes by introducing a simple, convex alternative to chance-constrained MPC that applies a robust bound to the time average of constraint violations in closed loop.
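The duality-based subset condition underlying such tube constructions is the extended Farkas lemma: a nonempty polyhedron {x : Fx <= g} is contained in {x : Hx <= k} if and only if there exists a nonnegative matrix Lambda with Lambda F = H and Lambda g <= k. A minimal feasibility check of this condition on toy sets (not data from the thesis):

import numpy as np
import cvxpy as cp

# Certify that {x : Fx <= g} is a subset of {x : Hx <= k} by finding Lambda >= 0
# with Lambda F = H and Lambda g <= k (toy 2-D sets).
F = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])  # inner set: unit box
g = np.ones(4)
H = np.array([[1., 1.], [-1., -1.]])                      # outer set: |x1 + x2| <= 3
k = np.array([3., 3.])

Lam = cp.Variable((H.shape[0], F.shape[0]), nonneg=True)
prob = cp.Problem(cp.Minimize(0), [Lam @ F == H, Lam @ g <= k])
prob.solve()
print("containment certified:", prob.status == cp.OPTIMAL)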
|
5 |
Otimização sob restrições probabilísticas: teoria e aplicações (Optimization under Probabilistic Constraints: Theory and Applications)
Araújo, Julyana Kelly Tavares de, 30 December 2012
This work presents an approach to chance-constrained programming (CCP), a type of optimization used to model uncertainty that has become useful across many areas of knowledge. Its main goals are to present the theory of CCP and to illustrate some applications in engineering and public policy. This tool is particularly relevant to production systems because of the uncertainties inherent in their processes. After presenting the theory, the work applies the technique to the production system of the Emergency Medical Services (SAMU) of the city of João Pessoa, using the model proposed by Beraldi et al. (2004). The application determines the number of ambulances needed to meet the demand of João Pessoa, as well as the locations where they should be stationed. Working with this technique requires prior knowledge of statistics, applied mathematics, and computing; accordingly, the work covers discrete and continuous random variables and probability functions, as well as linear optimization, facility location, and log-concave functions. MATLAB R007, Google Maps, and CPLEX were used to implement the model. The great benefit of CCP is that it offers a set of feasible solutions from which the decision maker can choose the one best suited to the situation at hand.
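A highly simplified stand-in for this kind of probabilistic ambulance-location model (the exact Beraldi et al. (2004) formulation is not reproduced here; zones, reachability, and demand rates below are invented): each zone must have enough ambulances within reach to cover its random demand, here modelled as Poisson, with probability at least 1 - eps.

import itertools
from scipy.stats import poisson

# Simplified probabilistic ambulance location (invented data), a stand-in for the
# Beraldi et al. (2004) style model rather than its exact formulation.
eps = 0.05
zones = {"A": 2.0, "B": 1.0, "C": 3.0}     # mean emergency calls per zone (invented)
sites = ["s1", "s2", "s3"]
covers = {"s1": {"A", "B"}, "s2": {"B", "C"}, "s3": {"A", "C"}}  # reachability (invented)

# Ambulances needed near zone z so that Pr(Poisson demand <= n) >= 1 - eps:
need = {z: int(poisson.ppf(1 - eps, lam)) for z, lam in zones.items()}

best = None
for alloc in itertools.product(range(8), repeat=len(sites)):  # brute force: tiny instance
    placed = dict(zip(sites, alloc))
    covered = all(sum(placed[s] for s in sites if z in covers[s]) >= need[z] for z in zones)
    if covered and (best is None or sum(alloc) < sum(best.values())):
        best = placed
print("ambulances required per zone:", need, "| cheapest placement:", best)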
|
6 |
Stochastic Optimization for Integrated Energy System with Reliability Improvement Using Decomposition Algorithm
Huang, Yuping, 01 January 2014
As energy demands increase and energy resources change, the traditional energy system has been upgraded and restructured to support societal development and sustainability. Considerable research has been conducted on energy expansion planning and electricity generation operations, mainly considering the integration of traditional fossil fuel generation with renewable generation. Because the energy market is full of uncertainty, these uncertainties continuously challenge market design, operations, and even national energy policy. In fact, little consideration has been given to optimizing energy expansion and generation while taking into account the variability and uncertainty of energy supply and demand in energy markets. This usually leaves an energy system unable to cope reliably with unexpected changes, such as a surge in fuel price, a sudden drop of demand, or a large renewable supply fluctuation. Thus, for an overall energy system, optimizing long-term expansion planning and market operation in a stochastic environment is crucial to improving the system's reliability and robustness. Since little attention has been paid to imposing risk measures on power management systems, this dissertation discusses applying risk-constrained stochastic programming to improve the efficiency, reliability, and economics of energy expansion and electric power generation, respectively. Considering the supply-demand uncertainties affecting energy system stability, three different optimization strategies are proposed to enhance the overall reliability and sustainability of an energy system. The first strategy optimizes regional energy expansion planning, focusing on capacity expansion of the natural gas (NG) system, the power generation system, and the renewable energy system, in addition to the transmission network. With strong support of NG and electric facilities, the second strategy provides optimal day-ahead scheduling for the electric power generation system, incorporating non-generation resources, i.e., demand response and energy storage. Because of its risk aversion, this generation scheduling achieves higher system reliability and promotes non-generation resources in the smart grid. To take full advantage of power generation sources, the third strategy replaces traditional energy reserve requirements with risk constraints while ensuring the same level of system reliability; in this way, existing resources can be used to the fullest to accommodate internal and/or external changes in a power system. All problems are formulated as stochastic mixed-integer programs, considering uncertainties in fuel price, renewable energy output, and electricity demand over time. Exploiting the model structure, new decomposition strategies are proposed for the stochastic unit commitment problems, which are then solved by an enhanced Benders decomposition algorithm. Compared to classic Benders decomposition, the proposed solution approach increases convergence speed and thus reduces computation times by 25% on the same cases.
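The dissertation's enhanced Benders algorithm is not reproduced here; the sketch below is only the textbook Benders loop on a tiny invented two-stage problem, showing the master-problem / subproblem / optimality-cut structure that such decompositions share.

import cvxpy as cp

# Textbook Benders loop on a toy two-stage problem (NOT the dissertation's enhanced
# algorithm): min x + 2*y  s.t.  x + y >= 3, x >= 0, with recourse variable y >= 0.
x = cp.Variable(nonneg=True)
theta = cp.Variable(nonneg=True)      # outer approximation of the recourse cost
cuts = []
for it in range(10):
    master = cp.Problem(cp.Minimize(x + theta), cuts)
    master.solve()
    x_fix, lower = x.value, master.value
    y = cp.Variable(nonneg=True)      # subproblem: min 2*y s.t. y >= 3 - x_fix
    link = y >= 3 - x_fix
    sub = cp.Problem(cp.Minimize(2 * y), [link])
    sub.solve()
    upper = x_fix + sub.value
    if upper - lower < 1e-6:          # recourse approximation is tight: done
        break
    pi = link.dual_value              # dual price of the linking constraint
    cuts.append(theta >= pi * (3 - x))  # Benders optimality cut
print(f"optimal x = {x.value:.3f}, total cost = {upper:.3f}, iterations = {it + 1}")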
|
7 |
Learning Algorithms Using Chance-Constrained Programs
Jagarlapudi, Saketha Nath, 07 1900
This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constraint approaches lead to improved algorithms for three important learning problems — classification with specified error rates, large dataset classification and Ordinal Regression (OR). Using moments of training data, the CCPs are posed as Second Order Cone Programs (SOCPs). Novel iterative algorithms for solving the resulting SOCPs are also derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment estimation errors.
A maximum margin classifier with specified false positive and false negative rates is derived. The key idea is to employ chance constraints for each class, which imply that the actual misclassification rates do not exceed the specified values. The formulation is applied to the case of biased classification.
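A sketch of this style of formulation on synthetic data (the thesis's exact formulation, including its robust variants, differs in detail): using class means and covariances estimated from training data, the Chebyshev (Cantelli) bound turns each class's chance constraint into a second-order cone constraint with kappa(eta) = sqrt((1 - eta)/eta).

import numpy as np
import cvxpy as cp

# Maximum-margin classifier with per-class chance constraints made deterministic
# via the Chebyshev bound (synthetic data; not the thesis's exact model).
rng = np.random.default_rng(2)
Xp = rng.normal([3, 3], 0.5, (50, 2))     # positive-class samples (synthetic)
Xn = rng.normal([0, 0], 0.7, (60, 2))     # negative-class samples (synthetic)
eta_p, eta_n = 0.1, 0.2                   # specified false-negative / false-positive rates

mu_p, mu_n = Xp.mean(0), Xn.mean(0)
Lp = np.linalg.cholesky(np.cov(Xp.T) + 1e-6 * np.eye(2))
Ln = np.linalg.cholesky(np.cov(Xn.T) + 1e-6 * np.eye(2))
kp, kn = np.sqrt((1 - eta_p) / eta_p), np.sqrt((1 - eta_n) / eta_n)

w, b = cp.Variable(2), cp.Variable()
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)),  # minimizing ||w|| maximizes the margin
                  [mu_p @ w + b >= 1 + kp * cp.norm(Lp.T @ w, 2),
                   -(mu_n @ w + b) >= 1 + kn * cp.norm(Ln.T @ w, 2)])
prob.solve()
print("w =", np.round(w.value, 3), " b =", round(float(b.value), 3))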
The problems of large dataset classification and ordinal regression are addressed by deriving formulations which employ chance-constraints for clusters in training data rather than constraints for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulation size and number of inequalities are very small. Hence the formulations scale well to large datasets.
The scalable classification and OR formulations are extended to feature spaces and the kernelized duals turn out to be instances of SOCPs with a single cone constraint. Exploiting this speciality, fast iterative solvers which outperform generic SOCP solvers, are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve a speed up as high as 10000 times, when the specialized SOCP solvers are employed.
The proposed formulations involve second order moments of data and hence are susceptible to moment estimation errors. A generic way of making the formulations robust to such estimation errors is illustrated. Two novel confidence sets for moments are derived and it is shown that when either of the confidence sets is employed, the robust formulations also yield SOCPs.
|
8 |
Approximations in Stochastic Optimization and Their Applications
Mrázková, Eva, January 2010
Many engineering problems lead to optimization models with constraints in the form of ordinary (ODE) or partial (PDE) differential equations, and in practice some parameters are often uncertain. The thesis considers three engineering problems concerning vibration optimization and the optimal design of beam dimensions; uncertainty enters them in the form of a random load or a random Young's modulus. It is shown that two-stage stochastic programming offers a promising approach to solving problems of this type. The corresponding mathematical models, involving ODE or PDE constraints, uncertain parameters, and multiple criteria, lead to (multi-objective) stochastic nonlinear optimization models. It is further proven for which types of problems stochastic programming (the EO reformulation) is necessary, and when it suffices to solve a simpler deterministic problem (the EV reformulation), which matters in practice because of computational cost. Computational schemes are proposed that include discretization methods for the random variables and the ODE or PDE constraints. The mathematical models derived through these approximations are implemented and solved in GAMS. Solution quality is assessed using interval estimates of the optimality gap computed by a Monte Carlo method. A parametric analysis of the multi-objective model leads to the computation of the efficient frontier. Approximations of a model containing reliability-related probabilistic terms are studied by means of mixed-integer nonlinear programming and a penalty-function reformulation. Furthermore, with a view to future parallel computation of large engineering problems, the progressive hedging algorithm (PHA) is implemented and tested; the results show that it can be used even when the mathematical conditions guaranteeing convergence are not satisfied. Finally, for the deterministic version of one of the problems, the finite difference method is compared with the finite element method using GAMS and ANSYS, with fully comparable results.
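A tiny numeric illustration of the EV-versus-EO distinction on a generic newsvendor problem (not one of the thesis's beam problems; all data invented): plugging the mean demand into the deterministic model (EV) can give a noticeably worse decision than optimizing the sample-average expected cost itself (an EO-style objective).

import numpy as np

# EV vs. EO on a generic newsvendor (invented data): order q at unit cost c, sell
# min(q, D) at price p, demand D random. Not one of the thesis's problems.
rng = np.random.default_rng(3)
c, p = 1.0, 2.0
D = rng.lognormal(mean=3.0, sigma=0.8, size=20_000)   # demand samples

def expected_cost(q):                 # sample-average (EO-style) objective
    return c * q - p * np.mean(np.minimum(q, D))

q_ev = float(D.mean())                # EV decision: plug in the mean demand
grid = np.linspace(0.0, float(np.quantile(D, 0.999)), 500)
q_eo = float(grid[np.argmin([expected_cost(q) for q in grid])])
print(f"EV order {q_ev:.1f} -> cost {expected_cost(q_ev):+.2f}; "
      f"EO order {q_eo:.1f} -> cost {expected_cost(q_eo):+.2f}")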
|