151 |
Development of Regional Optimization and Market Penetration Models For Electric Vehicles in the United States
Noori, Mehdi 01 January 2015 (has links)
Since the transportation sector still relies mostly on fossil fuels, its emissions and overall environmental impacts are particularly relevant to the mitigation of the adverse effects of climate change. Sustainable transportation therefore plays a vital role in the ongoing discussion on how to promote energy security and address future energy requirements. One of the most promising ways to increase energy security and reduce emissions from the transportation sector is to support alternative fuel technologies, including electric vehicles (EVs). As vehicles become electrified, the transportation fleet will rely on the electric grid as well as traditional transportation fuels for energy. The life cycle cost and environmental impacts of EVs are still very uncertain, but are nonetheless extremely important for making policy decisions. Moreover, the use of EVs will help to diversify the fuel mix and thereby reduce dependence on petroleum. In this respect, the United States has set a goal of a 20% share of EVs on U.S. roadways by 2030. However, there is also a considerable amount of uncertainty in the market share of EVs that must be taken into account. This dissertation aims to address these inherent uncertainties by presenting two new models: the Electric Vehicles Regional Optimizer (EVRO) and the Electric Vehicle Regional Market Penetration (EVReMP) model. Using these two models, decision makers can predict the optimal combination of drivetrains and the market penetration of EVs in different regions of the United States for the year 2030. First, the life cycle cost and life cycle environmental emissions of internal combustion engine vehicles, gasoline hybrid electric vehicles, and three different EV types (gasoline plug-in hybrid EVs, gasoline extended-range EVs, and all-electric EVs) are evaluated, with their inherent uncertainties duly considered.
Then, the environmental damage costs and water footprints of the studied drivetrains are estimated. Additionally, using an Exploratory Modeling and Analysis method, the uncertainties related to the life cycle costs, environmental damage costs, and water footprints of the studied vehicle types are modeled for different U.S. electricity grid regions. Next, an optimization model is used in conjunction with this Exploratory Modeling and Analysis method to find the ideal combination of different vehicle types in each U.S. region for the year 2030. Finally, an agent-based model is developed to identify the optimal market shares of the studied vehicles in each of 22 electric regions in the United States. The findings of this research will help policy makers and transportation planners to prepare our nation's transportation system for the future influx of EVs. They indicate that the decision maker's point of view plays a vital role in selecting the optimal fleet array. While internal combustion engine vehicles have the lowest life cycle cost and a relatively low water footprint, they also have the highest environmental damage cost, so they will not be a good choice in the future. On the other hand, although all-electric vehicles have a relatively low life cycle cost and the lowest environmental damage cost of the evaluated vehicle options, they also have the highest water footprint, so relying solely on all-electric vehicles is not an ideal choice either. Rather, the best fleet mix in 2030 will be an electrified fleet that relies on both electricity and gasoline. From the agent-based model results, a deviation is evident between the ideal fleet mix and that resulting from consumer behavior, in which EV shares increase dramatically by the year 2030 but capture only 30 percent of the market. Therefore, government subsidies and the word-of-mouth effect will play a vital role in the future adoption of EVs.
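The word-of-mouth and subsidy effects that drive the agent-based results above can be illustrated with a toy adoption model. This is a minimal sketch, not the dissertation's EVReMP model; the baseline rate, subsidy effect, and word-of-mouth weight are invented for illustration.

```python
import random

def simulate_adoption(n_agents=1000, years=15, subsidy=0.1, wom_weight=0.3, seed=0):
    """Toy diffusion model: each year a non-adopter switches to an EV with a
    probability combining a baseline rate, a subsidy effect, and a
    word-of-mouth term proportional to the current adoption share.
    All parameters are hypothetical."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    shares = []
    for _ in range(years):
        share = sum(adopted) / n_agents
        p = min(1.0, 0.02 + 0.2 * subsidy + wom_weight * share)
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p:
                adopted[i] = True
        shares.append(sum(adopted) / n_agents)
    return shares

trajectory = simulate_adoption()
```

In such models the adoption curve is self-reinforcing: a larger installed base raises the adoption probability, which is the word-of-mouth effect the abstract highlights.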
|
152 |
Learning and planning with noise in optimization and reinforcement learning
Thomas, Valentin 06 1900 (has links)
La plupart des algorithmes modernes d'apprentissage automatique intègrent un
certain degré d'aléatoire dans leurs processus, que nous appellerons le
bruit, qui peut finalement avoir un impact sur les prédictions du modèle. Dans cette thèse, nous examinons de plus près l'apprentissage et la planification en présence de bruit pour les algorithmes d'apprentissage par renforcement et d'optimisation.
Les deux premiers articles présentés dans ce document se concentrent sur l'apprentissage par renforcement dans un environnement inconnu, et plus précisément sur la façon dont nous pouvons concevoir des algorithmes qui utilisent la stochasticité de leur politique et de l'environnement à leur avantage.
Notre première contribution présentée dans ce document se concentre sur le cadre
de l'apprentissage par renforcement non supervisé. Nous montrons comment un
agent laissé seul dans un monde inconnu sans but précis peut apprendre quels
aspects de l'environnement il peut contrôler indépendamment les uns des autres,
ainsi qu'apprendre conjointement une représentation latente démêlée de ces
aspects, que nous appellerons facteurs de variation.
La deuxième contribution se concentre sur la planification dans les tâches de
contrôle continu. En présentant l'apprentissage par renforcement comme un
problème d'inférence, nous empruntons des outils provenant de la littérature sur
les méthodes de Monte Carlo séquentiel pour concevoir un algorithme efficace
et théoriquement motivé pour la planification probabiliste en utilisant un
modèle appris du monde. Nous montrons comment l'agent peut tirer parti de notre
objectif probabiliste pour imaginer divers ensembles de solutions.
Les deux contributions suivantes analysent l'impact du bruit de gradient dû à l'échantillonnage dans les algorithmes d'optimisation.
La troisième contribution examine le rôle du bruit de l'estimateur du gradient dans l'estimation par maximum de vraisemblance avec descente de gradient stochastique, en explorant la relation entre la structure du bruit du gradient et la courbure locale sur la généralisation et la vitesse de convergence du modèle.
Notre quatrième contribution revient sur le sujet de l'apprentissage par
renforcement pour analyser l'impact du bruit d'échantillonnage sur l'algorithme
d'optimisation de la politique par ascension du gradient. Nous constatons que le
bruit d'échantillonnage peut avoir un impact significatif sur la dynamique
d'optimisation et les politiques découvertes en apprentissage par
renforcement. / Most modern machine learning algorithms incorporate a degree of randomness in their processes, which we will refer to as noise, which can ultimately impact the model's predictions. In this thesis, we take a closer look at learning and planning in the presence of noise for reinforcement learning and optimization algorithms.
The first two articles presented in this document focus on reinforcement learning in an unknown environment, specifically how we can design algorithms that use the stochasticity of their policy and of the environment to their advantage.
Our first contribution presented in this document focuses on the unsupervised reinforcement learning setting. We show how an agent left alone in an unknown world without any specified goal can learn which aspects of the environment it can control independently from each other as well as jointly learning a disentangled latent representation of these aspects, or factors of variation.
The second contribution focuses on planning in continuous control tasks. By framing reinforcement learning as an inference problem, we borrow tools from Sequential Monte Carlo literature to design a theoretically grounded and efficient algorithm for probabilistic planning using a learned model of the world. We show how the agent can leverage the uncertainty of the model to imagine a diverse set of solutions.
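The importance-resampling step at the heart of such Sequential Monte Carlo planners can be sketched as follows. This is a generic illustration, not the thesis's algorithm; the plan names, returns, and temperature are hypothetical.

```python
import math
import random

def resample(plans, returns, temperature=1.0, seed=0):
    """Reweight candidate plans by exp(return / temperature) and resample
    with replacement, concentrating computation on promising plans, as in
    the importance-resampling step of an SMC planner."""
    rng = random.Random(seed)
    weights = [math.exp(r / temperature) for r in returns]
    return rng.choices(plans, weights=weights, k=len(plans))

plans = ["swerve_left", "brake", "swerve_right"]
survivors = resample(plans, returns=[0.0, 5.0, 1.0])
```

The temperature controls how sharply the particle population concentrates on high-return plans, which is one way such planners trade off diversity against exploitation.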
The following two contributions analyze the impact of gradient noise due to sampling in optimization algorithms.
The third contribution examines the role of gradient noise in maximum likelihood estimation with stochastic gradient descent, exploring the relationship between the structure of the gradient noise and local curvature on the generalization and convergence speed of the model.
Our fourth contribution returns to the topic of reinforcement learning to analyze the impact of sampling noise on the policy gradient algorithm. We find that sampling noise can significantly impact the optimization dynamics and policies discovered in on-policy reinforcement learning.
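The sampling noise studied in these two contributions can be made concrete with a small experiment: the minibatch gradient is an unbiased but noisy estimate of the full gradient, and its variance shrinks as the batch grows. The linear-regression setup and all sizes below are illustrative, not drawn from the thesis.

```python
import numpy as np

# Measure minibatch gradient noise for least-squares regression:
# compare the squared deviation from the full-batch gradient for
# small and large batches. All dimensions are illustrative.
rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = np.zeros(d)

def grad(batch):
    Xb, yb = X[batch], y[batch]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(batch)

full = grad(np.arange(n))

def noise(batch_size, trials=200):
    errs = [np.sum((grad(rng.choice(n, batch_size, replace=False)) - full) ** 2)
            for _ in range(trials)]
    return float(np.mean(errs))

small_batch_noise, large_batch_noise = noise(10), noise(500)
```

The measured noise roughly follows the familiar 1/batch-size scaling, which is the kind of structure the third contribution relates to curvature, generalization, and convergence speed.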
|
153 |
On Control and Optimization of DC Microgrids
Liu, Jianzhe January 2017 (has links)
No description available.
|
154 |
A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations
Ittiwattana, Waraporn 11 September 2002 (has links)
No description available.
|
155 |
Optimization-based Formulations for Operability Analysis and Control of Process Supply Chains
Mastragostino, Richard 10 1900 (has links)
Process operability represents the ability of a process plant to operate satisfactorily away from the nominal operating or design condition; flexibility and dynamic operability are the two attributes of operability considered in this thesis. Today's companies face numerous challenges, many resulting from volatile market conditions. Key to sustained profitable operation is a robust process supply chain. Within a wider business context, flexibility and responsiveness, i.e. dynamic operability, are regarded as key qualifications of a robust process supply chain.

The first part of this thesis develops methodologies to rigorously evaluate the dynamic operability and flexibility of a process supply chain. A model is developed that describes the response dynamics of a multi-product, multi-echelon supply chain system. Its incorporation within a dynamic operability analysis framework is shown, where a bi-criterion, two-stage stochastic programming approach is applied for the treatment of demand uncertainty and for estimating the Pareto frontier between an economic and a responsiveness criterion. Two case studies demonstrate the effect of supply chain design features on responsiveness. This thesis also extends current paradigms for process flexibility analysis to supply chains. The flexibility analysis framework, in which a steady-state supply chain model is considered, evaluates the ability to sustain feasible steady-state operation over a range of demand uncertainty.

The second part of this thesis develops a decision-support tool for supply chain management (SCM) by means of a robust model predictive control (MPC) strategy. An effective decision-support tool can fully leverage the qualifications from the operability analysis. The MPC formulation proposed in this thesis: (i) captures uncertainty in model parameters and demand by stochastic programming; (ii) accommodates hybrid process systems with decisions governed by logical conditions/rulesets; (iii) addresses multiple supply chain performance metrics, including customer service and economics; and (iv) considers both open-loop and closed-loop prediction of uncertainty propagation. The developed robust framework is applied to the control of a multi-echelon, multi-product supply chain and provides a substantial reduction in the occurrence of back orders when compared with a nominal MPC framework. / Master of Applied Science (MASc)
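The two-stage stochastic programming idea used above (commit to a decision before demand is known, then take recourse once it is revealed) can be sketched with a scenario-based newsvendor toy. The prices, costs, and demand scenarios are invented for illustration, not taken from the thesis.

```python
# Scenario-based two-stage decision: choose a first-stage order quantity q
# before demand is known; the second-stage recourse sells min(q, demand)
# and salvages any leftover. Optimizing expected profit over scenarios is
# the simplest instance of two-stage stochastic programming.
scenarios = [(0.3, 80), (0.5, 100), (0.2, 140)]  # (probability, demand)
price, cost, salvage = 10.0, 6.0, 2.0            # illustrative economics

def expected_profit(q):
    return sum(p * (price * min(q, d) + salvage * max(q - d, 0) - cost * q)
               for p, d in scenarios)

# Enumerate first-stage decisions (small toy; real formulations use LP/MIP).
best_q = max(range(0, 201), key=expected_profit)
```

Here the optimum sits at the demand level where the cumulative scenario probability first reaches the critical ratio (price - cost) / (price - salvage), the classic newsvendor condition.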
|
156 |
[en] OPTIMAL PRICING OF NATURAL GAS FLEXIBLE CONTRACTS / [pt] PRECIFICAÇÃO ÓTIMA DOS CONTRATOS DE GÁS NATURAL NA MODALIDADE INTERRUPTÍVEL
SYLVIA TELLES RIBEIRO 14 July 2010 (has links)
[pt] O segmento industrial desempenha um importante papel no
desenvolvimento do setor de gás Brasileiro. Em função dos baixos preços e dos
incentivos dados pelo governo para a conversão dos processos industriais (muitos
deles dependentes do óleo combustível) para o gás natural, criou-se uma fonte de
demanda firme deste combustível. Como as termelétricas operam em regime de
complementariedade ao sistema hidrelétrico (sendo coordenadas pelo Operador
Nacional do Sistema (ONS) elétrico e chamadas a gerar apenas em situações
hidrológicas desfavoráveis), o consumo de gás termelétrico ocorre de forma
esporádica. Uma forma de se aumentar a eficiência do uso do gás, mesclando duas
classes de consumidores se dá através dos contratos interruptíveis, que
proporcionam ao produtor a capacidade de atender consumidores industriais bicombustível
(gás e óleo por exemplo) com o gás ocioso das termelétricas. Como a
atratividade deste contrato depende do desconto dado com relação ao preço do
contrato firme, que não é interrompido, o objetivo deste trabalho é a construção de
um modelo analítico para a determinação do preço ótimo dos contratos de
fornecimento de gás interruptíveis, por parte de um produtor monopolista. O
consumo de gás das termelétricas será considerado como principal fonte de
incerteza do modelo, que por sua vez será caracterizada através de cenários de
operação ótima do sistema elétrico, simulados conforme a metodologia utilizada
pelo ONS. O perfil de risco do produtor será caracterizado pelo Conditional
Value-at-Risk (CVaR). / [en] Brazilian natural gas industry growth has been led by electricity supply. As
hydro plants generate at lower costs, thermal units only produce when hydro
electricity is insufficient. This makes natural gas consumption highly volatile:
either all thermal units generate at once or none do. When all units generate
together, the gas trader has to buy LNG (Liquefied Natural Gas) on the spot market,
incurring price risk. This risk can be mitigated if the gas trader is able to sell
flexible contracts to the industrial sector that can be interrupted in case of thermal
generation. Thus the gas volume sold under flexible contracts is used either by
thermal generation or by the industrial sector, virtually reducing total demand and
avoiding emergency LNG purchases. The determination of the optimal price for
these contracts is the aim of this dissertation. The proposed model seeks to
maximize a convex combination of the CVaR (Conditional Value-at-Risk) and the
NPV (Net Present Value) of the trader's profit.
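A discrete CVaR estimator and the convex-combination objective described above can be sketched as follows. The scenario distribution, confidence level, and weighting are illustrative, and sign conventions for CVaR vary across the literature.

```python
import numpy as np

def cvar_of_profit(profits, alpha=0.95):
    """Average of the worst (1 - alpha) share of profit scenarios:
    a common discrete CVaR estimator on the loss tail."""
    profits = np.sort(np.asarray(profits, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(profits))))
    return float(profits[:k].mean())

def risk_adjusted_value(profits, lam=0.5, alpha=0.95):
    """Convex combination of expected profit and CVaR, the kind of
    objective a risk-averse producer optimizes when pricing contracts.
    lam = 1 recovers the risk-neutral expected value."""
    return lam * float(np.mean(profits)) + (1 - lam) * cvar_of_profit(profits, alpha)

# Illustrative profit scenarios (e.g. simulated contract NPVs).
rng = np.random.default_rng(1)
profits = rng.normal(100.0, 30.0, size=10_000)
```

Sweeping `lam` between 0 and 1 traces the producer's risk-return frontier, which is how such a model can characterize risk profiles of different contract prices.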
|
157 |
[pt] A EFICÁCIA DA OTIMIZAÇÃO DE DOIS NÍVEIS EM PROBLEMAS DE SISTEMAS DE POTÊNCIA DE GRANDE PORTE: UMA FERRAMENTA PARA OTIMIZAÇÃO DE DOIS NÍVEIS, UMA METODOLOGIA PARA APRENDIZADO DIRIGIDO PELA APLICAÇÃO E UM SIMULADOR DE MERCADO / [en] THE EFFECTIVENESS OF BILEVEL OPTIMIZATION IN LARGE-SCALE POWER SYSTEMS PROBLEMS: A BILEVEL OPTIMIZATION TOOLBOX, A FRAMEWORK FOR APPLICATION-DRIVEN LEARNING, AND A MARKET SIMULATOR
JOAQUIM MASSET LACOMBE DIAS GARCIA 25 January 2023 (has links)
[pt] A otimização de dois níveis é uma ferramenta extremamente poderosa para
modelar problemas realistas em várias áreas. Por outro lado, sabe-se que a otimização
de dois níveis frequentemente leva a problemas complexos ou intratáveis.
Nesta tese, apresentamos três trabalhos que expandem o estado da arte da
otimização de dois níveis e sua interseção com sistemas de potência. Primeiro,
apresentamos BilevelJuMP, um novo pacote de código aberto para otimização
de dois níveis na linguagem Julia. O pacote é uma extensão da linguagem
de modelagem de programação matemática JuMP, é muito geral, completo e
apresenta funcionalidades únicas, como a modelagem de programas cônicos no
nível inferior. O software permite aos usuários modelar diversos problemas de
dois níveis e resolvê-los com técnicas avançadas. Como consequência, torna a
otimização de dois níveis amplamente acessível a um público muito mais amplo.
Nos dois trabalhos seguintes, desenvolvemos métodos especializados para
lidar com modelos complexos e programas de dois níveis de grande escala decorrentes
de aplicações de sistemas de potência. Em segundo lugar, usamos a
programação de dois níveis como base para desenvolver o Aprendizado Dirigido
pela Aplicação, uma nova estrutura de ciclo fechado na qual os processos
de previsão e tomada de decisão são mesclados e co-otimizados. Descrevemos o
modelo matematicamente como um programa de dois níveis, provamos resultados
de convergência e descrevemos métodos de solução heurísticos e exatos
para lidar com sistemas de grande escala. O método é aplicado para previsão de
demanda e alocação de reservas na operação de sistemas de potência. Estudos
de caso mostram resultados muito promissores com soluções de boa qualidade em sistemas realistas com milhares de barras. Em terceiro lugar, propomos
um simulador para modelar mercados de energia hidrotérmica de longo prazo
baseados em ofertas. Um problema de otimização estocástica multi-estágio é
formulado para acomodar a dinâmica inerente aos sistemas hidrelétricos. No
entanto, os subproblemas de cada etapa são programas de dois níveis para
modelar agentes estratégicos. O simulador é escalável em termos de dados do
sistema, agentes, cenários e estágios considerados. Concluímos o terceiro trabalho
com simulações em grande porte com dados realistas do sistema elétrico
brasileiro com 3 agentes formadores de preço, 1000 cenários e 60 estágios mensais.
Esses três trabalhos mostram que, embora a otimização de dois níveis
seja uma classe extremamente desafiadora de problemas NP-difíceis, é possível
desenvolver algoritmos eficazes que levam a soluções de boa qualidade. / [en] Bilevel Optimization is an extremely powerful tool for modeling realistic
problems in multiple areas. On the other hand, Bilevel Optimization is known
to frequently lead to complex or intractable problems. In this thesis, we
present three works expanding the state of the art of bilevel optimization
and its intersection with power systems. First, we present BilevelJuMP, a
novel open-source package for bilevel optimization in the Julia language. The
package is an extension of the JuMP mathematical programming modeling
language, is very general, feature-complete, and presents unique functionality,
such as the modeling of lower-level cone programs. The software enables
users to model a variety of bilevel problems and solve them with advanced
techniques. As a consequence, it makes bilevel optimization widely accessible
to a much broader public. In the following two works, we develop specialized
methods to handle complex and very large-scale bilevel programs
arising from power systems applications. Second, we use bilevel programming
as the foundation to develop Application-Driven Learning, a new closed-loop
framework in which the processes of forecasting and decision-making are
merged and co-optimized. We describe the model mathematically as a bilevel
program, prove convergence results and describe exact and tailor-made heuristic
solution methods to handle very large-scale systems. The method is applied
to demand forecast and reserve allocation in power systems operation. Case
studies show very promising results with good quality solutions on realistic
systems with thousands of buses. Third, we propose a simulator to model
long-term bid-based hydro-thermal power markets. A multi-stage stochastic program is formulated to accommodate the dynamics inherent to hydropower
systems. However, the subproblems of each stage are bilevel programs in
order to model strategic agents. The simulator is scalable in terms of system
data, agents, scenarios, and stages being considered. We conclude the third
work with large-scale simulations with realistic data from the Brazilian power
system with 3 price maker agents, 1000 scenarios, and 60 monthly stages.
These three works show that although bilevel optimization is an extremely
challenging class of NP-hard problems, it is possible to develop effective
algorithms that lead to good-quality solutions.
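The leader-follower structure of a bilevel program can be illustrated with a toy instance solved by enumeration (practical solvers such as BilevelJuMP use far more sophisticated reformulations). The follower's utility and the price grid below are invented for illustration.

```python
# Toy bilevel program: a leader sets a price t; the follower then chooses
# consumption x to maximize its own utility given t; the leader's revenue
# depends on the follower's response. A closed-form inner problem keeps
# the example transparent. All numbers are illustrative.

def follower_best_response(t):
    # Follower maximizes u(x) = 10*x - x**2 - t*x, giving x* = (10 - t) / 2.
    return max(0.0, (10.0 - t) / 2.0)

def leader_revenue(t):
    return t * follower_best_response(t)

# Enumerate leader decisions on a grid (tiny toy; real problems need
# KKT reformulations or specialized algorithms).
prices = [i / 100.0 for i in range(0, 1001)]
best_t = max(prices, key=leader_revenue)
```

Even in this two-variable toy the leader must anticipate the follower's reaction, which is exactly the nested structure that makes general bilevel programs NP-hard.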
|
158 |
[en] OPTIMIZATION OF THE OPERATION UNDER UNCERTAINTY OF THERMAL PLANTS WITH FUEL CONTRACT WITH TAKE-OR-PAY CLAUSES / [pt] OTIMIZAÇÃO DA OPERAÇÃO SOB INCERTEZA DE USINAS TERMELÉTRICAS COM CONTRATOS DE COMBUSTÍVEL COM CLÁUSULAS DE TAKE-OR-PAY
RAPHAEL MARTINS CHABAR 03 February 2006 (has links)
[pt] O objetivo desta dissertação é desenvolver uma metodologia para determinar a estratégia ótima de despacho de usinas térmicas considerando as especificações do contrato de combustível e suas cláusulas de take-or-pay, as oportunidades de compra e venda de energia no mercado spot e as características da usina. Como as decisões de uma etapa têm impacto nas etapas seguintes, há um acoplamento temporal entre as decisões tomadas e o problema tem um caráter de decisão multi-estágio. Além disso, o principal guia para a tomada de decisão é o preço spot, que é desconhecido no futuro e modelado através de cenários. Desta forma, a estratégia ótima de despacho torna-se um problema de decisão sob incerteza, onde a cada etapa o objetivo é determinar a operação que maximize a rentabilidade total (ao longo de vários períodos) da central térmica. A metodologia desenvolvida se baseia em Programação Dinâmica Estocástica (PDE). Exemplos serão ilustrados com o sistema brasileiro. / [en] The objective of this work is to present a methodology to determine the optimal dispatch strategy of thermal power plants, taking into account the particular specifications of fuel supply agreements, such as take-or-pay and make-up clauses. Opportunities for energy purchase and sale in the spot market, as well as the plant's technical characteristics, are also considered in the optimization process. Since decisions in one stage impact future stages, the problem is time-coupled in a multi-stage framework. Moreover, the main driver for the decision-making is the energy spot price, which is unknown in the future and modeled through scenarios. Therefore, the optimal dispatch strategy is a decision-under-uncertainty problem, where at each stage the objective is to determine the optimal operation strategy that maximizes total revenues, taking into account the constraints and characteristics of the fuel supply agreement. The developed methodology is based on Stochastic Dynamic Programming (SDP). Examples and case studies are shown for the Brazilian system.
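The backward recursion underlying Stochastic Dynamic Programming can be sketched for a stylized take-or-pay setting: fuel is pre-paid, so at each stage the plant decides, after observing the spot price, whether to burn a unit now or keep it for later. All prices, probabilities, and volumes are illustrative, not from the dissertation.

```python
from functools import lru_cache

# Minimal SDP recursion: state = (stage, remaining pre-paid fuel);
# decision = units burned this stage, taken after observing the price.
PRICE_SCENARIOS = [(0.5, 20.0), (0.5, 80.0)]  # (probability, spot price)
STAGES = 3
BURN_OPTIONS = (0, 1)                          # units burnable per stage

@lru_cache(maxsize=None)
def value(stage, fuel):
    """Expected revenue-to-go from `stage` with `fuel` units remaining."""
    if stage == STAGES or fuel == 0:
        return 0.0
    expected = 0.0
    for prob, price in PRICE_SCENARIOS:
        # After seeing the price, burn the amount that maximizes
        # immediate revenue plus the expected future value.
        best = max(price * b + value(stage + 1, fuel - b)
                   for b in BURN_OPTIONS if b <= fuel)
        expected += prob * best
    return expected

v0 = value(0, 2)
```

The recursion captures the option value of waiting: with one unit left and stages remaining, burning at a low price is rejected whenever the expected future price of the saved unit is higher.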
|
159 |
Navigating Uncertainty: Distributed and Bandit Solutions for Equilibrium Learning in Multiplayer Games
Yuanhanqing Huang (18361527) 15 April 2024 (has links)
In multiplayer games, a collection of self-interested players aims to optimize their individual cost functions in a non-cooperative manner. The cost function of each player depends not only on its own actions but also on the actions of others. In addition, players' actions may also collectively satisfy some global constraints. The study of this problem has grown immensely in the past decades, with applications arising in a wide range of societal systems, including strategic behaviors in power markets, traffic assignment of strategic risk-averse users, and engagement of multiple humanitarian organizations in disaster relief. Furthermore, with machine learning models playing an increasingly important role in practical applications, the robustness of these models becomes another prominent concern. Investigation into the solutions of multiplayer games and Nash equilibrium problems (NEPs) can advance algorithm design for fitting these models in the presence of adversarial noise.

Most of the existing methods for solving multiplayer games assume the presence of a central coordinator, which, unfortunately, is not practical in many scenarios. Moreover, in addition to couplings in the objectives and the global constraints, all too often the objective functions contain uncertainty in the form of stochastic noises and unknown model parameters. The problem is further complicated by the following considerations: the individual objectives of players may be unavailable or too complex to model; players may exhibit reluctance to disclose their actions; and players may experience random delays when receiving feedback on their actions. To contend with these issues and uncertainties, in the first half of the thesis we develop several algorithms based on the theory of operator splitting and stochastic approximation, where the game participants only share their local information and decisions with their trusted neighbors on the network. In the second half of the thesis, we explore the bandit online learning framework as a solution to these challenges, where decisions made by players are updated based solely on the realized objective function values. Our future work will delve into data-driven approaches for learning in multiplayer games, exploring functional representations of players' decisions in a departure from the vector form.
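A minimal instance of equilibrium learning in such games is best-response dynamics in a two-firm Cournot model, where iterating each player's closed-form best response converges to the Nash equilibrium. The demand intercept and zero-cost assumption are illustrative.

```python
# Best-response dynamics for a two-firm Cournot game: each firm i picks
# quantity q_i to maximize q_i * (a - q_i - q_j) given the other's output,
# so its best response is q_i = (a - q_j) / 2. Iterating the responses
# converges to the Nash equilibrium q* = a / 3. Values are illustrative.

a = 12.0  # inverse-demand intercept, P(Q) = a - Q, zero marginal cost

def best_response(q_other):
    return max(0.0, (a - q_other) / 2.0)

q1, q2 = 0.0, 0.0
for _ in range(100):
    q1 = best_response(q2)  # firm 1 responds to firm 2's last output
    q2 = best_response(q1)  # firm 2 responds to firm 1's new output
```

In the distributed and bandit settings the thesis studies, players cannot evaluate closed-form best responses; they must approximate such updates from neighbor communication or realized payoffs alone.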
|
160 |
HIGH-DIMENSIONAL INFERENCE OVER NETWORKS: STATISTICAL AND COMPUTATIONAL GUARANTEES
Yao Ji (19697335) 19 September 2024 (has links)
Distributed optimization problems defined over mesh networks are ubiquitous in signal processing, machine learning, and control. In contrast to centralized approaches, where all information and computation resources are available at a centralized server, agents on a distributed system can only use locally available information. As a result, efforts have been put into the design of efficient distributed algorithms that respect the communication constraints and make coordinated decisions in a fully distributed manner from a pure optimization perspective. Given the massive sample sizes and high dimensionality generated by distributed systems such as social media, sensor networks, and cloud-based databases, it is essential to understand the statistical and computational guarantees of distributed algorithms for solving such high-dimensional problems over a mesh network.

A goal of this thesis is a first attempt at studying the behavior of distributed methods in the high-dimensional regime. It consists of two parts: (I) distributed LASSO and (II) distributed stochastic sparse recovery.

In Part (I), we start by studying linear regression from data distributed over a network of agents (with no master node) by means of LASSO estimation in high dimension, which allows the ambient dimension to grow faster than the sample size. While there is a vast literature on distributed algorithms applicable to the problem, the statistical and computational guarantees of most of them remain unclear in high dimensions. This thesis provides a first statistical study of Distributed Gradient Descent (DGD) in the Adapt-Then-Combine (ATC) form. Our theory shows that, under standard notions of restricted strong convexity and smoothness of the loss functions (which hold with high probability for standard data generation models) and suitable conditions on the network connectivity and algorithm tuning, DGD-ATC converges globally at a linear rate to an estimate that is within the centralized statistical precision of the model. In the worst-case scenario, the total number of communications to statistical optimality grows logarithmically with the ambient dimension, which improves on the communication complexity of DGD in the Combine-Then-Adapt (CTA) form, scaling linearly with the dimension. This reveals that mixing gradient information among agents, as DGD-ATC does, is critical in high dimensions to obtain favorable rate scalings.

In Part (II), we address the problem of distributed stochastic sparse recovery through stochastic optimization. We develop and analyze stochastic optimization algorithms for problems over a network, modeled as an undirected graph (with no centralized node), where the expected loss is strongly convex with respect to the Euclidean norm and the optimum is sparse. Assuming agents only have access to unbiased estimates of the gradients of the underlying expected objective, and that the stochastic gradients are sub-Gaussian, we use distributed stochastic dual averaging (DSDA) as a building block to develop a fully decentralized restarting procedure for recovery of sparse solutions over a network. We show that, with high probability, the iterates generated by all agents converge linearly to an approximate solution, quickly eliminating the initial error, and then converge sublinearly to the exact sparse solution in the steady-state stages owing to observation noise. The algorithm asymptotically achieves the optimal convergence rate and the favorable dimension dependence enjoyed by a non-Euclidean centralized scheme. Further, we precisely identify its non-asymptotic convergence rate as a function of characteristics of the objective functions and the network, and we characterize the transient time needed for the algorithm to approach the optimal rate of convergence. We illustrate the performance of the algorithm in application to the classical problems of sparse linear regression, sparse logistic regression, and low-rank matrix recovery. Numerical experiments demonstrate the tightness of the theoretical results.
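The Adapt-Then-Combine update analyzed in Part (I) can be sketched on scalar quadratic losses: each agent takes a local gradient step and then averages the updated iterates of its neighbors through a doubly stochastic mixing matrix. The network, targets, and step size are illustrative; with a constant step the iterates settle in a neighborhood of the network optimum rather than on it exactly.

```python
import numpy as np

# DGD in Adapt-Then-Combine form on local losses f_i(x) = 0.5*(x - b_i)^2,
# so the network-wide optimum is mean(b). W is a doubly stochastic mixing
# matrix for a 4-agent path graph. All values are illustrative.
b = np.array([1.0, 3.0, 5.0, 7.0])            # one local target per agent
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
step = 0.05
x = np.zeros(4)                               # one scalar iterate per agent

for _ in range(400):
    local_update = x - step * (x - b)         # adapt: local gradient step
    x = W @ local_update                      # then combine with neighbors
```

The combine step mixes already-updated iterates, which is the ATC ordering; the CTA variant mixes first and adapts after, and the thesis shows this ordering matters for communication complexity in high dimensions.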
|