  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems

Fitiwi, Desta Zahlay January 2016 (has links)
Driven by a number of factors, the electric power industry is expected to undergo a paradigm shift with a considerably increased level of variable energy sources. A significant integration of such sources requires heavy transmission investments over geographically wide and large-scale networks. However, the stochastic nature of such sources, along with the sheer size of network systems, results in problems that may become intractable. Thus, the challenge addressed in this work is to design efficient and reasonably accurate models, strategies and tools that can solve large-scale transmission expansion planning (TEP) problems under uncertainty. A long-term stochastic network planning tool is developed, considering a multi-stage decision framework and a high level of integration of renewables. Such a tool combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of real-world planning. Furthermore, in order to significantly reduce the combinatorial solution search space, a specific heuristic solution strategy is devised. This works by decomposing the original problem into successive optimization phases. One of the modeling challenges addressed in this work is selecting the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. Another relevant contribution is a domain-driven clustering process of snapshots, based on a “moments” technique. Finally, the developed models, methods and solution strategies have been tested on standard and real-life systems. This thesis also presents numerical results of an aggregated 1060-node European network system considering multiple RES development scenarios. Generally, test results show the effectiveness of the proposed TEP model, since—as originally intended—it contributes to a significant reduction in computational effort while fairly maintaining optimality of the solutions.
/ Driven by several techno-economic, environmental and structural factors, the electric energy industry is expected to undergo a paradigm shift with a considerably increased level of renewables (mainly variable energy sources such as wind and solar), gradually replacing conventional power production sources. The scale and the speed of integrating such sources of energy are of paramount importance to effectively address a multitude of global and local concerns such as climate change, sustainability and energy security. In recent years, wind and solar power have been attracting large-scale investments in many countries, especially in Europe. Agreements among states to curb greenhouse gas emissions and mitigate climate change, along with other driving factors, will further accelerate renewable integration in power systems. Renewable energy sources (RESs), wind and solar in particular, are abundant almost everywhere, although their energy intensities differ greatly from one place to another. Because of this, a significant integration of such energy sources requires heavy investments in transmission infrastructure. In other words, transmission expansion planning (TEP) has to be carried out in geographically wide and large-scale networks. This helps to effectively accommodate the RESs and optimally exploit their benefits while minimizing their side effects. However, the uncertain nature of most renewable sources, along with the size of the network systems, results in optimization problems that may become intractable in practice or require a huge computational effort. Thus, the challenge addressed in this work is to design computationally efficient and reasonably accurate models, strategies and tools that can solve large-scale TEP problems under uncertainty.
Of course, the specific definition of the term “reasonably accurate” is the key issue of the thesis work, since it requires a deep understanding of the main cost and technical drivers of adequate TEP investment decisions. A new formulation is proposed in this dissertation for long-term planning of transmission investments under uncertainty, with a multi-stage decision framework and considering a high level of renewable sources integration. This multi-stage strategy combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of real-world planning. The TEP problem is formulated as a stochastic mixed-integer linear program (S-MILP), which can be solved by exact methods. This allows the use of effective off-the-shelf solvers to obtain solutions within a reasonable computational time, enhancing overall problem tractability. Furthermore, in order to significantly reduce the combinatorial solution search (CSS) space, a specific heuristic solution strategy is devised. In this global heuristic strategy, the problem is decomposed into successive optimization phases. Each phase uses more complex optimization models than the previous one, and uses the results of the previous phase, so that the combinatorial solution search space is reduced after each phase. Moreover, each optimization phase is defined and solved as an independent problem, allowing the use of specific decomposition techniques, or parallel computation when possible. A relevant feature of the solution strategy is that it combines deterministic and stochastic modeling techniques in a multi-stage modeling framework with a rolling-window planning concept. The planning horizon is divided into two sub-horizons: medium- and long-term, both having multiple decision stages.
In each stage of the first sub-horizon, a single set of investments is chosen that is good enough for all scenarios, while scenario-dependent decisions are made in the second sub-horizon. One of the first modeling challenges of this work is to select the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. The thesis includes extensive analysis of existing and improved network models such as AC, linearized AC, “DC”, hybrid and pipeline models, both for the existing and the candidate lines. Finally, a DC network model is proposed as the most suitable option. This work also analyzes alternative loss models. Some of them are already available and others are proposed as original contributions of the thesis. These models are evaluated in the context of the target problem, i.e., in finding the right balance between accuracy and computational effort in a large-scale TEP problem subject to significant RES integration. It has to be pointed out that, although losses are usually neglected in TEP studies because of computational limitations, they are critical in network expansion decisions. In fact, using inadequate models may lead not only to cost-estimation errors, but also to technical errors such as the so-called “artificial losses”. Another relevant contribution of this work is a domain-driven clustering process to handle operational states. This allows a more compact and efficient representation of uncertainty with little loss of accuracy. This is relevant because, together with electricity demand and other traditional sources of uncertainty, the integration of variable energy sources introduces additional operational variability and uncertainty.
A substantial part of this uncertainty and variability is often handled by a set of operational states, here referred to as “snapshots”, which are generation-demand patterns of power systems that lead to optimal power flow (OPF) patterns in the transmission network. A large set of snapshots, each one with an estimated probability, is then used to evaluate and optimize the network expansion. In a long-term TEP problem of large networks, the number of operational states must be reduced. Hence, from a methodological perspective, this thesis shows how the snapshot reduction can be achieved by means of clustering, without relevant loss of accuracy, provided that a good selection of classification variables is used in the clustering process. The proposed method relies on two ideas. First, the snapshots are characterized by their OPF patterns (the effects) instead of the generation-demand patterns (the causes). This is simply because the network expansion is the target problem, and losses and congestions are the drivers to network investments. Second, the OPF patterns are classified using a “moments” technique, a well-known approach in Optical Pattern Recognition problems. The developed models, methods and solution strategies have been tested on small-, medium- and large-scale network systems. This thesis also presents numerical results of an aggregated 1060-node European network system obtained considering multiple RES development scenarios. Generally, test results show the effectiveness of the proposed TEP model, since—as originally intended—it contributes to a significant reduction in computational effort while fairly maintaining optimality of the solutions.
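The snapshot-clustering idea above lends itself to a compact sketch. The following is an illustrative Python version, not the thesis' actual implementation: each snapshot (here a vector of line flows) is reduced to a few statistical moments of its flow pattern, and a plain k-means pass groups snapshots so that each cluster carries a probability. The function names and the three-moment choice are assumptions made for the example.

```python
import numpy as np

def moment_features(snapshots):
    """Characterize each snapshot (a row vector of line flows) by the
    first moments of its flow distribution, echoing the "moments" idea."""
    m1 = snapshots.mean(axis=1)
    m2 = snapshots.var(axis=1)
    m3 = ((snapshots - m1[:, None]) ** 3).mean(axis=1)
    return np.column_stack([m1, m2, m3])

def cluster_snapshots(snapshots, k, iters=50, seed=0):
    """Plain k-means on the moment features; returns cluster labels and
    the probability (relative frequency) of each cluster."""
    X = moment_features(snapshots)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    probs = np.bincount(labels, minlength=k) / len(X)
    return labels, probs
```

Each cluster representative would then stand in for its members in the expansion model, weighted by its probability.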
92

Optimal Decisions in the Equity Index Derivatives Markets Using Option Implied Information

Barkhagen, Mathias January 2015 (has links)
This dissertation is centered on two broad themes: the extraction of information embedded in equity index option prices, and how to use this information to make optimal decisions in the equity index option markets. These problems are important for decision makers in the equity index options markets, since they are continuously faced with making decisions under uncertainty given observed market prices. The methods developed in this dissertation provide robust tools that practitioners can use to improve the quality of the decisions that they make. To extract the information embedded in option prices, the dissertation develops two different methods for estimation of stable option implied surfaces which are consistent with observed market prices. This is a difficult and ill-posed inverse problem, complicated by the fact that observed option prices contain a large amount of noise stemming from market microstructure effects. Producing estimated surfaces that are stable over time is important, since otherwise risk measurement of derivatives portfolios, pricing of exotic options and calculation of hedge parameters will be prone to significant errors. The first method that we develop leads to an optimization problem which is formulated as a convex quadratic program with linear constraints, which can be solved very efficiently. The second estimation method that we develop makes it possible to produce local volatility surfaces of high quality, which are consistent with market prices and stable over time. The high quality of the surfaces estimated with the second method is the crucial input to the research which has resulted in the last three papers of the dissertation. The stability of the estimated local volatility surfaces makes it possible to build a realistic dynamic model for the equity index derivatives market.
This model forms the basis for the stochastic programming (SP) model for option hedging that we develop in the dissertation. We show that the SP model, which uses generated scenarios for the squared local volatility surface as input, outperforms the traditional hedging methods described in the literature. Apart from having an accurate view of the variance of relevant risk factors, it is also important, when building a dynamic model, to have a good estimate of the expected values, and thereby risk premia, of those factors. We use a result from recently published research which lets us recover the real-world density from only a cross-section of observed option prices via a local volatility model. The recovered real-world densities are then used to identify and estimate liquidity premia that are embedded in option prices. We also use the recovered real-world densities to test how well the option market predicts the realized statistical characteristics of the underlying index, and compare the results with the performance of commonly used models for the underlying index. The results show that option prices contain a premium in the tails of the distribution. By removing the estimated premia from the tails, the resulting density predicts future realizations of the underlying index very well.
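The surface estimation is described above only as a convex quadratic program with linear constraints. The sketch below shows the unconstrained core of such a formulation under stated assumptions: fit a smooth curve to one noisy slice of an implied volatility surface by penalizing discrete curvature. The linear (e.g. no-arbitrage) constraints of the actual method are omitted, and the function name and penalty weight are illustrative.

```python
import numpy as np

def smooth_surface_slice(y, lam=1.0):
    """Fit a smooth vector v to noisy observed implied volatilities y by
    minimizing ||v - y||^2 + lam * ||D2 v||^2, where D2 is the discrete
    second-difference operator. This is a convex quadratic objective whose
    minimizer solves the linear system (I + lam * D2'D2) v = y."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2  # normal equations of the QP
    return np.linalg.solve(A, y)
```

With linear constraints added (monotonicity, convexity in strike), the same objective would be handed to a QP solver instead of solved in closed form.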
93

Empirické odhady ve stochastickém programování; závislá data / Empirical Estimates in Stochastic Programming; Dependent Data

Kolafa, Ondřej January 2014 (has links)
This thesis concentrates on stochastic programming problems based on empirical and theoretical distributions and the relationship between them. First, it focuses on the case where the empirical distribution comes from an independent random sample. Basic properties are shown, followed by the convergence of the problem based on the empirical distribution to the same problem applied to the theoretical distribution. The thesis continues with an overview of some types of dependence - m-dependence, mixing, and also more general weak dependence. For sequences with some of these types of dependence, properties are shown to be similar to those holding for independent sequences. In the last section, the theory is demonstrated using numerical examples, and dependent and independent sequences, including sequences with different types of dependence, are compared.
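The empirical-versus-theoretical relationship can be illustrated with a toy sample-average approximation. The sketch below is an illustration, not the thesis' construction: the SAA optimum of a newsvendor-type problem is an empirical quantile, and an AR(1) sequence serves as an example of a dependent but mixing sample for which the empirical solution still converges to the theoretical one.

```python
import numpy as np

def saa_newsvendor(demand, ratio=0.5):
    """SAA of the newsvendor problem: the empirical problem replaces the
    theoretical distribution, and its optimum is the `ratio` empirical
    quantile of observed demand."""
    return np.quantile(demand, ratio)

def ar1_sample(n, phi=0.6, seed=0):
    """A stationary AR(1) sequence: dependent but mixing, the kind of
    sequence for which SAA-style consistency can still be established."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal(0, 1 / np.sqrt(1 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x
```

For the AR(1) sample the theoretical median is 0, so the empirical SAA solution should approach 0 as the (dependent) sample grows.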
94

Úlohy stochastického programování a ekonomické aplikace / Stochastic Programming Problems and Economic Applications

Kučera, Tomáš January 2014 (has links)
This thesis deals with stochastic programming, in particular with regard to portfolio optimization and heavy-tailed data. The first part of the thesis reviews the most common types of problems associated with stochastic programming. The second part focuses on solving stochastic programming problems via the SAA method, especially when the data follow heavy-tailed distributions. In the final part, the theory is applied to the portfolio optimization problem, and the thesis concludes with a numerical study programmed in R based on data collected from Google Finance.
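A deliberately simple stand-in for this problem class can make the SAA-with-heavy-tails setting concrete. The sketch below is illustrative only: a two-asset allocation is chosen by grid search to maximize a sample mean-minus-CVaR criterion on Student-t returns. All figures, names and the grid-search approach are assumptions for the example, not the thesis' method.

```python
import numpy as np

def sample_cvar(losses, alpha=0.95):
    """Empirical CVaR: mean loss beyond the empirical alpha-quantile."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def saa_two_asset(r1, r2, lam=1.0):
    """SAA of max_w  mean(portfolio) - lam * CVaR(loss) for w in [0, 1],
    solved by grid search over the mixing weight of asset 1."""
    best_w, best_val = 0.0, -np.inf
    for w in np.linspace(0, 1, 101):
        port = w * r1 + (1 - w) * r2
        val = port.mean() - lam * sample_cvar(-port)
        if val > best_val:
            best_w, best_val = w, val
    return best_w

# Heavy-tailed return samples: Student-t with 3 degrees of freedom
rng = np.random.default_rng(0)
safe = 0.02 + 0.01 * rng.standard_t(df=3, size=50000)
risky = 0.05 + 0.20 * rng.standard_t(df=3, size=50000)
```

With heavy tails, the CVaR term penalizes the volatile asset strongly, so the SAA solution concentrates most weight in the low-volatility asset.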
95

Parallel problem generation for structured problems in mathematical programming

Qiang, Feng January 2015 (has links)
The aim of this research is to investigate parallel problem generation for structured optimization problems. This research has produced a novel parallel model generator tool, namely the Parallel Structured Model Generator (PSMG). PSMG adopts the model syntax from SML to attain backward compatibility for the models already written in SML [1]. Unlike the proof-of-concept implementation for SML in [2], PSMG does not depend on AMPL [3]. In this thesis, we first explain what a structured problem is using concrete real-world problems modelled in SML. Presenting those example models allows us to exhibit PSMG’s modelling syntax and techniques in detail. PSMG provides an easy-to-use framework for modelling large-scale nested structured problems, including multi-stage stochastic problems. PSMG can be used for modelling linear programming (LP), quadratic programming (QP), and nonlinear programming (NLP) problems. The second part of this thesis describes the logical calling sequences, dependencies and algorithms behind PSMG’s parallel operation. We explain the design concept for PSMG’s solver interface. The interface follows a solver-driven work assignment approach that allows the solver to decide how to distribute problem parts to processors in order to obtain better data locality and load balancing for solving problems in parallel. PSMG adopts a delayed constraint expansion design, so that memory for computed entities is allocated on a process only when necessary. The computed entities can be the set expansions of the indexing expressions associated with the variable, parameter and constraint declarations, or temporary values used for set and parameter constructions.
We also illustrate algorithms that are important for an efficient implementation of PSMG, such as routines for partitioning constraints according to blocks and automatic differentiation algorithms for evaluating Jacobian and Hessian matrices and their corresponding sparsity patterns. Furthermore, PSMG implements a generic solver interface which can be linked with different structure-exploiting optimization solvers, such as decomposition or interior point based solvers. The work required for linking with PSMG’s solver interface is also discussed. Finally, we evaluate PSMG’s run-time performance and memory usage by generating structured problems of various sizes. The results from both serial and parallel executions are discussed. The benchmark results show that PSMG achieves good parallel efficiency on up to 96 processes. PSMG distributes memory usage among parallel processors, which enables the generation of problems that are too large to be processed on a single node due to memory restrictions.
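One small piece of what a model generator must supply to solvers, the Jacobian sparsity pattern, can be sketched in a few lines. The fragment below is a simplified analogue, not PSMG's implementation: constraints are evaluated on "tracer" objects that propagate index sets through arithmetic, so each constraint reports which variables it touches.

```python
class Tracer:
    """Carries the set of variable indices an expression depends on;
    any arithmetic combination takes the union of the operands' sets."""
    def __init__(self, deps):
        self.deps = frozenset(deps)

    def _join(self, other):
        o = other.deps if isinstance(other, Tracer) else frozenset()
        return Tracer(self.deps | o)

    __add__ = __mul__ = __sub__ = _join
    __radd__ = __rmul__ = __rsub__ = _join

def jac_sparsity(constraint_fns, nvars):
    """Evaluate each constraint on tracer variables to recover the row
    pattern of the Jacobian without any symbolic machinery."""
    xs = [Tracer({j}) for j in range(nvars)]
    return [sorted(f(xs).deps) for f in constraint_fns]
```

Grouping the resulting rows by model block would then give the block partition of constraints that structure-exploiting solvers consume.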
96

[en] OPTIMUM ALLOCATION AND RISK MEASURE IN AN ALM MODEL FOR A PENSION FUND VIA MULTI-STAGE STOCHASTIC PROGRAMMING AND BOOTSTRAP / [pt] ALOCAÇÃO ÓTIMA E MEDIDA DE RISCO DE UM ALM PARA FUNDO DE PENSÃO VIA PROGRAMAÇÃO ESTOCÁSTICA MULTI-ESTÁGIO E BOOTSTRAP

DAVI MICHEL VALLADAO 29 September 2008 (has links)
[pt] Asset and Liability Management ou ALM pode ser definido como um processo de gestão de ativos e passivos de forma coordenada com a finalidade de atingir os objetivos financeiros de uma organização. No caso dos fundos de pensão, o ALM consiste fundamentalmente na determinação da política ótima de investimentos. Esta deverá maximizar o capital acumulado através de contribuições dos participantes e do retorno dos investimentos ao mesmo tempo em que minimiza o risco do não cumprimento das obrigações do fundo. A aplicação de modelos de programação estocástica para problemas de ALM em fundos de pensão é dificultada pelos longos prazos envolvidos - a duração dos benefícios pode ultrapassar cem anos. No entanto, os modelos de programação estocástica propostos na literatura limitam o horizonte de planejamento a poucas décadas, ao final das quais é imposta uma restrição de capital mínimo com vistas a controlar o risco de equilíbrio relativo ao restante da vigência do fundo. Este trabalho propõe um novo método para incorporar o risco de equilíbrio na determinação do capital mínimo final do modelo de programação estocástica aplicado a um fundo de pensão no contexto brasileiro. No método proposto, o cálculo da probabilidade de insolvência leva em consideração que os benefícios futuros devem ser trazidos a valor presente pela rentabilidade futura da carteira, cuja distribuição de probabilidades é levantada através de um processo de reamostragem (bootstrap) dos cenários embutidos na solução do problema de programação estocástica. O método proposto permite evidenciar que a probabilidade de insolvência medida tradicionalmente utilizada subestima acentuadamente o risco de equilíbrio. / [en] Asset and Liability Management or ALM can be defined as a process of managing assets and liabilities in a coordinated way to achieve an organization's financial objectives.
For instance, a pension fund ALM consists in determining the optimal investment policy: the one that maximizes the wealth accumulated through contributions and investment returns while minimizing the equilibrium risk, defined as the insolvency probability, i.e., the probability that the fund won't be able to pay all benefits during the planning horizon. The use of stochastic programming models for ALM problems is made difficult by the long planning horizon. However, the stochastic programming models proposed in the literature reduce the planning horizon and include a chance constraint or an objective-function penalization to control the equilibrium risk for the non-considered period. In this work, a new method for measuring and controlling the equilibrium risk is proposed, determining the capital requirement of a Brazilian pension fund for the non-considered period. The method considers the portfolio return as the discount rate of all net liability flows. The distribution of this discount rate conditioned on the optimal decisions is estimated by bootstrapping the portfolio return embedded in the stochastic programming solution. In sum, this method shows that the insolvency probability measured by the previous models actually underestimates the pension fund's equilibrium risk.
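The bootstrap idea described above can be sketched compactly. The following is an illustrative Python version under stated assumptions (annual resampling of portfolio returns, all remaining liabilities due at the end of the horizon); the figures and function names are hypothetical and not taken from the thesis.

```python
import numpy as np

def insolvency_probability(wealth, liabilities, returns, horizon=30,
                           n_boot=5000, seed=0):
    """Resample observed annual portfolio returns (bootstrap) to simulate
    accumulation paths over the non-considered period, and estimate the
    probability that final wealth cannot cover the remaining liabilities."""
    rng = np.random.default_rng(seed)
    insolvent = 0
    for _ in range(n_boot):
        path = rng.choice(returns, size=horizon, replace=True)
        w = wealth * np.prod(1.0 + path)
        # simplification: remaining liabilities fall due at the horizon end
        if w < liabilities:
            insolvent += 1
    return insolvent / n_boot
```

In the thesis' setting the resampled returns would come from the scenarios embedded in the stochastic programming solution, so the estimate is conditioned on the optimal decisions.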
97

Aplicação de programação estocástica para estratégias de investimento em fundos estruturados. / Stochastic programming application for investment strategies in structured funds.

Vanzetto, Bruno Marquetti 02 April 2014 (has links)
Este trabalho apresenta a aplicação de uma abordagem via programação estocástica linear para definição da alocação de ativos ótima em um fundo de investimento estruturado. A carteira adotada é formada por caixa, futuro de Ibovespa e call de Ibovespa. A partir da definição da alocação ótima é feita uma análise do sucesso dessa carteira em superar a performance do índice Bovespa. Além disso, é feita uma análise comparativa contra uma carteira formada apenas por caixa e um ativo com retorno igual ao do índice Bovespa à vista. Tal comparação analisa a diferença de performance e também a diferença de risco medido pelo VaR (value at risk) versus retorno. A metodologia adotada para obtenção da alocação ótima é baseada na geração de uma árvore de cenários de parâmetros de mercado sobre a qual os ativos que formam a carteira são revalorizados de tal modo que o lucro acumulado ao fim do prazo de vida do fundo seja maximizado. Os resultados obtidos mostram que a performance da carteira formada por caixa, futuros e calls supera a performance da carteira formada por caixa e índice Bovespa à vista mesmo quando sujeito a um mesmo nível de risco. Além disso, a metodologia de comparação proposta neste trabalho também se mostrou bastante útil na prática de tomada de decisões de investimento por permitir comparar a performance de estratégias de investimentos dinâmicas. / This paper presents an application that uses linear stochastic programming to define the optimal asset allocation of a structured investment fund. The portfolio adopted is comprised of cash, Ibovespa futures and calls of Ibovespa. The performance of the portfolio with optimal allocation is compared to the performance of the Bovespa index. Moreover, the performance of the portfolio with optimal allocation is also compared with that of a portfolio comprised of cash and a risky asset with return equal to the Bovespa index return.
Such analysis highlights not only the differences of performance but also the differences of the associated risk level measured by VaR (Value at Risk) versus return. The methodology used for obtaining the optimal allocation is based on an event tree of market parameters, which is used to reprice the assets comprising the portfolio so that the total profit at the end of the fund's life is maximized. The results show that the portfolio comprised of cash, Ibovespa futures and calls of Ibovespa outperforms the Bovespa index and also the portfolio comprised of cash and the Bovespa index, even when subject to the same level of risk. The analysis developed in this paper allows the comparison of dynamic investment strategies, which can be very useful in the practical decision-making process of asset and liability management.
98

Tomada de decisão de investimento em um fundo de pensão com plano de benefícios do tipo benefício definido: uma abordagem via programação estocástica multiestágio linear. / Investment decision making in a defined benefit pension fund plan: an approach via linear stochastic programming.

Figueiredo, Danilo Zucolli 28 September 2011 (has links)
Este trabalho apresenta uma abordagem via programação estocástica linear para a tomada de decisão de investimento em um fundo de pensão com plano de benefícios do tipo benefício definido. Propõe-se uma nova metodologia para a definição da alocação da carteira do fundo no instante inicial baseada na média de vários cenários econômicos gerados aleatoriamente. Como exemplo de aplicação, essa metodologia é utilizada para resolver o problema da alocação inicial da carteira de um grande fundo de pensão brasileiro e a alocação inicial obtida é avaliada em termos da probabilidade de insolvência e VaR, valor em risco, do fundo no instante final do horizonte de planejamento de investimento. / This paper presents an approach via linear stochastic programming for investment decision making in a defined benefit pension fund plan. It proposes a new methodology for defining the allocation of the portfolio at the initial time based on the average of several randomly generated economic scenarios. As an illustrative example, this methodology is used to solve the initial portfolio allocation problem of a large Brazilian pension fund, and the obtained initial allocation is evaluated in terms of the fund's probability of default and VaR, Value-at-Risk, at the final time of the investment planning horizon.
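The VaR evaluation at the end of the planning horizon reduces to a quantile of the simulated loss distribution. A minimal sketch under assumed conventions (losses measured against initial wealth, empirical quantile as the VaR estimator) follows; it is an illustration, not the thesis' evaluation code.

```python
import numpy as np

def value_at_risk(terminal_wealth, initial_wealth, alpha=0.95):
    """VaR of the loss distribution at the end of the planning horizon:
    the alpha-quantile of  initial_wealth - terminal_wealth  over the
    simulated scenarios."""
    losses = initial_wealth - np.asarray(terminal_wealth, dtype=float)
    return np.quantile(losses, alpha)
```

Fed with the terminal wealth of each economic scenario, this yields the VaR figure against which the obtained initial allocation would be judged, alongside the insolvency probability.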
100

Essays on Multistage Stochastic Programming applied to Asset Liability Management

Oliveira, Alan Delgado de January 2018 (has links)
A incerteza é um elemento fundamental da realidade. Então, torna-se natural a busca por métodos que nos permitam representar o desconhecido em termos matemáticos. Esses problemas originam uma grande classe de programas probabilísticos reconhecidos como modelos de programação estocástica. Eles são mais realísticos que os modelos determinísticos, e têm por objetivo incorporar a incerteza em suas definições. Essa tese aborda os problemas probabilísticos da classe de problemas multi-estágio com incerteza, com restrições probabilísticas e com restrições probabilísticas conjuntas. Inicialmente, nós propomos um modelo de administração de ativos e passivos multi-estágio estocástico para a indústria de fundos de pensão brasileira. Nosso modelo é formalizado em conformidade com as leis e políticas brasileiras. A seguir, dada a relevância dos dados de entrada para esses modelos de otimização, voltamos nossa atenção às diferentes técnicas de amostragem. Elas compõem o processo de discretização desses modelos estocásticos. Nós verificamos como as diferentes metodologias de amostragem impactam a solução final e a alocação do portfólio, destacando boas opções para modelos de administração de ativos e passivos. Finalmente, nós propomos um “framework” para a geração de árvores de cenário e otimização de modelos com incerteza multi-estágio. Baseados na transformação de Knuth, nós geramos a árvore de cenários considerando a representação filho-esquerda, irmão-direita, o que torna a simulação mais eficiente em termos de tempo e de número de cenários. Nós também formalizamos uma reformulação do modelo de administração de ativos e passivos baseada na abordagem extensiva implícita para o modelo de otimização. Essa técnica é projetada pela definição de um processo de filtragem com “bundles”, e codificada com o auxílio de uma linguagem de modelagem algébrica.
A eficiência dessa metodologia é testada em um modelo de administração de ativos e passivos com incerteza e com restrições probabilísticas conjuntas. Nosso framework torna possível encontrar a solução ótima para árvores com um número razoável de cenários. / Uncertainty is a key element of reality. Thus, it is natural to search for methods that allow us to represent the unknown in mathematical terms. These problems give rise to a large class of probabilistic programs known as stochastic programming models. They are more realistic than deterministic ones, and their aim is to incorporate uncertainty into their definitions. This dissertation approaches the class of multistage stochastic problems with chance constraints and joint-chance constraints. Initially, we propose a multistage stochastic asset liability management (ALM) model for the Brazilian pension fund industry. Our model is formalized in compliance with Brazilian laws and policies. Next, given the relevance of the input parameters for these optimization models, we turn our attention to the different sampling techniques that compose the discretization process of these stochastic models. We check how the different sampling methodologies impact the final solution and the portfolio allocation, outlining good options for ALM models. Finally, we propose a framework for the scenario-tree generation and optimization of multistage stochastic programming problems. Relying on the Knuth transform, we generate the scenario trees taking advantage of the left-child, right-sibling representation, which makes the simulation more efficient in terms of time and the number of scenarios. We also formalize an ALM model reformulation based on the implicit extensive form of the optimization model. This technique is designed by defining a filtration process with bundles, and coded with the support of an algebraic modeling language.
The efficiency of this methodology is tested in a multistage stochastic ALM model with joint-chance constraints. Our framework makes it possible to reach the optimal solution for trees with a reasonable number of scenarios.
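The left-child, right-sibling representation mentioned above (the Knuth transform of a multiway tree into a binary one) can be sketched directly. The following is a minimal illustration, with assumed class and field names, of how a scenario tree stored this way is built and traversed to enumerate scenarios with their path probabilities.

```python
class Node:
    """Scenario-tree node in left-child, right-sibling form: each node
    stores only its first child and its next sibling, so an arbitrary
    multiway stage tree becomes a binary structure (Knuth transform)."""
    def __init__(self, prob, value):
        self.prob, self.value = prob, value
        self.child = None    # first child (next stage)
        self.sibling = None  # next node under the same parent

    def add_child(self, node):
        node.sibling, self.child = self.child, node
        return node

def scenarios(node, path=(), prob=1.0):
    """Enumerate root-to-leaf scenarios with their path probabilities."""
    path = path + (node.value,)
    prob *= node.prob
    if node.child is None:
        yield path, prob
    else:
        c = node.child
        while c is not None:
            yield from scenarios(c, path, prob)
            c = c.sibling
```

Because every node carries exactly two links regardless of how many branches a stage has, storage stays uniform and traversal needs no per-node child arrays, which is the efficiency argument behind the representation.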
