1 |
Analysis and Planning of Power Transmission System Subject to Uncertainties in the Grid
Aryal, Durga (01 February 2019)
Power transmission systems frequently experience new power flow patterns due to several factors that increase uncertainty in the system. For instance, load shape uncertainty, uncertainty due to the penetration of renewable sources, changing standards, and energy deregulation threaten the reliability and security of power transmission systems. This demands more rigorous analysis and planning of power transmission systems.
Stability issues in power transmission systems are more pronounced with the penetration of utility-scale photovoltaic (PV) sources. Synchronous generators provide inertia that helps damp oscillations arising from fluctuations in the power system. Therefore, as PV generators replace conventional synchronous generators, power transmission systems become vulnerable to these abnormalities. In this thesis, we study the effect of reduced inertia due to the penetration of utility-scale PV on the transient stability of power transmission systems. In addition, we analyze the effect of increased PV penetration levels during normal operating conditions. The latter study illustrates that the PV penetration level and the placement of PV sources play crucial roles in determining the stability of power transmission systems.
Given increasing uncertainties in power transmission systems, there is a need to seek an alternative to the deterministic planning approach, which inherently lacks the capability to cover all the uncertainties. One practical alternative is the probabilistic planning approach, in which a wide variety of scenarios is analyzed by considering the probability of occurrence of each scenario and the probability of contingencies. The risk associated with each planning practice is then calculated from the probability and severity of contingencies. However, due to the lack of techniques and tools to select a wide variety of scenarios along with their probability of occurrence, the probabilistic transmission planning approach has not been implemented in real-world power transmission systems. This thesis presents a technique that can select a wide variety of scenarios, along with their probability of occurrence, to facilitate probabilistic planning in Electric Reliability Council of Texas (ERCOT) systems. / Master of Science / The reliability of power transmission systems is threatened by increasing uncertainties arising from the penetration of renewable energy sources, load growth, energy deregulation, and changing standards. Stability issues are more prevalent than in the past because increasing load growth raises the demand for reactive power. Several researchers have studied the impact of increased load growth and increased penetration of renewables on the dynamic stability of the distribution system; far less emphasis, however, has been given to the power transmission system. This thesis presents a transient stability analysis of power transmission systems under overloading conditions. Our study also facilitates the identification of weak areas of the transmission system during overloading conditions. In addition, the impact of replacing conventional synchronous generators with photovoltaics (PV) on the voltage stability of the system is analyzed.
With increasing uncertainties in transmission systems, it is necessary to carefully analyze a wide variety of scenarios when planning the system. The current, deterministic approach to transmission planning does not sufficiently cover all the uncertainties. This has created the need for a probabilistic transmission planning approach, in which the overall system is planned based on the analysis of a wide variety of scenarios. By considering the probability of occurrence of each scenario together with the probability and severity of contingencies, the risk associated with each planning practice is calculated. However, there is no well-established approach capable of selecting a wide variety of scenarios based on their probability of occurrence. Due to this limitation, the probabilistic approach is not widely implemented in real-world power transmission systems. To address this issue, this thesis presents a new technique, based on K-means clustering, to select scenarios based on their probability of occurrence.
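The K-means scenario-selection idea can be sketched in a few lines. This is a minimal illustration under assumed details (the toy load/PV data, the farthest-point initialization, and all identifiers are illustrative, not the thesis's actual ERCOT tooling): cluster many candidate operating scenarios, keep each centroid as a representative scenario, and use the fraction of samples in its cluster as that scenario's probability of occurrence.

```python
import numpy as np

def select_scenarios(samples, k, iters=100, seed=0):
    """Cluster candidate scenarios with K-means (Lloyd's algorithm) and return
    (centroids, probabilities): each centroid is a representative scenario and
    its probability is the fraction of samples falling in its cluster."""
    rng = np.random.default_rng(seed)
    X = np.asarray(samples, dtype=float)
    # farthest-point initialization keeps the k seeds spread out
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each sample to its nearest centroid, then recompute centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    probs = np.bincount(labels, minlength=k) / len(X)
    return centers, probs

# toy example: two operating regimes described by (load level, PV output)
rng = np.random.default_rng(1)
low = rng.normal([0.3, 0.1], 0.02, size=(70, 2))   # 70% of hours
high = rng.normal([0.9, 0.6], 0.02, size=(30, 2))  # 30% of hours
scenarios, probs = select_scenarios(np.vstack([low, high]), k=2)
print(np.sort(probs))  # prints [0.3 0.7]
```

The two returned centroids would then be the representative scenarios fed into the planning study, weighted by their probabilities.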
|
2 |
Um modelo unificado para planejamento sob incerteza / A unified model for planning under uncertainty
Trevizan, Felipe Werndl (31 May 2006)
Two noteworthy models of planning in AI are probabilistic planning (based on MDPs and their generalizations) and nondeterministic planning (mainly based on model checking). In this dissertation we: (1) show that probabilistic and nondeterministic planning are extremes of a rich continuum of problems that deal simultaneously with risk and (Knightian) uncertainty; (2) obtain a unifying model for these problems using imprecise MDPs; (3) derive a simplified Bellman's principle of optimality for our model; and (4) show how to adapt and analyze state-of-the-art algorithms such as (L)RTDP and LDFS in this unifying setup. We also discuss examples and connections to various proposals for planning under (general) uncertainty.
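The simplified Bellman principle for imprecise MDPs can be illustrated with a small robust value iteration: each action carries a *set* of candidate transition distributions (the vertices of a credal set), and the backup takes the worst case over that set. Probabilistic planning is the special case of a single distribution per action; nondeterministic planning is the special case of degenerate 0/1 distributions. The example MDP and identifiers below are illustrative assumptions, not the dissertation's algorithms ((L)RTDP and LDFS are more sophisticated than this synchronous sweep).

```python
import numpy as np

def robust_value_iteration(n_states, actions, goal, eps=1e-8):
    """Value iteration for a goal-directed imprecise MDP.
    actions[s] -> list of (cost, dists), where dists is a list of candidate
    transition distributions; nature adversarially picks the worst one."""
    V = np.zeros(n_states)
    while True:
        newV = np.zeros(n_states)
        for s in range(n_states):
            if s == goal:
                continue  # terminal: cost-to-go stays 0
            newV[s] = min(cost + max(d @ V for d in dists)   # agent min, nature max
                          for cost, dists in actions[s])
        if np.max(np.abs(newV - V)) < eps:
            return newV
        V = newV

# 3 states: 0 (start), 1 (middle), 2 (goal); every action costs 1
actions = {
    0: [(1.0, [np.array([0.0, 1.0, 0.0])])],     # precise: 0 -> 1 surely
    1: [(1.0, [np.array([0.0, 0.2, 0.8]),        # imprecise: goal prob in {0.8, 0.6}
               np.array([0.0, 0.4, 0.6])])],
}
V = robust_value_iteration(3, actions, goal=2)
print(V)  # worst-case expected costs, roughly [2.667, 1.667, 0]
```

At state 1 the adversary picks the distribution that keeps the agent looping longest, giving V(1) = 1 + 0.4 V(1) = 5/3 and hence V(0) = 8/3.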
|
3 |
Short-Sighted Probabilistic Planning
Trevizan, Felipe W. (01 August 2013)
Planning is an essential part of intelligent behavior and a ubiquitous task for both humans and rational agents. One framework for planning in the presence of uncertainty is probabilistic planning, in which actions are described by a probability distribution over their possible outcomes. Probabilistic planning has been applied to various real-world domains such as public health, sustainability, and robotics; however, its usage in practice is limited by the poor performance of existing planners.
In this thesis, we introduce a novel approach to effectively solve probabilistic planning problems by relaxing them into short-sighted problems. A short-sighted problem is a relaxed problem in which the state space of the original problem is pruned and artificial goals are added to heuristically estimate the cost of reaching an original goal from the pruned states. Unlike previously proposed relaxations, short-sighted problems maintain the original structure of actions, and no restrictions are imposed on the maximum number of actions that can be executed. Therefore, the solutions to short-sighted problems take into consideration all the probabilistic outcomes of actions and their probabilities. In this thesis, we also study different criteria for generating short-sighted problems, i.e., how to prune the state space, and the relation between the obtained short-sighted models and previously proposed relaxation approaches.
We present different planning algorithms that use short-sighted problems in order to solve probabilistic planning problems. These algorithms iteratively generate and execute optimal policies for short-sighted problems until the goal of the original problem is reached. We also formally analyze the introduced algorithms, focusing on their optimality guarantees with respect to the original probabilistic problem. Finally, this thesis contributes a rich empirical comparison between our algorithms and state-of-the-art probabilistic planners.
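The construction of a depth-t short-sighted problem can be sketched as follows; the data structures and the trivial zero heuristic are illustrative assumptions, not the thesis's implementation. States first reached at depth t become artificial goals whose terminal cost is a heuristic estimate h(s) of the original cost-to-go, while every kept state retains its full action structure and all probabilistic outcomes.

```python
from collections import deque, namedtuple

# actions: state -> {name: (cost, [(prob, successor), ...])}
SSP = namedtuple("SSP", "actions goals")

def short_sighted(ssp, s0, t, h):
    """Build the depth-t short-sighted relaxation of an SSP rooted at s0.
    Returns (kept, artificial): kept states with their unchanged actions,
    and artificial goals mapped to their heuristic terminal cost h(s)."""
    depth = {s0: 0}
    kept, artificial = {}, {}
    queue = deque([s0])
    while queue:                       # BFS gives each state its minimal depth
        s = queue.popleft()
        if s in ssp.goals:
            continue                   # original goals stay goals
        if depth[s] == t:
            artificial[s] = h(s)       # frontier state -> artificial goal
            continue
        kept[s] = ssp.actions[s]       # action structure and outcomes unpruned
        for cost, outcomes in ssp.actions[s].values():
            for prob, succ in outcomes:
                if succ not in depth:
                    depth[succ] = depth[s] + 1
                    queue.append(succ)
    return kept, artificial

# toy chain 0 -> 1 -> 2 -> 3 with a self-loop risk at 0; goal is 3
ssp = SSP(
    actions={
        0: {"go": (1.0, [(0.5, 1), (0.5, 0)])},
        1: {"go": (1.0, [(1.0, 2)])},
        2: {"go": (1.0, [(1.0, 3)])},
    },
    goals={3},
)
kept, artificial = short_sighted(ssp, s0=0, t=2, h=lambda s: 0.0)  # zero heuristic
print(sorted(kept), artificial)  # [0, 1] {2: 0.0}
```

An iterative planner of the kind described would solve this relaxation optimally, execute the resulting policy until an artificial or real goal is reached, and then re-plan from there.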
|
5 |
Radiation therapy treatment plan optimization accounting for random and systematic patient setup uncertainties
Moore, Joseph (25 April 2011)
External-beam radiotherapy is one of the primary methods for treating cancer. Typically, a radiotherapy treatment course consists of radiation delivered to the patient in multiple daily treatment fractions over 6-8 weeks. Each fraction requires the patient to be aligned with the planning image acquired before the treatment course. Unfortunately, patient alignment is not perfect, resulting in residual errors in patient setup. The standard technique for dealing with setup errors is to expand the volume of the target by some margin to ensure the target receives the planned dose in the presence of those errors. This work develops an alternative to margins for accommodating setup errors in the treatment planning process by directly including patient setup uncertainty in IMRT plan optimization. This probabilistic treatment planning (PTP) operates directly on the planning structure and develops a dose distribution robust to variations in the patient position. Two methods are presented. The first includes only random setup uncertainty in the planning process by convolving the fluence of each beam with a Gaussian model of the distribution of random setup errors. The second builds upon this by adding systematic uncertainty to the optimization by way of a joint optimization over multiple probable patient positions. To assess the benefit of PTP methods, a PTP plan and a margin-based plan were developed for each of the 28 patients in this study. Comparisons show that PTP plans generally reduce the dose to normal tissues while maintaining a similar dose to the target structure. Physician assessment indicates that PTP plans are generally preferred over margin-based plans. PTP methods show potential for improving patient outcomes by reducing treatment-related complications.
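The first method's core operation, convolving each beam's fluence with a Gaussian model of the random setup errors, can be sketched in one dimension; the profile, 1 mm grid spacing, and 4-sigma kernel truncation below are illustrative assumptions. Over many fractions the delivered fluence approaches this blurred expectation, so optimizing on it builds the random uncertainty into the plan.

```python
import numpy as np

def blur_fluence(fluence, sigma_mm, spacing_mm=1.0):
    """Convolve a 1-D beam fluence profile with a zero-mean Gaussian whose
    standard deviation models the distribution of *random* setup errors."""
    sigma = sigma_mm / spacing_mm
    radius = int(np.ceil(4 * sigma))            # truncate the kernel at 4 sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                      # normalize: preserve total fluence
    return np.convolve(fluence, kernel, mode="same")

flat = np.zeros(101)
flat[40:61] = 1.0                               # an idealized 21-mm-wide beam
blurred = blur_fluence(flat, sigma_mm=3.0)
print(round(blurred.sum(), 6))                  # total fluence preserved: 21.0
```

The blurred profile has softened penumbrae, which is exactly the dosimetric effect of random setup errors averaged over a treatment course.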
|
6 |
Probabilistic treatment planning based on dose coverage: How to quantify and minimize the effects of geometric uncertainties in radiotherapy
Tilly, David (January 2016)
Traditionally, uncertainties are handled by expanding the irradiated volume to ensure target dose coverage with a certain probability. The uncertainties arise from, e.g., the uncertainty in positioning the patient at every fraction, organ motion, and the definition of regions of interest on the acquired images. The applied margins are inherently population based and do not exploit the geometry of the individual patient. Probabilistic planning, on the other hand, incorporates the uncertainties directly into the treatment optimization and therefore has more degrees of freedom to tailor the dose distribution to the individual patient. The aim of this thesis is to create a framework for probabilistic evaluation and optimization based on the concept of dose coverage probabilities. Several computational challenges for this purpose are addressed. The accuracy of the fraction-by-fraction accumulated dose depends directly on the accuracy of the deformable image registration (DIR). Using the simulation framework, we could quantify the requirement on the DIR to 2 mm or less for a 3% uncertainty in the target dose coverage. Probabilistic planning is computationally intensive, since many hundred treatments must be simulated for sufficient statistical accuracy in the calculated treatment outcome. A fast dose calculation algorithm was developed based on perturbing a pre-calculated dose distribution with the local ratio of the simulated treatment's fluence to the fluence of the pre-calculated dose. A speedup factor of ~1000 compared to full dose calculation was achieved, with near-identical dose coverage probabilities for a prostate treatment. For some body sites, such as the cervix dataset in this work, organ motion must be included for realistic treatment simulation. A statistical shape model (SSM) based on principal component analysis (PCA) provided the samples of deformation. 
Seven eigenmodes from the PCA were sufficient to model the dosimetric impact of the interfraction deformation. A probabilistic optimization method was developed using constructs from the risk management of stock portfolios, enabling the dose planner to request a target dose coverage probability. Probabilistic optimization was applied for the first time to a dataset from cervical cancer patients, with the SSM providing the samples of deformation. The average dose coverage probability over all patients in the dataset was within 1% of the requested value.
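A PCA-based statistical shape model of the kind described can be sketched as follows; the toy "organ" data and all identifiers are illustrative assumptions, not the thesis's cervix model. Observed geometries are stacked as vectors, the leading eigenmodes are extracted via SVD, and new plausible deformations are sampled as the mean shape plus normally weighted modes.

```python
import numpy as np

def build_ssm(shapes, n_modes):
    """Fit a PCA shape model to corresponding shape vectors (one row per
    observed geometry, flattened). Returns (mean, modes, stds): the first
    n_modes eigenvectors and the per-mode standard deviations."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    stds = s / np.sqrt(len(X) - 1)      # singular values -> mode std deviations
    return mean, Vt[:n_modes], stds[:n_modes]

def sample_shape(mean, modes, stds, rng):
    """Draw one deformed geometry: mean + sum_k b_k * std_k * mode_k, b_k ~ N(0,1)."""
    b = rng.standard_normal(len(stds))
    return mean + (b * stds) @ modes

# toy 'organ': 5 observed fractions of a 6-coordinate shape shifting rigidly
rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 6)
obs = np.array([base + 0.1 * t for t in (-2, -1, 0, 1, 2)])
mean, modes, stds = build_ssm(obs, n_modes=1)
new = sample_shape(mean, modes, stds, rng)
print(new.shape)  # (6,)
```

In a treatment-simulation loop, each sampled shape would deform the patient geometry for one simulated fraction; truncating to the first few modes (seven, in the thesis's dataset) keeps the sampling low dimensional.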
|
7 |
Algoritmos eficientes para o problema do orçamento mínimo em processos de decisão Markovianos sensíveis ao risco / Efficient algorithms for the minimum budget problem in risk-sensitive Markov decision processes
Moreira, Daniel Augusto de Melo (06 November 2018)
The main optimization criterion used in Markov Decision Processes (MDPs) is to minimize the expected cumulative cost. Although this criterion is useful, in some applications the cost incurred by some executions may exceed an acceptable threshold. To deal with this problem, Risk-Sensitive Markov Decision Processes (RS-MDPs) were proposed, whose optimization criterion is to maximize the probability that the cumulative cost does not exceed a user-defined budget, thus guaranteeing that costly executions of an MDP occur with low probability. Algorithms for RS-MDPs face scalability issues when handling large cost intervals, since they operate in an augmented state space that enumerates the possible remaining budgets. In this work, we propose a new problem: finding the minimum budget for which the probability that the cumulative cost does not exceed this budget converges to a maximum. To solve this problem, we propose: (i) an improved version of TVI-DP (a previous solution for RS-MDPs) and (ii) the first symbolic dynamic programming algorithm for RS-MDPs, which exploits conditional independence of the transition function in the augmented state space. The proposed algorithms prune invalid states and perform early termination. Empirical results show that RS-SPUDD is able to solve problems up to 103 times larger than TVI-DP and is up to 26.2 times faster than TVI-DP (in the instances TVI-DP was able to solve). In fact, we show that RS-SPUDD is the only algorithm that can solve large instances of the analyzed domains. Another challenge in RS-MDPs is handling continuous costs. To address it, we define hybrid RS-MDPs, which include continuous and discrete variables in addition to the user-defined budget. We show that the Symbolic Dynamic Programming (SDP) algorithm from the literature can be used to solve this kind of MDP. We empirically evaluated the SDP algorithm (i) in a domain that the other proposed algorithms can also solve and (ii) in a domain that only SDP can solve. Results show that the SDP algorithm for hybrid RS-MDPs is capable of solving domains with continuous costs without state enumeration, at the price of a higher computational cost.
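The RS-MDP criterion and the minimum-budget problem can be sketched on a toy MDP with integer costs; the example, its names, and the brute-force sweep over budgets are illustrative assumptions, far simpler than TVI-DP or RS-SPUDD. The recursion runs on the augmented space of (state, remaining budget) pairs, and actions whose cost would overrun the budget are pruned, mirroring the invalid-state pruning mentioned above.

```python
from functools import lru_cache

# A tiny goal-directed MDP with integer costs (illustrative only).
# ACTIONS[s] -> list of (cost, [(prob, next_state), ...]); "G" is the goal.
ACTIONS = {
    "s0": [(1, [(0.5, "G"), (0.5, "s1")]),   # cheap but may detour through s1
           (3, [(1.0, "G")])],               # expensive but certain
    "s1": [(2, [(1.0, "G")])],
}

@lru_cache(maxsize=None)
def p_within(state, budget):
    """Max probability of reaching the goal from `state` with cumulative
    cost <= budget (dynamic programming on the augmented state space)."""
    if state == "G":
        return 1.0
    best = 0.0
    for cost, outcomes in ACTIONS[state]:
        if cost > budget:
            continue                         # pruned: would exceed the budget
        best = max(best, sum(p * p_within(nxt, budget - cost)
                             for p, nxt in outcomes))
    return best

probs = {b: p_within("s0", b) for b in range(7)}
# minimum budget at which the success probability stops improving
min_budget = min(b for b, p in probs.items() if p == max(probs.values()))
print(probs, min_budget)  # probability reaches 1.0 at budget 3
```

Here a budget of 1 already gives success probability 0.5, but only a budget of 3 covers the detour through s1, so the minimum budget at which the probability converges to its maximum is 3.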
|
8 |
Planejamento probabilístico com becos sem saída / Probabilistic planning with dead-ends
Simão, Thiago Dias (06 March 2017)
Probabilistic planning deals with sequential decision making in stochastic environments and is usually modeled as a Markov Decision Process (MDP). An MDP models the interaction between an agent and its environment: at each stage, the agent decides to execute an action, with probabilistic effects and a certain cost, which produces a future state. The objective of the MDP agent is to minimize the expected cost along a sequence of action choices. The number of stages during which the agent acts in the environment is called the horizon, which can be finite, infinite, or indefinite. An example of an MDP with an indefinite horizon is the Stochastic Shortest Path MDP (SSP MDP), which extends the definition of an MDP by adding a set of goal states (the agent stops acting after reaching a goal state). In an SSP MDP, it is assumed that a goal state can always be reached from every state of the world. However, this is a very strong assumption that cannot be guaranteed in practical applications. States from which it is impossible to reach the goal are called dead-ends. A dead-end may be avoidable or unavoidable (when no policy leads from the initial state to the goal with probability one). Recent work has proposed extensions to SSP MDPs that allow the existence of different types of dead-ends, as well as algorithms to solve them. However, dead-end detection is done using either (i) heuristics that may fail to detect implicit dead-ends or (ii) more reliable methods that incur a high computational cost. In this work, we give a formal characterization of probabilistic planning models with dead-ends. In addition, we propose a new technique for dead-end detection based on this characterization, and we adapt probabilistic planning algorithms to use this new detection method. Empirical results show that the proposed method is able to detect all dead-ends of a given set of states and, when used with probabilistic planners, can make these planners more efficient in domains with dead-ends that are difficult to detect.
|
10 |
A Decision Theoretic Approach to Natural Language Generation
McKinley, Nathan D. (21 February 2014)
No description available.
|