1

Analyses of power system vulnerability and total transfer capability

Yu, Xingbin 12 April 2006
Modern power systems are now stepping into the post-restructuring era, in which utility industries as well as ISOs (Independent System Operators) are involved. Attention needs to be paid to the reliability study of power systems by both the utility companies and the ISOs. An uninterrupted, high-quality power supply is required for the sustainable development of a technological society. Power system blackouts generally result from cascading outages. Protection system hidden failures remain dormant when everything is normal and are exposed as a result of other system disturbances. This dissertation provides new methods for power system vulnerability analysis that include protection failures. Both adequacy and security aspects are covered. The vulnerability analysis addresses the following issues: 1) protection system failure analysis and modeling based on protection failure features; 2) a new methodology for reliability evaluation that incorporates protection system failure modes; and 3) the application and evaluation of variance reduction techniques. A new model of a current-carrying component paired with its associated protection system is proposed. The model differentiates two protection failure modes and is the foundation of the proposed research. Detailed stochastic features of system contingencies and corresponding responses are considered. Both adequacy and security reliability indices are computed. Moreover, a new reliability index, ISV (Integrated System Vulnerability), is introduced to represent the integrated reliability performance with consideration of protection system failures. According to these indices, the weakest point or link in a power system can be located. The whole analysis procedure is based on a non-sequential Monte Carlo simulation method. In reliability analysis, especially with Monte Carlo simulation, computation time is driven not only by the large number of simulations but also by time-consuming system state evaluations such as OPF (Optimal Power Flow) and stability assessment. Theoretical and practical analysis is conducted for the application of variance reduction techniques. The dissertation also proposes a comprehensive approach to TTC (Total Transfer Capability) calculation that considers thermal, voltage and transient stability limits. Both steady-state and dynamic security assessments are included in the process of obtaining total transfer capability. In particular, the effect of FACTS (Flexible AC Transmission Systems) devices on TTC is examined; FACTS devices are shown to have both positive and negative effects on system stability depending on their location. Furthermore, the dissertation proposes a probabilistic method that gives a new framework for analyzing total transfer capability under actual operating conditions.
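To make the simulation framework concrete, the following minimal sketch illustrates the flavour of a non-sequential Monte Carlo vulnerability study with a protection hidden-failure mode. The three-line system, outage probabilities, neighbour map and curtailment rule are illustrative assumptions, not the dissertation's model, and a real study would replace the placeholder state evaluation with an OPF or stability assessment.

```python
# A minimal sketch (not the dissertation's actual model) of non-sequential
# Monte Carlo sampling with a protection "hidden failure" mode: when a line
# trips, its paired protection may mis-operate and also trip a neighbour.
import random

random.seed(1)
LINES = ["L1", "L2", "L3"]                         # hypothetical 3-line system
P_OUTAGE = {"L1": 0.02, "L2": 0.01, "L3": 0.015}   # assumed outage probabilities
P_HIDDEN = 0.05                                    # assumed chance a hidden failure is exposed
NEIGHBOUR = {"L1": "L2", "L2": "L3", "L3": "L1"}   # line exposed by a mis-operation (assumed)

def sample_state():
    """Draw one system state: independent line outages, then exposed hidden failures."""
    out = {l for l in LINES if random.random() < P_OUTAGE[l]}
    for l in list(out):
        if random.random() < P_HIDDEN:             # protection exposed by the disturbance
            out.add(NEIGHBOUR[l])
    return out

def load_curtailed(outages):
    """Placeholder state evaluation; a real study would run an OPF here."""
    return 0.0 if len(outages) < 2 else 100.0 * (len(outages) - 1)   # MW, toy rule

N = 200_000
samples = [load_curtailed(sample_state()) for _ in range(N)]
print("Expected load curtailment (MW):", sum(samples) / N)
print("Probability of any curtailment:", sum(s > 0 for s in samples) / N)
```

Variance reduction techniques, such as those analyzed in the dissertation, would then be applied to cut the number of samples needed for a stable estimate.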
2

An Evaluation of Transfer Capability Limitations and Solutions for South Mississippi Electric Power Association

Brown, Nathan L 11 May 2002
Historically, transfer capability between transmission systems was typically assumed from the thermal limits of specific transmission paths. Because electric transmission systems are becoming more and more heavily loaded, accurately evaluating transfer capability between transmission systems and maintaining minimum levels of transfer capability have become increasingly important. This thesis evaluates the transfer capability needs and limitations of a specific small electric utility. Various analysis tools and evaluation methods used to determine transfer capability are identified. Two specific evaluation methods are analyzed and compared to determine which best establishes transfer capability. Various solutions, including upgrading existing transmission interconnections or installing new ones, are identified and evaluated to determine the best overall solution for achieving and maintaining the utility's desired transfer capability level.
3

Congestion Management, Total Transfer Capability Improvement and Short-Term Adequacy Evaluation in Deregulated Power Systems – Prospering and Surviving in the Competitive World

Yan, Ping 2011 August 1900
While two objectives of deregulation are to reduce service interruptions and achieve lower energy costs, deregulation has actually introduced new problems in both areas. Because the transmission network was built in the last century, mainly for regulated power systems with mostly local power transfers, the increased long-distance power transfer introduced by free energy trading has made congestion occur more frequently. When congestion happens, service is interrupted and energy costs rise. We approach the issue from the viewpoints of both planning and online operations. Accordingly, we develop a reactive online remedial method that uses Flexible AC Transmission Systems (FACTS) devices to eliminate congestion with minimum transaction curtailment, so that market forces are preserved. We also develop a proactive preventive method for offline planning, such as in the day-ahead market, which uses FACTS devices to maximize the Total Transfer Capability so that more transactions can be scheduled without causing congestion in the system. Optimal Power Flow is used for both methods, with the FACTS devices treated as control variables so that their parameters can be adjusted to minimize transaction curtailment or maximize the Total Transfer Capability. We demonstrate that FACTS devices are very effective in both situations. Since the installation of FACTS devices involves heavy infrastructure investment, an effective pricing method needs to be in place to encourage this investment by guaranteeing sufficient return. This research uses a novel pricing scheme that charges both for utilizing the FACTS devices and for having them operate at their limits, so the owners of the FACTS devices can recover their investment. With the above control and pricing methods, we can make better use of the existing transmission network and relieve congestion. However, deregulation may also degrade system reliability, since the generation companies are not obligated to sell into the market and market participation is driven by market forces instead. We use a market-share-based short-term adequacy analysis method to address generation resource adequacy issues: the market share time series of the generation companies are used to reflect their market behavior in the new environment, and multiple regression modeling, a tool of time series analysis, is used to model the factors involved. We demonstrate how the market-share-based short-term adequacy analysis captures adequacy problems that the traditional method cannot, and how it can also help to remedy those problems, which in turn reduces service interruptions and thus energy prices.
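As a toy illustration of why treating FACTS parameters as control variables can raise the Total Transfer Capability, the sketch below shifts the flow split between two parallel corridors with a series-compensation ratio and reports the resulting TTC. The reactances, thermal limits and compensation range are assumed values; the dissertation's OPF-based formulation is far more general.

```python
# A toy illustration (not the dissertation's OPF formulation) of how a series
# FACTS device can raise Total Transfer Capability: two parallel corridors share
# a transfer inversely to their reactances, and a TCSC on corridor 1 shifts the
# split so the thermal limits are used more evenly. All numbers are assumed.
X1, X2 = 0.30, 0.20        # per-unit reactances of the two corridors
LIM1, LIM2 = 400.0, 300.0  # thermal limits in MW

def ttc(compensation):
    """Max transfer such that neither corridor exceeds its limit,
    with corridor 1 reactance reduced by the TCSC compensation ratio."""
    x1 = X1 * (1.0 - compensation)
    s1 = X2 / (x1 + X2)     # share of the transfer on corridor 1
    s2 = x1 / (x1 + X2)     # share on corridor 2
    return min(LIM1 / s1, LIM2 / s2)

best = max((ttc(c / 100.0), c / 100.0) for c in range(0, 51))   # 0-50% compensation
print("TTC without compensation: %.1f MW" % ttc(0.0))
print("Best TTC %.1f MW at %.0f%% series compensation" % (best[0], best[1] * 100))
```

With these assumed numbers the uncompensated TTC is limited by the lightly rated corridor; adjusting the series compensation balances the loading of the two corridors and raises the transfer that can be scheduled without congestion.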
4

Study of Two-Objective Dynamic Power Dispatch Problem by Particle Swarm Optimization

Chen, Yi-Sheng 12 June 2009
In recent years, growing awareness of environmental protection has meant that the power dispatch model is no longer purely economically oriented. This thesis proposes applying the particle swarm optimization (PSO) algorithm and an interactive compromise programming method to solve the 24-hour two-objective power dispatch problem. Considering the lowest generating cost and the lowest pollution emission simultaneously, a compromise dispatch model is chosen between the two mutually conflicting objectives. The thesis combines the mixed-integer optimal power flow (MIOPF) problem with dynamic economic dispatch (DED), making the dispatch solution more realistic and free of electrical violations. The MIOPF considers both continuous and discrete variables: the continuous variables are the generating unit real power outputs and the generator-bus voltage magnitudes; the discrete variables are the shunt capacitor banks and transformer tap settings. Simulations were run on the standard IEEE 30-bus system. To avoid PSO's local-optimality problem, the thesis uses the PSO algorithm with time-varying acceleration coefficients (PSO_TVAC) plus a local random search (LRS) method, so that the optimal solution can be reached quickly and effectively without sacrificing the performance and accuracy of PSO. The thesis also incorporates the available transfer capability (ATC) of transmission lines into the existing dispatch model. Sensitivity factors are applied to calculate the available transfer capability that each generator can offer in the analyzed time interval, enabling the creation of a new constraint. Combined with the dynamic economic dispatch problem, this makes it possible to accommodate a load customer that wishes to raise its demand. By simultaneously taking care of minimum cost and the limits of system security, better dispatch results can be expected.
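The following compact sketch shows the PSO_TVAC idea on a toy two-unit cost/emission compromise. The cost and emission coefficients, demand, swarm size and weighting are illustrative assumptions rather than the thesis' IEEE 30-bus setup, but the time-varying acceleration coefficients and the local random search step follow the scheme the abstract describes.

```python
# A minimal sketch of PSO with time-varying acceleration coefficients (PSO_TVAC),
# applied to a toy two-unit dispatch that trades off fuel cost against emissions.
# The coefficients, demand and weight below are illustrative assumptions.
import random

random.seed(0)
DEMAND = 300.0                      # MW to be served by the two units
PMIN, PMAX = 50.0, 250.0            # identical unit limits, for simplicity

def objective(p1):
    """Weighted compromise between cost ($/h) and emissions (kg/h); p2 = demand - p1."""
    p2 = DEMAND - p1
    if not (PMIN <= p2 <= PMAX):
        return 1e9                  # infeasible split of the demand
    cost = 0.004 * p1**2 + 5.0 * p1 + 0.006 * p2**2 + 4.0 * p2
    emission = 0.0009 * p1**2 + 0.0005 * p2**2
    return 0.5 * cost + 0.5 * emission

SWARM, ITERS, W = 20, 100, 0.7
pos = [random.uniform(PMIN, PMAX) for _ in range(SWARM)]
vel = [0.0] * SWARM
pbest = pos[:]
gbest = min(pos, key=objective)

for t in range(ITERS):
    # Time-varying acceleration: cognitive c1 shrinks, social c2 grows over the run.
    c1 = 2.5 - 2.0 * t / ITERS
    c2 = 0.5 + 2.0 * t / ITERS
    for i in range(SWARM):
        vel[i] = (W * vel[i]
                  + c1 * random.random() * (pbest[i] - pos[i])
                  + c2 * random.random() * (gbest - pos[i]))
        pos[i] = min(max(pos[i] + vel[i], PMIN), PMAX)
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)
    # Local random search around the best particle, a simple escape from local optima.
    trial = min(max(gbest + random.gauss(0.0, 1.0), PMIN), PMAX)
    if objective(trial) < objective(gbest):
        gbest = trial

print("Compromise dispatch: P1 = %.1f MW, P2 = %.1f MW" % (gbest, DEMAND - gbest))
```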
5

Internet-based Wide Area Measurement Applications in Deregulated Power Systems

Khatib, Abdel Rahman Amin 15 August 2002
Since the deregulation of power systems began in the UK in 1989, many countries have been motivated to undergo deregulation. The United States started deregulation of the energy sector in California in 1996, and since that time many other states have also started deregulation procedures in their utilities. Most of the deregulated market in the United States is now in the wholesale area; the retail market is still undergoing changes. Deregulation has many impacts on power system network operation and control. The number of power transactions among utilities has increased, and many Independent Power Producers (IPPs) now have a rich market for competition, especially in green power. The Federal Energy Regulatory Commission (FERC) has called upon utilities to develop Regional Transmission Organizations (RTOs), a step toward the national transmission grid. An RTO is an independent entity that will operate the transmission system in a large region, and the main goal of forming RTOs is to increase the operating efficiency of the power network under the impact of the deregulated market. The objective of this work is to study Internet-based Wide Area Information Sharing (WAIS) applications in the deregulated power system. The study is a first step toward building a national transmission grid picture using information sharing among utilities. Two main topics are covered as applications of WAIS in the deregulated power system: state estimation and Total Transfer Capability (TTC) calculations. As a first step toward building this national transmission grid picture, WAIS and the level of information sharing required for state estimation calculations are discussed. The impact of WAIS on TTC calculations is also covered, and a new technique is presented to update the TTC using on-line measurements, based on the WAIS created by sharing state estimation results. / Ph. D.
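A minimal example of the state-estimation building block that wide-area information sharing supports is sketched below: a DC (linear) weighted least-squares estimator for a small network. The 3-bus system, measurement set, telemetered values and weights are assumed for illustration; the dissertation's contribution concerns how such estimates and the resulting TTC values are shared and updated across utilities, not the estimator itself.

```python
# A minimal DC state-estimation sketch (weighted least squares) of the kind that
# wide-area measurement sharing makes possible across areas. The 3-bus network,
# line susceptances, measurement set and weights are illustrative assumptions.
import numpy as np

b = 10.0                      # per-unit susceptance of every line (1-2, 1-3, 2-3)
# State vector x = [theta2, theta3]; bus 1 is the angle reference (theta1 = 0).
# Measurements z: line flows P12, P13, P23 and the bus-2 injection, in per unit.
H = np.array([
    [-b,   0.0],              # P12 = b*(theta1 - theta2)
    [0.0, -b  ],              # P13 = b*(theta1 - theta3)
    [ b,  -b  ],              # P23 = b*(theta2 - theta3)
    [2*b, -b  ],              # P2  = P21 + P23
])
z = np.array([0.52, 0.29, -0.21, -0.71])   # noisy telemetered values (assumed)
W = np.diag([100.0, 100.0, 100.0, 50.0])   # higher weight = more trusted meter

# Classical WLS solution: x = (H^T W H)^{-1} H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("Estimated angles (rad): theta2 = %.4f, theta3 = %.4f" % tuple(x_hat))
print("Estimated line flows (pu):", H[:3] @ x_hat)
```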
6

Decomposition algorithms for multi-area power system analysis

Min, Liang 17 September 2007
A power system with multiple interconnected areas needs to be operated in a coordinated way for system reliability and economic operation, even though each area has its own ISO in the market environment. When different areas are consolidated under a common grid coordinator, analysis of the power system becomes more computationally demanding. The analysis is further complicated because each area cannot obtain the network operating or economic data of the other areas. This dissertation investigates decomposition algorithms for multi-area power system transfer capability analysis and economic dispatch analysis. All of the proposed algorithms assume that areas do not share their network operating and economic information among themselves, while they are willing to cooperate via a central coordinator for system-wide analyses. The first proposed algorithm is based on power transfer distribution factors (PTDFs). A quadratic approximation, developed for the nonlinear PTDFs, is used to update tie-line power flows calculated by Repeated Power Flow (RPF). These tie-line power flows are then treated as injections in the TTC calculation of each area, and the central entity coordinates these results to determine the final system-wide TTC value. The second proposed algorithm is based on REI-type network equivalents. It uses the Continuation Power Flow (CPF) as the computational tool, so that voltage stability is considered in the TTC studies. Each area uses REI equivalents of the external areas to compute its TTC via the CPF, and the choice and updating procedure for the continuation parameter employed by the CPF are implemented in a distributed but coordinated manner. The third proposed algorithm is based on inexact penalty functions. The traditional OPF is treated as an optimization problem with global variables, and quadratic penalty functions are used to relax the compatibility constraints between the global variables and the local variables. The solution is implemented using a two-level computational architecture. All of the proposed algorithms are verified by numerical comparisons between the integrated analysis and the proposed decomposition algorithms, which lead to potential gains in computational efficiency with limited data exchange among areas.
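For readers unfamiliar with PTDFs, the short sketch below computes the DC power transfer distribution factors of an assumed 3-bus loop; the first decomposition algorithm builds on quantities of this kind, letting areas exchange tie-line flows rather than full network data. The network data here are hypothetical.

```python
# A small illustration (assumed 3-bus DC network, not from the dissertation) of
# power transfer distribution factors (PTDFs): PTDF[l, k] is the change of flow
# on line l per 1 pu injected at non-slack bus k and withdrawn at the slack bus.
import numpy as np

# Lines (from, to, susceptance) of a 3-bus loop; bus 0 is the slack.
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 10.0)]
n_bus, n_line = 3, len(lines)

Bbus = np.zeros((n_bus, n_bus))
Bf = np.zeros((n_line, n_bus))          # line flows = Bf @ theta
for l, (i, j, b) in enumerate(lines):
    Bbus[i, i] += b; Bbus[j, j] += b
    Bbus[i, j] -= b; Bbus[j, i] -= b
    Bf[l, i] = b; Bf[l, j] = -b

keep = [1, 2]                           # drop the slack row/column
theta = np.zeros((n_bus, len(keep)))
theta[keep, :] = np.linalg.inv(Bbus[np.ix_(keep, keep)])   # angle response to unit injections
PTDF = Bf @ theta                       # columns: injections at buses 1 and 2
print("PTDF (lines x non-slack buses):\n", PTDF)
print("Share of a bus-1 -> slack transfer carried by line 0-1: %.1f%%"
      % (abs(PTDF[0, 0]) * 100))
```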
7

Calculating the Distance to the Saddle-Node Bifurcation Set

Borquez Caballero, Rodrigo Edgardo January 2009
A power system will experience voltage collapse when the loads increase up to a certain critical limit beyond which the system physically cannot support the amount of connected load. This point, identified as a Saddle-Node Bifurcation (SNB), corresponds to a generic instability of parameterized differential equation models and represents the point where different branches of equilibria meet. At this point the Jacobian matrix of the system is singular and the system loses stability, leading to the typical voltage collapse scenario. To prevent voltage instability and collapse, the closest distance from a present operating point to the saddle-node bifurcation set can be computed and used as a loadability index in power system operation and planning. The power margin is determined by applying the iterative or the direct method described in [16]. Numerical examples of both methods applied to the IEEE 9-bus and IEEE 39-bus systems show that the iterative method is more reliable, although it requires a longer computation time. System stability is negatively affected in two ways when generators reach their reactive power limits: either the voltage stability margin deteriorates, or immediate voltage instability and collapse occurs.
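For reference, a standard characterisation of the closest saddle-node bifurcation, of the kind the iterative and direct methods of [16] exploit, can be written as follows; this is the usual textbook formulation, not necessarily the exact notation of the thesis:

```latex
% f(x,lambda) = 0 : power-flow equations; x = state, lambda = load parameters,
% lambda_0 = present operating load, w = left zero-eigenvector of the Jacobian f_x.
\begin{align*}
  f(x,\lambda) &= 0, \\
  f_x(x,\lambda)^{\top} w &= 0, \qquad w \neq 0, \\
  \lambda - \lambda_0 &= k \, f_\lambda(x,\lambda)^{\top} w .
\end{align*}
% The last condition aligns the load increase with the normal vector of the bifurcation
% surface; the loadability margin is the distance \|\lambda - \lambda_0\| to that point.
```

In the usual formulations, the iterative method repeatedly finds a bifurcation point along a ray of load increase and re-aims the ray along the computed normal vector, while the direct method solves the full set of equations at once; this is consistent with the reliability versus computation-time trade-off reported above.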
8

Metamodeling strategies for high-dimensional simulation-based design problems

Shan, Songqing 13 October 2010
Computational tools such as finite element analysis and simulation are commonly used for system performance analysis and validation. It is often impractical to rely exclusively on a high-fidelity simulation model for design activities because of the high computational cost, so mathematical models are typically constructed to approximate the simulation model and support the design activities. Such models are referred to as “metamodels,” and the process of constructing them is called “metamodeling.” Metamodeling, however, faces significant challenges that arise from the high dimensionality of the underlying problems, in addition to the high computational cost and unknown function properties (that is, black-box behavior) of the analysis or simulation. The combination of these three challenges defines the so-called high-dimensional, computationally-expensive, and black-box (HEB) problems. Currently there is a lack of practical methods to deal with HEB problems. By surveying existing techniques, this dissertation finds that the major deficiency of current metamodeling approaches lies in separating the metamodeling from the properties of the underlying functions. The survey also identifies two promising approaches, mapping and decomposition, for solving HEB problems. A new analytic methodology, radial basis function–high-dimensional model representation (RBF-HDMR), is proposed to model HEB problems. RBF-HDMR decomposes the effects of variables or variable sets on system outputs. Compared with other metamodels, RBF-HDMR has three distinct advantages: 1) it fundamentally reduces the number of calls to the expensive simulation needed to build a metamodel, thereby alleviating the exponentially increasing computational difficulty; 2) it reveals the functional form of the black-box function; and 3) it discloses intrinsic characteristics (for instance, linearity or nonlinearity) of the black-box function. RBF-HDMR has been intensively tested on mathematical and practical problems chosen from the literature, and the methodology has also been successfully applied to the power transfer capability analysis of the Manitoba-Ontario Electrical Interconnections, a problem with 50 variables. The test results demonstrate that RBF-HDMR is a powerful tool for modeling large-scale simulation-based engineering problems. The RBF-HDMR model and its construction approach therefore represent a breakthrough in modeling HEB problems and make it possible to optimize high-dimensional simulation-based design problems.
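The sketch below conveys the first-order cut-HDMR idea with Gaussian RBF component functions, which is the spirit of RBF-HDMR; the test function, cut centre, sample counts and kernel width are illustrative assumptions, and the dissertation's adaptive construction is considerably more sophisticated.

```python
# A compact sketch of a first-order cut-HDMR metamodel with Gaussian RBF component
# functions, in the spirit of RBF-HDMR (not the dissertation's full algorithm).
# The test function, cut centre and kernel width are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Stand-in for an expensive simulation: 5 inputs, mildly nonlinear."""
    return x[0]**2 + np.sin(x[1]) + 0.5 * x[2] + 0.1 * x[3] + np.exp(-x[4])

DIM, CENTRE = 5, np.full(5, 0.5)         # cut-HDMR expansion point
F0 = black_box(CENTRE)                   # constant term f0 = f(c)

def fit_rbf_1d(xs, ys, eps=1.0):
    """Gaussian RBF interpolation through (xs, ys); returns a callable model."""
    K = np.exp(-eps * (xs[:, None] - xs[None, :])**2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(xs)), ys)
    return lambda t: np.exp(-eps * (t - xs)**2) @ w

# First-order components f_i(x_i) = f(c_1,..,x_i,..,c_n) - f0, each built from a
# handful of samples along one axis.
components = []
for i in range(DIM):
    xs = np.linspace(0.0, 1.0, 5)
    ys = []
    for v in xs:
        point = CENTRE.copy(); point[i] = v
        ys.append(black_box(point) - F0)
    components.append(fit_rbf_1d(xs, np.array(ys)))

def metamodel(x):
    return F0 + sum(components[i](x[i]) for i in range(DIM))

x_test = rng.random(DIM)
print("true     :", black_box(x_test))
print("RBF-HDMR :", metamodel(x_test))
```

Because each first-order component needs only a handful of simulation calls along one axis, the number of expensive evaluations grows roughly linearly with the dimension rather than exponentially, which is the first advantage listed in the abstract.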
