  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

From the measurement of synchrophasors to the identification of inter-area oscillations in power transmission systems

Warichet, Jacques, 26 February 2013
In the early 1980s, relaying engineers conceived a technology allowing a huge step forward in the monitoring of power system behavior: the synchrophasor, i.e. the estimation of a phasor representation - amplitude and phase - of a sinusoidal waveform at a given point in time, thanks to highly accurate time synchronization of a digital relay. By measuring synchrophasors across the power system several times per second, and centralizing the appropriate information in a hierarchical way through a telecommunication network, it is now possible to continuously monitor the state of very large systems at a high refresh rate.

At the beginning, the phase angle information of synchrophasors was used to support or improve the performance of classic monitoring applications, such as state estimation and post-mortem analysis. Later, synchrophasors were found to be valuable for the detection and analysis of phenomena that were not monitored previously, such as system islanding and angular stability. This allows a better understanding of system behavior and the design of remedial actions in cases where system security appears to be endangered. Early detection and even prediction of instabilities, as well as validation and improvement of the dynamic models used for studies, have thus become possible.

However, a power system is rarely stationary, and the assumptions behind the definition of "phasor" are not completely fulfilled because the waveform's frequency and amplitude are not constant over a signal cycle at fundamental frequency. Therefore, the accuracy of synchrophasor measurements during dynamic events is an important performance criterion. Furthermore, when discontinuities (phase jumps and large magnitude variations) and harmonics disturb the measured analog signals as a consequence of switching actions or external disturbances, the measurements provided to the "user" (the operator, or the algorithms that take decisions such as triggering alarms and remedial actions) require a certain robustness.

The efforts underpinning this thesis have led to the development of a method that ensures the robustness of the measurement. This scheme is described and tested under various conditions. In order to achieve a closer alignment between required and actual measurement performance, it is recommended to add an online indicator of phasor accuracy to the phasor data.

Fast automated corrective actions and closed-loop control schemes relying on synchrophasors are increasingly deployed in power systems. The delay introduced by the measurement and the telecommunication can have a negative impact on the efficiency of these schemes. Therefore, measurement latency is also a major performance indicator of the synchrophasor measurement.

This thesis illustrates the full measurement chain, from the measurement of analog voltages and currents in the power system to the use of these measurements for various purposes, with an emphasis on real-time applications: visualization, triggering of alarms in the control room or remedial actions, and integration in closed-loop controls. It highlights the various elements along this chain that influence the availability, accuracy and delay of the data.

The main focus is on the algorithm used to estimate synchrophasors and on the tradeoff between accuracy and latency that arises in applications where measurements are taken during dynamic events and the data must be processed within a very limited timeframe. If both fast phasors and slower, more accurate phasors were made available, the user would be able to select the set of phasors most suitable for each application, giving priority to either accuracy or a short delay.

This thesis also tentatively identifies gaps between requirements and typical measurements in order to identify current barriers and challenges to the use of wide-area measurement systems.

A specific application, the continuous monitoring of oscillatory stability, was selected to illustrate the benefits of synchrophasors for the monitoring, analysis and control of power system behavior. This application requires good phasor accuracy but can tolerate some measurement delay, unless the phasor data are used in an oscillation damping controller. In addition, it relies on modal estimators, i.e. techniques for the online identification of the characteristics of oscillatory modes from measurements. This field of ongoing research is also introduced in this thesis. / Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
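The one-cycle discrete Fourier transform is the textbook starting point for synchrophasor estimation; the robust scheme developed in the thesis goes well beyond it, so the following is a minimal sketch only, with all signal parameters invented for illustration:

```python
import numpy as np

def estimate_phasor(samples, n_per_cycle):
    """One-cycle DFT estimate of a cosine's phasor: (amplitude, phase in rad).

    Exact only for a stationary sinusoid at nominal frequency; off-nominal
    frequency, amplitude modulation and phase jumps -- the dynamic conditions
    discussed above -- all bias this estimator.
    """
    n = np.arange(n_per_cycle)
    # Correlate the samples with a complex exponential at the fundamental.
    X = (2.0 / n_per_cycle) * np.sum(
        samples * np.exp(-2j * np.pi * n / n_per_cycle))
    return np.abs(X), np.angle(X)

# Hypothetical test signal: amplitude 100, phase 30 degrees, 64 samples/cycle.
N = 64
t = np.arange(N) / N
x = 100.0 * np.cos(2 * np.pi * t + np.radians(30.0))
amp, phase = estimate_phasor(x, N)  # recovers amplitude 100, phase 30 degrees
```

A real phasor measurement unit adds filtering, frequency tracking and time-tagging against GPS, which is where the accuracy-versus-latency tradeoff discussed above arises.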
2

A two-level Probabilistic Risk Assessment of cascading failures leading to blackout in transmission power systems

Henneaux, Pierre, 19 September 2013
In our society, private and industrial activities increasingly rest on the implicit assumption that electricity is available at any time and at an affordable price. Even if operational data and feedback from the electrical sector are very positive, a residual risk of blackout or undesired load shedding in critical zones remains. The occurrence of such a situation is likely to entail major direct and indirect economic consequences, as observed in recent blackouts. Assessing this residual risk and identifying the scenarios likely to lead to these feared situations is crucial to controlling and optimally reducing the risk of blackout or major system disturbance. The objective of this PhD thesis is to develop a methodology able to reveal scenarios leading to a blackout or a major system disturbance and to estimate their frequencies and consequences with satisfactory accuracy.

A blackout is a collapse of the electrical grid over a large area, leading to a power cutoff, and is caused by a cascading failure. Such a cascade is composed of two phases: a slow cascade, starting with the occurrence of an initiating event and displaying characteristic times between successive events from minutes to hours, and a fast cascade, displaying characteristic times between successive events from milliseconds to tens of seconds. In cascading failures there is a strong coupling between events: the loss of an element increases the stress on other elements and, hence, the probability of another failure. Previously proposed probabilistic methods do not correctly account for these dependencies between failures, mainly because the two very different phases are analyzed with the same model. There is thus a need for a conceptually satisfying probabilistic approach, able to take all kinds of dependencies into account by using different models for the slow and the fast cascades. This is the aim of this PhD thesis.

This work first focuses on level I, the analysis of the slow cascade progression up to the transition to the fast cascade. We propose to adapt dynamic reliability, an integrated approach to Probabilistic Risk Assessment (PRA) initially developed for the nuclear sector, to the case of transmission power systems. This methodology accounts for the double interaction between power system dynamics and state transitions of the grid elements. This PhD thesis also introduces the development of level II, which analyzes the fast cascade up to the transition towards an operational state with load shedding or a blackout. The proposed method is applied to two test systems. Results show that thermal effects can play an important role in cascading failures during the first phase. They also show that the level-II analysis after level I is necessary to estimate the loss of supplied power that a scenario can lead to: two types of level-I scenarios with similar frequencies can induce very different risks (in terms of loss of supplied power) and blackout frequencies. Level III, i.e. the analysis of the restoration process, is however needed to estimate the risk in terms of loss of supplied energy. This PhD thesis also presents several perspectives for improving the approach in order to scale up applications to real grids. / Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
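The coupling the abstract describes, where each failure raises the stress and hence the failure probability of the surviving elements, can be illustrated with a toy Monte Carlo cascade. Everything below (capacities, the stress-to-probability law, the equal-sharing redistribution) is invented for illustration and is unrelated to the thesis's dynamic-reliability method:

```python
import random

def simulate_cascade(capacities, loads, p_base=0.001, alpha=20.0, rng=None):
    """Toy cascade: a line's tripping probability grows with its loading;
    a tripped line's load is shared equally by the survivors, which may
    overload them in turn. Returns the number of lines lost."""
    rng = rng or random.Random(42)
    alive = set(range(len(capacities)))
    loads = list(loads)
    tripped = True
    while tripped and alive:
        tripped = False
        for i in sorted(alive):
            stress = loads[i] / capacities[i]
            # Overload (stress > 1) multiplies the baseline failure rate:
            # this is the dependency that independent-failure models miss.
            p_fail = min(1.0, p_base * (1.0 + alpha * max(0.0, stress - 1.0)))
            if rng.random() < p_fail:
                alive.discard(i)
                tripped = True
                if alive:
                    share = loads[i] / len(alive)
                    for j in alive:
                        loads[j] += share
                loads[i] = 0.0
    return len(capacities) - len(alive)
```

Averaging such runs over many random seeds gives a crude frequency estimate; in the thesis this role is played by the far richer level-I/level-II models, which also track system dynamics and characteristic times.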
3

Fault-tolerant permanent-magnet synchronous machine drives: fault detection and isolation, control reconfiguration and design considerations

Meinguet, Fabien, 13 February 2012
The need for efficiency, reliability and continuous operation has led over the years to the development of fault-tolerant electrical drives for various industrial purposes and for transport applications. Permanent-magnet synchronous machines have also been gaining interest due to their high torque-to-mass ratio and high efficiency, which make them very good candidates for reducing the weight and volume of equipment.

In this work, a multidisciplinary approach to the design of fault-tolerant permanent-magnet synchronous machine drives is presented.

The drive components are described, including the electrical machine, the IGBT-based two-level inverter, the capacitors, the sensors, the controller, the electrical source and the interfaces. A literature review of the failure mechanisms and reliability models of most of these components is performed. This allows understanding how to benefit from the redundancy generally introduced in fault-tolerant systems.

A necessary step towards fault tolerance is the modelling of the electrical drive, in both healthy and faulty operation. A general model of multi-phase machines (with three or more phases) and their associated converters is proposed. Next, control algorithms for multi-phase machines are derived. The impact of a closed-loop controller upon the occurrence of faults is also examined through simulation analysis and verified by experimental results.

Condition monitoring of electrical machines has expanded over the last decades. New techniques relying on various measurements have emerged, which allow better planning of maintenance operations and optimization of the uptime of electrical machines. In drives, a number of sensors are inherently present for control and basic protection functions. The use of these sensors for advanced condition monitoring is thus particularly interesting, since they are available at no extra cost.

A novel fault detection and isolation scheme based on the available measurements (phase currents, DC-link voltage and mechanical position) is developed and validated experimentally. Change-detection algorithms are used for this purpose. Special attention is also paid to sensor faults, which avoids diagnosis errors.

Fault-tolerant control can be implemented with passive or active approaches. The former consists in deriving a control scheme that gives acceptable performance under all operating conditions, including faulty ones. The latter consists in applying dedicated solutions upon the occurrence of faults, i.e. by reconfiguring the control. Both approaches are investigated and implemented.

Finally, design considerations are discussed throughout the thesis. The advantages and drawbacks of various topologies are analyzed, which eventually leads to the design of a five-phase fault-tolerant permanent-magnet synchronous machine. / Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
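Change-detection algorithms of the kind mentioned above include the classic one-sided CUSUM test, sketched below on an invented residual sequence; the drift and threshold values and the fault scenario are made up for illustration, not taken from the thesis:

```python
def cusum(residuals, drift=0.5, threshold=5.0):
    """One-sided CUSUM: index of the first detected upward shift in the
    mean of a residual sequence, or None if no alarm is raised."""
    g = 0.0
    for k, r in enumerate(residuals):
        # Accumulate evidence above the drift term; clamp at zero so
        # healthy samples cannot build up negative credit.
        g = max(0.0, g + r - drift)
        if g > threshold:
            return k
    return None

healthy = [0.1, -0.2, 0.0, 0.1, -0.1]   # e.g. a current residual in A
faulty = healthy + [2.0] * 6            # mean shift after a hypothetical fault
# cusum(healthy) raises no alarm; cusum(faulty) alarms a few samples
# after the shift at index 5.
```

The drift sets insensitivity to small fluctuations and the threshold trades detection delay against false alarms, the same tradeoff any fault detection and isolation scheme must tune.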
4

Progressive collapse: comparison of main standards, formulation and validation of new computational procedures

Menchel, Kfir, 29 October 2008
Throughout recent history, famous records of building failures can be found, unfortunately accompanied by great human loss and major economic consequences. One of the mechanisms of failure is referred to as "progressive collapse": one or several structural members suddenly fail, whatever the cause (accident or attack). The building then collapses progressively, each load redistribution causing the failure of other structural elements, until the complete failure of the building or of a major part of it. The civil engineering community's attention was first drawn to this type of event by the progressive collapse of the Ronan Point building, following a gas explosion on one of the upper floors. Different simplified procedures for simulating the effects of progressive collapse can now be found in the literature, some of them described in detail. However, no extensive study exists in which these procedures are compared to more complete approaches to progressive collapse simulation, with the aim of testing their underlying assumptions. To further contribute to the elaboration of design codes for progressive collapse, such a study would therefore be of great interest to practitioners.

All parties involved with progressive collapse are currently attempting to bridge the gaps between the work done on the research front on the one hand, what can be considered a fitting numerical model for regular industrial use on the other, and finally the standardisation committees. The present research work aims at providing insight into how the gaps between these poles may be reduced. The approach consists in studying the various hypotheses one by one and gradually adding complexity to the numerical model where it proves to be warranted by the need for sufficient accuracy. One of the contributions of the present work stems from this approach, in that it provides insight into the validity of the various simplifying assumptions. It also leads to the development of procedures that are kept as simple as possible, in an attempt to make them as suitable as possible for regular industrial use.

The validation of simplifying assumptions is pursued in Chapter 2. This chapter consists of the text of a paper entitled "Comparison and study of different progressive collapse simulation techniques for RC structures", in which the main simplifying assumptions of the progressive collapse guidelines are detailed and assessed. The DoD [1] and GSA [2] static linear and non-linear procedures are investigated and compared to more complete approaches in order to assess their validity.

In the next two chapters, two new procedures for design against progressive collapse are developed. They are based on quasi-static computations, their main objective being to account accurately for dynamic inertial effects. The first of these chapters consists of the text of a paper entitled "A new pushover analysis procedure for structural progressive collapse based on a kinetic energy criterion", in which energetic considerations allow the development of a static equivalent pushover procedure. The second consists of the text of a paper entitled "A new pushover analysis procedure for structural progressive collapse based on optimised load amplification factors", which uses load amplification factors resulting from optimisation procedures to account for dynamic inertial effects. The contribution of these two papers lies in the improved accuracy of the results compared with other procedures available in the literature that follow the same general principles. The two proposed procedures are thoroughly validated by systematic comparison with results obtained from the more costly dynamic non-linear computations.

Finally, an additional chapter focuses on the various approaches that can be adopted for the simulation of reinforced concrete beams and columns. Because a rather simple model for reinforced concrete is used in Chapter 2, the bulk of this chapter consists of the implementation of a more complex fibre-based non-linear beam element. Comparisons performed with this model provide insight into the limitations of the simpler model, which is based on lumped plastic hinges, but show the simpler model to be valid for the purposes of the present work. / Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
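The kinetic-energy criterion behind the first proposed pushover procedure can be illustrated on a single-degree-of-freedom toy problem: a suddenly applied load P reaches its peak displacement when the strain energy absorbed along the static pushover curve equals the external work P·u, i.e. when the kinetic energy returns to zero. The sketch below is that SDOF illustration only (curve and numbers invented), not the procedure from the paper:

```python
import numpy as np

def peak_dynamic_displacement(push_d, push_f, p_sudden):
    """Energy balance on a static pushover curve (push_d, push_f):
    smallest u > 0 where p_sudden * u <= integral of F(u) du."""
    work_ext = p_sudden * push_d
    # Trapezoidal cumulative strain energy along the pushover curve.
    e_int = np.concatenate(([0.0], np.cumsum(
        0.5 * (push_f[1:] + push_f[:-1]) * np.diff(push_d))))
    for i in range(1, len(push_d)):
        if e_int[i] >= work_ext[i] - 1e-12:
            return push_d[i]
    return None  # the load exceeds what the curve can absorb

# Linear-elastic toy curve F = k*u with k = 10: the static deflection under
# P = 2 is P/k = 0.2, and the energy balance recovers the classical dynamic
# amplification factor of 2 (peak displacement near 0.4).
u = np.linspace(0.0, 1.0, 101)
f = 10.0 * u
u_peak = peak_dynamic_displacement(u, f, 2.0)
```

On a non-linear curve the balance point differs from twice the static deflection, which is why guideline-style constant amplification factors are only an approximation, as the two papers investigate.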
