401 |
Optimisation du dimensionnement d’une chaîne de conversion électrique directe incluant un système de lissage de production par supercondensateurs : application au houlogénérateur SEAREV / Sizing optimization of a direct electrical conversion chain including a supercapacitor-based power output smoothing system: application to the SEAREV wave energy converter. Aubry, Judicaël, 03 November 2011.
The work presented in this thesis studies the sizing of a direct-drive electrical conversion chain for a direct wave energy converter (SEAREV). This chain consists of a permanent-magnet synchronous generator attached to an internal pendular wheel and a power-electronic converter made up of two three-phase pulse-width-modulation bridges, one controlling the generator, the other injecting the electrical energy into the grid. In addition, an energy storage system (a bank of supercapacitors) smooths the power delivered to the grid. Sizing all of these components requires an optimization over operating cycles, in a context of strong multi-physics coupling, particularly between the hydrodynamic and electromechanical parts. First, the generator-converter set, whose role is to damp the movement of the internal pendular wheel, is optimized to minimize the cost of energy (production cost per kWh over the service life). This optimization over torque-speed operating profiles is carried out in strong coupling with the wave energy converter by treating as design variables both the generator-converter sizing parameters and the damping law of the pendular wheel. Integrating a flux-weakening strategy, useful for constant-power (power-levelling) operation, allows the generator-converter interaction to be handled from the sizing stage onward. Second, the rated energy capacity of the storage system is optimized to minimize its life-cycle economic cost. To do so, we define quality criteria for the power injected into the grid, including one related to flicker, and we compare three state-of-charge management strategies while accounting for the cycling-induced aging of the supercapacitors due to voltage and temperature. Third, using sea-state data covering a full year, we propose sizings of the electrical conversion chain that offer the best trade-offs between total recovered electrical energy and investment cost.
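To make the smoothing-and-sizing idea concrete, here is a minimal sketch, assuming a moving-average smoothing target and a lossless store; the power profile, window length, and all numbers are hypothetical and not taken from the thesis. It derives the usable energy capacity a supercapacitor bank would need so that the grid sees the smoothed profile.

```python
import numpy as np

def required_storage_capacity(p_wec, dt, window):
    """Size an energy buffer that makes a fluctuating power profile p_wec [W]
    deliverable as its moving average. A toy sizing rule, not the thesis model."""
    kernel = np.ones(window) / window
    p_grid = np.convolve(p_wec, kernel, mode="same")   # smoothed grid injection
    e = np.cumsum((p_wec - p_grid) * dt)               # stored energy over time [J]
    return e.max() - e.min()                           # required usable capacity [J]

# Hypothetical wave-like power profile: 1 Hz sampling over 30 minutes.
rng = np.random.default_rng(1)
t = np.arange(1800.0)
p = 1e6 * (np.sin(2*np.pi*t/9.0)**2 + 0.2*rng.random(t.size))   # always >= 0
print(f"usable capacity needed: {required_storage_capacity(p, 1.0, 120)/3.6e6:.2f} kWh")
```

A longer smoothing window gives the grid a flatter profile but grows the required capacity, which is exactly the cost trade-off the thesis optimizes.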
|
402 |
Nedskrivning av goodwill: Kan intressenter lita på redovisningen? / Goodwill impairment: Can stakeholders trust the accounts? Leopold, Fredrik; Lundborg Larsson, Jennifer; Olofsson, Sofia, January 2021.
Background: In 2005, international accounting standards were introduced that required many companies to report goodwill in accordance with IAS 36 and thereby perform an annual impairment test of goodwill. This impairment test opens the door to subjective assumptions and gives executives the opportunity to act opportunistically. Previous studies indicate that goodwill impairments can be affected by, among other things, big-bath behaviour, income smoothing and CEO changes, which leads to the question: can stakeholders trust the reporting of goodwill? Purpose: The purpose of this study is to examine factors that may affect goodwill impairment in order to explain whether stakeholders can trust the goodwill reporting of companies listed on Nasdaq OMX Stockholm. A further purpose is to explain whether the reporting of goodwill impairment on Nasdaq OMX Stockholm meets the IASB's requirement of neutrality. Methodology: The study uses a quantitative method with an abductive approach. To fulfill the purpose, data were collected from 90 randomly selected companies listed on Nasdaq OMX Stockholm. The data were analyzed with multivariate regressions, in the form of tobit regression analysis and logistic regression analysis. The analyses were performed both on the full sample and on subsamples divided by the Nasdaq OMX Stockholm list to which each company belonged in the year concerned, in order to clarify whether the results differ across lists. Results and conclusions: Our study shows that goodwill impairment occurs to a greater extent when earnings are abnormally high and when the company has changed CEO during the last two financial years. This means that stakeholders cannot fully trust companies' reporting of goodwill. The results also show that a CEO change only affects goodwill impairment among the smallest companies in the sample, and that income smoothing occurs among the largest and smallest companies in the sample but not among the mid-sized companies. This indicates that large and small companies may have different incentives to engage in earnings management.
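A minimal sketch of the kind of regression used here, on entirely synthetic data: a logistic model of an impairment indicator on an abnormal-earnings measure and a CEO-change dummy. The variable names and coefficients are hypothetical; the study's actual model specification is richer.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 450  # synthetic firm-year observations

abnormal_earnings = rng.normal(0, 1, n)      # standardized abnormal earnings
ceo_change = rng.binomial(1, 0.2, n)         # 1 if CEO changed in last 2 years
# Synthetic outcome generated so both factors raise impairment probability.
logit_p = -2.0 + 0.8*abnormal_earnings + 1.1*ceo_change
impairment = rng.binomial(1, 1/(1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([abnormal_earnings, ceo_change]))
model = sm.Logit(impairment, X).fit(disp=0)
print(model.summary(xname=["const", "abnormal_earnings", "ceo_change"]))
```

The tobit specification used for the impairment amount (a variable censored at zero) follows the same idea with a censored-regression likelihood.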
|
403 |
Non-parametric methodologies for reconstruction and estimation in nonlinear state-space models / Méthodologies non-paramétriques pour la reconstruction et l’estimation dans les modèles d’états non linéaires. Chau, Thi Tuyet Trang, 26 February 2019.
The amount of both observational and model-simulated data in the environmental, climate and ocean sciences has grown at an accelerating rate. Observational data (satellite, in situ, etc.) are generally accurate but still subject to measurement errors, and they are available only on an irregular spatio-temporal sampling that makes them difficult to exploit directly. Increasing computer power and a better understanding of the physical processes have led to substantial advances in model accuracy and resolution, but purely model-driven solutions may still not be accurate enough for some applications, and these methods remain computationally heavy. Filtering and smoothing (sequential data assimilation methods in practice) have been developed to tackle these issues. They are usually formalized through a state-space model comprising a dynamical model, which describes the evolution of the physical process (the state), and an observation model, which describes the link between the physical process and the available observations. In this thesis, we tackle three problems related to statistical inference for nonlinear state-space models: state reconstruction, parameter estimation, and replacement of the dynamical model by an emulator constructed from data. For the first problem, we introduce an original smoothing algorithm that combines the Conditional Particle Filter (CPF) and Backward Simulation (BS) algorithms. This CPF-BS algorithm explores the state of the physical variable efficiently, sequentially refining the exploration around the trajectories that best satisfy the constraints of the dynamical model and the observations. We show on several toy models that, for equal computation time, CPF-BS gives better results than other CPF algorithms and than the stochastic EnKS algorithm commonly used in operational applications. We then address the estimation of unknown parameters in state-space models. The most common statistical algorithm for this task is the EM algorithm, which iteratively computes a numerical approximation of the maximum likelihood estimators. We show that the EM and CPF-BS algorithms can be combined to estimate the parameters of a toy model effectively. In some applications the dynamical model is unknown or very expensive to solve numerically, but observations or simulations are available. It is then possible to reconstruct the state conditionally on the observations using filtering/smoothing algorithms in which the dynamical model is replaced by a statistical emulator constructed from the observations. We show that the EM and CPF-BS algorithms can be adapted to this setting and provide a non-parametric estimate of the dynamical model of the state from noisy observations. Finally, the proposed algorithms are applied to impute wind data (produced by Météo France).
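To make the filtering/smoothing machinery concrete, here is a minimal sketch of a plain (non-conditional) particle filter followed by backward simulation on a standard univariate toy model. The thesis's CPF-BS additionally conditions each filter sweep on a retained reference trajectory, which this sketch omits; the model and all parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Common benchmark model (not the thesis's application):
#   x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + 8 cos(1.2 t) + N(0, q)
#   y_t = x_t^2 / 20 + N(0, r)
q, r, T, N = 10.0, 1.0, 50, 300

def f(x, t):
    return 0.5*x + 25*x/(1 + x**2) + 8*np.cos(1.2*t)

x = np.zeros(T); y = np.zeros(T)                 # simulate a data set
for t in range(1, T):
    x[t] = f(x[t-1], t) + rng.normal(0, np.sqrt(q))
y = x**2/20 + rng.normal(0, np.sqrt(r), T)

P = np.zeros((T, N)); W = np.zeros((T, N))       # bootstrap particle filter
p = rng.normal(0, 2, N)
for t in range(T):
    if t > 0:
        idx = rng.choice(N, N, p=W[t-1])         # multinomial resampling
        p = f(P[t-1, idx], t) + rng.normal(0, np.sqrt(q), N)
    logw = -0.5*(y[t] - p**2/20)**2 / r          # observation likelihood
    w = np.exp(logw - logw.max()); W[t] = w / w.sum()
    P[t] = p

M = 50                                           # backward simulation (FFBSi)
traj = np.zeros((T, M))
traj[-1] = P[-1, rng.choice(N, M, p=W[-1])]
for t in range(T-2, -1, -1):
    for m in range(M):
        lw = np.log(W[t]) - 0.5*(traj[t+1, m] - f(P[t], t+1))**2 / q
        w = np.exp(lw - lw.max()); w /= w.sum()
        traj[t, m] = P[t, rng.choice(N, p=w)]

x_smooth = traj.mean(axis=1)
print("mean smoothing error:", np.abs(x_smooth - x).mean())
```

The backward pass reweights each filter particle by the transition density toward the already-sampled future state, which is what lets smoothed trajectories use information from the whole observation record.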
|
404 |
Influence of the Choice of Disease Mapping Method on Population Characteristics in Areas of High Disease Burdens. Desai, Khyati Sanket, 12 1900.
Disease maps are powerful tools for depicting spatial variations in disease risk and its underlying drivers. However, producing effective disease maps requires careful consideration of the statistical and spatial properties of the disease data. In fact, the choice of mapping method influences the resulting spatial pattern of the disease, as well as the understanding of its underlying population characteristics. New developments in mapping methods and software, along with continuing improvements in data quality and quantity, require map-makers to make a multitude of decisions before a map of disease burdens can be created. The impact of such decisions, including the choice of an appropriate mapping method, has not been addressed adequately in the literature. This research demonstrates how the choice of mapping method and its associated parameters influences the spatial pattern of disease. We use four disease-mapping methods (unsmoothed choropleth maps; choropleth maps smoothed with the headbanging method; smoothed kernel density maps; and choropleth maps smoothed with spatial empirical Bayes methods) and five years of zip-code-level HIV incidence data (2007-2011) from Dallas and Tarrant Counties, Texas. For each map, the leading population characteristics and their relative importance with regard to HIV incidence are identified using a regression analysis of a CDC-recommended list of socioeconomic determinants of HIV. Our results show that the choice of mapping method leads to different conclusions regarding the associations between HIV disease burden and the underlying demographic and socioeconomic characteristics. Thus, the choice of mapping method influences the patterns of disease we see or fail to see. Accurate depiction of areas of high disease burden is important for developing and targeting appropriate public health interventions.
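For intuition about one of the four methods, here is a minimal sketch of empirical Bayes rate smoothing in its global (non-spatial) form, using the standard method-of-moments shrinkage rule; the spatial variant used in such studies replaces the global reference rate with a local-neighborhood rate. All data below are made up.

```python
import numpy as np

def eb_smooth(cases, pop):
    """Marshall-style global empirical Bayes rate smoothing (method of moments).
    Small-population areas are shrunk strongly toward the reference rate;
    large-population areas keep rates close to their raw values."""
    rates = cases / pop
    r_ref = cases.sum() / pop.sum()                          # reference rate
    var = np.average((rates - r_ref)**2, weights=pop) - r_ref / pop.mean()
    var = max(var, 0.0)                                      # truncate at zero
    w = var / (var + r_ref / pop)                            # shrinkage weights
    return w * rates + (1 - w) * r_ref

cases = np.array([3, 0, 12, 7, 1])
pop = np.array([1200, 350, 9800, 5600, 400])
print(eb_smooth(cases, pop))   # unstable small-area rates move toward the mean
```

This is why the mapping-method choice matters: shrinkage deliberately suppresses extreme rates in sparsely populated areas, changing which areas appear as high-burden.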
|
405 |
Analysis Of A Wave Power System With Passive And Active Rectification. Wahid, Ferdus, January 2020.
A wave energy converter (WEC) harnesses energy from the ocean to produce electrical power. The electrical power produced by the WEC fluctuates and is not maximized, owing to varying ocean conditions. Consequently, without an intermediate power conversion stage, the output power from the WEC cannot be fed into the grid. To feed the WEC output power into the grid, a two-stage power conversion topology is used: the WEC output power is first converted into DC power through rectification, and a DC-AC converter (inverter) then supplies AC power to the grid. The main aim of this research is to extract maximum electrical power from the WEC through active rectification and to smooth the power fluctuations of the wave energy converter with a hybrid energy storage system consisting of a battery and a flywheel. This research also demonstrates active and reactive power injection into the grid according to load demand through a voltage source inverter.
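A minimal sketch of one common way to split smoothing duty in a battery-plus-flywheel hybrid store: a low-pass filter sends slow power swings to the battery and the fast residual to the flywheel. This frequency-based split is an assumed textbook scheme, not necessarily the control used in the thesis, and alpha is a hypothetical tuning parameter.

```python
import numpy as np

def split_storage_power(p_wec, p_grid_target, alpha=0.05):
    """Toy frequency-based power split for a battery+flywheel hybrid store.
    The store absorbs p_wec - p_grid_target; an exponential moving average
    routes the slow component to the battery, the fast rest to the flywheel."""
    p_store = p_wec - p_grid_target          # + charging, - discharging
    p_batt = np.zeros_like(p_store)
    ema = 0.0
    for i, p in enumerate(p_store):
        ema += alpha * (p - ema)             # low-pass: slow fluctuations
        p_batt[i] = ema
    p_fly = p_store - p_batt                 # high-frequency remainder
    return p_batt, p_fly

# Example: smooth a noisy WEC profile toward its long-run mean output.
p = 4e5 * (1 + 0.7*np.sin(np.arange(600)/3.0))
p_batt, p_fly = split_storage_power(p, p.mean())
```

The rationale is that flywheels tolerate frequent shallow cycles well, while batteries age faster under high-frequency cycling, so each technology handles the band it suits.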
|
406 |
The Influence of Disease Mapping Methods on Spatial Patterns and Neighborhood Characteristics for Health Risk. Ruckthongsook, Warangkana, 12 1900.
This thesis addresses three interrelated challenges of disease mapping and contributes a new approach for improving the visualization of disease burdens to enhance disease surveillance systems. First, it determines an appropriate threshold choice (smoothing parameter) for adaptive kernel density estimation (KDE) in disease mapping. The results show that the appropriate threshold value depends on the characteristics of the data, and that bandwidth selector algorithms can guide such decisions about mapping parameters; similar approaches are recommended for map-makers who face threshold decisions for their own data. Second, the study evaluates the relative performance of adaptive KDE and spatial empirical Bayes for disease mapping. The results reveal that while the estimated rates at the state level computed from both methods are identical, those at the zip code level differ slightly. These findings indicate that using either the adaptive KDE or the spatial empirical Bayes method to map disease in urban areas may provide identical rate estimates, but caution is necessary when mapping diseases in non-urban (sparsely populated) areas. This part of the study contributes insights into the relative accuracy of the visual representations and their limitations. Lastly, the study contributes a new approach for delimiting spatial units of disease risk using straightforward statistical and spatial methods together with social determinants of health. The results show that the neighborhood risk map not only helps target geographically where interventions are needed but also helps tailor interventions in those areas to the high-risk populations. Moreover, when health data are limited, the neighborhood risk map alone is adequate for identifying where and which populations are at risk. These findings will benefit the public health tasks of planning and targeting appropriate interventions even in areas with limited and poor-quality health data. This study not only fills the identified gaps of knowledge in disease mapping but also has a wide range of broader impacts. The findings improve and enhance the use of the adaptive KDE method in health research, provide better awareness and understanding of disease mapping methods, and offer an alternative way to identify populations at risk in areas with limited health data. Overall, these findings will benefit public health practitioners and health researchers and enhance disease surveillance systems.
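To illustrate what the adaptive bandwidth does, here is a minimal one-dimensional sketch of an Abramson-style adaptive KDE; the disease-mapping case is two-dimensional, and the thesis's threshold choice concerns the pilot bandwidth h0. This is a generic textbook construction, not the thesis's code, and the data are invented.

```python
import numpy as np

def adaptive_kde(points, grid, h0):
    """Abramson-style adaptive KDE in 1D. h0 is the pilot (fixed) bandwidth;
    per-point bandwidths shrink where the pilot density is high and grow
    where the data are sparse."""
    def gauss(u):
        return np.exp(-0.5*u**2) / np.sqrt(2*np.pi)
    # pilot (fixed-bandwidth) density evaluated at each data point
    pilot = np.array([gauss((p - points)/h0).mean()/h0 for p in points])
    g = np.exp(np.log(pilot).mean())          # geometric mean of pilot densities
    h = h0 * np.sqrt(g / pilot)               # Abramson's square-root law
    return np.array([(gauss((x - points)/h)/h).mean() for x in grid])

# Hypothetical event locations: a dense cluster gets narrower kernels.
rng = np.random.default_rng(7)
pts = np.concatenate([rng.normal(0, 0.5, 200), rng.normal(4, 2.0, 50)])
xs = np.linspace(-3, 10, 200)
dens = adaptive_kde(pts, xs, h0=0.6)
```

The pilot bandwidth h0 plays the role of the threshold studied in the thesis: it controls how aggressively the per-point bandwidths adapt, and hence how smooth the resulting risk surface looks.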
|
407 |
Modul pro dolování v časových řadách systému pro dolování z dat / Time-Series Mining Module of a Data Mining System. Klement, Ondřej, January 2010.
The subject of this master's thesis is the extension of an existing data mining system with a module for time-series data mining. The thesis opens with a general introduction to data mining and continues with time-series analysis. It then surveys current tasks and algorithms used in time-series data mining, followed by the design of the implementation and a description of the chosen mining method. Possible future improvements of the system are discussed at the end of the paper.
|
408 |
Redovisning av goodwill under IAS 36: Bestämmande faktorer som påverkar aktualisering av goodwillnedskrivning hos företag på Nasdaq Stockholm / Goodwill accounting under IAS 36: determining factors affecting the recognition of goodwill impairment among companies on Nasdaq Stockholm. Berbic, Almir; de Barès, Markus, January 2020.
This study examines firm-specific economic factors and opportunistic incentive-related factors among executives that determine the recognition of goodwill impairment losses in the Swedish context. In 2005, the International Accounting Standards Board implemented principle-based guidelines for the recognition of goodwill in accordance with IAS 36, replacing the previous systematic amortization of goodwill. The implementation was intended to improve goodwill accounting by providing users of financial reports with more value-relevant information about the asset's underlying performance. However, the new principle-based rules have been criticized by researchers because the discretion that IAS 36 allows in impairment tests may give rise to opportunistic incentives among executives. The study is limited to Nasdaq Stockholm because previous research has shown inconsistent results regarding the factors that determine goodwill impairment, and because there is little empirical evidence and there are differing arguments in the accounting literature. The sample consists of a total of 285 companies on Nasdaq Stockholm over five years, which, after coverage errors and omissions, results in 1,090 firm-year observations. The empirical results indicate that, under the discretion allowed by IAS 36, executives in the Swedish context act opportunistically in impairment tests to achieve desired outcomes, specifically around changes of CEO and through income smoothing in periods of abnormally high earnings, and that they do not fully follow the firm-specific economic criteria of IAS 36 when assessing the recoverable amount of cash-generating units. The subjective scope of impairment tests entails practical and theoretical implications for users of financial reports, practitioners and standard-setters.
|
409 |
Iterativni postupci sa regularizacijom za rešavanje nelinearnih komplementarnih problema / Iterative methods with regularization for solving nonlinear complementarity problems. Rapajić, Sanja, 13 July 2005.
Iterative methods for nonlinear complementarity problems (NCP) are considered in this doctoral dissertation. Problems of this type appear in optimization theory, engineering and economics, and the mathematical models of many natural, social and technical processes also reduce to such problems, which makes their solution highly relevant. Among the many numerical methods used for this purpose, particular attention is devoted to generalized Newton-type methods and to iterative methods with regularization of the Jacobian matrix (Jacobian smoothing methods). Several new methods for solving NCP are defined in this dissertation, and their local or global convergence is proved. The theoretical results are tested on relevant numerical examples.
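For context, this is the standard problem statement and smoothing device that generalized Newton-type and Jacobian smoothing methods build on: the Fischer-Burmeister reformulation with Kanzow-style smoothing. It is a textbook formulation, not material quoted from the dissertation.

```latex
% Nonlinear complementarity problem for F : R^n -> R^n
\[
  \mathrm{NCP}(F):\qquad x \ge 0, \quad F(x) \ge 0, \quad x^{\top} F(x) = 0 .
\]
% Nonsmooth reformulation via the Fischer--Burmeister function, applied
% componentwise as Phi(x)_i = phi(x_i, F_i(x)): x solves the NCP iff Phi(x) = 0.
\[
  \varphi(a,b) = \sqrt{a^{2} + b^{2}} - a - b, \qquad
  \varphi(a,b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0 .
\]
% phi is nondifferentiable at the origin, so Jacobian smoothing methods use the
% smooth approximation below (mu > 0, driven to 0 along the iterates) and take
% Newton-type steps on Phi(x) = 0 with the Jacobian of the smoothed map.
\[
  \varphi_{\mu}(a,b) = \sqrt{a^{2} + b^{2} + 2\mu} - a - b , \qquad \mu > 0 .
\]
```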
|
410 |
Evaluation of DC Fault Current in Grid Connected Converters in HVDC Stations. SinhaRoy, Soham, January 2022.
The main circuit equipment in an HVDC station must be rated for continuous operation as well as for the stresses during ground faults and other short circuits. The component impedances are thus selected for proper operation during both continuous operation and short-circuit events. Normally, electromagnetic transient (EMT) simulations are performed to determine the short-circuit current ratings, which can lead to time-consuming iterations when optimizing impedance values. Hence, sufficiently accurate and handy formulas are useful. For that reason, this work first presents a thorough literature study to gain a deep understanding of the modular multilevel converter (MMC) and its behaviour after a DC pole-to-pole short-circuit fault. Two associated simulation models are designed in the PSCAD/EMTDC simulation software. The focus of this thesis is on DC pole-to-pole short circuits in a symmetric-monopole HVDC VSC modular multilevel converter (MMC). The desired analytical expression for the steady-state fault current is determined using mesh analysis and by applying KCL, and it is verified by a set of simulations in PSCAD. A detailed sensitivity study is performed in PSCAD to understand the influence of the AC converter reactor inductance and the DC smoothing reactor inductance on the steady-state and peak fault currents, respectively. From the sensitivity study, simulated values of the peak factor are obtained. Using the ratio between the DC-side inductance (L_DC) and the AC-side inductance (L_AC), and by performing a number of calculations, the desired expression for the peak factor is derived, from which the peak fault current can be calculated. The peak fault current calculated from the derived formula is compared with the simulated value and validated; an over-estimation is accepted for the rating of the equipment. In addition, the effect of the impedances of equipment and systems is analysed and verified to better judge the accuracy of the result. The results show that the derived analytical expression for the steady-state value is within 2% of the PSCAD-simulated value, an error that can safely be ignored. Similarly, the value obtained from the derived formula for the peak fault current is within a 4% over-estimation margin of the PSCAD-simulated value, which is quite good for estimating the cost of rating the components.
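As a rough illustration of why the DC smoothing reactor matters for the peak, here is a toy lumped RLC discharge model of a pole-to-pole fault: an equivalent submodule capacitance discharging through the loop inductance. It omits the AC-side infeed that sets the true steady-state current, and every parameter value is an assumption, not data from the thesis.

```python
import numpy as np

def fault_peak_current(L_ac, L_dc, R=0.5, C=0.9e-3, V0=640e3, dt=1e-6, T=0.05):
    """Peak current of a toy RLC fault loop: equivalent capacitance C at
    pole-to-pole voltage V0 discharges through R and L = L_ac + L_dc.
    Forward-Euler integration; all values are illustrative assumptions."""
    L = L_ac + L_dc
    i, v, i_peak = 0.0, V0, 0.0
    for _ in range(int(T/dt)):
        di = (v - R*i) / L          # KVL around the fault loop
        dv = -i / C                 # capacitor discharge
        i += di*dt
        v += dv*dt
        i_peak = max(i_peak, i)
    return i_peak

# Sensitivity of the peak to the DC smoothing reactor, with L_ac fixed,
# mirroring the kind of sweep the thesis runs in PSCAD.
for L_dc in (0.01, 0.05, 0.1, 0.2):
    print(f"L_dc = {L_dc:5.2f} H  ->  peak = {fault_peak_current(0.05, L_dc)/1e3:8.1f} kA")
```

Even in this crude model the peak scales roughly with V0*sqrt(C/L), so increasing the DC-side inductance relative to the AC side lowers the peak factor, which is the qualitative trend the derived formula captures.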
|