61

Bayesian calibration of building energy models for energy retrofit decision-making under uncertainty

Heo, Yeonsook 10 November 2011 (has links)
Retrofitting of existing buildings is essential to reach reduction targets in energy consumption and greenhouse gas emissions. In current retrofit decision practice, professionals perform energy audits and construct dynamic simulation models to benchmark the performance of existing buildings and predict the effect of retrofit interventions. To enhance the reliability of these simulation models, they typically calibrate them against monitored energy use data. The calibration techniques used for this purpose are manual and expert-driven. The current practice has major drawbacks: (1) the modeling and calibration methods do not scale to large portfolios of buildings due to their high costs and heavy reliance on expertise, and (2) the resulting deterministic models do not provide insight into the underperformance risks associated with each retrofit intervention. This thesis has developed a new retrofit analysis framework that is suitable for large-scale analysis and risk-conscious decision-making. The framework is based on the use of normative models and Bayesian calibration techniques. Normative models are lightweight quasi-steady-state energy models that can scale up to large sets of buildings, i.e. to city and regional scale. In addition, they do not require modeling expertise since they follow a set of modeling rules that produce a standard measure for energy performance. The normative models are calibrated under a Bayesian approach such that the resulting calibrated models quantify uncertainties in the energy outcomes of a building. Bayesian calibration models can also incorporate additional uncertainties associated with retrofit interventions to generate probability distributions of retrofit performance. Probabilistic outputs can be straightforwardly translated into a measure that quantifies the underperformance risk of retrofit interventions and thus enables decision-making relative to the decision-makers' objectives and risk attitude. This thesis demonstrates the feasibility of the new framework on retrofit applications by verifying the following two hypotheses: (1) normative models supported by Bayesian calibration have sufficient model fidelity to adequately support retrofit decisions, and (2) they can support risk-conscious decision-making by explicitly quantifying risks associated with retrofit options. The two hypotheses are examined through case studies that, respectively, compare outcomes from the calibrated normative model with those from a similarly calibrated transient simulation model, and compare decisions derived by the proposed framework with those derived by standard practice. The new framework will enable cost-effective retrofit analysis at urban scale with explicit management of uncertainties.
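To make the calibration idea concrete, the following is a minimal sketch of Bayesian calibration of a single energy-model parameter against monitored monthly energy use. The "normative model", its parameter values, and the monitored data are illustrative assumptions, not the framework developed in the thesis.

```python
import numpy as np

# Toy stand-in for a normative (quasi-steady-state) energy model:
# monthly heating energy as a function of an uncertain infiltration rate.
def normative_model(infiltration_ach, hdd_month):
    ua_envelope = 180.0                          # W/K, assumed envelope conductance
    ua_infiltration = 120.0 * infiltration_ach   # W/K per air change, assumed
    return (ua_envelope + ua_infiltration) * hdd_month * 24.0 / 1000.0  # kWh

hdd = np.array([420, 380, 300, 180, 90, 30, 10, 20, 80, 200, 320, 400])      # heating degree-days
observed = normative_model(0.65, hdd) + np.random.normal(0, 150, size=12)    # synthetic "monitored" data

def log_posterior(theta):
    if not 0.1 < theta < 2.0:                    # uniform prior on infiltration (ACH)
        return -np.inf
    resid = observed - normative_model(theta, hdd)
    return -0.5 * np.sum((resid / 150.0) ** 2)   # Gaussian likelihood, known observation error

# Metropolis sampling of the calibration parameter
samples, theta = [], 1.0
for _ in range(20000):
    prop = theta + np.random.normal(0, 0.05)
    if np.log(np.random.rand()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)
post = np.array(samples[5000:])

# Posterior predictive annual energy -> basis for quantifying underperformance risk
annual = normative_model(post, hdd.sum())
print(f"infiltration: {post.mean():.2f} ACH (95% CI {np.percentile(post, 2.5):.2f}-{np.percentile(post, 97.5):.2f})")
print(f"annual heating energy percentiles (5/50/95): {np.percentile(annual, [5, 50, 95])} kWh")
```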
62

A Probabilistic Re-assessment of the Index of New Cars’ Climate Impact : Benefits of Biofuels in Sweden

Lübbe, Nils January 2011 (has links)
Introduction: The index of new cars’ climate impact (Trafikverket 2010) calculated a reduction factor to quantify the positive impact of renewable fuels on CO2 emissions for new cars in Sweden in 2009. Although uncertainty for many input parameters is acknowledged, all calculations were made with point estimates. Aim: The aim of this study was to verify consumer recommendations based on carbon dioxide emissions of new cars by re-assessing the CO2 emissions while accounting for the uncertainty in input data. The data uncertainty was quantified using probabilistic risk assessment methodology. The objectives were (1) to calculate the possible reduction of CO2 emissions for new cars in Sweden accounting for uncertainties; (2) to identify the most influential factors by conducting a sensitivity analysis; and (3) to assess the implications of interval analysis and probabilistic modelling for epistemic uncertainty in this context and thereby to contribute to the discussion on which method is the most appropriate to support decision making. Method: Calculations were performed with the deterministic model proposed by Trafikverket (2010) using probability distributions (Bayesian probability distributions assigned by expert judgement) and intervals as model input. Probabilistic modelling was implemented as Monte Carlo simulation with uniform and triangular distributions in Matlab. Interval calculations were done by hand. Results: The best estimate from probabilistic modelling of the CO2 reduction when using E85 as a fuel, 30% (95% confidence interval: 10% to 52%), is in a similar range to the 20% given by Trafikverket (2010). The best estimate of a 28% decrease for gas cars (95% confidence interval: 3% to 44%) is lower than the originally proposed 42%, but still in a similar range. The difference is largely due to the optimistic values adopted by Trafikverket (2010). The interval analysis produced a wider range: for ethanol-driven cars, the result spans from a CO2 reduction of 68% to an increase of 14%; for cars operated on gas, from a CO2 reduction of 59% to an increase of 29%. Conclusions: The use of biofuels most likely reduces CO2 emissions. The CO2 emissions from the production of the biofuel are the most influential factors. The recommendations of Trafikverket to consumers are still valid after probabilistic recalculation. Probabilistic modelling is more appropriate than interval analysis to guide authority and consumer decisions, as it allows conclusions to be drawn and can therefore support decision making.
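As an illustration of the kind of probabilistic recalculation described, the sketch below runs a Monte Carlo simulation with triangular and uniform input distributions and reports a best estimate and 95% interval for the CO2 reduction of an E85 car. The emission factors and distribution bounds are assumed placeholders, not the figures used by Trafikverket (2010) or in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative (assumed) inputs, not Trafikverket's figures:
# well-to-wheel CO2 intensities for petrol and for ethanol, each given a
# triangular distribution (min, mode, max) representing expert judgement.
petrol_gCO2_per_MJ = rng.triangular(82, 88, 94, n)
ethanol_gCO2_per_MJ = rng.triangular(30, 60, 85, n)       # dominated by the production pathway
blend_share_ethanol = rng.uniform(0.70, 0.85, n)          # energy share of ethanol in E85
extra_consumption = rng.uniform(1.00, 1.05, n)            # relative energy use of the flexifuel car

e85_gCO2_per_MJ = (blend_share_ethanol * ethanol_gCO2_per_MJ
                   + (1 - blend_share_ethanol) * petrol_gCO2_per_MJ) * extra_consumption

reduction = 1.0 - e85_gCO2_per_MJ / petrol_gCO2_per_MJ
lo, med, hi = np.percentile(reduction, [2.5, 50, 97.5])
print(f"CO2 reduction with E85: best estimate {med:.0%}, 95% interval {lo:.0%} to {hi:.0%}")
```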
63

Development and Implementation of Bayesian Computer Model Emulators

Lopes, Danilo Lourenco January 2011 (has links)
Our interest is the risk assessment of rare natural hazards, such as large volcanic pyroclastic flows. Since catastrophic consequences of volcanic flows are rare events, our analysis benefits from the use of a computer model to provide information about these events under natural conditions that may not have been observed in reality. A common problem in the analysis of computer experiments, however, is the high computational cost associated with each simulation of a complex physical process. We tackle this problem by using a statistical approximation (emulator) to predict the output of the computer model at untried values of inputs. The Gaussian process response surface is a technique commonly used in these applications, because it is fast and easy to use in the analysis. We explore several aspects of the implementation of Gaussian process emulators in a Bayesian context. First, we propose an improvement for the implementation of the plug-in approach to Gaussian processes. Next, we evaluate the performance of a spatial model for large data sets in the context of computer experiments. Computer model data can also be combined with field observations in order to calibrate the emulator and obtain statistical approximations to the computer model that are closer to reality. We present an application where we learn the joint distribution of inputs from field data and then bind this auxiliary information to the emulator in a calibration process. One of the outputs of our computer model is a surface of maximum volcanic flow height over some geographical area. We show how the topography of the volcano area plays an important role in determining the shape of this surface, and we propose methods to incorporate geophysical information in the multivariate analysis of computer model output.
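A minimal sketch of the emulation idea is shown below: a Gaussian process is fitted to a handful of runs of a deliberately cheap stand-in simulator and then predicts the output, with uncertainty, at untried inputs. It uses scikit-learn's GP regressor for illustration rather than the plug-in and calibration machinery developed in the dissertation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Pretend "computer model": flow height as a function of (initial volume, basal friction).
# In practice each evaluation is an expensive simulation; here it is a cheap stand-in.
def simulator(x):
    volume, friction = x[:, 0], x[:, 1]
    return 10.0 * np.sqrt(volume) * np.exp(-3.0 * friction)

rng = np.random.default_rng(0)
X_train = rng.uniform([0.1, 0.0], [1.0, 1.0], size=(30, 2))   # design points (tried inputs)
y_train = simulator(X_train)

# Gaussian process emulator: fast statistical approximation of the simulator
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = rng.uniform([0.1, 0.0], [1.0, 1.0], size=(5, 2))       # untried inputs
mean, sd = gp.predict(X_new, return_std=True)
for x, m, s, t in zip(X_new, mean, sd, simulator(X_new)):
    print(f"x={x.round(2)}  emulator={m:6.2f} ± {1.96*s:4.2f}   simulator={t:6.2f}")
```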
64

A Case Study of Risk Management for a Groundwater Site Contaminated by Chlorinated Organic Solvents

Lin, Sang-Feng 02 January 2012 (has links)
Soil and groundwater pollution in Taiwan has increased in recent years due to leakage of petroleum products and organic chemicals. Because Taiwan is small and densely populated, such pollution can reach people through water, soil, air, crops, and fish, and thus affect human health directly or indirectly. This study focuses on chlorinated compounds at a domestic site and uses risk assessment methods to analyze the necessity and urgency of remediating the contaminated site and its downstream influence, and then, based on the results, to formulate management strategies and projects that control the risk level and impact of the contamination. Site information was collected and, following the health risk assessment methodology for soil and groundwater contaminated sites and using the Crystal Ball software and the BIOCHLOR screening model, a site contaminated with trichloroethylene (TCE) was evaluated, the potential effect on the health of nearby residents was considered, and hydrogeological survey data from the site were used to carry out a second-tier health risk assessment. First, the commercial software Crystal Ball was used for uncertainty and sensitivity analysis. The analysis examines not only how variation in the full parameter set affects the risk value but also the results obtained with different parameter combinations. The results confirm that the parameters with the largest influence on risk are those governing pollutant transport, which is consistent with previous assessment studies. Other parameters, such as receptor age group, exposure pathway, exposure time, and water consumption, have less influence on the variation in risk. The study also found that the TCE concentration, and hence the risk value, changes as groundwater moves downstream, so the receptor's distance from the contamination source has a large influence: a greater distance allows more attenuation of the TCE plume and results in a lower cancer risk for the receptor, whereas a shorter distance results in a higher cancer risk. Subsequently, the BIOCHLOR screening model from the U.S. Environmental Protection Agency was used to determine whether the site has potential for anaerobic reductive dechlorination and to estimate the possibility of natural attenuation. The actual site conditions scored only three points in the weighted total, which does not demonstrate, at this stage and without more reliable data, that the chlorinated compounds can biodegrade in the anaerobic environment, so other remediation approaches are recommended. Based on the risk assessment results, the study selects the most important sensitive parameters for the site and works out remediation approaches suited to this case according to the schedule of objectives. The investigation found that residents do extract groundwater for agricultural irrigation, but do not drink it directly. To avoid the worst case from a risk perspective, two aspects are considered: regulation and technology. First, for administrative control under the regulatory aspect, measures to effectively prevent residents from extracting groundwater are considered, for example prohibiting or limiting the installation of wells, providing substitute drinking water, posting notice signs, and monitoring groundwater quality regularly. Second, to prevent the contaminant plume from spreading, technical measures are adopted, for example remediation technologies (currently including soil vapor extraction and air sparging) and containment technologies (including hydraulic containment, permeable reactive barriers, and natural bioremediation) to manage possible events at the site effectively and to prepare preventive measures in advance. Good communication with local residents is also emphasized, so that they understand the content of the remediation work and their resistance is reduced, which helps the remediation progress and achieves the risk management objective.
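The sketch below illustrates the type of Monte Carlo risk calculation and rank-correlation sensitivity analysis performed with Crystal Ball; all exposure parameters, distributions, and the TCE slope factor shown are assumed for illustration and are not the site data from this study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000

# Illustrative (assumed) exposure parameters for TCE in groundwater used for irrigation;
# distributions and values are placeholders, not the site data from the thesis.
conc = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n)   # mg/L at the receptor well
intake = rng.triangular(0.5, 1.0, 2.0, n)                    # L/day incidental intake
exp_freq = rng.uniform(180, 350, n)                          # days/year
exp_dur = rng.triangular(10, 24, 30, n)                      # years
body_wt = rng.normal(60, 8, n)                               # kg
slope = 0.046                                                # (mg/kg-day)^-1, assumed oral slope factor

cdi = conc * intake * exp_freq * exp_dur / (body_wt * 70 * 365)  # chronic daily intake
risk = cdi * slope

print(f"cancer risk: median {np.median(risk):.2e}, 95th percentile {np.percentile(risk, 95):.2e}")

# Rank-correlation sensitivity: which inputs drive the spread in risk?
for name, x in [("concentration", conc), ("intake", intake), ("exposure freq", exp_freq),
                ("exposure dur", exp_dur), ("body weight", body_wt)]:
    rho = stats.spearmanr(x, risk)[0]
    print(f"{name:>14s}: Spearman rho = {rho:+.2f}")
```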
65

Design and performance of an ammonia measurement system

Boriack, Cale Nolan 25 April 2007 (has links)
Ammonia emissions from animal feeding operations (AFOs) have recently come under increased scrutiny. The US Environmental Protection Agency (EPA) has come under increased pressure from special interest groups to regulate ammonia. Regulation of ammonia is very difficult because every facility has different manure management practices. Different management practices lead to different emissions for every facility. Researchers have been tasked by industry to find best management practices to reduce emissions. The task cannot be completed without equipment that can efficiently and accurately compare emissions. To complete this task, a measurement system was developed and performance tested to measure ammonia. Performance tests included uncertainty analysis, system response, and adsorption kinetics. A measurement system was designed for measurement of gaseous emissions from ground level area sources (GLAS) in order to sample multiple receptors with a single sensor. This multiplexer may be used in both local and remote measurement systems to increase the sampling rate of gaseous emissions. The increased data collection capacity with the multiplexer allows for nearly three times as many samples to be taken in the same amount of time while using the same protocol for sampling. System response analysis was performed on an ammonia analyzer, a hydrogen sulfide analyzer, and tubing used with flux chamber measurement. System responses were measured and evaluated using transfer functions. The system responses for the analyzers were found to be first order with delay in auto mode. The tubing response was found to be a first order response with delay. Uncertainty analysis was performed on an ammonia sampling and analyzing system. The system included an analyzer, mass flow controllers, calibration gases, and analog outputs. The standard uncertainty was found to be 443 ppb when measuring a 16 ppm ammonia stream with a 20 ppm span. A laboratory study dealing with the adsorption kinetics of ammonia on a flux chamber was performed to determine if adsorption onto the chamber walls was significant. The study found that the adsorption would not significantly change the concentration of the output flow 30 minutes after a clean chamber was exposed to ammonia concentrations for concentrations above 2.5 ppm.
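As an illustration of how component uncertainties combine into a standard uncertainty for such a sampling and analysis system, the sketch below applies a root-sum-square (GUM-style) combination; the component values are assumed placeholders, not the budget that produced the 443 ppb figure.

```python
import numpy as np

# Illustrative (assumed) uncertainty components for a 16 ppm measurement on a 20 ppm span;
# the component values are placeholders, not the actual budget from the thesis.
components_ppb = {
    "analyzer repeatability (1% of span)": 0.01 * 20_000,
    "calibration gas (2% of 20 ppm)": 0.02 * 20_000 / np.sqrt(3),   # rectangular -> /sqrt(3)
    "mass flow controller dilution": 150.0,
    "analog output quantization": 20_000 / (2**12 * np.sqrt(12)),   # 12-bit output, rectangular
}

# Combined standard uncertainty: root-sum-square of independent components
u_c = np.sqrt(sum(u**2 for u in components_ppb.values()))
for name, u in components_ppb.items():
    print(f"{name:40s} {u:7.1f} ppb")
print(f"{'combined standard uncertainty':40s} {u_c:7.1f} ppb")
print(f"{'expanded uncertainty (k=2)':40s} {2*u_c:7.1f} ppb")
```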
66

Real-Time Demand Estimation for Water Distribution Systems

Kang, Doo Sun January 2008 (has links)
The goal of a water distribution system (WDS) is to supply the desired quantity of fresh water to consumers at the appropriate time. In order to properly operate a WDS, system operators need information about the system states, such as tank water level, nodal pressure, and water quality, at locations across the system. Most water utilities now have some level of SCADA (Supervisory Control and Data Acquisition) systems providing nearly real-time monitoring data. However, due to the prohibitive metering costs and lack of applications for the data, only portions of systems are monitored and the use of the SCADA data is limited. This dissertation takes a comprehensive view of real-time demand estimation in water distribution systems. The goal is to develop an optimal monitoring system plan that will collect appropriate field data to determine accurate, precise demand estimates and to understand their impact on model predictions. To achieve that goal, a methodology for real-time demand estimates and associated uncertainties using a limited number of field measurements is developed. Further, system-wide nodal pressures and chlorine concentrations, and their uncertainties, are predicted using the estimated nodal demands. This dissertation is composed of three journal manuscripts that address these three key steps, beginning with uncertainty evaluation, followed by demand estimation, and finally optimal metering layout. The uncertainties associated with the state estimates are quantified in terms of confidence limits. To compute the uncertainties in real time, alternative schemes that reduce computational effort while providing good statistical approximations are evaluated and verified by Monte Carlo simulation (MCS). The first-order second-moment (FOSM) method provides accurate variance estimates for pressure; however, because of its linearity assumption it has limited predictive ability for chlorine under unsteady conditions. Latin Hypercube sampling (LHS) provides good estimates of prediction uncertainty for chlorine and pressure in steady and unsteady conditions with significantly less effort. For real-time demand estimation, two recursive state estimators are applied: a tracking state estimator (TSE) based on a weighted least squares (WLS) scheme, and a Kalman filter (KF). In addition, in order to determine suitable field data types for demand estimation, comparative studies are performed using pipe flow rates and nodal pressure heads as measurements. To reduce the number of unknowns and make the system solvable, nodes with similar user characteristics are grouped and assumed to have the same demand pattern. The uncertainties in state variables are quantified in terms of confidence limits using the approximate methods (i.e., FOSM and LHS). Results show that TSE with pipe flow rates as measurements provides reliable demand estimates. Also, the model predictions computed using the estimated demands match well with the synthetically generated true values. Field measurements are critical to obtaining quality real-time state estimates. However, the limited number of metering locations has been a significant obstacle for real-time studies, and identifying locations that best gain information is critical. Here, optimal meter placement (OMP) is formulated as a multi-objective optimization problem and solved using a multi-objective genetic algorithm (MOGA) based on Pareto-optimal solutions. Results show that model accuracy and precision should be pursued simultaneously as objectives, since the two measures have a trade-off relationship. The GA solutions were improvements over less robust methods and designers' experienced judgment.
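The sketch below illustrates the Kalman-filter side of the demand estimation: group demands are modelled as a random walk and updated from noisy flow measurements. The measurement matrix is a fixed, assumed linearization; a real WDS application would couple the filter to a hydraulic solver as described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy network: 3 demand groups, 4 pipe-flow sensors whose readings are (assumed)
# linear combinations of the group demands plus noise. A real WDS would require a
# hydraulic solver; this only illustrates the Kalman-filter tracking step.
H = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.7, 0.0, 1.0],
              [0.4, 0.4, 0.4]])          # measurement matrix (flows vs. group demands)
Q = np.eye(3) * 4.0                      # random-walk process noise (demand drift)
R = np.eye(4) * 1.0                      # flow-meter noise

x_est = np.array([50.0, 30.0, 20.0])     # initial demand estimate (L/s per group)
P = np.eye(3) * 100.0
true_d = x_est.copy()

for t in range(24):
    true_d += rng.normal(0, 2.0, 3)                       # true demands drift hourly
    z = H @ true_d + rng.normal(0, 1.0, 4)                # SCADA flow measurements

    # Predict (random-walk model), then update with the measurements
    P = P + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(3) - K @ H) @ P

print("true demands:     ", true_d.round(1))
print("estimated demands:", x_est.round(1))
print("estimate std dev: ", np.sqrt(np.diag(P)).round(2))
```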
67

Decision support algorithms for power system and power electronic design

Heidari, Maziar 10 September 2010 (has links)
The thesis introduces an approach for obtaining higher level decision support information using electromagnetic transient (EMT) simulation programs. In this approach, a suite of higher level driver programs (decision support tools) controls the simulator to gain important information about the system being simulated. These tools conduct a sequence of simulation runs, in each of which the study parameters are carefully selected based on observations of the earlier runs in the sequence. In this research two such tools have been developed in conjunction with the PSCAD/EMTDC electromagnetic transient simulation program. The first tool is an improved optimization algorithm, which is used for automatic optimization of the system parameters to achieve a desired performance. This algorithm improves the capabilities of the previously reported method of optimization-enabled electromagnetic transient simulation by using an enhanced gradient-based optimization algorithm with constraint handling techniques. In addition, to allow handling of design problems with more than one objective, the thesis proposes to augment the optimization tool with the technique of Pareto optimality. A sequence of optimization runs is conducted to obtain the Pareto frontier, which quantifies the tradeoffs between the design objectives. The frontier can be used by the designer in the decision-making process. The second tool developed in this research helps the designer to study the effects of uncertainties in a design. Using a similar multiple-run approach, this sensitivity analysis tool provides surrogate models of the system, which are simple mathematical functions that represent different aspects of the system performance. These models allow the designer to analyze the effects of uncertainties on system performance without having to conduct any further time-consuming EMT simulations. In this research it has also been proposed to add probabilistic analysis capabilities to the developed sensitivity analysis tool. Since probabilistic analysis of a system using conventional techniques (e.g. Monte Carlo simulations) normally requires a large number of EMT simulation runs, using surrogate models instead of the actual simulation runs yields significant savings in terms of shortened simulation time. A number of examples are used throughout the thesis to demonstrate the application and usefulness of the proposed tools.
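A minimal sketch of how a sequence of single-objective runs can trace a Pareto frontier is given below. The two analytic objectives are placeholders for quantities an EMT simulation would return, and the weighted-sum scalarization is used only for illustration; it is not necessarily the scheme implemented in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Two competing design objectives standing in for quantities an EMT simulation would
# return (e.g. overshoot vs. settling time as functions of two controller gains).
# These analytic functions are illustrative placeholders, not PSCAD/EMTDC results.
def f1(x):  # e.g. overshoot
    return (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2

def f2(x):  # e.g. settling time
    return 0.5 * (x[0] + 1.0) ** 2 + (x[1] - 1.0) ** 2

# Sequence of single-objective runs with a weighted-sum scalarization:
# each run yields one point on the Pareto frontier quantifying the trade-off.
frontier = []
for w in np.linspace(0.05, 0.95, 10):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0], method="Nelder-Mead")
    frontier.append((w, f1(res.x), f2(res.x)))

print(" w      f1      f2")
for w, a, b in frontier:
    print(f"{w:4.2f}  {a:6.3f}  {b:6.3f}")
```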
68

On Fuel Coolant Interactions and Debris Coolability in Light Water Reactors

Thakre, Sachin January 2015 (has links)
In a hypothetical severe accident in a light water reactor, core damage may occur and molten fuel may interact with water, resulting in explosive interactions. A fuel-coolant interaction (FCI) consists of many complex phenomena whose characteristics determine the energetics of the interaction. The fuel melt initially undergoes fragmentation after contact with the coolant, which increases the melt surface area exposed to the coolant and causes rapid heat transfer. A substantial amount of research has been done to understand the phenomenology of FCI, but there are still gaps to be filled regarding the uncertainties in describing processes such as the breakup/fragmentation of melt jets and droplets. The objective of the present work is to strengthen the understanding of the premixing phase of the FCI process by studying the deformation/pre-fragmentation of melt droplets and the mechanism of melt jet breakup. The focus of the work is to study the effect of various influential parameters during the premixing phase that determine the intensity of the energetics in terms of steam explosion. The study is based on numerical analysis, starting from small scale and going to large-scale FCI. Efforts are also made to evaluate the uncertainties in estimating steam explosion loads at the reactor scale. The fragmented core is expected to form a porous debris bed. A part of the present work also deals with experimental investigations of the coolability of prototypical debris beds. Initially, the phenomenology of FCI and debris bed coolability is introduced. A review of the state of the art based on previous experimental and theoretical developments is also presented. The study starts with numerical investigation of molten droplet hydrodynamics in a water pool, carried out using the Volume Of Fluid (VOF) method in the CFD code ANSYS FLUENT. This fundamental study is related to single droplets in a preconditioning phase, i.e. deformation/pre-fragmentation prior to steam explosion. The droplet deformation is studied extensively, including the effect of a pressure pulse on the deformation behavior. The effects of material physical properties such as density, surface tension and viscosity are investigated. The work is then extended to 3D analysis as part of high-fidelity simulations, in order to overcome the possible limitations of 2D simulations. The investigation of FCI processes then continues with the analysis of melt jet fragmentation in a water pool, since this is the crucial phenomenon that creates the melt-coolant pre-mixture, an initial condition for steam explosion. The calculations are carried out assuming non-boiling conditions and the properties of Wood’s metal. The jet fragmentation and breakup pattern are carefully observed at various Weber numbers. Moreover, the effect of physical and material properties such as diameter, velocity, density, surface tension and viscosity on jet breakup length is investigated. After the fundamental studies, the work was extended to reactor-scale FCI energetics. It is mainly oriented towards the evaluation of uncertainties in estimating the explosion impact loads on the surrounding structures. The uncertainties include the influential parameters in the FCI process and also the code uncertainties in the calculations. The FCI code MC3D is used for the simulations and the PIE (propagation of input errors) method is used for the uncertainty analysis.
The last part of the work concerns experimental investigations of debris coolability carried out using the POMECO-HT facility at KTH. The focus is on determining the effect of the bed's prototypical characteristics on its coolability, in terms of inhomogeneity with a heap-like (triangular) bed and a radially stratified bed, and also the effect of its multi-dimensionality. For this purpose, four particle beds were constructed: two homogeneous, one with radial stratification and one with triangular shape. The effectiveness of coolability-enhancing measures such as bottom injection of water and a downcomer (used for natural-circulation-driven coolability, NCDC) was also investigated. The final chapter summarizes the whole work.
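As a small worked example of the jet-breakup parameter discussed above, the sketch below computes the Weber number for a melt jet over a range of diameters and velocities; the Wood's metal properties are representative assumed values, not the exact conditions simulated in the thesis.

```python
import numpy as np

# Representative (assumed) properties for a Wood's metal jet entering a water pool;
# values are illustrative, not the exact conditions simulated in the thesis.
rho_jet = 9400.0      # kg/m^3, melt density
sigma = 1.0           # N/m, melt-water interfacial tension
diameters = np.array([0.005, 0.01, 0.02])   # m, jet diameter
velocities = np.array([1.0, 2.0, 5.0])      # m/s, jet inlet velocity

# Weber number We = rho * v^2 * D / sigma compares inertial to surface-tension forces;
# higher We generally means finer fragmentation of the jet.
print("  D [m]   v [m/s]     We")
for D in diameters:
    for v in velocities:
        we = rho_jet * v**2 * D / sigma
        print(f"{D:7.3f} {v:9.1f} {we:9.0f}")
```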
69

Using uncertainty and sensitivity analysis to inform the design of net-zero energy vaccine warehouses

Pudleiner, David Burl 27 August 2014 (has links)
The vaccine cold chain is an integral part of the process of storing and distributing vaccines prior to administration. A key component of this cold chain for developing countries is the primary vaccine storage warehouse. As the starting point for the distribution of vaccines throughout the country, these buildings have a significant amount of refrigerated space and therefore consume large amounts of energy. This thesis therefore focuses on analyzing the relative importance of parameters for the design of an energy efficient primary vaccine storage warehouse, with the end goal of achieving Net-Zero Energy operation. A total of 31 architectural design parameters, such as roof insulation U-value and external wall thermal mass, along with 14 building control parameters, including evaporator coil defrost termination and thermostat set points, are examined. The analysis is conducted across five locations in the developing world with significant variations in climate conditions: Buenos Aires, Argentina; Tunis, Tunisia; Asuncion, Paraguay; Mombasa, Kenya; and Bangkok, Thailand. Variations in the parameters are examined through a Monte Carlo-based global uncertainty and sensitivity analysis applied to a case study building layout. A regression-based sensitivity analysis is used to analyze both the main effects of each parameter and the interactions between parameter pairs. The results of this research indicate that, for all climates examined, the building control parameters have a larger relative importance than the architectural design parameters in determining the warehouse energy consumption. This is due to the dominance of the most influential building control parameter examined, the Chilled Storage evaporator fan control strategy. The importance of building control parameters across all climates examined emphasizes the need for an integrated design method to ensure the delivery of an energy efficient primary vaccine warehouse.
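The sketch below illustrates a regression-based global sensitivity analysis of the kind described: Monte Carlo samples of a few design and control parameters are propagated through a synthetic energy model and standardized regression coefficients rank their influence. The parameters, ranges, and response function are assumed for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Illustrative (assumed) design/control parameters for a refrigerated warehouse model;
# the "energy model" here is a cheap analytic stand-in for the building simulation.
roof_u      = rng.uniform(0.15, 0.60, n)    # W/m2K, roof insulation U-value
wall_mass   = rng.uniform(100, 400, n)      # kJ/m2K, external wall thermal mass
setpoint    = rng.uniform(2.0, 6.0, n)      # deg C, chilled-store thermostat
fan_control = rng.uniform(0.0, 1.0, n)      # 0 = always on, 1 = fully demand-controlled

energy = (400 + 300 * roof_u - 0.05 * wall_mass
          - 25 * setpoint - 180 * fan_control
          + rng.normal(0, 10, n))            # MWh/yr, synthetic response

# Regression-based global sensitivity: standardized regression coefficients (SRCs)
X = np.column_stack([roof_u, wall_mass, setpoint, fan_control])
Xs = (X - X.mean(0)) / X.std(0)
ys = (energy - energy.mean()) / energy.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

for name, b in zip(["roof U-value", "wall thermal mass", "setpoint", "fan control"], src):
    print(f"{name:18s} SRC = {b:+.2f}")
```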
