61

Soft computing approaches to uncertainty propagation in environmental risk management

Kumar, Vikas 19 June 2008 (has links)
Real-world problems, especially those involving natural systems, are complex and composed of many nondeterministic components with non-linear coupling. In dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches such as probabilistic reasoning, fuzzy logic and artificial neural networks have characteristics of approximation and dispositionality. While in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower computational cost, effective communication and a high Machine Intelligence Quotient (MIQ). This thesis explores the use of different soft computing approaches to handle uncertainty in environmental risk management. The work is divided into three parts comprising five papers. In the first part, different uncertainty propagation methods are investigated. The first methodology is a generalized fuzzy α-cut based on the concept of the transformation method. A case study of uncertainty analysis of pollutant transport in the subsurface is used to show the utility of this approach, which proves superior to conventional methods of uncertainty modelling. A second method is proposed to manage uncertainty and variability together in risk models. This new hybrid approach, combining probability theory and fuzzy set theory, is called Fuzzy Latin Hypercube Sampling (FLHS). An important property of the method is its ability to separate randomness and imprecision and thereby increase the quality of information. A fuzzified statistical summary of the model results gives indices of sensitivity and uncertainty that relate the effects of variability and uncertainty of input variables to model predictions. The feasibility of the method is validated by analyzing the total variance in the calculation of incremental lifetime risks due to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) for residents living in the surroundings of a municipal solid waste incinerator (MSWI) in the Basque Country, Spain. The second part of the thesis deals with the use of artificial intelligence techniques for generating environmental indices. The first paper focuses on the development of a Hazard Index (HI) using the persistence, bioaccumulation and toxicity properties of a large number of organic and inorganic pollutants. To derive this index, Self-Organizing Maps (SOM) are used to provide a hazard ranking for each compound. Subsequently, an Integral Risk Index is developed taking into account the HI and the concentrations of all pollutants in soil samples collected in the target area. Finally, a risk map is elaborated by representing the spatial distribution of the Integral Risk Index with a Geographic Information System (GIS). The second paper improves on the first: a new approach called the Neuro-Probabilistic HI is developed by combining SOM and Monte Carlo analysis, accounting for the uncertainty associated with the contaminants' characteristic values. This new index appears to be an adequate tool for risk assessment processes. In both studies, the methods are validated through their implementation in the industrial chemical/petrochemical area of Tarragona.
The third part of this thesis deals with a decision-making framework for environmental risk management. An integrated fuzzy relation analysis (IFRA) model is proposed for risk assessment involving multiple criteria. The fuzzy risk-analysis model comprehensively evaluates all risks associated with contaminated systems resulting from more than one toxic chemical. The model offers an integrated view of uncertainty techniques based on multi-valued mappings, fuzzy relations and the fuzzy analytic hierarchy process. Integrating system simulation and risk analysis using the fuzzy approach allowed system modelling uncertainty and subjective risk criteria to be incorporated, showing that a broad integration of fuzzy system simulation and fuzzy risk analysis is possible. In conclusion, this study broadly demonstrates the usefulness of soft computing approaches in environmental risk analysis; the proposed methods could significantly advance the practice of risk analysis by effectively addressing the critical problem of uncertainty propagation.
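The generalized fuzzy α-cut approach lends itself to a compact illustration. The sketch below is a minimal, hypothetical Python rendering of the transformation-method idea (evaluate the model at combinations of α-cut interval endpoints and keep the extremes at each level); the toy transport-like model and the triangular fuzzy numbers are assumptions for illustration, not the thesis's actual case study.

```python
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def propagate(model, fuzzy_inputs, alphas):
    """Reduced transformation method: evaluate the model at every combination
    of alpha-cut endpoints and keep the min/max output per level (valid for
    monotone models)."""
    out = {}
    for alpha in alphas:
        intervals = [alpha_cut(f, alpha) for f in fuzzy_inputs]
        values = [model(*combo) for combo in itertools.product(*intervals)]
        out[alpha] = (min(values), max(values))
    return out

# Toy two-parameter model with fuzzy inputs (hypothetical values).
model = lambda k, v: k * np.exp(-0.5 * v)
fuzzy_k = (0.8, 1.0, 1.3)
fuzzy_v = (1.5, 2.0, 2.4)
for a, iv in propagate(model, [fuzzy_k, fuzzy_v], [0.0, 0.5, 1.0]).items():
    print(f"alpha={a:.1f}: output interval = [{iv[0]:.3f}, {iv[1]:.3f}]")
```

At α = 1 the interval collapses to the crisp modal value, so the nested intervals trace out a fuzzy output membership function.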
62

Bayesian calibration of building energy models for energy retrofit decision-making under uncertainty

Heo, Yeonsook 10 November 2011 (has links)
Retrofitting of existing buildings is essential to reach reduction targets in energy consumption and greenhouse gas emissions. In the current retrofit decision process, professionals perform energy audits and construct dynamic simulation models to benchmark the performance of existing buildings and predict the effect of retrofit interventions. To enhance the reliability of the simulation models, they typically calibrate them against monitored energy use data. The calibration techniques used for this purpose are manual and expert-driven. The current practice has major drawbacks: (1) the modeling and calibration methods do not scale to large portfolios of buildings due to their high costs and heavy reliance on expertise, and (2) the resulting deterministic models do not provide insight into the underperformance risks associated with each retrofit intervention. This thesis has developed a new retrofit analysis framework suitable for large-scale analysis and risk-conscious decision-making. The framework is based on the use of normative models and Bayesian calibration techniques. Normative models are light-weight quasi-steady-state energy models that can scale up to large sets of buildings, i.e., to city and regional scale. In addition, they do not require modeling expertise, since they follow a set of modeling rules that produce a standard measure of energy performance. The normative models are calibrated under a Bayesian approach such that the resulting calibrated models quantify uncertainties in the energy outcomes of a building. Bayesian calibration models can also incorporate additional uncertainties associated with retrofit interventions to generate probability distributions of retrofit performance. Probabilistic outputs can be straightforwardly translated into a measure that quantifies the underperformance risks of retrofit interventions and thus enables decision-making relative to the decision-makers' objectives and risk attitude. This thesis demonstrates the feasibility of the new framework on retrofit applications by verifying two hypotheses: (1) normative models supported by Bayesian calibration have sufficient model fidelity to adequately support retrofit decisions, and (2) they can support risk-conscious decision-making by explicitly quantifying the risks associated with retrofit options. The two hypotheses are examined through case studies that compare outcomes from the calibrated normative model with those from a similarly calibrated transient simulation model, and that compare decisions derived by the proposed framework with those derived by standard practices, respectively. The new framework will enable cost-effective retrofit analysis at urban scale with explicit management of uncertainties.
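The core of the framework — conditioning an uncertain model parameter on monitored energy use — can be sketched with a toy random-walk Metropolis sampler. Everything below (the one-parameter "normative" model, the uniform prior, the noise level) is an assumed stand-in for illustration, not the thesis's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy normative model: monthly energy use as a linear function of degree-days
# with one uncertain coefficient theta (hypothetical).
def energy_model(theta, degree_days):
    return 50.0 + theta * degree_days

degree_days = np.array([120.0, 90.0, 60.0, 30.0])
observed = energy_model(0.8, degree_days) + rng.normal(0, 5.0, 4)  # synthetic "monitored" data

def log_post(theta, sigma=5.0):
    if not (0.0 < theta < 2.0):                  # uniform prior on (0, 2)
        return -np.inf
    resid = observed - energy_model(theta, degree_days)
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood

# Random-walk Metropolis: draw from the posterior of theta given the data.
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])                  # discard burn-in
print(f"posterior mean {post.mean():.2f}, 95% interval "
      f"[{np.percentile(post, 2.5):.2f}, {np.percentile(post, 97.5):.2f}]")
```

The posterior spread on theta is what turns a single deterministic prediction into a probability distribution over retrofit outcomes.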
63

A Probabilistic Re-assessment of the Index of New Cars’ Climate Impact : Benefits of Biofuels in Sweden

Lübbe, Nils January 2011 (has links)
Introduction: The index of new cars’ climate impact (Trafikverket 2010) calculated a reduction factor to quantify the positive impact of renewable fuels on CO2 emissions for new cars in Sweden in 2009. Although uncertainty in many input parameters is acknowledged, all calculations were made with point estimates. Aim: The aim of this study was to verify consumer recommendations based on the carbon dioxide emissions of new cars by re-assessing the CO2 emissions while accounting for the uncertainty in the input data. The data uncertainty was quantified using probabilistic risk assessment methodology. The objectives were (1) to calculate the possible reduction of CO2 emissions for new cars in Sweden accounting for uncertainties; (2) to identify the most influential factors by conducting a sensitivity analysis; and (3) to assess the implications of interval analysis and probabilistic modelling for epistemic uncertainty in this context, and thereby to contribute to the discussion of which method is the most appropriate to support decision-making. Method: Calculations were performed with the deterministic model proposed by Trafikverket (2010) using probability distributions (Bayesian probability distributions assigned by expert judgement) and intervals as model input. Probabilistic modelling was implemented as Monte Carlo simulation with uniform and triangular distributions in Matlab. Interval calculations were done by hand. Results: The best estimate from probabilistic modelling of the CO2 reduction when using E85 as a fuel, 30% (95% confidence interval 10% to 52%), is in a similar range to the 20% given by Trafikverket (2010). The best estimate of a 28% decrease for gas cars (95% confidence interval 3% to 44%) is lower than the originally proposed 42%, but still in a similar range; the difference is mainly due to the optimistic point values adopted by Trafikverket (2010). The interval analysis produced a wider range: for ethanol-driven cars, anything from a CO2 reduction of 68% to an increase of 14% is calculated, and for cars operated on gas, anything from a CO2 reduction of 59% to an increase of 29%. Conclusions: The use of biofuels most likely reduces CO2 emissions. The CO2 emissions from the production of the biofuel are the most influential factors. The recommendations of Trafikverket to consumers remain valid after probabilistic recalculation. Probabilistic modelling is more appropriate than interval analysis to guide authority and consumer decisions, as it allows conclusions to be drawn and can therefore support decision-making.
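A minimal sketch of the probabilistic recalculation idea, in Python rather than the Matlab used in the thesis: draw triangular inputs, push them through a simple reduction-factor model, and report a best estimate with a 95% interval. The distribution bounds and the model form below are invented placeholders, not Trafikverket's or the thesis's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical triangular inputs (min, mode, max) -- illustrative only.
wtw_fossil = rng.triangular(2.6, 2.8, 3.0, n)     # kg CO2 per litre petrol, well-to-wheel
wtw_e85    = rng.triangular(1.4, 2.0, 2.6, n)     # kg CO2 per litre E85
extra_use  = rng.triangular(1.25, 1.35, 1.45, n)  # litres E85 per litre-petrol-equivalent

# Relative CO2 reduction of E85 vs. petrol, propagated through the sample.
reduction = 1.0 - (wtw_e85 * extra_use) / wtw_fossil
lo, hi = np.percentile(reduction, [2.5, 97.5])
print(f"best estimate {reduction.mean():.0%}, 95% interval [{lo:.0%}, {hi:.0%}]")
```

Replacing the triangular draws with the interval endpoints and taking the min/max of the model output reproduces the (much wider) interval-analysis result.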
64

Development and Implementation of Bayesian Computer Model Emulators

Lopes, Danilo Lourenco January 2011 (has links)
Our interest is the risk assessment of rare natural hazards, such as large volcanic pyroclastic flows. Since catastrophic consequences of volcanic flows are rare events, our analysis benefits from the use of a computer model to provide information about these events under natural conditions that may not have been observed in reality.

A common problem in the analysis of computer experiments, however, is the high computational cost associated with each simulation of a complex physical process. We tackle this problem by using a statistical approximation (emulator) to predict the output of the computer model at untried values of the inputs. The Gaussian process response surface is a technique commonly used in these applications, because it is fast and easy to use in the analysis.

We explore several aspects of the implementation of Gaussian process emulators in a Bayesian context. First, we propose an improvement for the implementation of the plug-in approach to Gaussian processes. Next, we evaluate the performance of a spatial model for large data sets in the context of computer experiments.

Computer model data can also be combined with field observations in order to calibrate the emulator and obtain statistical approximations to the computer model that are closer to reality. We present an application where we learn the joint distribution of inputs from field data and then bind this auxiliary information to the emulator in a calibration process.

One of the outputs of our computer model is a surface of maximum volcanic flow height over some geographical area. We show how the topography of the volcano area plays an important role in determining the shape of this surface, and we propose methods to incorporate geophysical information in the multivariate analysis of computer model output. / Dissertation
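A Gaussian process emulator of the kind described can be sketched in a few lines with scikit-learn. The "simulator" below is a cheap stand-in function, not the pyroclastic flow model, and the hyperparameter fitting shown is the library's default maximum-likelihood plug-in rather than the dissertation's improved implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in for one expensive simulator output, e.g. flow height at a site
# as a function of two scaled inputs (volume, direction). Purely illustrative.
def simulator(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(30, 2))        # 30 "computer experiments"
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(0, 1, size=(5, 2))           # untried input settings
mean, std = gp.predict(X_new, return_std=True)   # fast emulator predictions
for m, s, truth in zip(mean, std, simulator(X_new)):
    print(f"emulator {m:6.3f} +/- {2 * s:.3f}   simulator {truth:6.3f}")
```

The emulator's predictive standard deviation is what makes it usable inside a risk assessment: it quantifies how much trust to place in predictions far from the training runs.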
65

A Case Study of Risk Management for a Groundwater Site Contaminated by Organic Chlorine Solvents

Lin, Sang-Feng 02 January 2012 (has links)
Soil and groundwater pollution in Taiwan has increased in recent years due to leakage of petroleum products and organic chemicals. Because Taiwan is small and densely populated, pollutants may reach people through water, soil, air, crops, fish and other exposure pathways, affecting human health and posing risks directly or indirectly. This study focuses on chlorinated compounds and uses risk assessment methods to analyze the necessity and urgency of remediating a contaminated site and its downstream influence, and then formulates management strategies and projects to control the risk level and impact of the contamination. We collected information related to the case and, following the health risk assessment methodology for soil and groundwater contaminated sites together with the CRYSTAL BALL and BIOCHLOR software, evaluated a site contaminated with the chlorinated compound trichloroethylene (TCE) and considered whether it affects the health of nearby residents; hydrogeological survey data for the site were used to carry out a second-tier health risk assessment. First, the commercial software CRYSTAL BALL was used for uncertainty and sensitivity analysis. The analysis examines not only how variation in the full parameter set affects the risk value but also the results of different parameter combinations. The results confirm that the parameters with the largest influence on risk are those governing pollutant transport, consistent with previous assessment studies; other parameters, such as receptor age, population group, exposure pathway, exposure time and water consumption, have less influence on the variation of risk. The study found that the TCE concentration, and hence the risk value, changes as groundwater moves downstream, meaning that the receptor's distance from the contamination source has a large influence: a greater distance allows more attenuation of the TCE pollutant and results in a lower cancer risk for the receptor, while a shorter distance results in a higher cancer risk. Subsequently, the BIOCHLOR assessment software of the U.S. Environmental Protection Agency was used to determine whether the site has potential for anaerobic reductive dechlorination and to estimate the possibility of natural attenuation. The actual site conditions score three points in the weighted total, which does not prove that the chlorinated compounds can undergo biological decomposition in the anaerobic environment without more accurate values at this stage; we therefore recommend considering other remediation options. Based on the risk assessment results, the study selects the most sensitive parameters for the site and works out a remediation approach suitable for this case according to the schedule of objectives. The investigation found that residents do extract groundwater for agricultural irrigation, but do not drink it directly. To avoid the worst-case situation from a risk perspective, we consider measures on two fronts, regulation and technology. First, as administrative controls on the regulatory side, we consider how to effectively prohibit residents from extracting groundwater: for instance, prohibiting or limiting the installation of wells, providing substitute drinking water, setting up notice signs and monitoring groundwater quality regularly.
Second, to prevent the spread of the pollutant plume, we consider technical measures, for example pollution remediation technologies (currently including soil vapor extraction and air injection) and containment and control technologies (including hydraulic containment, permeable reactive barriers and natural biological remediation) to manage possible events at the site effectively and to prepare preventive measures in advance. We also maintain good communication with local residents so that they understand the content of the remediation work, reducing their resistance, helping the remediation progress and achieving the risk management objective.
66

Design and performance of an ammonia measurement system

Boriack, Cale Nolan 25 April 2007 (has links)
Ammonia emissions from animal feeding operations (AFOs) have recently come under increased scrutiny, and the US Environmental Protection Agency (EPA) has come under increased pressure from special interest groups to regulate ammonia. Regulation of ammonia is very difficult because every facility has different manure management practices, and different management practices lead to different emissions at every facility. Researchers have been tasked by industry to find best management practices to reduce emissions, a task that cannot be completed without equipment that can efficiently and accurately compare emissions. To this end, a measurement system was developed and performance-tested for measuring ammonia. Performance tests included uncertainty analysis, system response, and adsorption kinetics. The measurement system was designed for measurement of gaseous emissions from ground level area sources (GLAS), sampling multiple receptors with a single sensor. This multiplexer may be used in both local and remote measurement systems to increase the sampling rate of gaseous emissions; its increased data collection capacity allows nearly three times as many samples to be taken in the same amount of time while using the same sampling protocol. System response analysis was performed on an ammonia analyzer, a hydrogen sulfide analyzer, and the tubing used with flux chamber measurement. System responses were measured and evaluated using transfer functions: the responses of the analyzers in auto mode and of the tubing were all found to be first order with delay. Uncertainty analysis was performed on the ammonia sampling and analyzing system, which included an analyzer, mass flow controllers, calibration gases, and analog outputs. The standard uncertainty was found to be 443 ppb when measuring a 16 ppm ammonia stream with a 20 ppm span. A laboratory study of the adsorption kinetics of ammonia in a flux chamber was performed to determine whether adsorption onto the chamber walls was significant. The study found that, for concentrations above 2.5 ppm, adsorption would not significantly change the concentration of the output flow 30 minutes after a clean chamber was exposed to ammonia.
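The first-order-with-delay responses identified here correspond to the transfer function G(s) = K·e^(−θs)/(τs + 1). Below is a short sketch of its step response with hypothetical parameter values (assumed for illustration, not the measured analyzer constants).

```python
import numpy as np

def fopdt_step(t, K=1.0, tau=12.0, theta=5.0):
    """Step response of a first-order-plus-delay system
    G(s) = K * exp(-theta*s) / (tau*s + 1); times in seconds (hypothetical)."""
    return np.where(t < theta, 0.0, K * (1.0 - np.exp(-(t - theta) / tau)))

t = np.arange(0, 61, 5.0)
for ti, yi in zip(t, fopdt_step(t)):
    print(f"t={ti:4.0f} s   response={yi:5.3f}")  # ~63% of span at t = theta + tau
```

Fitting K, τ, and θ to a measured step change is the usual way such transfer functions are identified from analyzer data.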
67

Real-Time Demand Estimation for Water Distribution Systems

Kang, Doo Sun January 2008 (has links)
The goal of a water distribution system (WDS) is to supply the desired quantity of fresh water to consumers at the appropriate time. In order to properly operate a WDS, system operators need information about system states, such as tank water level, nodal pressure, and water quality, at system-wide locations. Most water utilities now have some level of SCADA (Supervisory Control and Data Acquisition) system providing nearly real-time monitoring data. However, due to prohibitive metering costs and a lack of applications for the data, only portions of systems are monitored and the use of SCADA data is limited. This dissertation takes a comprehensive view of real-time demand estimation in water distribution systems. The goal is to develop an optimal monitoring system plan that will collect appropriate field data to determine accurate, precise demand estimates and to understand their impact on model predictions. To achieve that goal, a methodology for real-time demand estimation, with associated uncertainties, using a limited number of field measurements is developed. Further, system-wide nodal pressures and chlorine concentrations, and their uncertainties, are predicted using the estimated nodal demands. The dissertation is composed of three journal manuscripts that address these three key steps, beginning with uncertainty evaluation, followed by demand estimation, and finally optimal metering layout. The uncertainties associated with the state estimates are quantified in terms of confidence limits. To compute the uncertainties in real time, alternative schemes that reduce computational effort while providing good statistical approximations are evaluated and verified by Monte Carlo simulation (MCS). The first-order second-moment (FOSM) method provides accurate variance estimates for pressure; however, because of its linearity assumption, it has limited predictive ability for chlorine under unsteady conditions. Latin Hypercube Sampling (LHS) provides good estimates of prediction uncertainty for chlorine and pressure in steady and unsteady conditions with significantly less effort. For real-time demand estimation, two recursive state estimators are applied: a tracking state estimator (TSE) based on a weighted least squares (WLS) scheme, and a Kalman filter (KF). In addition, in order to identify the field data types usable for demand estimation, comparative studies are performed using pipe flow rates and nodal pressure heads as measurements. To reduce the number of unknowns and make the system solvable, nodes with similar user characteristics are grouped and assumed to share a demand pattern. The uncertainties in state variables are quantified in terms of confidence limits using the approximate methods (i.e., FOSM and LHS). Results show that TSE with pipe flow rates as measurements provides reliable demand estimates, and the model predictions computed using the estimated demands match well with synthetically generated true values. Field measurements are critical to obtaining quality real-time state estimates. However, the limited number of metering locations has been a significant obstacle for real-time studies, so identifying the locations that gain the most information is critical. Here, optimal meter placement (OMP) is formulated as a multi-objective optimization problem and solved using a multi-objective genetic algorithm (MOGA) based on Pareto-optimal solutions.
Results show that model accuracy and precision should be pursued simultaneously as objectives, since the two measures have a trade-off relationship. The GA solutions were improvements over less robust methods and designers' experienced judgment.
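The LHS scheme favored here can be illustrated with SciPy's quasi-Monte Carlo module: stratified samples of grouped nodal demands pushed through a toy pressure-head relation. The two-group model and the demand bounds below are assumptions for illustration, not the dissertation's network model.

```python
import numpy as np
from scipy.stats import qmc

# Toy head-loss model: pressure head at a node as a function of the demands
# of two demand groups (Hazen-Williams-like exponents; illustrative only).
def pressure_head(demands):
    return 60.0 - 0.8 * demands[:, 0] ** 1.85 - 0.5 * demands[:, 1] ** 1.85

sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=200)                        # 200 stratified samples in [0,1)^2
demands = qmc.scale(u, l_bounds=[2.0, 1.0], u_bounds=[4.0, 3.0])  # L/s per group

h = pressure_head(demands)
print(f"mean head {h.mean():.2f} m, 95% limits "
      f"[{np.percentile(h, 2.5):.2f}, {np.percentile(h, 97.5):.2f}] m")
```

Because each marginal is stratified, 200 LHS draws give confidence limits comparable to a far larger plain Monte Carlo sample, which is the efficiency argument made above.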
68

Decision support algorithms for power system and power electronic design

Heidari, Maziar 10 September 2010 (has links)
The thesis introduces an approach for obtaining higher-level decision support information using electromagnetic transient (EMT) simulation programs. In this approach, a suite of higher-level driver programs (decision support tools) controls the simulator to gain important information about the system being simulated. These tools conduct a sequence of simulation runs, in each of which the study parameters are carefully selected based on observations of earlier runs in the sequence. In this research, two such tools have been developed in conjunction with the PSCAD/EMTDC electromagnetic transient simulation program. The first tool is an improved optimization algorithm, used for automatic optimization of system parameters to achieve a desired performance. This algorithm improves the capabilities of the previously reported method of optimization-enabled electromagnetic transient simulation by using an enhanced gradient-based optimization algorithm with constraint-handling techniques. In addition, to allow handling of design problems with more than one objective, the thesis proposes augmenting the optimization tool with the technique of Pareto optimality: a sequence of optimization runs is conducted to obtain the Pareto frontier, which quantifies the trade-offs between the design objectives and can be used by the designer in the decision-making process. The second tool developed in this research helps the designer study the effects of uncertainties in a design. Using a similar multiple-run approach, this sensitivity analysis tool provides surrogate models of the system, simple mathematical functions that represent different aspects of the system's performance. These models allow the designer to analyze the effects of uncertainties on system performance without having to conduct further time-consuming EMT simulations. This research also proposes adding probabilistic analysis capabilities to the sensitivity analysis tool: since probabilistic analysis of a system using conventional techniques (e.g., Monte Carlo simulation) normally requires a large number of EMT simulation runs, using surrogate models instead of actual simulation runs yields significant savings in simulation time. A number of examples are used throughout the thesis to demonstrate the application and usefulness of the proposed tools.
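The Pareto-frontier step can be illustrated with a small non-dominated filter applied to the objective values collected over a sequence of optimization runs; the (cost, overshoot) numbers below are hypothetical placeholders, not results from PSCAD/EMTDC.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset, assuming all columns are minimized."""
    points = np.asarray(points)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if keep[i]:
            # A point is dominated by p if it is no better in every objective
            # and strictly worse in at least one.
            dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
            keep &= ~dominated
    return points[keep]

# Hypothetical outcomes of repeated optimization runs: (cost, overshoot).
runs = np.array([[1.0, 0.30], [1.2, 0.18], [1.5, 0.20], [2.0, 0.05], [1.1, 0.28]])
print(pareto_front(runs))   # the trade-off curve for the designer to inspect
```

Each point kept by the filter represents a design where improving one objective necessarily worsens another, which is exactly the trade-off information the frontier quantifies.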
69

On Fuel Coolant Interactions and Debris Coolability in Light Water Reactors

Thakre, Sachin January 2015 (has links)
In a hypothetical severe accident in a light water reactor, core damage may occur and molten fuel may interact with water, resulting in explosive interactions. A fuel-coolant interaction (FCI) consists of many complex phenomena whose characteristics determine the energetics of the interaction. The fuel melt initially undergoes fragmentation after contact with the coolant, which increases the melt surface area exposed to the coolant and causes rapid heat transfer. A substantial amount of research has been done to understand the phenomenology of FCI, but gaps remain in terms of the uncertainties in describing processes such as the breakup/fragmentation of melt jets and droplets. The objective of the present work is to substantiate the understanding of the premixing phase of the FCI process by studying the deformation/pre-fragmentation of melt droplets and the mechanism of melt jet breakup. The focus of the work is the effect of the various influential parameters during the premixing phase that determine the intensity of the energetics in terms of steam explosion. The study is based on numerical analysis, starting at small scale and proceeding to large-scale FCI, and efforts are also made to evaluate the uncertainties in estimating steam explosion loads at the reactor scale. The fragmented core is expected to form a porous debris bed, and part of the present work therefore also deals with experimental investigations of the coolability of prototypical debris beds. Initially, the phenomenology of FCI and debris bed coolability is introduced, and a review of the state of the art based on previous experimental and theoretical developments is presented. The study starts with a numerical investigation of molten droplet hydrodynamics in a water pool, carried out using the Volume Of Fluid (VOF) method in the CFD code ANSYS FLUENT. This fundamental study concerns single droplets in the preconditioning phase, i.e., deformation/pre-fragmentation prior to steam explosion. The droplet deformation is studied extensively, including the effect of a pressure pulse on the deformation behavior, and the effects of material physical properties such as density, surface tension and viscosity are investigated. The work is then extended to 3D analysis as part of high-fidelity simulations, in order to overcome possible limitations of 2D simulations. The investigation of FCI processes then continues with an analysis of melt jet fragmentation in a water pool, the crucial phenomenon that creates the melt-coolant premixture, an initial condition for steam explosion. The calculations are carried out assuming non-boiling conditions and the properties of Wood’s metal. The jet fragmentation and breakup pattern are carefully observed at various Weber numbers, and the effects of physical and material properties such as diameter, velocity, density, surface tension and viscosity on the jet breakup length are investigated. After these fundamental studies, the work is extended to reactor-scale FCI energetics, mainly oriented toward the evaluation of uncertainties in estimating the explosion impact loads on the surrounding structures. The uncertainties include the influential parameters in the FCI process as well as the code uncertainties in the calculations. The FCI code MC3D is used for the simulations and the PIE (propagation of input errors) method is used for the uncertainty analysis.
The last part of the work concerns experimental investigations of debris coolability carried out using the POMECO-HT facility at KTH. The focus is on determining the effect of the bed's prototypical characteristics on its coolability, in terms of inhomogeneity, with a heap-like (triangular) bed and a radially stratified bed, and also the effect of its multi-dimensionality. For this purpose, four particle beds were constructed: two homogeneous, one with radial stratification, and one with a triangular shape. The effectiveness of coolability-enhancing measures such as bottom injection of water and a downcomer (used for natural-circulation-driven coolability, NCDC) was also investigated. The final chapter summarizes the whole work.
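Since the jet breakup behavior above is organized by the Weber number, a one-line computation makes the parameter dependence concrete. The sketch uses the ambient-Weber form We = ρ·v²·D/σ with illustrative values loosely in the range of Wood's-metal jet experiments (assumed here, not taken from the thesis).

```python
def weber(rho_ambient, velocity, diameter, surface_tension):
    """Weber number We = rho * v^2 * D / sigma for a melt jet entering water."""
    return rho_ambient * velocity**2 * diameter / surface_tension

rho_water = 1000.0   # kg/m^3, coolant density
sigma_melt = 1.0     # N/m, order-of-magnitude surface tension of a liquid metal (assumed)
for v, d in [(1.0, 0.01), (3.0, 0.01), (3.0, 0.02)]:
    print(f"v={v} m/s, D={d*100:.0f} cm  ->  We = {weber(rho_water, v, d, sigma_melt):,.0f}")
```

The quadratic dependence on jet velocity and linear dependence on diameter explain why those two parameters dominate the breakup-length trends reported above.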
70

Using uncertainty and sensitivity analysis to inform the design of net-zero energy vaccine warehouses

Pudleiner, David Burl 27 August 2014 (has links)
The vaccine cold chain is an integral part of the process of storing and distributing vaccines prior to administration. A key component of this cold chain in developing countries is the primary vaccine storage warehouse. As the starting point for the distribution of vaccines throughout a country, these buildings have a significant amount of refrigerated space and therefore consume large amounts of energy. This thesis therefore analyzes the relative importance of parameters in the design of an energy-efficient primary vaccine storage warehouse, with the end goal of achieving Net-Zero Energy operation. A total of 31 architectural design parameters, such as roof insulation U-value and external wall thermal mass, along with 14 building control parameters, including evaporator coil defrost termination and thermostat set points, are examined. The analysis is conducted across five locations in the developing world with significant variations in climate conditions: Buenos Aires, Argentina; Tunis, Tunisia; Asuncion, Paraguay; Mombasa, Kenya; and Bangkok, Thailand. Variations in the parameters are examined by applying a Monte Carlo-based global uncertainty and sensitivity analysis to a case-study building layout. A regression-based sensitivity analysis is used to analyze both the main effect of each parameter and the interactions between parameter pairs. The results indicate that, for all climates examined, the building control parameters have greater relative importance than the architectural design parameters in determining warehouse energy consumption. This is due to the dominance of the most influential building control parameter examined, the Chilled Storage evaporator fan control strategy. The importance of building control parameters across all climates emphasizes the need for an integrated design method to ensure the delivery of an energy-efficient primary vaccine warehouse.
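The regression-based global sensitivity analysis described here can be sketched as standardized regression coefficients (SRCs) fitted to a Monte Carlo sample; the three-parameter "warehouse" response below is a made-up stand-in for the case-study model, and the parameter names and ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Hypothetical uniform ranges for three design/control parameters:
# roof U-value (W/m2K), fan control fraction, thermostat setpoint (deg C).
X = rng.uniform([0.2, 0.1, 18.0], [0.6, 0.5, 26.0], size=(n, 3))
# Made-up linear response with noise, standing in for annual energy use.
y = 40 * X[:, 0] + 120 * X[:, 1] + 2.5 * X[:, 2] + rng.normal(0, 1.0, n)

# Standardize inputs and output, then regress: the coefficients are SRCs.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, b in zip(["roof U-value", "fan control", "setpoint"], src):
    print(f"{name:>12}: SRC = {b:+.2f}")   # larger |SRC| -> more influential
```

Adding pairwise product columns to Xs extends the same regression to the parameter-pair interaction effects mentioned above.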
