141 |
Detection of long-range dependence: applications in climatology and hydrology. Rust, Henning, January 2007.
It is desirable to reduce the potential threats that result from the
variability of nature, such as droughts or heat waves that lead to
food shortage, or the other extreme, floods that lead to severe
damage. To prevent such catastrophic events, it is necessary to
understand, and to be capable of characterising, nature's variability.
Typically one aims to describe the underlying dynamics of geophysical
records with differential equations. There are, however, situations
where this does not support the objectives, or is not feasible, e.g.,
when little is known about the system, or it is too complex for the
model parameters to be identified. In such situations it is beneficial
to regard certain influences as random, and describe them with
stochastic processes. In this thesis I focus on such a description
with linear stochastic processes of the FARIMA type and concentrate on
the detection of long-range dependence. Long-range dependent processes
show an algebraic (i.e. slow) decay of the autocorrelation
function. Detection of the latter is important with respect to,
e.g., trend tests and uncertainty analysis.
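For reference, the standard way these notions are usually formalised is sketched below; the notation is the textbook formulation of long-range dependence and of the FARIMA model, not an excerpt from the thesis.

```latex
% Long-range dependence: the autocorrelation function decays algebraically
% and is not summable,
\[
  \rho(k) \sim c_\rho\, k^{2d-1} \quad (k \to \infty), \qquad 0 < d < \tfrac12,
  \qquad \sum_{k=0}^{\infty} \rho(k) = \infty ,
\]
% whereas short-range dependent (e.g., ARMA) processes have exponentially
% decaying, summable autocorrelations.  A FARIMA(p,d,q) process $X_t$ is
% defined through
\[
  \Phi(B)\,(1-B)^{d} X_t = \Psi(B)\,\varepsilon_t ,
\]
% with backshift operator $B$, AR and MA polynomials $\Phi$ and $\Psi$ of
% orders $p$ and $q$, fractional difference parameter $d$, and white noise
% $\varepsilon_t$; for $d = 0$ the process reduces to a short-range
% dependent ARMA(p,q).
```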
Aiming to provide a reliable and powerful strategy for the detection
of long-range dependence, I suggest a way of addressing the problem
which is somewhat different from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable, potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e., comparing the most suitable long-range dependent and the most suitable short-range dependent model. Approaching the task this
way requires a) a suitable class of long-range and short-range
dependent models along with suitable means for parameter estimation
and b) a reliable model selection strategy, capable of discriminating also between non-nested models. With the flexible FARIMA model class together with the Whittle estimator, the first requirement is fulfilled. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for
such a direct comparison. The approach follows the procedure of
a statistical test, with the likelihood-ratio as the test
statistic. Its distribution is obtained via simulations using the two
models under consideration. For two simple models and different
parameter values, I investigate the reliability of p-value and power
estimates obtained from the simulated distributions. The result turned
out to be dependent on the model parameters. However, in many cases
the estimates allow an adequate model selection to be established.
An important feature of this approach is that it immediately reveals
the ability or inability to discriminate between the two models under
consideration.
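A minimal sketch of how such a simulation-based likelihood-ratio comparison can be set up is given below. It pits an AR(1) model (short-range dependent) against a FARIMA(0,d,0) model (long-range dependent), fits both with a profiled Whittle likelihood on the periodogram, and simulates the null distribution of the statistic from the fitted short-range model; the model orders, sample size and number of simulations are illustrative choices, not those of the thesis.

```python
# Sketch (not the thesis code): simulation-based likelihood-ratio comparison
# of a short-range dependent AR(1) model against a long-range dependent
# FARIMA(0,d,0) model, with profiled Whittle likelihoods on the periodogram.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def periodogram(x):
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:(n - 1) // 2 + 1]) ** 2 / (2 * np.pi * n)
    return freqs, I

def whittle_neg_loglik(shape_spec, x):
    """Profiled Whittle negative log-likelihood for a spectral shape g(lambda)."""
    lam, I = periodogram(x)
    g = shape_spec(lam)
    return np.sum(np.log(g)) + len(lam) * np.log(np.mean(I / g))

def fit_ar1(x):
    res = minimize_scalar(lambda p: whittle_neg_loglik(
        lambda lam: 1.0 / (1 - 2 * p * np.cos(lam) + p ** 2), x),
        bounds=(-0.99, 0.99), method="bounded")
    return res.x, res.fun

def fit_farima0d0(x):
    res = minimize_scalar(lambda d: whittle_neg_loglik(
        lambda lam: (2 * np.sin(lam / 2)) ** (-2 * d), x),
        bounds=(0.001, 0.499), method="bounded")
    return res.x, res.fun

def lr_statistic(x):
    _, nll_sr = fit_ar1(x)
    _, nll_lr = fit_farima0d0(x)
    return 2 * (nll_sr - nll_lr)            # positive values favour the long-range model

def simulate_ar1(phi, n):
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

# Observed record (here itself simulated from an AR(1), i.e. the null is true)
x_obs = simulate_ar1(0.6, 1024)
t_obs = lr_statistic(x_obs)

# Null distribution: refit both models to series simulated from the fitted
# short-range dependent model; the power would be estimated analogously by
# simulating from the fitted long-range dependent model.
phi_hat, _ = fit_ar1(x_obs)
t_null = np.array([lr_statistic(simulate_ar1(phi_hat, len(x_obs)))
                   for _ in range(200)])
p_value = np.mean(t_null >= t_obs)
print(f"LR statistic {t_obs:.2f}, simulated p-value {p_value:.3f}")
```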
Two applications, a trend detection problem in temperature records and
an uncertainty analysis for flood return level estimation, accentuate the
importance of having reliable methods at hand for the detection of
long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly delays measures needed to counteract the trend. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case in the
return level uncertainty analysis. A reliable detection of long-range
dependence is thus highly relevant in practical applications.
Examples related to extreme value analysis are not limited to
hydrological applications. The increased uncertainty of return level
estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment
of the maximum strength of wind gusts, which is important for
designing wind turbines. The detection of long-range dependence is
also a relevant problem in the exploration of financial market
volatility. By rephrasing the detection problem as a model
selection task and suggesting refined methods for model comparison,
this thesis contributes to the discussion on and development of
methods for the detection of long-range dependence. / Reducing the potential hazards and impacts of natural climate variability is a desirable goal. Such hazards include droughts and heat waves, which lead to water scarcity, or, at the other extreme, floods, which can cause considerable damage to infrastructure. To prevent such catastrophic events, it is necessary to understand the dynamics of nature and to be able to describe them.
Typically, one attempts to describe the dynamics of geophysical records with systems of differential equations. There are, however, situations in which this approach is not expedient or technically not feasible, namely situations in which little is known about the system or it is too complex for the model parameters to be identified. Here it is sensible to regard certain influences as random and to model them with the help of stochastic processes. This thesis pursues such a description with linear stochastic processes of the FARIMA class. Particular focus lies on the detection of long-range correlations. Long-range correlated processes are those with an algebraically, i.e. slowly, decaying autocorrelation function. A reliable detection of these processes is relevant for trend detection and uncertainty analyses.
In order to provide a reliable strategy for the detection of long-range correlated processes, this thesis proposes a route other than the standard one. Commonly, methods are used that investigate the asymptotic behaviour, e.g., regression in the periodogram; alternatively, one tries to find a suitable, potentially long-range correlated model, e.g., from the FARIMA class, and to test the estimated fractional difference parameter d for compatibility with the trivial value zero. This thesis proposes to reformulate the problem of detecting long-range correlations as a model selection problem, i.e., to compare the best short-range and the best long-range correlated model. This approach requires a) a suitable class of long-range and short-range correlated processes and b) a reliable model selection strategy, also for non-nested models. With the flexible FARIMA class and the Whittle approach to parameter estimation, the first requirement is fulfilled. Standard approaches to model selection, such as the likelihood-ratio test, are, however, often not discriminative enough for non-nested models. It is therefore proposed to complement this strategy with a simulation-based approach that is particularly suited for the direct discrimination of non-nested models. The approach follows a statistical test with the likelihood ratio as the test statistic. Its distribution is obtained via simulations with the two models to be discriminated. For two simple models and various parameter values, the reliability of the p-value and power estimates is investigated. The result depends on the model parameters; in many cases, however, an adequate model selection could be established. An important property of this strategy is that it immediately reveals how well the models under consideration can be distinguished.
Two applications, trend detection in temperature time series and the uncertainty analysis of design floods, emphasise the need for reliable methods for the detection of long-range correlations. In the case of trend detection, a falsely drawn conclusion of long-range correlations leads to an underestimation of a trend, which in turn may delay the initiation of measures intended to counteract it. In the case of runoff time series, ignoring long-range correlations that are present leads to an underestimation of the uncertainty of design values. A reliable detection of long-range correlated processes is therefore of great importance in practical time series analysis. Examples related to extreme events are not limited to flood analysis: increased uncertainty in the estimation of extreme events is a potential problem for all autocorrelated processes. Another interesting example here is the estimation of maximum gust wind speeds, which plays a role in the design of wind turbines. By reformulating the detection problem as a model selection question and by providing suitable model selection strategies, this thesis contributes to the discussion on and the development of methods in the field of detecting long-range correlations.
|
142 |
Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models. Razavi, Seyed Saman, January 2013.
Environmental simulation models have been playing a key role in civil and environmental engineering decision making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically in an automated form where the simulation model is linked to a search mechanism (e.g., an optimization algorithm) such that the search mechanism iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them through running the model in an attempt to minimize differences between observed data and corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example) as then any automatic calibration attempt would impose a large computational burden. Such a challenge may force model users to accept sub-optimal solutions and prevent them from achieving the best model performance.
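For illustration, the kind of model-optimizer coupling described here can be sketched as follows; the toy rainfall-runoff model, parameter bounds and optimizer are placeholders standing in for a computationally intensive simulator, not the models of the thesis.

```python
# Schematic automatic calibration (illustrative only): a toy linear-reservoir
# rainfall-runoff model stands in for a computationally intensive simulator;
# an optimizer proposes parameter sets and minimizes the sum of squared
# errors between observed and simulated flows.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 8.0, size=365)                 # synthetic daily rainfall

def simulate(params, rain):
    k, c = params                                    # recession constant, runoff coefficient
    q, storage = np.zeros(len(rain)), 0.0
    for t, p in enumerate(rain):
        storage += c * p
        q[t] = k * storage                           # outflow proportional to storage
        storage -= q[t]
    return q

# Synthetic "observations" generated with known parameters plus noise
observed = simulate([0.3, 0.6], rain) + rng.normal(0, 0.2, size=365)

def objective(params):
    return np.sum((observed - simulate(params, rain)) ** 2)

bounds = [(0.01, 0.99), (0.1, 1.0)]                  # hypothetical parameter ranges
result = differential_evolution(objective, bounds, seed=1, maxiter=50)
print("calibrated parameters:", np.round(result.x, 3), "SSE:", round(result.fun, 2))
```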
The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is developing a strategy called “deterministic model preemption” which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (even as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that further running the model would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied.
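The preemption idea can be illustrated for a sum-of-squared-errors objective, which can only grow as the simulation advances in time: a candidate whose running error already exceeds the best objective found so far can be abandoned without changing the final result. The sketch below uses a toy model and a plain random search as an assumed stand-in for the algorithms actually used.

```python
# Illustrative sketch of deterministic model preemption for an SSE objective:
# the partial sum of squared errors is monitored as the simulation advances
# and the run is terminated once it can no longer beat the current best.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 8.0, size=365)

def simulate_step(state, k, c, p):
    state += c * p
    q = k * state
    return state - q, q

def full_run(params):
    state, q = 0.0, np.zeros(len(rain))
    for t, p in enumerate(rain):
        state, q[t] = simulate_step(state, *params, p)
    return q

observed = full_run([0.3, 0.6]) + rng.normal(0, 0.2, size=365)

def preemptive_sse(params, best_so_far):
    """Run the model time step by time step; stop as soon as the accumulating
    SSE exceeds the best objective found so far (preemption)."""
    state, sse = 0.0, 0.0
    for t, p in enumerate(rain):
        state, q_t = simulate_step(state, *params, p)
        sse += (observed[t] - q_t) ** 2
        if sse > best_so_far:                 # this candidate cannot win: abandon it
            return np.inf, t + 1
    return sse, len(rain)

# Simple random search with preemption (any sampling-based optimizer would do);
# preempted candidates would have lost anyway, so the best result is unchanged.
best, saved_steps = np.inf, 0
for _ in range(500):
    candidate = rng.uniform([0.01, 0.1], [0.99, 1.0])
    sse, steps = preemptive_sse(candidate, best)
    saved_steps += len(rain) - steps
    best = min(best, sse)
print(f"best SSE {best:.2f}; model time steps avoided by preemption: {saved_steps}")
```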
One other main contribution of this thesis is developing and utilizing the concept of “surrogate data” which is basically a reasonably small but representative proportion of a full set of calibration data. This concept is inspired by the existing surrogate modelling strategies where a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute of an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational saving. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on surrogate data. This framework can be applicable to the calibration of any environmental model where appropriate surrogate data and mapping relationships can be identified.
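A hedged sketch of the surrogate-data idea follows: most candidates are evaluated only on a short, assumed-representative slice of the record, and a simple fitted mapping (here a linear regression, purely an assumption) approximates the full-data objective so that only a shortlist receives full-data runs. The toy rainfall-runoff model again stands in for the real simulator.

```python
# Illustrative sketch of the "surrogate data" concept: candidates are screened
# on 10% of the calibration data; a mapping fitted on a few doubly evaluated
# candidates approximates the full-data objective.
import numpy as np

rng = np.random.default_rng(2)
rain = rng.gamma(0.3, 8.0, size=3650)                  # ten "years" of data
surrogate_idx = slice(0, 365)                          # assumed representative year

def simulate(params, rain):
    k, c = params
    q, s = np.zeros(len(rain)), 0.0
    for t, p in enumerate(rain):
        s += c * p
        q[t] = k * s
        s -= q[t]
    return q

observed = simulate([0.3, 0.6], rain) + rng.normal(0, 0.2, size=len(rain))

def sse_full(params):
    return np.sum((observed - simulate(params, rain)) ** 2)

def sse_surrogate(params):
    q = simulate(params, rain[surrogate_idx])          # model run on 10% of the data
    return np.sum((observed[surrogate_idx] - q) ** 2)

# Fit the mapping full ~ a * surrogate + b on a few doubly evaluated candidates
train = rng.uniform([0.01, 0.1], [0.99, 1.0], size=(8, 2))
xs = np.array([sse_surrogate(p) for p in train])
ys = np.array([sse_full(p) for p in train])
a, b = np.polyfit(xs, ys, 1)

# Screen many candidates cheaply; only the most promising get a full-data run
candidates = rng.uniform([0.01, 0.1], [0.99, 1.0], size=(300, 2))
approx_full = a * np.array([sse_surrogate(p) for p in candidates]) + b
best = candidates[np.argsort(approx_full)[:5]]         # shortlist for full evaluation
print("full-data SSE of the shortlisted candidates:",
      np.round([sse_full(p) for p in best], 1))
```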
As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed which presents a clear computational budget dependent definition for the success/failure of surrogate modelling strategies. Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling which involves statistical or data–driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks) and “lower-fidelity physically-based surrogate” modelling strategies which develop and utilize simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they might be less efficient, lower-fidelity physically-based surrogates are generally more reliable as they to-some-extent preserve the physics involved in the original model.
Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and elaborate the discussions. However, the strategies developed are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models while providing some guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
|
143 |
Assessing Mold Risks in Buildings under Uncertainty. Moon, Hyeun Jun, 15 July 2005.
Microbial growth is a major cause of Indoor Air Quality (IAQ) problems. The implications of mold growth range from unacceptable musty smells and defacement of interior finishes, to structural damage and adverse health effects, not to mention lengthy litigation processes. Mold is likely to occur when a favorable combination of humidity, temperature, and substrate nutrients is maintained long enough. As many modern buildings use products that increase the likelihood of mold (e.g., paper- and wood-based products), reported cases have increased in recent years.
Despite decades of intensive research efforts to prevent mold, modern buildings continue to suffer from mold infestation. The main reason is that current prescriptive regulations focus on the control of relative humidity only. However, recent research has shown that mold occurrences are influenced by a multitude of parameters with complex physical interactions. The set of relevant building parameters includes physical properties of building components, aspects of building usage, certain materials, occupant behavior, cleaning regime, HVAC system components and their operation, and others. Mold occurs mostly as the unexpected result of an unforeseen combination of the uncertain building parameters.
Current deterministic mold assessment studies fail to give conclusive results. These simulations are based on idealizations of the building and its use, and are therefore unable to capture the effect of the random, situational, and sometimes idiosyncratic nature of building use and operation.
The presented research takes a radically different approach, based on the assessment of the uncertainties of all parameters and their propagation through a mixed set of simulations using a Monte Carlo technique. This approach generates a mold risk distribution that reveals the probability of mold occurrence in selected trouble spots in a building. The approach has been tested on three building cases located in Miami and Atlanta. In all cases the new approach was able to show the circumstances under which the mold risk could increase substantially, leading to a set of clear specifications for remediation and, for new designs, to A/E procurement methods that will significantly reduce any mold risk.
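The Monte Carlo idea can be illustrated with a deliberately simplified stand-in for the coupled simulations: uncertain building parameters are sampled, a toy relation gives the surface relative humidity at a trouble spot, and a mold-favourable condition is checked per sample. All distributions, the RH relation and the germination criterion below are invented for illustration only.

```python
# Minimal illustration of Monte Carlo propagation of uncertain building
# parameters to a mold risk probability at a trouble spot.
import numpy as np

rng = np.random.default_rng(3)
n_samples, hours = 2000, 24 * 30                       # one simulated month

def surface_rh(ach, moisture_load, insulation_r, t):
    """Toy stand-in for a whole-building hygrothermal simulation."""
    daily = 10 * np.sin(2 * np.pi * t / 24)            # diurnal swing
    return np.clip(72 + 8 * moisture_load / ach - 12 * insulation_r + daily, 0, 100)

t = np.arange(hours)
window = np.ones(24 * 7)                               # one-week window, in hours
risk_hits = 0
for _ in range(n_samples):
    ach = rng.lognormal(mean=np.log(0.5), sigma=0.4)   # air changes per hour
    moisture = rng.normal(1.0, 0.3)                    # occupant moisture load
    r_value = rng.normal(1.5, 0.2)                     # insulation resistance
    rh = surface_rh(ach, moisture, r_value, t)
    # simplified germination criterion: RH > 80% for at least 7 consecutive days
    above = (rh > 80).astype(float)
    longest = np.max(np.convolve(above, window, mode="valid"))
    risk_hits += longest >= len(window)
print(f"estimated probability of mold-favourable conditions: {risk_hits / n_samples:.2%}")
```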
|
144 |
Particulate Modeling and Control Strategy of Atlanta, Georgia. Park, Sun-kyoung, 23 November 2005.
Particles reduce visibility, change climate, and affect human health. In 1997, the National Ambient Air Quality Standard (NAAQS) for PM2.5 (particles smaller than 2.5 µm) was promulgated. The annual mean PM2.5 mass concentrations in Atlanta, Georgia exceed the standard, and control is needed. The first goal of this study is to develop control strategies for PM2.5 in Atlanta, Georgia. Based on the statistical analysis of measured data, emission reductions of 22% to 40% are required to meet the NAAQS at the 95% confidence level. The estimated control levels can be tested using the Community Multiscale Air Quality (CMAQ) model to better assess if the proposed levels will achieve a sufficient reduction in PM2.5. The second goal of this study is to analyze various uncertainties residing in CMAQ. For the model to be used in such applications with confidence, it needs to be evaluated. The model performance is calculated by the relative agreement between volume-averaged predictions and point measurements. Up to 14% of the model error for PM2.5 mass is due to the different spatial scales of the two values. CMAQ predicts PM2.5 mass concentrations reasonably well, but it significantly underestimates PM2.5 number concentrations. Causes of the underestimation include the inaccurate particle density and particle size assumed for the primary emissions in CMAQ, in addition to the representation of the particle size with three lognormal distributions. Also, the strengths and limitations of CMAQ in performing PM2.5 source apportionment are compared with those of the Chemical Mass Balance with Molecular Markers. Finally, the accuracy of emissions, one of the important inputs of CMAQ, is evaluated by inverse modeling. Results show that base level emissions for CO and SO2 sources are relatively accurate, whereas NH3, NOx, PEC and PMFINE emissions are overestimated. The emission adjustment for POA and VOC emissions is significantly different among regions.
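The abstract does not state the statistical procedure behind the 22% to 40% figure; one simple calculation of that general type is a proportional rollback with a bootstrap confidence interval, sketched below with invented concentration data. This is not the study's method or data, only an illustration of the kind of estimate involved.

```python
# Proportional-rollback style estimate of the required emission reduction,
# with a bootstrap confidence interval; daily concentrations are invented.
import numpy as np

rng = np.random.default_rng(4)
naaqs_annual = 15.0                                    # 1997 annual PM2.5 NAAQS, ug/m3
daily_pm25 = rng.gamma(shape=6.0, scale=3.2, size=365) # hypothetical daily means, ug/m3

def required_reduction(sample):
    annual_mean = sample.mean()
    return max(0.0, 1.0 - naaqs_annual / annual_mean)  # proportional rollback assumption

boot = np.array([required_reduction(rng.choice(daily_pm25, size=365, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"required emission reduction: {required_reduction(daily_pm25):.0%} "
      f"(95% CI {lo:.0%} to {hi:.0%})")
```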
|
145 |
Back-calculating emission rates for ammonia and particulate matter from area sources using dispersion modeling. Price, Jacqueline Elaine, 15 November 2004.
Engineering directly impacts current and future regulatory policy decisions. The foundation of air pollution control and air pollution dispersion modeling lies in the math, chemistry, and physics of the environment. Therefore, regulatory decision making must rely upon sound science and engineering as the core of appropriate policy making (objective analysis in lieu of subjective opinion). This research evaluated particulate matter and ammonia concentration data as well as two modeling methods, a backward Lagrangian stochastic model and a Gaussian plume dispersion model. This analysis assessed the uncertainty surrounding each sampling procedure in order to gain a better understanding of the uncertainty in the final emission rate calculation (a basis for federal regulation), and it assessed the differences between emission rates generated using two different dispersion models. First, this research evaluated the uncertainty encompassing the gravimetric sampling of particulate matter and the passive ammonia sampling technique at an animal feeding operation. Future research will be to further determine the wind velocity profile as well as determining the vertical temperature gradient during the modeling time period. This information will help quantify the uncertainty of the meteorological model inputs into the dispersion model, which will aid in understanding the propagated uncertainty in the dispersion modeling outputs. Next, an evaluation of the emission rates generated by both the Industrial Source Complex (Gaussian) model and the WindTrax (backward-Lagrangian stochastic) model revealed that the calculated emission concentrations from each model using the average emission rate generated by the model are extremely close in value. However, the average emission rates calculated by the models vary by a factor of 10. This is extremely troubling. In conclusion, current and future sources are regulated based on emission rate data from previous time periods. Emission factors are published for regulation of various sources, and these emission factors are derived based upon back-calculated model emission rates and site management practices. Thus, this factor of 10 ratio in the emission rates could prove troubling in terms of regulation if the model that the emission rate is back-calculated from is not used as the model to predict a future downwind pollutant concentration.
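The back-calculation itself rests on the linearity of dispersion-model concentrations in the emission rate, so that Q can be recovered from a measured concentration and the model response to a unit emission. The sketch below uses a textbook point-source Gaussian plume with assumed Briggs-type rural coefficients for neutral stability and invented measurement values; it does not reproduce the ISC or WindTrax configurations used in the research.

```python
# Sketch of emission-rate back-calculation with a Gaussian plume model:
# because the predicted concentration is linear in the emission rate Q,
# Q = C_measured / C_model(Q = 1).
import numpy as np

def sigma_y(x):   # lateral dispersion, stability class D (assumed)
    return 0.08 * x / np.sqrt(1 + 0.0001 * x)

def sigma_z(x):   # vertical dispersion, stability class D (assumed)
    return 0.06 * x / np.sqrt(1 + 0.0015 * x)

def plume_conc(q, x, u, h_eff=0.0):
    """Ground-level centreline concentration (g/m3) at downwind distance x (m)
    for emission rate q (g/s), wind speed u (m/s) and effective source height h_eff."""
    sy, sz = sigma_y(x), sigma_z(x)
    return q / (np.pi * u * sy * sz) * np.exp(-h_eff ** 2 / (2 * sz ** 2))

# Hypothetical ammonia measurement 100 m downwind of an area-source facility
c_measured = 45e-6          # g/m3 (invented)
x, u = 100.0, 3.0           # downwind distance (m), wind speed (m/s)

unit_conc = plume_conc(1.0, x, u)      # model response to a unit emission
q_back = c_measured / unit_conc        # back-calculated emission rate, g/s
print(f"back-calculated emission rate: {q_back:.2f} g/s")
```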
|
146 |
Risk-conscious design of off-grid solar energy houses. Hu, Huafen, 16 November 2009.
Zero energy houses and (near) zero energy buildings are among the most ambitious targets of society moving towards an energy-efficient built environment. The "zero" energy consumption is most often judged on a yearly basis and should thus be interpreted as yearly net zero energy. The fully self-sustainable, i.e., off-grid, home poses a major challenge due to the dynamic nature of building load profiles, ambient weather conditions and occupant needs. In current practice, the off-grid status is achievable only by relying on backup generators or utilizing a large energy storage system.
The research develops a risk-based holistic system design method to guarantee a match between onsite sustainable energy generation and the energy demand of systems and occupants. Energy self-sufficiency is the essential constraint that drives the design process. It starts with the collection of information on occupants' needs in terms of lifestyle, risk perception, and budget planning. These inputs are stated as probabilistic risk constraints that are applied during design evolution. Risk expressions are developed based on the relationships between power unavailability criteria and "damages" as perceived by occupants. A power reliability assessment algorithm is developed to aggregate the causes of system underperformance and estimate all possible power availability outcomes of an off-grid house design. Based on these foundations, the design problem of an off-grid house is formulated as a stochastic programming problem with probabilistic constraints. The results show that inherent risks in weather patterns dominate the risk level of off-grid houses if current power unavailability criteria are used. It is concluded that a realistic and economic design of an off-grid house can only be achieved after an appropriate design weather file is developed for risk-conscious design methods.
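A minimal sketch of how a probabilistic risk constraint of this kind can be checked for a candidate design is given below: Monte Carlo sampling over uncertain daily solar yield and household load estimates the expected fraction of days with unmet load, which must stay below an accepted risk level. The scenario models, system sizes and the 5% tolerance are illustrative assumptions.

```python
# Checking a probabilistic (chance) constraint on power unavailability for
# candidate PV-plus-battery designs by Monte Carlo over daily scenarios.
import numpy as np

rng = np.random.default_rng(5)

def unmet_days_fraction(pv_kw, battery_kwh, days=365):
    soc = battery_kwh                                   # start with a full battery
    short_days = 0
    solar = rng.gamma(4.0, 1.1, size=days)              # kWh generated per kW per day
    load = rng.normal(12.0, 3.0, size=days).clip(min=4) # household demand, kWh/day
    for s, l in zip(solar, load):
        available = soc + pv_kw * s
        if available < l:
            short_days += 1                             # a day with a power shortfall
            soc = 0.0
        else:
            soc = min(battery_kwh, available - l)
    return short_days / days

def chance_constraint(pv_kw, battery_kwh, risk_tol=0.05, n_scenarios=300):
    fractions = [unmet_days_fraction(pv_kw, battery_kwh) for _ in range(n_scenarios)]
    return np.mean(fractions), np.mean(fractions) <= risk_tol

for design in [(3.0, 8.0), (4.0, 12.0), (5.0, 20.0)]:
    p, ok = chance_constraint(*design)
    print(f"PV {design[0]:.0f} kW, battery {design[1]:.0f} kWh -> "
          f"expected fraction of days with unmet load {p:.1%}, within tolerance: {ok}")
```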
The second stage of the research deals with the potential risk mitigation when an intelligent energy management system is installed. A stochastic model-based predictive controller is implemented to manage energy allocation to individual sub-functions in the off-grid house during operation. The controller determines in real time the priority of energy-consuming activities and functions. The re-evaluation of the risk indices shows that the proposed controller helps occupants to reduce damages related to power unavailability and to increase the thermal comfort performance of the house.
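A much-simplified, deterministic stand-in for the predictive energy-allocation idea is sketched below: at each control step a small linear program allocates forecast energy to load categories according to fixed priority weights over a short horizon, and only the first step would be applied before re-solving. The real controller is stochastic and re-prioritizes online, so the priorities, forecasts and LP formulation here are assumptions.

```python
# Receding-horizon allocation of limited energy to prioritized load categories,
# formulated as a small linear program (certainty-equivalent simplification).
import numpy as np
from scipy.optimize import linprog

horizon, funcs = 6, ["critical", "comfort", "deferrable"]
priority = np.array([10.0, 3.0, 1.0])                      # weight per kWh served
pv_forecast = np.array([0.2, 0.8, 1.5, 1.8, 1.2, 0.4])     # kWh per step (assumed)
demand = np.tile(np.array([0.3, 0.6, 0.9]), (horizon, 1))  # kWh per step, per category
battery_kwh = 1.0                                          # energy currently stored

# Decision variables x[t, f] = energy served; maximise priority-weighted service
c = -np.tile(priority, horizon)                            # linprog minimises
# Cumulative balance: energy served up to step t cannot exceed storage + PV so far
A_ub = np.zeros((horizon, horizon * len(funcs)))
for t in range(horizon):
    A_ub[t, : (t + 1) * len(funcs)] = 1.0
b_ub = battery_kwh + np.cumsum(pv_forecast)
bounds = list(zip(np.zeros(demand.size), demand.ravel()))  # serve at most the demand
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
plan = res.x.reshape(horizon, len(funcs))
print("served energy per step (kWh), columns =", funcs)
print(np.round(plan, 2))                                   # only the first step is applied
```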
The research provides a risk-oriented view on the energy self-sufficiency of off-grid solar houses. Uncertainty analysis is used to verify the match between onsite sustainable energy supply and demand under dynamic ambient conditions in a manner that reveals the risks induced by the fact that new technologies may not perform as well as expected. Furthermore, taking occupants' needs, based on their risk perception, as constraints in design evolution provides better guarantees for a right-sized system design.
|
147 |
Technoeconomic evaluation of flared natural gas reduction and energy recovery using gas-to-wire scheme. Anosike, Nnamdi Benedict, 11 1900.
Most mature oil reservoirs or fields tend to perform below expectations, owing to high levels of associated gas production. This creates sub-optimal performance of the oil production surface facilities, increasing the specific operating cost of oil production. In many scenarios oil companies flare or vent this gas. In addition to constraining oil production, associated gas flaring and venting constitutes an environmental disaster and an economic waste. Significant steps are now being devised to utilise associated gas using different exploitation techniques, but most of these technologies require large associated gas throughput.
However, small-scale associated gas resources and non-associated natural gas reserves (commonly referred to as stranded gas or marginal fields) remain largely unexploited. Thus, the objective of this thesis is to evaluate the techno-economics of gas turbine engines for onsite electric power generation, called gas-to-wire (GTW), using small-scale associated gas resources. The range of stranded flared associated gas and non-associated gas reserves considered is around 10 billion to 1 trillion standard cubic feet, undergoing production decline.
The gas turbine engines considered for the power plant in this study are based on simple-cycle (combustion) turbines. The simple-cycle choice of power plant is conceived to provide a certain flexibility in power plant capacity factor and availability during production decline. In addition, it represents a basic power plant module capable of being developed into other power plant types in the future to meet different local energy requirements.
This study developed a novel gas-to-wire techno-economic and risk analysis framework with the capability for probabilistic uncertainty analysis using the Monte Carlo simulation (MCS) method. It comprises an iterative calculation of the probabilistic recoverable reserves with a decline module and a power plant thermodynamic performance module, enabled by Turbomatch (an in-house code) and GasTurb® software, coupled with economic risk modules built with the @Risk® commercial software. This algorithm is a useful tool for simulating the interaction between gas production profiles disrupted by production decline and their effect on power plant techno-economic performance over the economic life of associated gas utilization. Furthermore, a divestment and make-up fuel protocol is proposed for the management of gas turbine engine units, to mitigate the economic underperformance of the power plant experienced due to production decline.
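A toy example of the kind of probabilistic calculation such a framework couples together is sketched below: exponential production decline feeding a simple-cycle plant at an assumed thermal efficiency, with Monte Carlo sampling of reserve size, electricity price and availability to produce an NPV distribution. All numbers, distributions and sub-models are invented and stand in for, rather than reproduce, the Turbomatch/GasTurb/@Risk workflow of the thesis.

```python
# Toy Monte Carlo techno-economic screen for gas-to-wire: declining gas
# deliverability converted to electricity and discounted cash flows.
import numpy as np

rng = np.random.default_rng(6)
years, discount = 15, 0.10
plant_mw = 50.0                       # assumed simple-cycle plant size
lhv_mj_per_scf = 1.0                  # assumed lower heating value of the gas
eta_simple_cycle = 0.34               # assumed simple-cycle thermal efficiency

def npv_one_scenario():
    reserve_bscf = rng.triangular(10, 60, 200)          # recoverable gas, billion scf
    decline = rng.uniform(0.10, 0.25)                   # annual exponential decline rate
    price = rng.normal(70, 15)                          # electricity price, USD/MWh
    availability = rng.uniform(0.85, 0.97)
    capex = 900.0 * 1000 * plant_mw                     # assumed 900 USD/kW installed
    # initial rate such that the exponential decline exhausts the reserve in `years`
    q0 = reserve_bscf * 1e9 * decline / (1 - np.exp(-decline * years))   # scf/year
    npv = -capex
    for t in range(years):
        gas_scf = q0 * np.exp(-decline * t)
        energy_mwh = gas_scf * lhv_mj_per_scf * eta_simple_cycle / 3600.0
        energy_mwh = min(energy_mwh, plant_mw * 8760 * availability)     # capacity cap
        cash = 0.8 * energy_mwh * price                 # 20% of revenue assumed for O&M
        npv += cash / (1 + discount) ** (t + 1)
    return npv

npvs = np.array([npv_one_scenario() for _ in range(5000)])
print(f"mean NPV: {npvs.mean() / 1e6:.1f} million USD, "
      f"P(NPV < 0): {np.mean(npvs < 0):.1%}")
```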
The results show that utilization of associated gas for onsite power generation is a promising technology for converting waste to energy. Although associated gas composition can have a significant effect on gas turbine performance, the typical Nigerian associated gas considered is as good as regular natural gas. The majority of the capital investment risk is associated with production decline, both natural and man-made. Finally, the rate of return on capital investment decreases with smaller reserves.
|
148 |
Robust Algorithms for Optimization of Chemical Processes in the Presence of Model-Plant Mismatch. Mandur, Jasdeep Singh, 12 June 2014.
Process models are always associated with uncertainty, due to either inaccurate model structure or inaccurate identification. If left unaccounted for, these uncertainties can significantly affect the model-based decision-making. This thesis addresses the problem of model-based optimization in the presence of uncertainties, especially due to model structure error. The optimal solution from standard optimization techniques is often associated with a certain degree of uncertainty and if the model-plant mismatch is very significant, this solution may have a significant bias with respect to the actual process optimum. Accordingly, in this thesis, we developed new strategies to reduce (1) the variability in the optimal solution and (2) the bias between the predicted and the true process optima.
Robust optimization is a well-established methodology where the variability in the optimization objective is considered explicitly in the cost function, leading to a solution that is robust to model uncertainties. However, the reported robust formulations have a few limitations, especially in the context of nonlinear models. The standard technique to quantify the effect of model uncertainties is based on the linearization of the underlying model, which may not be valid if the noise in the measurements is quite high. To address this limitation, uncertainty descriptions based on Bayes' Theorem are implemented in this work. Since for nonlinear models the resulting Bayesian uncertainty may have a non-standard form with no analytical solution, the propagation of this uncertainty onto the optimum may become computationally challenging using conventional Monte Carlo techniques. To this end, an approach based on Polynomial Chaos expansions is developed. It is shown in a simulated case study that this approach resulted in drastic reductions in the computational time when compared to a standard Monte Carlo sampling technique. The key advantage of PC expansions is that they provide analytical expressions for statistical moments even if the uncertainty in the variables is non-standard. These expansions were also used to speed up the calculation of the likelihood function within the Bayesian framework. Here, a methodology based on Multi-Resolution analysis is proposed to formulate the PC-based approximated model with higher accuracy over the region of the parameter space that is most likely given the measurements.
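A minimal polynomial-chaos example for a single Gaussian parameter may make the idea concrete: the output of a (placeholder) nonlinear model is expanded in probabilists' Hermite polynomials of a standard normal germ, the coefficients are obtained by Gauss-Hermite quadrature, and the mean and variance then follow analytically from the coefficients; a Monte Carlo cross-check is included. The model function and parameter distribution are invented for illustration.

```python
# Minimal Hermite polynomial-chaos expansion of a model output y = f(theta)
# with theta ~ N(mu, sigma^2); moments follow analytically from the coefficients.
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial, sqrt, pi

mu, sigma, order = 1.2, 0.3, 6           # theta ~ N(mu, sigma^2), PC order

def model(theta):                        # placeholder nonlinear process model
    return theta * np.exp(-0.5 * theta)

# Gauss-Hermite(e) quadrature: sum w_i g(x_i) ~ integral g(x) exp(-x^2/2) dx
x, w = H.hermegauss(40)
w = w / sqrt(2 * pi)                     # normalise so weights integrate the N(0,1) pdf

coeffs = []
for k in range(order + 1):
    he_k = H.hermeval(x, np.eye(order + 1)[k])          # He_k evaluated at the nodes
    coeffs.append(np.sum(w * model(mu + sigma * x) * he_k) / factorial(k))

mean_pc = coeffs[0]                                     # E[f] = c_0
var_pc = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)

# Cross-check against brute-force Monte Carlo sampling
theta_mc = np.random.default_rng(7).normal(mu, sigma, 200_000)
y_mc = model(theta_mc)
print(f"PC  mean {mean_pc:.5f}  variance {var_pc:.6f}")
print(f"MC  mean {y_mc.mean():.5f}  variance {y_mc.var():.6f}")
```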
For the second objective, i.e. reducing the bias between the predicted and true process optima, an iterative optimization algorithm is developed which progressively corrects the model for structural error as the algorithm proceeds towards the true process optimum. The standard technique is to calibrate the model at some initial operating conditions and, then, use this model to search for an optimal solution. Since the identification and optimization objectives are solved independently, when there is a mismatch between the process and the model, the parameter estimates cannot satisfy these two objectives simultaneously. To this end, in the proposed methodology, corrections are added to the model in such a way that the updated parameter estimates reduce the conflict between the identification and optimization objectives. Unlike the standard estimation technique that minimizes only the prediction error at a given set of operating conditions, the proposed algorithm also includes the differences between the predicted and measured gradients of the optimization objective and/or constraints in the estimation. In the initial version of the algorithm, the proposed correction is based on the linearization of model outputs. Then, in the second part, the correction is extended by using a quadratic approximation of the model, which, for the given case study, resulted in much faster convergence as compared to the earlier version.
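For orientation, a compact example of iteratively correcting a structurally wrong model so that its optimum converges to the plant optimum is given below. It uses the classical first-order modifier-adaptation correction (a bias and a gradient offset added to the model objective), which is related to but not the same as the thesis algorithm, where gradient information is folded into the parameter estimation itself; the plant and model functions are invented.

```python
# Classical first-order modifier adaptation: bias (eps) and gradient (lam)
# modifiers updated at each operating point drive the model optimum to the
# plant optimum despite structural mismatch.
import numpy as np
from scipy.optimize import minimize_scalar

def plant_cost(u):                       # "true" process (unknown to the model)
    return (u - 2.0) ** 2 + 0.5 * u

def model_cost(u, theta=1.0):            # structurally mismatched model
    return (u - theta) ** 2

def plant_gradient(u, h=1e-3):           # gradient estimated from plant experiments
    return (plant_cost(u + h) - plant_cost(u - h)) / (2 * h)

def model_gradient(u, h=1e-6):
    return (model_cost(u + h) - model_cost(u - h)) / (2 * h)

u, filt = 0.0, 0.6                       # initial operating point, modifier filter
eps, lam = 0.0, 0.0
for it in range(15):
    # update zeroth- and first-order modifiers at the current operating point
    eps = (1 - filt) * eps + filt * (plant_cost(u) - model_cost(u))
    lam = (1 - filt) * lam + filt * (plant_gradient(u) - model_gradient(u))
    corrected = lambda v: model_cost(v) + eps + lam * (v - u)
    u = minimize_scalar(corrected, bounds=(-5, 5), method="bounded").x
print(f"converged input u = {u:.3f}; true plant optimum at u = 1.750")
```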
Finally, the methodologies mentioned above were combined to formulate a robust iterative optimization strategy that converges to the true process optimum with minimum variability in the search path. One of the major findings of this thesis is that the robust optimal solutions based on the Bayesian parametric uncertainty are much less conservative than their counterparts based on normally distributed parameters.
|
149 |
Uncertainty in the first principle model based condition monitoring of HVAC systems. Buswell, Richard A., January 2001.
Model-based techniques for automated condition monitoring of HVAC systems have been under development for some years. Results from the application of these methods to systems installed in real buildings have highlighted robustness and sensitivity issues. The generation of false alarms has been identified as a principal factor affecting the potential usefulness of condition monitoring in HVAC applications. The robustness issue is a direct result of the uncertain measurements and the lack of experimental control that are characteristic of HVAC systems. This thesis investigates the uncertainties associated with implementing a condition monitoring scheme based on simple first principles models in HVAC subsystems installed in real buildings. The uncertainties present in typical HVAC control system measurements are evaluated. A sensor validation methodology is developed and applied to a cooling coil subsystem installed in a real building. The uncertainty in steady-state analysis based on transient data is investigated. The uncertainties in the simplifications and assumptions associated with the derivation of simple first principles based models of heat-exchangers are established. A subsystem model is developed and calibrated to the test system. The relationship between the uncertainties in the calibration data and the parameter estimates is investigated. The uncertainties from all sources are evaluated and used to generate a robust indication of the subsystem condition. The sensitivity and robustness of the scheme are analysed based on faults implemented in the test system during summer, winter and spring conditions.
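A minimal sketch of the robust-indication idea: a simple effectiveness-based cooling-coil prediction is compared with the measured off-coil air temperature, and a fault is flagged only when the residual exceeds the combined measurement and model uncertainty. The coil effectiveness, sensor accuracies, model uncertainty and example readings below are invented.

```python
# Residual-based condition check for a cooling coil with an explicit
# uncertainty threshold, so that normal measurement/model scatter does not
# raise false alarms.
import numpy as np

EFFECTIVENESS = 0.7            # calibrated coil effectiveness (assumed)
U_SENSOR = 0.3                 # 1-sigma temperature sensor uncertainty, K (assumed)
U_MODEL = 0.5                  # 1-sigma model/simplification uncertainty, K (assumed)
K = 2.0                        # coverage factor for the alarm threshold

def predicted_off_coil(t_air_on, t_water_on):
    return t_air_on - EFFECTIVENESS * (t_air_on - t_water_on)

def check(t_air_on, t_water_on, t_air_off_measured):
    residual = t_air_off_measured - predicted_off_coil(t_air_on, t_water_on)
    # propagate sensor uncertainty through the model and add the model uncertainty
    u_pred = np.sqrt(((1 - EFFECTIVENESS) * U_SENSOR) ** 2 +
                     (EFFECTIVENESS * U_SENSOR) ** 2 + U_MODEL ** 2)
    threshold = K * np.sqrt(u_pred ** 2 + U_SENSOR ** 2)
    return residual, threshold, abs(residual) > threshold

# Healthy operation followed by a simulated fault (e.g., a fouled coil)
for label, t_off in [("healthy", 16.1), ("faulty", 19.3)]:
    r, thr, alarm = check(t_air_on=26.0, t_water_on=12.0, t_air_off_measured=t_off)
    print(f"{label}: residual {r:+.1f} K, threshold {thr:.1f} K, fault flagged: {alarm}")
```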
|