1 |
Analýza generátorů ekonomických scénářů (zejména úrokových měr) / Economic Scenario Generator Analysis (short rates). Šára, Michal. January 2012 (has links)
The thesis is concerned with a detailed examination of the most familiar short-rate models. Furthermore, it contains the author's own derivations of formulas for prices of interest rate derivatives and of relationships between certain discretizations of these short-rate models. These formulas are then used to calibrate certain chosen models to actual market data. All calculations are performed in R using the author's own functions, which, along with the other more involved derivations, are placed in the appendix.
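As an illustration of the kind of calibration described above, the sketch below fits a Vasicek short-rate model to an observed zero-coupon yield curve by least squares. The thesis works in R with the author's own functions and covers several short-rate models; the choice of the Vasicek model, the Python/SciPy tooling, and the synthetic yield data here are assumptions made purely for illustration.

```python
# Illustrative (assumed) example: least-squares calibration of a Vasicek
# short-rate model to a zero-coupon yield curve. The thesis itself uses R
# and the author's own routines; this sketch only mirrors the general idea.
import numpy as np
from scipy.optimize import least_squares

def vasicek_yield(params, r0, tau):
    """Continuously compounded zero-coupon yield under the Vasicek model
    dr = a(b - r)dt + sigma dW."""
    a, b, sigma = params
    B = (1.0 - np.exp(-a * tau)) / a
    lnA = (b - sigma**2 / (2 * a**2)) * (B - tau) - sigma**2 * B**2 / (4 * a)
    return -(lnA - B * r0) / tau

# Hypothetical market data (maturities in years, observed yields)
tau_obs = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
y_obs = np.array([0.010, 0.012, 0.015, 0.019, 0.024, 0.028])
r0 = 0.009  # assumed current short rate

def residuals(params):
    return vasicek_yield(params, r0, tau_obs) - y_obs

fit = least_squares(residuals, x0=[0.5, 0.03, 0.01],
                    bounds=([1e-4, -0.1, 1e-4], [5.0, 0.2, 0.5]))
print("calibrated (a, b, sigma):", fit.x)
```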
|
2 |
AN AUTOMATIC CALIBRATION STRATEGY FOR 3D FE BRIDGE MODELS. LIU, LEI. 05 October 2004 (links)
No description available.
|
3 |
Crop model review and sweet sorghum crop model parameter development. Perkins, Seth A. January 1900 (has links)
Master of Science / Department of Biological and Agricultural Engineering / Kyle Douglas-Mankin / Opportunities for alternative biofuel feedstocks are widespread for a number of reasons: increased environmental and economic concerns over corn production and processing, the limitation of corn-based ethanol to 57 billion L (15 billion gal) by the Energy Independence and Security Act (US Congress, 2007), and target requirements of 136 billion L (36 billion gal) of renewable fuel production by 2022. The objective of this study was to select the most promising among currently available crop models with the potential to model sweet sorghum biomass production in the central US, specifically Kansas, Oklahoma, and Texas, and to develop and test sweet sorghum crop parameters for this model.
Five crop models were selected (CropSyst, CERES-Sorghum, APSIM, ALMANAC, and SORKAM), and the models were compared based on ease of use, model support, and availability of inputs and outputs from sweet sorghum biomass data and literature. After reviewing the five models, ALMANAC was selected as the best suited for the development and testing of sweet sorghum crop parameters. The results of the model comparison show that more data are needed on sweet sorghum physiological development stages and specific growth/development factors before the other models reviewed in this study can be readily used for sweet sorghum crop modeling.
This study used a unique method to calibrate the sweet sorghum crop parameter development site. Ten years of crop performance data (corn and grain sorghum) for two Kansas counties (Riley and Ellis) were used to select an optimum combination of soil water (SW) estimation method (Saxton and Rawls; Ritchie et al.; and a method that added 0.01 m m⁻¹ to the minimum SW value given in the SSURGO soil database) and evapotranspiration (ET) method (Penman-Monteith, Priestley-Taylor, and Hargreaves and Samani) for use in the sweet sorghum parameter development. ALMANAC general parameters for corn and grain sorghum were used for the calibration/selection of the SW/ET combination. Variations in the harvest indexes were used to simulate variations in geo-climate region grain yield. A step-through comparison method was used to select the appropriate SW/ET combination, which was then used to develop the sweet sorghum crop parameters.
Two main conclusions can be drawn from the sweet sorghum crop parameter development study. First, the combination of the Saxton and Rawls (2006) and Priestley-Taylor (1972) (SR-PT) methods has the potential for wide applicability in the US Central Plains for simulating grain yields using ALMANAC. Second, from the development of the sweet sorghum crop model parameters, ALMANAC modeled biomass yields with reasonable accuracy; differences from observed biomass values ranged from 0.89 to 1.76 Mg ha⁻¹ (2.8 to 9.8%) in Kansas (Riley County), Oklahoma (Texas County), and Texas (Hale County). Future research on sweet sorghum physiology, radiation use efficiency/vapor pressure deficit relationships, and weather data integration would be useful in improving sweet sorghum biomass modeling.
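As a rough illustration of the SW/ET selection step described above, the sketch below pairs each soil-water estimation method with each ET method, scores each pairing against observed county yields, and keeps the lowest-error combination. The ALMANAC runs are replaced by a synthetic stand-in (run_almanac is hypothetical), and RMSE as the comparison statistic is an assumption, so this is an outline of the step-through idea rather than the study's actual procedure.

```python
# Illustrative sketch (assumed, not the study's code): step-through comparison
# of soil-water (SW) and evapotranspiration (ET) method combinations, scored
# against observed county grain yields. run_almanac() is a hypothetical
# stand-in that would normally launch ALMANAC with the chosen options.
import itertools
import numpy as np

SW_METHODS = ["Saxton-Rawls", "Ritchie", "SSURGO+0.01"]
ET_METHODS = ["Penman-Monteith", "Priestley-Taylor", "Hargreaves-Samani"]

def run_almanac(sw, et, county, years=10):
    """Placeholder for an ALMANAC run; returns synthetic annual yields (Mg/ha)
    so the sketch executes end to end."""
    rng = np.random.default_rng(abs(hash((sw, et, county))) % 2**32)
    return 6.0 + rng.normal(0.0, 0.8, years)

def rmse(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def select_sw_et(observed):            # observed: {county: 10-year yields}
    scores = {}
    for sw, et in itertools.product(SW_METHODS, ET_METHODS):
        errors = [rmse(run_almanac(sw, et, county), obs)
                  for county, obs in observed.items()]
        scores[(sw, et)] = float(np.mean(errors))
    best = min(scores, key=scores.get)
    return best, scores

observed = {"Riley": 6.2 + np.zeros(10), "Ellis": 5.1 + np.zeros(10)}
best, scores = select_sw_et(observed)
print("selected SW/ET combination:", best)
```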
|
4 |
Calibration of Hydrologic Models Using Distributed Surrogate Model Optimization Techniques: A WATCLASS Case Study. Kamali, Mahtab. 17 February 2009 (links)
This thesis presents a new approach to the calibration of hydrologic models using a distributed computing framework. Distributed hydrologic models are known to be very computationally intensive and difficult to calibrate. To cope with the high computational cost of the process, a Surrogate Model Optimization (SMO) technique built for distributed computing facilities is proposed. The proposed method, along with two analogous SMO methods, is employed to calibrate the WATCLASS hydrologic model. This model was developed at the University of Waterloo and is now part of Environment Canada's MESH system (Environment Canada's community environmental modeling system, Modélisation Environnementale Communautaire (MEC) for Surface Hydrology (SH)).
SMO has the advantage of being less sensitive to the "curse of dimensionality" and is very efficient for large-scale, computationally expensive models. In this technique, a mathematical model is constructed from a small set of simulated data from the original expensive model. The SMO technique follows an iterative strategy in which, at each iteration, the surrogate model maps the region of the optimum more precisely.
A new comprehensive method based on a smooth regression model is proposed for the calibration of WATCLASS. This method has at least two advantages over the previously proposed methods: a) it does not require a large number of training data, and b) it does not have many model parameters, so its construction and validation process is not demanding.
To evaluate the performance of the proposed SMO method, it has been applied to five well-known test functions and the results are compared to two other analogous SMO methods. Since the performance of all the SMOs is promising, two instances of WATCLASS modeling the Smoky River watershed are calibrated using these three SMOs and the resulting Nash-Sutcliffe values are reported.
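The following sketch outlines a generic surrogate model optimization loop of the kind summarized above: fit a surrogate to a small set of expensive model evaluations, optimize the surrogate, evaluate the expensive model at the proposed point, and refit. The radial basis function surrogate, the toy objective standing in for a WATCLASS run, and the SciPy optimizer are illustrative assumptions; the thesis's smooth-regression surrogate and distributed setup are not reproduced here.

```python
# Assumed illustrative sketch of a surrogate model optimization (SMO) loop:
# an RBF surrogate is fit to a small set of expensive evaluations, its
# minimizer proposes the next point, the expensive model is evaluated there,
# and the surrogate is refit with the new point added.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive_model(x):
    """Stand-in for an expensive hydrologic-model run returning, e.g.,
    1 - Nash-Sutcliffe efficiency to be minimized."""
    x = np.atleast_2d(x)
    return np.sum((x - 0.3)**2, axis=1) + 0.05 * np.sin(10 * x).sum(axis=1)

def smo_calibrate(bounds, n_init=10, n_iter=20, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, dim))      # initial design
    y = expensive_model(X)
    for _ in range(n_iter):
        surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline",
                                    smoothing=1e-10)
        res = differential_evolution(lambda x: float(surrogate([x])[0]),
                                     bounds, seed=seed, tol=1e-6)
        x_new = res.x
        y_new = expensive_model(x_new)               # one expensive run
        X = np.vstack([X, x_new])
        y = np.append(y, y_new)
    best = np.argmin(y)
    return X[best], y[best]

x_best, f_best = smo_calibrate(bounds=[(0.0, 1.0), (0.0, 1.0)])
print("best parameters:", x_best, "objective:", f_best)
```

The design choice that matters here is that only one expensive evaluation is spent per iteration; all of the search effort happens on the cheap surrogate, which is what makes this style of calibration attractive for computationally heavy distributed models.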
|
6 |
Fast methods for identifying high dimensional systems using observations. Plumlee, Matthew. 08 June 2015 (links)
This thesis proposes new analysis tools for simulation models in the presence of data. To achieve a representation close to reality, simulation models are typically endowed with a set of inputs, termed parameters, that represent several controllable, stochastic or unknown components of the system. Because these models often utilize computationally expensive procedures, even modern supercomputers require a nontrivial amount of time, money, and energy to run for complex systems. Existing statistical frameworks avoid repeated evaluations of deterministic models through an emulator, constructed by conducting an experiment on the code. In high dimensional scenarios, the traditional framework for emulator-based analysis can fail due to the computational burden of inference. This thesis proposes a new class of experiments where inference from half a million observations is possible in seconds versus the days required for the traditional technique. In a case study presented in this thesis, the parameter of interest is a function as opposed to a scalar or a set of scalars, meaning the problem exists in the high dimensional regime. This work develops a new modeling strategy to nonparametrically study the functional parameter using Bayesian inference.
Stochastic simulations are also investigated in the thesis. I describe the development of emulators through a framework termed quantile kriging, which allows for non-parametric representations of the stochastic behavior of the output, whereas previous work has focused on normally distributed outputs. Furthermore, this work studied asymptotic properties of this methodology that yielded practical insights. Under certain regularity conditions, the following result holds: by using an experiment with the appropriate ratio of replications to sets of distinct inputs, an optimal rate of convergence can be achieved. Additionally, this method provided the basic tool for the study of defect patterns, and a case study is explored.
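A minimal sketch of the quantile-kriging idea described above follows: replicate a stochastic simulator at each design input, take empirical quantiles of the replicated outputs, and fit a kriging (Gaussian process) emulator to each quantile level. The toy simulator, kernel choice, and scikit-learn implementation are assumptions for illustration, not the thesis's method in detail.

```python
# Assumed sketch of quantile kriging: replicate a stochastic simulator at each
# design input, compute empirical quantiles of the replicated outputs, and
# krige (GP-regress) each quantile level across the input space.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def stochastic_sim(x, rng):
    """Toy stochastic simulator whose output distribution depends on x."""
    return np.sin(3 * x) + rng.gamma(shape=2.0, scale=0.2 + 0.3 * x)

rng = np.random.default_rng(0)
X_design = np.linspace(0.0, 1.0, 15)[:, None]   # design inputs
n_rep = 50                                      # replications per input
levels = [0.1, 0.5, 0.9]                        # quantile levels to emulate

# replicated runs: shape (n_design, n_rep)
reps = np.array([[stochastic_sim(x[0], rng) for _ in range(n_rep)]
                 for x in X_design])

emulators = {}
for q in levels:
    yq = np.quantile(reps, q, axis=1)           # empirical q-quantiles
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-3),
                                  normalize_y=True)
    emulators[q] = gp.fit(X_design, yq)

x_new = np.array([[0.37]])
for q, gp in emulators.items():
    print(f"predicted {q:.0%} quantile at x=0.37:", gp.predict(x_new)[0])
```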
|
7 |
Developing a procedure to identify parameters for calibration of a VISSIM model. Miller, David Michael. 12 January 2009 (links)
The calibration of microscopic traffic simulation models is an area of intense study; however, additional research is needed on how to select which parameters to calibrate. In this project, a procedure was designed to eliminate the parameters unnecessary for calibration of a VISSIM model and to select those that should be examined. The proposed iterative procedure consists of four phases: initial parameter selection, measures of effectiveness selection, a Monte Carlo experiment, and sensitivity analysis with parameter elimination. The goal of the procedure is to determine experimentally which parameters have an effect on the selected measures of effectiveness and which do not. This is accomplished through the use of randomly generated parameter sets and subsequent analysis of the generated results. The second phase of the project involves a case study implementing the proposed procedure on an existing VISSIM model of Cobb Parkway in Atlanta, Georgia. Each phase of the procedure is described in detail, and the justification for each parameter selection or elimination is explained. For the case study, the model is considered under both full traffic volumes and a reduced volume set representative of uncongested conditions.
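A hedged sketch of the Monte Carlo screening step described above: draw random values for each candidate driving-behavior parameter, evaluate a measure of effectiveness (MOE) for each parameter set, and rank parameters by how strongly they move the MOE, eliminating those below a threshold. The run_vissim function is a hypothetical stand-in for an actual VISSIM run, and the correlation-based ranking and parameter names are illustrative assumptions rather than the author's exact procedure.

```python
# Assumed sketch of Monte Carlo screening for simulation calibration
# parameters: random parameter sets -> MOE values -> sensitivity ranking.
import numpy as np

param_ranges = {                 # candidate calibration parameters (assumed)
    "desired_speed": (40.0, 60.0),
    "headway_time_CC1": (0.5, 3.0),
    "following_variation_CC2": (0.0, 12.0),
    "lane_change_accel": (-4.0, -0.5),
}

def run_vissim(params):
    """Placeholder for a simulation run returning an MOE such as corridor
    travel time; here a synthetic response so the sketch executes."""
    return (100.0 - 0.8 * params["desired_speed"]
            + 6.0 * params["headway_time_CC1"]
            + np.random.normal(0.0, 1.0))

def screen_parameters(n_samples=200, threshold=0.2, seed=42):
    rng = np.random.default_rng(seed)
    names = list(param_ranges)
    samples = {k: rng.uniform(*param_ranges[k], n_samples) for k in names}
    moe = np.array([run_vissim({k: samples[k][i] for k in names})
                    for i in range(n_samples)])
    # absolute correlation of each parameter with the MOE as a crude
    # sensitivity measure; weakly correlated parameters are eliminated
    ranking = {k: abs(np.corrcoef(samples[k], moe)[0, 1]) for k in names}
    keep = [k for k, r in ranking.items() if r >= threshold]
    return keep, ranking

keep, ranking = screen_parameters()
print("retain for calibration:", keep)
```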
|
8 |
Model applications on nitrogen and microplastic removal in novel wastewater treatment. Elsayed, Ahmed. January 2021 (has links)
Excessive release of nitrogen (e.g., ammonia and organic nitrogen) into natural water systems can cause serious environmental problems such as algal blooms and eutrophication in lakes and rivers, threatening aquatic life and the ecosystem balance. Membrane aerated biofilm reactors (MABR) and anaerobic ammonia oxidation (Anammox) are new technologies for wastewater treatment with an emphasis on energy-efficient nitrification and denitrification. Microplastic (MP) is an emerging contaminant in wastewater and sludge treatment that has a negative effect on the environment and public health. For these relatively new technologies and contaminants, mathematical models can enhance our understanding of the removal mechanisms, such as reaction kinetics and mass transport. In this study, mathematical models were developed and used to simulate the removal of nitrogen and MP by biological reactions in wastewater treatment processes. First, a comprehensive MABR model was developed and calibrated using pilot-scale MABR operation data to estimate the important process parameters; biofilm thickness, liquid film thickness and the C/N ratio were found to be key parameters for nitrification and denitrification. Second, a mathematical model of the Anammox process was developed and calibrated using previous experimental results to simulate wastewater treatment by Anammox, reflecting the importance of dissolved oxygen for nitrogen removal by Anammox bacteria. Third, a granule-based Anammox mathematical model was built and calibrated against simulation results from previous Anammox studies, showing the significance of operational conditions (e.g., granule diameter and dissolved oxygen) for the success of the Anammox enrichment process. Fourth, an enzyme kinetic mathematical model was constructed and calibrated with lab-scale experiments to simulate MP reduction by hydrolytic enzymes under various experimental conditions; it was found that anaerobic digesters can be an innovative solution for MP removal during wastewater treatment. Based on the main findings of this study, it can be concluded that mathematical models calibrated with various experimental results are efficient tools for determining the operational parameters important for nitrogen and MP removal and for helping in the design and operation of large-scale removal applications. / Thesis / Doctor of Philosophy (PhD) / Nitrogen and microplastic (MP) are serious contaminants in wastewater that can cause critical environmental and public health problems. Nitrogen can cause algal blooms, threatening the aquatic ecosystem, while MP can be ingested by biota (e.g., fish and seabirds), causing serious damage to the food chain. Nitrogen removal in conventional biological wastewater treatment is relatively expensive, requiring a high energy cost and a large footprint for the treatment facilities. MP removal is also difficult in conventional wastewater and sludge treatment processes. Therefore, new technologies, including membrane aerated biofilm reactors (MABR), anaerobic ammonia oxidation (Anammox) and hydrolytic enzyme processes, are being implemented to improve nitrogen and MP removal with reduced energy and resource consumption in wastewater and sludge treatment processes.
Numerical models are an efficient tool for better understanding these novel technologies and the competing biological reactions within them, coupled with accurate estimation of the reaction process rates. In this thesis, different numerical models were developed and calibrated to estimate the important model parameters, assess the effect of operational conditions on the removal mechanisms, and determine the dominant parameters for the removal of nitrogen and MP in wastewater treatment processes. These numerical models can be used to better understand the removal mechanisms of nitrogen and MP, helping in the design and operation of removal systems and supporting these novel technologies in large-scale nitrogen and MP removal applications.
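As a rough illustration of the enzyme-kinetics modeling mentioned above, the sketch below integrates a Michaelis-Menten style degradation of microplastic mass over a digestion period. The kinetic form, rate constants, and time horizon are illustrative assumptions, not the thesis's calibrated model or values.

```python
# Assumed sketch: Michaelis-Menten degradation of microplastic (MP) by a
# hydrolytic enzyme, integrated over a digestion period. All constants are
# illustrative placeholders, not calibrated values from the thesis.
import numpy as np
from scipy.integrate import solve_ivp

V_MAX = 0.8    # assumed maximum degradation rate, mg MP / (L * d)
K_M = 15.0     # assumed half-saturation constant, mg MP / L
MP0 = 50.0     # assumed initial MP concentration, mg / L

def mp_kinetics(t, y):
    mp = y[0]
    dmp_dt = -V_MAX * mp / (K_M + mp)   # Michaelis-Menten removal rate
    return [dmp_dt]

t_span = (0.0, 30.0)                    # assumed 30-day digestion period
sol = solve_ivp(mp_kinetics, t_span, [MP0], t_eval=np.linspace(*t_span, 7))

for t, mp in zip(sol.t, sol.y[0]):
    print(f"day {t:4.1f}: MP = {mp:5.2f} mg/L")
```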
|
9 |
Structural Dynamics Model Calibration and Validation of a Rectangular Steel Plate Structure. Kohli, Karan. 24 October 2014 (links)
No description available.
|
10 |
Stochastic models with random parameters for financial markets. Islyaev, Suren. January 2014 (has links)
The aim of this thesis is the development of a new class of financial models with random parameters that are computationally efficient and have the same level of performance as existing ones. In particular, this research is threefold. I have studied the evolution of storable commodity and commodity futures prices over time using a new random parameter model coupled with a Kalman filter. Such a combination allows one to forecast arbitrage-free futures prices and commodity spot prices one step ahead. Another direction of my research is a new volatility model in which the volatility is a random variable. The main advantage of this model is its high calibration speed compared to existing stochastic volatility models such as the Bates model or the Heston model, while its performance is comparable to the latter. Comprehensive numerical studies demonstrate that the new model is a very competitive alternative to the Heston or the Bates model in terms of the accuracy of matching option prices or computing hedging parameters. Finally, a new pricing model for electricity futures was developed. The new model has a random volatility parameter in its underlying process and fewer parameters than two-factor models for electricity commodity pricing with and without jumps. Numerical experiments with real data illustrate that it is quite competitive with the existing two-factor models in terms of pricing one-step-ahead futures prices, while being far simpler to calibrate. Further, a new heuristic for calibrating two-factor models was proposed. The new calibration procedure has two stages, offline and online: the offline stage calibrates parameters under the physical measure, while the online stage calibrates the risk-neutrality parameters on each iteration of the particle filter. A particle filter was used to estimate the values of the underlying stochastic processes and to forecast futures prices one step ahead. The contributory material from two chapters of this thesis has been submitted to peer-reviewed journals as two papers:
• Chapter 4: "A fast calibrating volatility model", submitted to the European Journal of Operational Research.
• Chapter 5: "Electricity futures price models: calibration and forecasting", submitted to the European Journal of Operational Research.
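As a rough illustration of calibrating a model in which volatility is treated as a random variable, the sketch below averages Black-Scholes prices over an assumed lognormal volatility distribution and fits the distribution's parameters to market quotes by least squares. The lognormal mixing law, the Black-Scholes building block, and the synthetic quotes are assumptions for illustration only; the thesis's actual random-parameter model and its calibration procedure are not reproduced here.

```python
# Assumed sketch of a "random volatility" option model: the model price is a
# Black-Scholes price averaged over a volatility distribution, and the
# distribution's parameters are fitted to market quotes by least squares.
import numpy as np
from scipy.stats import norm
from scipy.optimize import least_squares

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def model_call(S, K, T, r, mu, s, n_nodes=200, seed=0):
    """Call price with volatility as a lognormal random variable: average
    Black-Scholes prices over draws of sigma ~ LogNormal(mu, s)."""
    rng = np.random.default_rng(seed)                     # fixed draws keep
    sigmas = np.exp(mu + s * rng.standard_normal(n_nodes))  # the fit smooth
    return np.mean([bs_call(S, K, T, r, sig) for sig in sigmas])

# Hypothetical market quotes: (strike, maturity, observed call price)
S0, r = 100.0, 0.02
quotes = [(90, 0.5, 12.8), (100, 0.5, 6.9), (110, 0.5, 3.2),
          (100, 1.0, 9.5), (120, 1.0, 2.6)]

def residuals(theta):
    mu, s = theta
    return [model_call(S0, K, T, r, mu, s) - p for K, T, p in quotes]

fit = least_squares(residuals, x0=[np.log(0.2), 0.3],
                    bounds=([-5.0, 1e-3], [0.0, 2.0]))
mu_hat, s_hat = fit.x
print("calibrated volatility distribution: LogNormal(mu=%.3f, s=%.3f)"
      % (mu_hat, s_hat))
```

Fixing the Monte Carlo draws inside the pricing function keeps the least-squares residuals smooth in the distribution parameters, which is one reason this style of model can be calibrated quickly compared with full stochastic volatility models.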
|