1 |
The econometrics of structural change: statistical analysis and forecasting in the context of the South African economy. Wesso, Gilbert R. January 1994
Philosophiae Doctor - PhD / One of the assumptions of conventional regression analysis is that the parameters are constant over all observations. It has often been suggested that this may not be a valid assumption to make, particularly if the econometric model is to be used for economic forecasting. Apart from this, econometric models in particular are used to investigate the underlying interrelationships of the system under consideration in order to understand and explain the relevant phenomena in structural analysis. The prerequisite of such use of econometrics is that the regression parameters of the model are assumed to be constant over time or across different cross-sectional units.
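The classical check for this assumption is a structural-break test; below is a minimal Python sketch of a Chow-type test on simulated data with an assumed breakpoint (illustrative only, not code or data from the thesis).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a structural break: the slope changes at t = 60.
n, k = 120, 2
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta1, beta2 = np.array([1.0, 0.5]), np.array([1.0, 1.5])
y = np.where(np.arange(n) < 60, X @ beta1, X @ beta2) + rng.normal(scale=0.3, size=n)

def rss(Xs, ys):
    """Residual sum of squares from an OLS fit."""
    b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return np.sum((ys - Xs @ b) ** 2)

# Chow test: pooled fit vs. separate fits before/after the candidate break.
split = 60
rss_pooled = rss(X, y)
rss_split = rss(X[:split], y[:split]) + rss(X[split:], y[split:])
F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
print(f"Chow F statistic: {F:.2f}")  # a large F rejects parameter constancy
```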
2 |
Oceňování opcí / Option Pricing. Moravec, Radek January 2011
Title: Option Pricing
Author: Radek Moravec
Department: Department of Probability and Mathematical Statistics
Supervisor: doc. RNDr. Jan Hurt, CSc., Department of Probability and Mathematical Statistics

In the present thesis we deal with European call option pricing using lattice approaches. We introduce a discrete market model and show how to find an arbitrage price of financial instruments on complete markets: it is equal to the discounted value of the expected future cash flow. We present the binomial option pricing model and generalize it to a multinomial model. We test the resulting formula on real market data obtained from the NYSE and NASDAQ. We suggest a parameter estimation method based on time series of historical daily closing prices. We compare the calculated option prices with their real market values and try to explain the reasons for the differences.
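A minimal Python sketch of the binomial (Cox-Ross-Rubinstein) pricing scheme the abstract describes, computing the discounted risk-neutral expectation by backward induction; all parameter values are illustrative, not taken from the thesis data.

```python
import math
import numpy as np

def crr_european_call(S0, K, r, sigma, T, n):
    """Price a European call on an n-step Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    disc = math.exp(-r * dt)              # one-step discount factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    j = np.arange(n + 1)                  # number of up moves at expiry
    ST = S0 * u ** j * d ** (n - j)       # terminal stock prices
    values = np.maximum(ST - K, 0.0)      # call payoff at expiry
    # Backward induction: discounted risk-neutral expectation at each node.
    for _ in range(n):
        values = disc * (q * values[1:] + (1 - q) * values[:-1])
    return float(values[0])

# Illustrative parameters only -- not data from the thesis.
print(crr_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=500))
```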
3 |
An Evaluation of Conduit Conceptualizations and Model Performance. Hill, Melissa Estelle 08 April 2008
The karst research community has known that traditional numerical groundwater flow codes ignore the non-Darcian, dual-permeability components of flow that can occur in karst aquifers. In this study, the potential limitations of using such tools are quantified by evaluating the relative performances of 3 groundwater flow models at a test-site near Weeki Wachee, Florida, in the dual-permeability Upper Floridan aquifer. MODFLOW-2005 and MODFLOW-2005 Conduit Flow Process (CFP), a Darcian/non-Darcian, dual-permeability groundwater flow code recently developed by the U.S. Geological Survey, are used in this study.
A monitoring program consisting of discharge measurements and high-frequency data from 2 springs and from monitoring wells penetrating the matrix and conduit networks of a karst aquifer was initiated to characterize the test-site and constrain new parameters introduced with MODFLOW-2005 CFP. The monitoring program spanned conditions prior to, during, and following convective and tropical storm activity, and a drought. Analytical estimates for Reynolds numbers, ranging from 10^5 to 10^6, suggest that turbulent flow occurs in portions of the underlying conduit network. The direction and magnitude of fluid exchange observed between the matrix and conduit network indicate that the conduit network underlying the test-site drains the matrix. Head differences and observed responses in monitoring wells penetrating the matrix and conduit network indicate that the hydraulic conductivities of the 2 networks do not significantly differ from each other. A conceptual model for the spatial distribution of preferential flow pathways was developed from multiple data types; these data, including the shallow recession limbs observed in discharge hydrographs, indicate a slowly responding aquifer with a high storage capacity and a poorly integrated conduit drainage network receiving little to no point recharge.
Model performances were evaluated by comparing observed hydrographs for discharge and for monitoring wells penetrating the matrix and conduit network, following convective and tropical storm events and drought conditions, with simulated values from transient simulations. Model statistics for 32 target wells and a sensitivity analysis were included in the evaluation. The dual-permeability model using MODFLOW-2005 CFP Mode 1 displayed the highest performance, with matches between simulated and observed discharges improved by 12 to 40% relative to the laminar and laminar/turbulent equivalent-continuum models.
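For reference, the quoted Reynolds-number range follows from the standard pipe-flow definition; the velocity and conduit diameter below are illustrative values, not measurements from the study:

$$\mathrm{Re} = \frac{\rho v D}{\mu} = \frac{(1000\ \mathrm{kg\,m^{-3}})(0.5\ \mathrm{m\,s^{-1}})(0.5\ \mathrm{m})}{1.0\times 10^{-3}\ \mathrm{Pa\,s}} \approx 2.5\times 10^{5},$$

orders of magnitude above the roughly 2000 threshold for the laminar-turbulent transition in pipe flow, consistent with turbulent conduit flow.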
4 |
Optimalizační modely a důchodová reforma / Reform of pension system and optimization models. Pracný, Jakub January 2010
Pension reform is nowadays one of the most discussed economic topics among the professional public. Almost every OECD country is under pressure to reform its pension system because of rapid changes in demographic structure. This thesis describes the basic options for pension reform, with the main effort devoted to comparing the parameter settings of these options. The first part describes what an optimization model is and how to solve one. The second part describes pension models and shows the necessity of pension reform in the Czech Republic. The third part presents an optimization model for the PAYGO system in the Czech Republic. The fourth part describes pension systems in OECD and Latin American countries, and also reviews pension reforms undertaken in some of these countries. The fifth part defines a theoretical approach to pension reforms by citing and summarizing articles from experts on pension systems. The sixth part describes a proposal for Czech pension reform and compares its settings with the previously described systems; it also shows the influence of the parameters on the sustainability of the system, the revenues of participants, and the expenses of the government. In conclusion, the thesis also discusses the influence of pension reform on family relations; this part is mainly based on the work of the world-famous economist Gary S. Becker.
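In its simplest textbook form (a schematic identity, not the optimization model developed in the thesis), a PAYGO system is balanced when current contributions cover current benefits:

$$\theta \, w \, L = b \, P,$$

where $\theta$ is the contribution rate, $w$ the average wage, $L$ the number of contributors, $b$ the average pension, and $P$ the number of pensioners. Ageing raises the dependency ratio $P/L$, forcing an increase in $\theta$, a cut in $b$, or external financing; these are the parameter settings the comparison above refers to.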
5 |
Target localization using RSS measurements in wireless sensor networks. Li, Zeyuan January 2018
The subject of this thesis is the development of algorithms for target localization in wireless sensor networks (WSNs) using received signal strength (RSS) or quantized RSS (QRSS) measurements. In chapter 3, target localization using RSS measurements is investigated. Many existing works on RSS localization assume that the shadowing components are uncorrelated; here, shadowing is assumed to be spatially correlated, and it can be shown that localization accuracy improves when the correlation between pairs of RSS measurements is taken into account. By linearizing the corresponding maximum likelihood (ML) objective function, a weighted least squares (WLS) algorithm is formulated to obtain the target location, and an iterative technique based on Newton's method is used to compute the solution. Numerical simulations show that the proposed algorithm achieves better performance than existing algorithms with reasonable complexity. In chapter 4, target localization with an unknown path loss model parameter is investigated. Most published work estimates the location and these parameters jointly using iterative methods that require a good initialization of the path loss exponent (PLE). To avoid the need for an initialization, a global optimization algorithm, particle swarm optimization (PSO), is employed to optimize the ML objective function. By combining PSO with a consensus algorithm, the centralized estimation problem is extended to a distributed version that can be implemented in a distributed WSN. Although suboptimal, the distributed approach is well suited to implementation in real sensor networks, as it is scalable, robust against changes in network topology, and requires only local communication. Numerical simulations show that the accuracy of centralized PSO can attain the Cramér-Rao lower bound (CRLB). Also, as expected, there is some degradation in the performance of distributed PSO with respect to centralized PSO. In chapter 5, a distributed gradient algorithm for RSS-based target localization using only quantized data is proposed. The ML objective for the quantized RSS is derived and PSO is used to provide an initial estimate for the gradient algorithm. A practical quantization threshold design is presented for RSS data. To derive a distributed algorithm using only quantized signals, the local estimate at each node is also quantized; the RSS measurements and the local estimate at each sensor node are quantized in different ways. Using a quantization elimination scheme, a quantized distributed gradient method is proposed in which the quantization noise in the local estimate is gradually eliminated with each iteration. Simulations show that the performance of the centralized algorithm reaches the CRLB, and that the proposed distributed algorithm using a small number of bits can achieve the performance of the distributed gradient algorithm using unquantized data.
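A minimal Python sketch of the log-distance RSS measurement model and a brute-force ML location estimate under i.i.d. shadowing (the thesis linearizes the same objective for WLS/Newton and optimizes it with PSO; the geometry and parameters below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Anchor (sensor) positions and a true target position -- illustrative only.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 6.0])

P0, eta, d0 = -40.0, 3.0, 1.0   # dBm at d0, path-loss exponent (assumed known)
sigma_sh = 2.0                  # log-normal shadowing std in dB

def rss_model(pos):
    """Mean RSS at each anchor under the log-distance path-loss model."""
    d = np.linalg.norm(anchors - pos, axis=1)
    return P0 - 10.0 * eta * np.log10(d / d0)

measured = rss_model(target) + rng.normal(scale=sigma_sh, size=len(anchors))

# ML under i.i.d. Gaussian shadowing = nonlinear least squares; grid search.
xs = np.linspace(0.0, 10.0, 101)
grid = np.array([[x, y] for x in xs for y in xs])
cost = np.array([np.sum((measured - rss_model(p)) ** 2) for p in grid])
print("estimate:", grid[cost.argmin()], "truth:", target)
```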
6 |
Dynamic HIV/AIDS parameter estimation with Applications. Filter, Ruben Arnold 13 June 2005
This dissertation is primarily concerned with dynamic HIV/AIDS parameter estimation, set against the background of engineering, biology, and medical science. The marriage of these seemingly divergent fields creates a dynamic research environment that is the source of many novel results and practical applications for people living with HIV/AIDS. A method is presented to extract model parameters for the three-dimensional HIV/AIDS model in situations where an orthodox least-squares (LSQ) method would fail. This method allows information from outside the dataset to be added to the cost functional so that parameters can be estimated even from sparse data. Estimates in the literature were for at most two parameters per dataset, whereas the procedures described herein can estimate all six parameters. A standard table for data acquisition in hospitals and clinics is analyzed to show that the table contains enough information to extract a suitable parameter estimate for the model. Comparison with a published experiment validates the method, and shows that it becomes increasingly hard to coordinate assumptions and implicit information when analyzing real data. Parameter variations during the course of HIV/AIDS are not well understood; the results show that parameters vary over time. The analysis of parameter variation is augmented with a novel two-stage approach to model identification for the six-dimensional model. In this context, the higher-dimensional models allow an explanation for the onset of AIDS from HIV without any variation in the model parameters. The developed estimation procedure was successfully used to analyze the data from forty-four patients from southern Africa in the HIVNET 28 vaccine readiness trial. The results form an important benchmark for the study of vaccination. They show that after approximately 17 months from seroconversion, oscillations in viremia flattened to a log10-based median set point of 4.08, appearing no different from reported studies in subtype B HIV-1 infected male cohorts. Together with these main outcomes, an analysis of confidence intervals for the set point, days to set point, and the individual parameters is presented. When estimates for the HIVNET 28 cohort are combined, the data allow a meaningful first estimate of the parameters of the three-dimensional HIV/AIDS model for patients from southern Africa. The theoretical basis is used to develop an application that allows medical practitioners to estimate the three-dimensional model parameters for HIV/AIDS patients. The program demands little background knowledge from the user, but for practitioners with experience in mathematical modeling there is ample opportunity to fine-tune the procedures for special needs. / Dissertation (MEng)--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / Unrestricted
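A minimal Python sketch (assuming SciPy is available) of the standard three-dimensional target-cell/infected-cell/virus HIV model with its six parameters; the values below are illustrative literature-style numbers, not estimates from the HIVNET 28 data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# dT/dt = s - d*T - beta*T*V,  dI/dt = beta*T*V - mu*I,  dV/dt = k*I - c*V
# Six parameters: illustrative values only, not fitted estimates.
s, d, beta, mu, k, c = 10.0, 0.01, 2.4e-5, 0.5, 100.0, 3.0

def hiv3(t, y):
    """Right-hand side of the three-dimensional HIV model."""
    T, I, V = y
    return [s - d * T - beta * T * V,
            beta * T * V - mu * I,
            k * I - c * V]

sol = solve_ivp(hiv3, (0.0, 400.0), y0=[1000.0, 0.0, 1e-3], max_step=1.0)
V = sol.y[2]
print(f"viral set point ~ log10 V = {np.log10(V[-1]):.2f}")
```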
7 |
Realizace počítačových modelů vedení pro PLC / Implementation of computer models of lines for PLC. Mrákava, Petr January 2010
The subject of this thesis is to become familiar with the different parameters that describe transmission lines and with the possibilities of modeling data and power cables. The thesis also outlines the differences in the mechanical structure of different types of cables. The practical part focuses only on power cables and the measurement of their basic parameters. A computer model is then created that describes the behavior of cable lines at higher frequencies than those for which they are primarily intended. The final section presents an experimental PLC network for remote reading of electricity meters, on which various transmission properties were measured.
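A minimal Python sketch of how a frequency-dependent line model is typically built from the primary parameters R, L, G, and C per unit length; the values below are illustrative, not the measured cable parameters from the thesis:

```python
import numpy as np

# Illustrative primary parameters per km (not measured values from the thesis).
R, L, G, C = 50.0, 0.6e-3, 1e-6, 50e-9   # ohm, H, S, F per km

f = np.logspace(3, 7, 5)                  # 1 kHz .. 10 MHz
w = 2 * np.pi * f
Z = R + 1j * w * L                        # series impedance per km
Y = G + 1j * w * C                        # shunt admittance per km
Z0 = np.sqrt(Z / Y)                       # characteristic impedance
gamma = np.sqrt(Z * Y)                    # propagation constant
alpha_db_km = 20 / np.log(10) * gamma.real  # attenuation in dB/km

for fi, z, a in zip(f, Z0, alpha_db_km):
    print(f"{fi:9.0f} Hz  |Z0| = {abs(z):7.1f} ohm  alpha = {a:6.2f} dB/km")
```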
8 |
Evaluating enhanced hydrological representations in Noah LSM over transition zones: an ensemble-based approach to model diagnostics. Rosero Ramirez, Enrique Xavier 03 June 2010
This work introduces diagnostic methods for land surface model (LSM) evaluation that enable developers to identify structural shortcomings in model parameterizations by evaluating model 'signatures' (characteristic temporal and spatial patterns of behavior) in feature, cost-function, and parameter spaces. The ensemble-based methods allow researchers to draw conclusions about hypotheses and model realism that are independent of parameter choice. I compare the performance and physical realism of three versions of Noah LSM (a benchmark standard version [STD], a dynamic-vegetation enhanced version [DV], and a groundwater-enabled one [GW]) in simulating high-frequency near-surface states and land-to-atmosphere fluxes in situ and at high resolution over a catchment in the U.S. Southern Great Plains, a transition zone between humid and arid climates. Only at the more humid sites do the more conceptually realistic, hydrologically enhanced LSMs (DV and GW) ameliorate biases in the estimation of root-zone moisture change and evaporative fraction. Although the improved simulations support the hypothesis that groundwater and vegetation processes shape fluxes in transition zones, further assessment of the timing and partitioning of the energy and water cycles indicates that improvements to the movement of water within the soil column are needed. Distributed STD and GW underestimate the contribution of baseflow and simulate too-flashy streamflow. This work challenges common practices and assumptions in LSM development and offers researchers more stringent model evaluation methods. I show that, because of equifinality, ad hoc evaluation using single parameter sets provides insufficient information for choosing among competing parameterizations, for addressing hypotheses under uncertainty, or for guiding model development. Posterior distributions of physically meaningful parameters differ between models and sites, and the relationships between the parameters themselves change. 'Plug and play' of modules and partial calibration likely introduce error and should be re-examined. Even though LSMs are 'physically based,' model parameters are effective and scale-, site-, and model-dependent. Parameters are not functions of soil or vegetation type alone: they likely depend in part on climate and cannot be assumed to be transferable between sites with similar physical characteristics. By helping bridge the gap between model identification and model development, this research contributes to the continued improvement of our understanding and modeling of environmental processes.
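The evaporative fraction mentioned above is the standard energy-partitioning diagnostic, stated here for reference:

$$EF = \frac{LE}{LE + H},$$

where $LE$ is the latent and $H$ the sensible heat flux; a bias in $EF$ signals misallocated surface energy even when the net radiation budget is matched.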
9 |
Interpreting density enhancement of coronal mass ejections. Smith, Kellen January 2019
Coronal mass ejections (CMEs) are some of the extraterrestrial events most impactful to Earth. Efforts to model and predict their effects have seen new possibilities in the two most recent decades due to multiple new spacecraft providing a wider range of data than ever before. Models of these events suffer from a number of inaccuracies, one of them being the density ratio between the CME and the ambient solar wind. Since the arrival time for potentially harmful disturbances predicted by models has been shown to be highly sensitive to this parameter, we take care to set it as accurately as possible. Traditionally this value is either set to a default, justified by definition and theory, or set to the density ratio between the bulk of the ejected gas and the surrounding medium. A proposition has been made to measure density enhancement differently, using a reference point at the shock wave preceding the CME for each event. This method strives to improve arrival-time predictions and was in this paper tested for one coronal mass ejection event. Two runs of the model WSA-ENLIL+Cone were made: one with the default value of density enhancement, one with a value determined through the revised method using coronagraph data. Running the model with the revised value improved the predicted arrival time by moving it forward in time by 4 h, which was still too early. Other input data to the model run were then discussed as a possible cause of the remaining inaccuracy.
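Under either convention, the parameter at issue is a ratio of number densities (a schematic definition for reference; the revised method changes where the CME density is sampled, at the preceding shock rather than in the bulk of the ejecta):

$$d_{\mathrm{factor}} = \frac{n_{\mathrm{CME}}}{n_{\mathrm{sw}}},$$

where $n_{\mathrm{CME}}$ is the plasma number density attributed to the ejecta and $n_{\mathrm{sw}}$ that of the ambient solar wind.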
10 |
Failure Mechanism Analysis and Life Prediction Based on Atmospheric Plasma-Sprayed and Electron Beam-Physical Vapor Deposition Thermal Barrier Coatings. Zhang, Bochun January 2017
Using experimentally measured temperature-process-dependent model parameters, failure analysis and life prediction were conducted for atmospheric plasma-sprayed thermal barrier coatings (APS-TBCs) and electron beam physical vapor deposition thermal barrier coatings (EB-PVD TBCs) with Pt-modified β-NiAl bond coats deposited on Ni-base single-crystal superalloys. For the APS-TBC system, a residual stress model for the top coat was proposed and then applied to life prediction. The capability of the life model was demonstrated using temperature-dependent model parameters, and a comparison of fitting approaches for the life model parameters was performed using existing life data. The role of the residual stresses distributed in each individual coating layer was explored, and their interplay in the coating's delamination was analyzed. For EB-PVD TBCs, based on failure mechanism analysis, two new analytical stress models, for the valley position of the top coat and the ridge of the bond coat, were proposed, describing the stress levels generated as a consequence of the coefficient of thermal expansion (CTE) mismatch between the layers. The thermal stress within the TGO (thermally grown oxide) was evaluated based on composite material theory, with effective parameters calculated accordingly. The lifetime prediction of EB-PVD TBCs was conducted by applying the failure analysis and life model to two failure modes, A and B, identified experimentally for the thermal cyclic process. The global wavelength related to interface rumpling and its radius of curvature were identified as essential parameters in the life evaluation, and the life results for failure mode A were verified against existing burner rig test data. For failure mode B, the crack growth rate along the top coat/TGO interface was calculated using the experimentally measured average interfacial fracture toughness.
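The CTE-mismatch stress such models quantify has, in its simplest equibiaxial thin-layer form, the textbook expression (a schematic reference formula, not the full stress models proposed in the thesis):

$$\sigma = \frac{E \,\Delta\alpha \,\Delta T}{1 - \nu},$$

where $E$ and $\nu$ are the elastic modulus and Poisson's ratio of the constrained layer, $\Delta\alpha$ the CTE difference between adjacent layers, and $\Delta T$ the temperature change on thermal cycling.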
|