1 |
Parameter Calibration for the Tidal Model by the Global Search of the Genetic Algorithm (Chung, Shih-Chiang, 12 September 2006)
The current study applied the Genetic Algorithm (GA) to calibrate the boundary parameters of a hydrodynamic-based tidal model. The objective is to minimize the deviation between the results produced by the simulation model and observed tidal data along the Taiwan coast. Manual trial-and-error has been widely used in the past, but that approach is inefficient given the complexity posed by the large number of parameters. With modern computing capability, automatic search procedures, in particular GA, can handle the large data set and reduce human subjectivity in the calibration. Moreover, owing to its efficient evolutionary procedure, GA can find better solutions in less time than the manual approach. Based on the preliminary experiments of this study, integrating GA with the hydrodynamic-based tidal model improves the accuracy of the simulation.
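To make the approach concrete, here is a minimal, hedged sketch of a real-coded GA that searches for boundary parameters minimizing the RMSE between simulated and observed water levels. The `simulate_tide` function is a toy harmonic stand-in for the hydrodynamic tidal model, and all names and settings are illustrative assumptions, not the thesis's implementation.

```python
import math
import random

def rmse(sim, obs):
    """Root mean square error between simulated and observed water levels."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def simulate_tide(params, n=48):
    """Stand-in for the hydrodynamic tidal model: a single toy harmonic constituent."""
    amplitude, phase = params
    return [amplitude * math.sin(2 * math.pi * t / 12.42 + phase) for t in range(n)]

def calibrate_ga(observed, bounds, pop_size=30, generations=200,
                 crossover_rate=0.8, mutation_rate=0.2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitist tracking of the best individual."""
    def fitness(ind):
        return rmse(simulate_tide(ind), observed)

    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best]                                   # elitism: keep the best so far
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)   # tournament selection
            p2 = min(random.sample(pop, 3), key=fitness)
            child = list(p1)
            if random.random() < crossover_rate:           # blend crossover
                w = random.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            if random.random() < mutation_rate:            # Gaussian mutation, clipped to bounds
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)
    return best

# Synthetic check: recover the parameters that generated the "observations".
observed = simulate_tide([1.2, 0.5])
best_params = calibrate_ga(observed, bounds=[(0.0, 3.0), (-3.14, 3.14)])
```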
|
2 |
A Comparative Study of Simulated Annealing Algorithms and Genetic Algorithms on Parameters Calibration for Tidal Model (Hung, Yi-ting, 13 July 2009)
Manual trial and error has been widely used in the past, but the approach is inefficient. In recent years, many heuristic algorithms have been developed for a wide range of applications. These algorithms are more efficient than traditional ones because they can locate good, near-optimal solutions quickly, and each algorithm has its own niche of problems to which it is best suited.
In this study, the boundary parameters of the hydrodynamic-based tidal model are calibrated using Simulated Annealing (SA). The objective is to minimize the deviation between the results of the simulation model and observed tidal data along the Taiwan coast. Based on the physical distribution of the boundary parameters, we aimed to minimize the sum of each station's root mean square error (RMSE). Genetic Algorithms (GAs) and Simulated Annealing are compared on the same tidal-model calibration problem under identical conditions. Both algorithms improved the results, with GAs performing better on the problems described above. In addition, setting the initial solution of SA to the solution derived from the GA improved the solving efficiency of SA in this study.
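A hedged sketch of the hybrid idea described above: simulated annealing started from a GA-derived solution instead of a random one. The objective function and the `calibrate_ga` helper referenced in the comments are placeholders (they refer to the GA sketch under the first abstract), not the study's tidal model or code.

```python
import math
import random

def simulated_annealing(objective, x0, bounds, t0=1.0, cooling=0.95,
                        steps_per_temp=50, t_min=1e-3):
    """Basic SA: perturb one parameter at a time, accept worse moves with
    probability exp(-delta / T), and cool the temperature geometrically."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            i = random.randrange(len(x))
            lo, hi = bounds[i]
            cand = list(x)
            cand[i] = min(hi, max(lo, cand[i] + random.gauss(0, 0.05 * (hi - lo))))
            fc = objective(cand)
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# In the hybrid scheme, the starting point x0 would be the best individual
# returned by the GA rather than a random draw, e.g. (hypothetical names):
# x0 = calibrate_ga(observed, bounds)
# best, err = simulated_annealing(total_rmse, x0, bounds)
```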
|
3 |
A comparison of fixed item parameter calibration methods and reporting score scales in the development of an item pool (Chen, Keyu, 01 August 2019)
The purposes of the study were to compare the relative performance of three fixed item parameter calibration (FIPC) methods in item and ability parameter estimation and to examine how the ability estimates obtained from these methods affect interpretations when scores are reported on scales of different lengths.
The simulation design comprised two stages. The first, the calibration stage, estimated the parameters of the pretest items; it investigated the accuracy of the item parameter estimates and the recovery of the underlying ability distributions for different sample sizes, different numbers of pretest items, and different types of ability distributions under the three-parameter logistic (3PL) model. The second, the operational stage, placed the estimated pretest item parameters on operational forms and used them to score examinees; it investigated the effect that item parameter estimation had on ability estimation and on reported scores for the new test forms.
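For readers unfamiliar with the model, the sketch below evaluates the three-parameter logistic (3PL) item response function and shows a crude grid-based maximum-likelihood ability estimate computed with item parameters held fixed, which mirrors the operational-stage idea of scoring examinees with previously calibrated items. The item parameters and response pattern are hypothetical, and this is not the calibration software used in the study.

```python
import math

def p_3pl(theta, a, b, c, D=1.7):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

def ability_mle(responses, items, grid=None):
    """Crude maximum-likelihood ability estimate over a grid of theta values.
    'items' holds fixed (a, b, c) triples, i.e. previously calibrated parameters."""
    if grid is None:
        grid = [g / 10.0 for g in range(-40, 41)]   # theta from -4.0 to 4.0
    def loglik(theta):
        ll = 0.0
        for u, (a, b, c) in zip(responses, items):
            p = p_3pl(theta, a, b, c)
            ll += math.log(p) if u == 1 else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# Illustrative use with hypothetical item parameters and a response pattern:
items = [(1.2, -0.5, 0.20), (0.8, 0.0, 0.15), (1.5, 0.7, 0.25)]
responses = [1, 1, 0]
theta_hat = ability_mle(responses, items)
```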
It was found that the item parameters estimated with the three FIPC methods showed subtle differences: the results of the DeMars method were closer to those of the separate calibration with linking method than to those of the FIPC with simple prior update and FIPC with iterative prior update methods, while the latter two methods performed similarly. Regarding the experimental factors manipulated in the simulation, the study found that sample size influenced the estimation of item parameters. The effect of the number of pretest items on item parameter estimation was strong but ambiguous, likely because it was confounded by changes in both the number and the characteristics of the pretest items across item sets. The effect of the ability distributions on item parameter estimation was less evident than the effects of the other two factors.
After the pretest items were calibrated, their parameter estimates were put into operational use. The abilities of the examinees were then estimated from the examinees' responses to the existing operational items and the new items (previously the pretest items), whose parameters had been estimated under the different conditions. The study found high correlations between the ability estimates and the true abilities of the examinees when the forms contained pretest items calibrated with any of the three FIPC methods. The results suggested that all three FIPC methods were similarly competent in estimating the item parameters, leading to satisfactory estimation of the examinees' abilities. When considering the scale scores, because the estimated abilities were very similar, there were only small differences among the scaled scores on the same scale; the relative frequency of examinees classified into performance categories and the classification consistency index also showed that the interpretations of reported scores across scales were similar.
The study provided a comprehensive comparison of the use of FIPC methods in parameter estimation and is intended to help practitioners choose among the methods according to the needs of their testing programs. When ability estimates were linearly transformed into scale scores, the lengths of the scales did not affect the statistical properties of the scores; however, scale length may affect how the scores are perceived by stakeholders and should therefore be chosen carefully.
|
4 |
The Robustness of Rasch True Score Preequating to Violations of Model Assumptions Under Equivalent and Nonequivalent Populations (Gianopulos, Garron, 22 October 2008)
This study examined the feasibility of using Rasch true score preequating under violated model assumptions and nonequivalent populations. Dichotomous item responses were simulated using a compensatory two-dimensional (2D) three-parameter logistic (3PL) Item Response Theory (IRT) model. The Rasch model was used to calibrate difficulty parameters using two methods: Fixed Parameter Calibration (FPC) and separate calibration with Stocking and Lord linking (SCSL). A criterion equating function was defined by equating true scores calculated with the generated 2D 3PL IRT item and ability parameters, using random groups equipercentile equating. True score preequating to FPC- and SCSL-calibrated item banks was compared to identity and Levine's linear true score equating, in terms of equating bias and bootstrap standard errors of equating (SEE) (Kolen & Brennan, 2004). Results showed that preequating was robust to the simulated 2D 3PL data and to nonequivalent item discriminations; however, true score equating was not robust to guessing or to the interaction of guessing and nonequivalent item discriminations. Equating bias due to guessing was most marked at the low end of the score scale. Equating an easier new form to a more difficult base form produced negative bias. Nonequivalent item discriminations interacted with guessing to magnify the bias and to extend its range toward the middle of the score distribution. Forms that were very easy relative to the ability of the examinees also produced substantial error at the low end of the score scale. Accumulating item parameter error in the item bank increased the SEE across five forms. Rasch true score preequating produced less equating error than Levine's linear true score equating in all simulated conditions. FPC with Bigsteps performed as well as separate calibration with the Stocking and Lord linking method. These results support earlier findings, suggesting that Rasch true score preequating can be used in the presence of guessing if accuracy is required near the mean of the score distribution, but not if accuracy is required for very low or high scores.
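As a hedged illustration of the true score machinery behind preequating (not the study's code), the sketch below computes Rasch true scores from banked item difficulties, inverts the monotone true-score function by bisection, and maps a new-form raw score to its base-form equivalent; the difficulty values are hypothetical.

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def true_score(theta, difficulties):
    """Expected number-correct (true) score on a form at ability theta."""
    return sum(rasch_p(theta, b) for b in difficulties)

def theta_for_true_score(score, difficulties, lo=-6.0, hi=6.0, tol=1e-6):
    """Invert the true-score function by bisection (it is monotone in theta)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if true_score(mid, difficulties) < score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def preequate(new_form_score, new_difficulties, base_difficulties):
    """Rasch true-score equating: new-form score -> theta -> base-form true score."""
    theta = theta_for_true_score(new_form_score, new_difficulties)
    return true_score(theta, base_difficulties)

# Hypothetical banked difficulties for a new and a base form:
new_b = [-1.0, -0.3, 0.2, 0.8, 1.5]
base_b = [-1.2, -0.5, 0.0, 0.6, 1.3]
equated = preequate(3.0, new_b, base_b)   # base-form equivalent of a raw score of 3
```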
|
5 |
A Meta-Heuristic Algorithm for Vehicle Routing Problem with Interdiction (Hinrichsen Picand, Carlos, 11 1900)
This thesis addresses the Vehicle Routing Problem with Interdiction (VRPI), an extension of the classic Vehicle Routing Problem (VRP) that incorporates the risk of route interdiction due to events such as natural disasters, armed conflicts, and infrastructural failures, among others. These interdictions introduce uncertainty and complexity into logistics planning and require innovative approaches to the routing process. This research employs both exact methods, using the CPLEX solver, and heuristic methods, in particular the Greedy Randomized Adaptive Search Procedure (GRASP), to solve the VRPI at different instance sizes.
This research’s key contributions include successfully implementing the GRASP algorithm on large-scale benchmark instances, representing a significant advancement over prior implementations that focused on smaller, randomly generated instances. A flexible framework was also developed to adapt the GRASP methodology for different VRP variants, including the Capacitated Vehicle Routing Problem (CVRP) and Split Delivery Vehicle Routing Problem (SDVRP), with and without interdiction.
A feasibility analysis for small instances was developed using CPLEX, highlighting the sensitivity of VRPI solutions to interdiction probabilities, particularly in scenarios with tight capacity constraints. The findings of this analysis are extended to large instances.
Additionally, a three-fold logic incorporated into the GRASP implementation, focused on minimizing cost, minimizing interdiction, and minimizing demand, proved critical in addressing the VRPI challenges and provided high-quality solutions with reduced computational effort. Including the minimum-demand logic in GRASP was instrumental during the implementation and numerical experimentation on the large benchmark instances.
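To illustrate the GRASP mechanics referred to above, here is a heavily simplified, hedged sketch for a single uncapacitated route with no interdiction or demand logic: a greedy randomized construction with a restricted candidate list, followed by 2-opt local search, repeated for a fixed number of iterations. It is an assumption-laden toy, not the thesis's implementation.

```python
import random

def route_cost(route, dist):
    """Total cost of a route that starts and ends at the depot (node 0)."""
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def grasp_construct(nodes, dist, alpha=0.3):
    """Greedy randomized construction: at each step, pick a random customer
    from the restricted candidate list (RCL) of the closest unvisited nodes."""
    route, current, unvisited = [], 0, set(nodes)
    while unvisited:
        ranked = sorted(unvisited, key=lambda n: dist[current][n])
        rcl = ranked[:max(1, int(alpha * len(ranked)))]
        nxt = random.choice(rcl)
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return route

def two_opt(route, dist):
    """Local search: reverse segments as long as that improves the route cost."""
    improved = True
    while improved:
        improved = False
        for i in range(len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if route_cost(cand, dist) < route_cost(route, dist):
                    route, improved = cand, True
    return route

def grasp(nodes, dist, iterations=50, alpha=0.3):
    """GRASP main loop: construct, improve, keep the best solution found."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        route = two_opt(grasp_construct(nodes, dist, alpha), dist)
        cost = route_cost(route, dist)
        if cost < best_cost:
            best, best_cost = route, cost
    return best, best_cost

# Hypothetical symmetric distance matrix (node 0 is the depot):
dist = [[0, 4, 6, 3], [4, 0, 5, 7], [6, 5, 0, 2], [3, 7, 2, 0]]
best_route, best_cost = grasp(nodes=[1, 2, 3], dist=dist)
```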
The implications of this thesis are significant for operational research (OR), particularly in high-risk environments where route interdictions can occur. Future research directions include generating more diverse benchmark instances for VRPI, exploring the impact of variability in interdiction probabilities on solution quality and computational time, and applying exact methods like dynamic programming to solve large VRP instances. / Thesis / Master of Science (MSc)
|
6 |
Option pricing models: A comparison between models with constant and stochastic volatilities as well as discontinuity jumps (Paulin, Carl; Lindström, Maja, January 2020)
The purpose of this thesis is to compare option pricing models. We investigated the constant volatility models Black-Scholes-Merton (BSM) and Merton's Jump Diffusion (MJD) as well as the stochastic volatility models Heston and Bates. The data used were option prices for Microsoft, Advanced Micro Devices Inc, Walt Disney Company, and the S&P 500 index. The data were divided into training and testing sets: the training data were used to calibrate the parameters of each model, and the testing data were used to test the model prices against prices observed on the market. Calibration of the parameters of each model was carried out with the nonlinear least-squares method, and prices were then computed from the calibrated parameters using the method of Carr and Madan. In general, the stochastic volatility models, Heston and Bates, replicated the market option prices better than both constant volatility models, MJD and BSM, for most data sets. The mean average relative percentage error for Heston and Bates was 2.26% and 2.17%, respectively, while MJD and BSM had mean average relative percentage errors of 6.90% and 5.45%, respectively. We therefore suggest that a stochastic volatility model is preferable to a constant volatility model for pricing options.
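As a hedged illustration of the calibration step (not the thesis code), the sketch below fits the single Black-Scholes-Merton volatility to a handful of market call prices with SciPy's nonlinear least-squares routine. The study calibrates several parameters per model and prices via the Carr-Madan transform; here the closed-form BSM price is used so the example stays self-contained, and the market data are made up.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def calibrate_bsm(S, r, strikes, maturities, market_prices, sigma0=0.2):
    """Nonlinear least squares: choose sigma so model prices match market prices."""
    def residuals(params):
        sigma = params[0]
        model = np.array([bsm_call(S, K, T, r, sigma)
                          for K, T in zip(strikes, maturities)])
        return model - market_prices
    result = least_squares(residuals, x0=[sigma0], bounds=([1e-4], [5.0]))
    return result.x[0]

# Hypothetical market data standing in for the training set:
S, r = 100.0, 0.01
strikes = np.array([90.0, 100.0, 110.0])
maturities = np.array([0.5, 0.5, 0.5])
market_prices = np.array([12.4, 5.6, 1.9])
sigma_hat = calibrate_bsm(S, r, strikes, maturities, market_prices)
```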
|
7 |
[en] Algorithms for Integration and Calibration of Multisurface Elastoplastic Models / [pt] Algoritmos para Integração e Calibração de Modelos Elastoplásticos com Múltiplas Superfícies de Plastificação (Abreu, Rafael Otavio Alves, 26 May 2020)
Elastoplastic models with multiple yield surfaces are an alternative for representing the nonlinear behavior of materials such as concrete, rocks and soils, whose nonlinear response depends strongly on the stress state. However, these models require the definition of many parameters that do not always have physical meaning, and their implementation brings additional complexity to the analysis, in particular the need for a robust numerical scheme for integrating the elastoplastic evolution equations. This work presents two contributions. The first is a robust return mapping (closest point projection) algorithm for multisurface plasticity in general stress space, based on a numerical method for unconstrained optimization: the Newton-Raphson method with line search. A consistent elastoplastic tangent modulus for multisurface plasticity is also proposed. The second contribution is a methodology for parameter calibration, formulated as an optimization problem and solved with a genetic algorithm. To better understand the parameters involved in this algorithm, a parametric study is carried out on a series of global optimization problems. The robustness and effectiveness of the algorithms are evaluated through numerical examples, some available in the literature, on a constitutive model idealized for concrete, rocks and soils: the Cap Model. Finally, the model is calibrated using experimental data available in the literature. The work thus aims to make the use of complex elastoplastic models in engineering problems more feasible.
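As a hedged sketch of the unconstrained-optimization ingredient named above (not the return-mapping code itself), the snippet below implements Newton-Raphson iteration safeguarded by a backtracking Armijo line search for a generic smooth objective; the test function at the end is purely illustrative.

```python
import numpy as np

def newton_line_search(f, grad, hess, x0, tol=1e-8, max_iter=50,
                       beta=0.5, c1=1e-4):
    """Newton's method with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Newton direction; fall back to steepest descent if the solve fails
        try:
            d = np.linalg.solve(hess(x), -g)
        except np.linalg.LinAlgError:
            d = -g
        if g @ d >= 0:                                   # enforce a descent direction
            d = -g
        t = 1.0
        while f(x + t * d) > f(x) + c1 * t * (g @ d):    # Armijo sufficient decrease
            t *= beta
        x = x + t * d
    return x

# Illustrative smooth test problem (not a plasticity residual):
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] ** 2 - x[0]) ** 2
grad = lambda x: np.array([2 * (x[0] - 1) - 20 * (x[1] ** 2 - x[0]),
                           40 * x[1] * (x[1] ** 2 - x[0])])
hess = lambda x: np.array([[22.0, -40 * x[1]],
                           [-40 * x[1], 120 * x[1] ** 2 - 40 * x[0]]])
x_min = newton_line_search(f, grad, hess, np.array([0.5, 0.5]))
```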
|
8 |
Gesteinsmechanische Versuche und petrophysikalische Untersuchungen – Laborergebnisse und numerische Simulationen [Rock mechanics tests and petrophysical investigations – laboratory results and numerical simulations] (Baumgarten, Lars, 26 May 2016)
Triaxial compression tests can be carried out as single-stage tests, as multi-stage tests, or as tests with continuous failure states. When the multi-stage technique is applied, questions arise in particular regarding the correct choice of the switch-over point and the optimal course of the stress path between the individual test stages. For tests with continuous failure states, it remains questionable whether the stress states recorded during the test actually represent the peak strength of the investigated material. The dissertation takes up these questions, provides an introduction to the subject, and establishes the prerequisites needed to solve the stated problems. On the basis of an extensive database of rock-mechanical and petrophysical parameters, a numerical model was developed that satisfactorily reproduces the stress-strain, strength and failure behavior of a sandstone in direct tension and uniaxial compression tests as well as in triaxial compression tests. The strength behavior of the developed model was analyzed in multi-stage tests with different stress paths and compared with the corresponding laboratory findings.
|
9 |
Highway Development Decision-Making Under Uncertainty: Analysis, Critique and Advancement (El-Khatib, Mayar, January 2010)
While decision-making under uncertainty is a major universal problem, its implications in the field of transportation systems are especially enormous; where the benefits of right decisions are tremendous, the consequences of wrong ones are potentially disastrous.
In the realm of highway systems, decisions related to the highway configuration (number of lanes, right of way, etc.) need to incorporate both the traffic demand and land price uncertainties. In the literature, these uncertainties have generally been modeled using the Geometric Brownian Motion (GBM) process, which has been used extensively in modeling many other real life phenomena. But few scholars, including those who used the GBM in highway configuration decisions, have offered any rigorous justification for the use of this model.
This thesis attempts to offer a detailed analysis of various aspects of transportation systems in relation to decision-making. It reveals some general insights as well as a new concept that extends the notion of opportunity cost to situations where wrong decisions could be made. Claiming deficiency of the GBM model, it also introduces a new formulation that utilizes a large and flexible parametric family of jump models (i.e., Lévy processes). To validate this claim, data related to traffic demand and land prices were collected and analyzed to reveal that their distributions, heavy-tailed and asymmetric, do not match well with the GBM model. As a remedy, this research used the Merton, Kou, and negative inverse Gaussian Lévy processes as possible alternatives.
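To illustrate the modeling contrast at stake (a hedged sketch, not the thesis code), the snippet below simulates one path each of a geometric Brownian motion and of a Merton-style jump-diffusion, the simplest of the Lévy alternatives named above; all parameter values are hypothetical.

```python
import numpy as np

def gbm_path(x0, mu, sigma, T=1.0, n=252, rng=None):
    """Geometric Brownian motion: dX = mu*X dt + sigma*X dW."""
    rng = rng or np.random.default_rng(0)
    dt = T / n
    z = rng.standard_normal(n)
    log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return x0 * np.exp(np.concatenate(([0.0], np.cumsum(log_inc))))

def merton_path(x0, mu, sigma, lam, jump_mu, jump_sigma, T=1.0, n=252, rng=None):
    """Merton jump-diffusion: GBM plus compound-Poisson normal jumps in log space."""
    rng = rng or np.random.default_rng(1)
    dt = T / n
    z = rng.standard_normal(n)
    counts = rng.poisson(lam * dt, n)                       # number of jumps per step
    jumps = rng.normal(counts * jump_mu, jump_sigma * np.sqrt(counts))
    log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z + jumps
    return x0 * np.exp(np.concatenate(([0.0], np.cumsum(log_inc))))

# Hypothetical traffic-demand-like series: steady growth with occasional shocks.
demand_gbm = gbm_path(1000.0, mu=0.03, sigma=0.15)
demand_jump = merton_path(1000.0, mu=0.03, sigma=0.15,
                          lam=2.0, jump_mu=-0.05, jump_sigma=0.10)
```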
Though the results show that the final decisions are largely indifferent to the choice among the models, the jump models mathematically improve the precision of the uncertainty models and of the decision-making process. This furthers the quest for optimality in highway projects and beyond.
|